Science.gov

Sample records for evolutionary model-based algorithm

  1. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
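
    The step-size adaptation that drives the convergence theory can be illustrated with a short sketch. The Python fragment below is a minimal, hypothetical EPSA-flavored loop (our own contraction rule and stopping test, not Hart's exact update): mutations are tried along coordinate directions, and the mutation step is contracted only when a whole sweep fails, so a small final step signals proximity to a stationary point.

        def epsa_sketch(f, x0, step=1.0, theta=0.5, tol=1e-6, max_evals=10000):
            """Pattern-search-style EA sketch: contract the mutation step on failure."""
            x, fx, evals = list(x0), f(list(x0)), 1
            while step > tol and evals < max_evals:
                improved = False
                for i in range(len(x)):              # trial mutations along each coordinate
                    for sign in (1.0, -1.0):
                        y = list(x)
                        y[i] += sign * step
                        fy = f(y)
                        evals += 1
                        if fy < fx:                  # keep successful steps
                            x, fx, improved = y, fy, True
                if not improved:
                    step *= theta                    # contract after a failed sweep
            return x, fx, step                       # small step ~ near a stationary point

        print(epsa_sketch(lambda v: sum(t * t for t in v), [3.0, -2.0]))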

  2. Active Processor Scheduling Using Evolutionary Algorithms

    DTIC Science & Technology

    2002-12-01

    Active Processor Scheduling Using Evolutionary Algorithms I. Introduction A distributed system offers the ability to run applications across ... calculations are made. This model is sometimes referred to as a form of the island model of evolutionary computation because each population is evolved ... Evolutionary Algorithms for Solving Multi-Objective Problems. Genetic Algorithms and Evolutionary Computation, New York: Kluwer Academic Publishers, 2002

  3. Evolving evolutionary algorithms using linear genetic programming.

    PubMed

    Oltean, Mihai

    2005-01-01

    A new model for evolving Evolutionary Algorithms is proposed in this paper. The model is based on the Linear Genetic Programming (LGP) technique. Every LGP chromosome encodes an EA which is used for solving a particular problem. Several Evolutionary Algorithms for function optimization, the Traveling Salesman Problem and the Quadratic Assignment Problem are evolved by using the considered model. Numerical experiments show that the evolved Evolutionary Algorithms perform similarly to, and sometimes even better than, standard approaches for several well-known benchmark problems.

  4. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider the algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations to satisfy their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly so as to achieve the desired, preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and to establish its fundamentals from this perspective. This paper is a first step towards achieving this objective, implementing a strategy equilibrium solution (such as the Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  5. Scheduling Earth Observing Satellites with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, conditions under which evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques, namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate-satellite vs. integrated scheduling of a two-satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.

  6. A consensus opinion model based on the evolutionary game

    NASA Astrophysics Data System (ADS)

    Yang, Han-Xin

    2016-08-01

    We propose a consensus opinion model based on the evolutionary game. In our model, two connected agents each receive a benefit if they hold the same opinion; otherwise they each pay a cost. Agents update their opinions by comparing payoffs with neighbors: the opinion of an agent with a higher payoff is more likely to be imitated. We apply this model to scale-free networks with tunable degree distribution. Interestingly, we find that there exists an optimal ratio of cost to benefit that leads to the shortest consensus time. A qualitative analysis is obtained by examining the evolution of the opinion clusters. Moreover, we find that the consensus time decreases as the average degree of the network increases, but increases with the noise introduced to permit irrational choices. The dependence of the consensus time on the network size is found to follow a power law. For small or large ratios of cost to benefit, the consensus time decreases as the degree exponent increases. However, for moderate ratios of cost to benefit, the consensus time increases with the degree exponent. Our results may provide new insights into opinion dynamics driven by evolutionary game theory.
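
    The update rule is easy to prototype. Below is a small stand-in simulation in Python, using a ring network instead of a scale-free one and a Fermi imitation probability as one plausible reading of "higher payoff is more likely to be imitated"; all parameter names and values are illustrative.

        import math, random

        def consensus_sim(n=50, k=4, b=1.0, c=0.5, noise=0.1, steps=200000, seed=1):
            rng = random.Random(seed)
            nbrs = {i: [(i + d) % n for d in range(-k // 2, k // 2 + 1) if d != 0]
                    for i in range(n)}                      # ring lattice stand-in
            op = [rng.choice((0, 1)) for _ in range(n)]     # binary opinions
            payoff = lambda i: sum(b if op[i] == op[j] else -c for j in nbrs[i])
            for t in range(steps):
                i = rng.randrange(n)
                j = rng.choice(nbrs[i])
                # Fermi rule: a higher-payoff neighbor is more likely to be imitated
                p = 1.0 / (1.0 + math.exp((payoff(i) - payoff(j)) / noise))
                if rng.random() < p:
                    op[i] = op[j]
                if len(set(op)) == 1:
                    return t                                # consensus time (update steps)
            return None                                     # no consensus within the budget

        print(consensus_sim())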

  7. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  8. Aerodynamic Shape Optimization using an Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single- and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings, including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.

  9. Aerodynamic Shape Optimization using an Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single- and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings, including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.

  10. Synthesis of logic circuits with evolutionary algorithms

    SciTech Connect

    Jones, Jake S.; Davidson, George S.

    2000-01-26

    In the last decade there has been interest and research in the area of designing circuits with genetic algorithms, evolutionary algorithms, and genetic programming. However, the ability to design circuits of the size and complexity required by modern engineering design problems, simply by specifying required outputs for given inputs has as yet eluded researchers. This paper describes current research in the area of designing logic circuits using an evolutionary algorithm. The goal of the research is to improve the effectiveness of this method and make it a practical aid for design engineers. A novel method of implementing the algorithm is introduced, and results are presented for various multiprocessing systems. In addition to evolving standard arithmetic circuits, work in the area of evolving circuits that perform digital signal processing tasks is described.

  11. Evolutionary Algorithm for Calculating Available Transfer Capability

    NASA Astrophysics Data System (ADS)

    Šošić, Darko; Škokljev, Ivan

    2013-09-01

    The paper presents an evolutionary algorithm for calculating available transfer capability (ATC). ATC is a measure of the transfer capability remaining in the physical transmission network for further commercial activity over and above already committed uses. In this paper, MATLAB software is used to determine the ATC between any two buses in deregulated power systems without violating system constraints such as thermal, voltage, and stability constraints. The algorithm is applied to the IEEE 5-bus system and the IEEE 30-bus system.

  12. Knowledge Guided Evolutionary Algorithms in Financial Investing

    ERIC Educational Resources Information Center

    Wimmer, Hayden

    2013-01-01

    A large body of literature exists on evolutionary computing, genetic algorithms, decision trees, codified knowledge, and knowledge management systems; however, the intersection of these computing topics has not been widely researched. Moving through the set of all possible solutions--or traversing the search space--at random exhibits no control…

  13. Bell-Curve Based Evolutionary Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.

    1998-01-01

    The paper presents an optimization algorithm that falls in the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most genetic algorithms (GAs) in research and applications in America, alternatives that also belong to the category of evolutionary algorithms but use a direct, geometrical approach have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
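
    The geometric construct is compact enough to sketch directly. The following Python function generates one child from two parents in the BCB manner as we read it: a weighted point on the line joining the parents, perturbed by normal deviates parallel and orthogonal to that line. Parameter names and the choice of a single random orthogonal direction are ours.

        import random

        def bcb_child(p1, p2, sigma_par=0.1, sigma_orth=0.1, rng=random):
            n = len(p1)
            d = [b - a for a, b in zip(p1, p2)]
            length = sum(t * t for t in d) ** 0.5 or 1.0
            u = [t / length for t in d]                      # unit vector along the parents
            w = rng.random()                                 # weighted point on the line
            base = [a + w * t for a, t in zip(p1, d)]
            par = rng.gauss(0.0, sigma_par) * length         # parallel deviation (bell curve)
            r = [rng.gauss(0.0, 1.0) for _ in range(n)]      # random direction ...
            dot = sum(ri * ui for ri, ui in zip(r, u))
            o = [ri - dot * ui for ri, ui in zip(r, u)]      # ... made orthogonal to u
            olen = sum(t * t for t in o) ** 0.5 or 1.0
            orth = rng.gauss(0.0, sigma_orth) * length       # orthogonal deviation
            return [b + par * ui + orth * oi / olen
                    for b, ui, oi in zip(base, u, o)]

        print(bcb_child([0.0, 0.0], [1.0, 1.0]))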

  14. Turbopump Performance Improved by Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2002-01-01

    The development of design optimization technology for turbomachinery has been initiated using the multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to the traditional gradient-based methods, evolutionary algorithms (EA's) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EA's search from multiple points, instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling any evaluation codes. Parallel efficiency also becomes very high by using a simple master-slave concept for function evaluations, since such evaluations, for example computational fluid dynamics runs, often consume the most CPU time. Application of EA's to multiobjective design problems is also straightforward because EA's maintain a population of design candidates in parallel. Because of these advantages, EA's are a unique and attractive approach to real-world design optimization problems.
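
    The master-slave evaluation concept is straightforward to express with a process pool. A minimal sketch, assuming the fitness evaluation (e.g., a CFD run) dominates the cost; the fitness function and population here are placeholders.

        from multiprocessing import Pool

        def expensive_fitness(design):
            """Stand-in for a costly evaluation such as a CFD run."""
            return sum((x - 0.5) ** 2 for x in design)

        if __name__ == "__main__":
            population = [[i / 10.0, 1.0 - i / 10.0] for i in range(8)]
            # master-slave concept: the master farms out the fitness evaluations,
            # which dominate run time, so parallel efficiency stays high
            with Pool(processes=4) as pool:
                fitnesses = pool.map(expensive_fitness, population)
            print(list(zip(population, fitnesses)))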

  15. A Note on Evolutionary Algorithms and Its Applications

    ERIC Educational Resources Information Center

    Bhargava, Shifali

    2013-01-01

    This paper introduces evolutionary algorithms and their applications in multi-objective optimization. Elitist and non-elitist multiobjective evolutionary algorithms are discussed with their advantages and disadvantages. We also discuss constrained multiobjective evolutionary algorithms and their applications in various areas.

  16. Evolutionary algorithms and multi-agent systems

    NASA Astrophysics Data System (ADS)

    Oh, Jae C.

    2006-05-01

    This paper discusses how evolutionary algorithms are related to multi-agent systems and the possibility of military applications using the two disciplines. In particular, we present a game theoretic model for multi-agent resource distribution and allocation where agents in the environment must help each other to survive. Each agent maintains a set of variables representing actual friendship and perceived friendship. The model directly addresses problems in reputation management schemes in multi-agent systems and Peer-to-Peer distributed systems. We present algorithms based on an evolutionary game process for maintaining the friendship values, as well as a utility equation used in each agent's decision making. For an application problem, we adapted our formal model to the military coalition support problem in peace-keeping missions. Simulation results show that efficient resource allocation and sharing with minimum communication cost is achieved without centralized control.

  17. Automated Antenna Design with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.; Globus, Al; Linden, Derek S.; Lohn, Jason D.

    2006-01-01

    Current methods of designing and optimizing antennas by hand are time and labor intensive, and limit complexity. Evolutionary design techniques can overcome these limitations by searching the design space and automatically finding effective solutions. In recent years, evolutionary algorithms have shown great promise in finding practical solutions in large, poorly understood design spaces. In particular, spacecraft antenna design has proven tractable to evolutionary design techniques. Researchers have been investigating evolutionary antenna design and optimization since the early 1990s, and the field has grown in recent years as computer speed has increased and electromagnetic simulators have improved. Two requirements-compliant antennas, one for ST5 and another for TDRS-C, have been automatically designed by evolutionary algorithms. The ST5 antenna is slated to fly this year, and a TDRS-C phased array element has been fabricated and tested. Such automated evolutionary design is enabled by medium-to-high quality simulators and fast modern computers to evaluate computer-generated designs. Evolutionary algorithms automate cut-and-try engineering, substituting automated search through millions of potential designs for intelligent search by engineers through a much smaller number of designs. For evolutionary design, the engineer chooses the evolutionary technique, parameters and the basic form of the antenna, e.g., single wire for ST5 and crossed-element Yagi for TDRS-C. Evolutionary algorithms then search for optimal configurations in the space defined by the engineer. NASA's Space Technology 5 (ST5) mission will launch three small spacecraft to test innovative concepts and technologies. Advanced evolutionary algorithms were used to automatically design antennas for ST5. The combination of wide beamwidth for a circularly-polarized wave and wide impedance bandwidth made for a challenging antenna design problem. From past experience in designing wire antennas, we chose to

  18. Stochastic Evolutionary Algorithms for Planning Robot Paths

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard

    2006-01-01

    A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
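
    The simulated-annealing core of such a planner can be sketched generically. The Python loop below uses a geometric cooling schedule (our choice); `cost` and `perturb` stand in for the energy-like error measure and the joint-angle moves of an actual path planner.

        import math, random

        def anneal(cost, perturb, x0, T0=1.0, alpha=0.995, iters=5000, seed=0):
            rng = random.Random(seed)
            x, fx = x0, cost(x0)
            best, fbest, T = x, fx, T0
            for _ in range(iters):
                y = perturb(x, rng)
                fy = cost(y)
                # accept worse candidates with Boltzmann probability to escape local minima
                if fy < fx or rng.random() < math.exp((fx - fy) / max(T, 1e-12)):
                    x, fx = y, fy
                    if fy < fbest:
                        best, fbest = y, fy
                T *= alpha                       # geometric cooling schedule
            return best, fbest

        # usage on a toy 1-D "joint angle": minimize (angle - 2)^2
        sol, val = anneal(lambda a: (a - 2.0) ** 2,
                          lambda a, r: a + r.gauss(0.0, 0.2), x0=0.0)
        print(sol, val)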

  19. Intervals in evolutionary algorithms for global optimization

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Optimization is of central concern to a number of disciplines. Interval arithmetic methods for global optimization provide us with (guaranteed) verified results. These methods are mainly restricted to classes of objective functions that are twice differentiable, and they use a simple strategy of eliminating and splitting larger regions of the search space in the global optimization process. An efficient approach that combines the efficient strategy of interval global optimization methods with the robustness of evolutionary algorithms is proposed. In the proposed approach, search begins with randomly created interval vectors with interval widths equal to the whole domain. Before the beginning of the evolutionary process, the fitness of these interval parameter vectors is defined by evaluating the objective function at the center of the initial interval vectors. In the subsequent evolutionary process, the local optimization process returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may not be correct at the beginning due to large interval widths and complicated function properties, the process of reducing interval widths over time and a selection approach similar to simulated annealing help in estimating reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in search of the global optimum.

  20. Predicting polymeric crystal structures by evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Zhu, Qiang; Sharma, Vinit; Oganov, Artem R.; Ramprasad, Ramamurthy

    2014-10-01

    The recently developed evolutionary algorithm USPEX proved to be a tool that enables accurate and reliable prediction of structures. Here we extend this method to predict the crystal structure of polymers by constrained evolutionary search, where each monomeric unit is treated as a building block with fixed connectivity. This greatly reduces the search space and allows the initial structure generation with different sequences and packings of these blocks. The new constrained evolutionary algorithm is successfully tested and validated on a diverse range of experimentally known polymers, namely, polyethylene, polyacetylene, poly(glycolic acid), poly(vinyl chloride), poly(oxymethylene), poly(phenylene oxide), and poly(p-phenylene sulfide). By fixing the orientation of polymeric chains, this method can be further extended to predict the structures of complex linear polymers, such as all polymorphs of poly(vinylidene fluoride), nylon-6 and cellulose. The excellent agreement between predicted crystal structures and experimentally known structures points to a major role for this approach in the efficient design of future polymeric materials.

  1. A theoretical comparison of evolutionary algorithms and simulated annealing

    SciTech Connect

    Hart, W.E.

    1995-08-28

    This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.

  2. Evolutionary algorithm for metabolic pathways synthesis.

    PubMed

    Gerard, Matias F; Stegmayer, Georgina; Milone, Diego H

    2016-06-01

    Metabolic pathway building is an active field of research, necessary to understand and manipulate the metabolism of organisms. There are different approaches, mainly based on classical search methods, to find linear sequences of reactions linking two compounds. However, an important limitation of these methods is the exponential growth of the search tree when a large number of compounds and reactions is considered. Moreover, such models do not take into account all substrates for each reaction during the search, leading to solutions that in many cases lack biological feasibility. This work proposes a new evolutionary algorithm that allows searching not only linear, but also branched metabolic pathways, formed by feasible reactions that relate multiple compounds simultaneously. Tests performed using several sets of reactions show that this algorithm is able to find feasible linear and branched metabolic pathways.

  3. Performance Comparison Of Evolutionary Algorithms For Image Clustering

    NASA Astrophysics Data System (ADS)

    Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.

    2014-09-01

    Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions of a designed problem. Data clustering algorithms have been intensively used for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms in data clustering, their clustering performance has scarcely been studied using clustering validation indexes. In this paper, recently proposed evolutionary algorithms (i.e., the Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, FCM, SOM networks) have been used to cluster images, and their performances have been compared using four clustering validation indexes. Experimental results show that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.

  4. Barnacle cement: a polymerization model based on evolutionary concepts.

    PubMed

    Dickinson, Gary H; Vega, Irving E; Wahl, Kathryn J; Orihuela, Beatriz; Beyley, Veronica; Rodriguez, Eva N; Everett, Richard K; Bonaventura, Joseph; Rittschof, Daniel

    2009-11-01

    Enzymes and biochemical mechanisms essential to survival are under extreme selective pressure and are highly conserved through evolutionary time. We applied this evolutionary concept to barnacle cement polymerization, a process critical to barnacle fitness that involves aggregation and cross-linking of proteins. The biochemical mechanisms of cement polymerization remain largely unknown. We hypothesized that this process is biochemically similar to blood clotting, a critical physiological response that is also based on aggregation and cross-linking of proteins. Like key elements of vertebrate and invertebrate blood clotting, barnacle cement polymerization was shown to involve proteolytic activation of enzymes and structural precursors, transglutaminase cross-linking and assembly of fibrous proteins. Proteolytic activation of structural proteins maximizes the potential for bonding interactions with other proteins and with the surface. Transglutaminase cross-linking reinforces cement integrity. Remarkably, epitopes and sequences homologous to bovine trypsin and human transglutaminase were identified in barnacle cement with tandem mass spectrometry and/or western blotting. Akin to blood clotting, the peptides generated during proteolytic activation functioned as signal molecules, linking a molecular level event (protein aggregation) to a behavioral response (barnacle larval settlement). Our results draw attention to a highly conserved protein polymerization mechanism and shed light on a long-standing biochemical puzzle. We suggest that barnacle cement polymerization is a specialized form of wound healing. The polymerization mechanism common between barnacle cement and blood may be a theme for many marine animal glues.

  5. Wind farm optimization using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Ituarte-Villarreal, Carlos M.

    In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing the wind-farm design and siting, and in determining whether a project is economically feasible. In the present work, three (3) different optimization methods are proposed for the solution of the WFLO: (i) a modified Viral System Algorithm applied to the optimization of the proper location of the components in a wind farm to maximize the energy output given a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalization for the lack of system reliability. The viral system algorithm utilized in this research solves three (3) well-known problems in the wind-energy literature; (ii) a new multiple objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation and the objective functions considered are the maximization of power output, the minimization of wind farm cost and the maximization of system reliability. The final solution to this multiple objective problem is presented as a set of Pareto solutions; and (iii) a hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm with the introduction of the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource in order to account for the stochastic behavior of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a

  6. Evolutionary algorithm for vehicle driving cycle generation.

    PubMed

    Perhinschi, Mario G; Marlowe, Christopher; Tamayo, Sergio; Tu, Jun; Wayne, W Scott

    2011-09-01

    Modeling transit bus emissions and fuel economy requires a large amount of experimental data over wide ranges of operational conditions. Chassis dynamometer tests are typically performed using representative driving cycles defined based on vehicle instantaneous speed as sequences of "microtrips", which are intervals between consecutive vehicle stops. Overall significant parameters of the driving cycle, such as average speed, stops per mile, kinetic intensity, and others, are used as independent variables in the modeling process. Performing tests at all the necessary combinations of parameters is expensive and time consuming. In this paper, a methodology is proposed for building driving cycles at prescribed independent variable values using experimental data through the concatenation of "microtrips" isolated from a limited number of standard chassis dynamometer test cycles. The selection of the adequate "microtrips" is achieved through a customized evolutionary algorithm. The genetic representation uses microtrip definitions as genes. Specific mutation, crossover, and karyotype alteration operators have been defined. The Roulette-Wheel selection technique with elitist strategy drives the optimization process, which consists of minimizing the errors to desired overall cycle parameters. This utility is part of the Integrated Bus Information System developed at West Virginia University.
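
    A toy version of the microtrip-concatenation idea is sketched below. A chromosome is a list of microtrip indices; fitness is the relative error between the concatenated cycle's overall parameters and the prescribed targets. Using only two cycle parameters and truncation selection rather than the paper's Roulette-Wheel operator are our simplifications, as are all numbers.

        import random

        def evolve_cycle(microtrips, targets, pop=40, gens=200, genes=10, seed=0):
            rng = random.Random(seed)

            def cycle_params(chrom):
                dur = sum(microtrips[g][0] for g in chrom)     # seconds
                dist = sum(microtrips[g][1] for g in chrom)    # miles
                return 3600.0 * dist / dur, len(chrom) / dist  # avg mph, stops per mile

            def err(chrom):
                s, p = cycle_params(chrom)
                return abs(s - targets[0]) / targets[0] + abs(p - targets[1]) / targets[1]

            P = [[rng.randrange(len(microtrips)) for _ in range(genes)] for _ in range(pop)]
            for _ in range(gens):
                P.sort(key=err)
                survivors = P[: pop // 2]                      # truncation selection (simplified)
                children = []
                while len(survivors) + len(children) < pop:
                    a, b = rng.sample(survivors, 2)
                    cut = rng.randrange(1, genes)
                    child = a[:cut] + b[cut:]                  # one-point crossover on gene lists
                    if rng.random() < 0.2:                     # mutation: swap in a random microtrip
                        child[rng.randrange(genes)] = rng.randrange(len(microtrips))
                    children.append(child)
                P = survivors + children
            best = min(P, key=err)
            return best, err(best)

        trips = [(60, 0.2), (120, 0.8), (90, 0.5), (45, 0.1)]  # (duration s, distance mi)
        print(evolve_cycle(trips, targets=(20.0, 2.5)))        # target mph, stops/mile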

  7. Superelement model based parallel algorithm for vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Agrawal, O. P.; Danhof, K. J.; Kumar, R.

    1994-05-01

    This paper presents a superelement model based parallel algorithm for planar vehicle dynamics. The vehicle model is made up of a chassis and two suspension systems, each of which consists of an axle-wheel assembly and two trailing arms. In this model, the chassis is treated as a Cartesian element and each suspension system is treated as a superelement. The parameters associated with the superelements are computed using an inverse dynamics technique. Suspension shock absorbers and the tires are modeled by nonlinear springs and dampers. The Euler-Lagrange approach is used to develop the system equations of motion. This leads to a system of differential and algebraic equations in which the constraints internal to the superelements do not appear explicitly. The above formulation is implemented on a multiprocessor machine. The numerical flow chart is divided into modules and the computation of several modules is performed in parallel to gain computational efficiency. In this implementation, the master (parent processor) creates a pool of slaves (child processors) at the beginning of the program. The slaves remain in the pool until they are needed to perform certain tasks. Upon completion of a particular task, a slave returns to the pool. This improves the overall response time of the algorithm. The formulation presented is general, which makes it attractive for general purpose code development. Speedups obtained in the different modules of the dynamic analysis computation are also presented. Results show that the superelement model based parallel algorithm can significantly reduce the vehicle dynamics simulation time.

  8. A Review of Surrogate Assisted Multiobjective Evolutionary Algorithms

    PubMed Central

    Díaz-Manríquez, Alan; Toscano, Gregorio; Barron-Zambrano, Jose Hugo; Tello-Leal, Edgar

    2016-01-01

    Multiobjective evolutionary algorithms have incorporated surrogate models in order to reduce the number of required evaluations to approximate the Pareto front of computationally expensive multiobjective optimization problems. Currently, few works have reviewed the state of the art in this topic. However, the existing reviews have focused on classifying the evolutionary multiobjective optimization algorithms with respect to the type of underlying surrogate model. In this paper, we center our focus on classifying multiobjective evolutionary algorithms with respect to their integration with surrogate models. This interaction has led us to classify similar approaches and identify advantages and disadvantages of each class. PMID:27382366

  9. A Review of Surrogate Assisted Multiobjective Evolutionary Algorithms.

    PubMed

    Díaz-Manríquez, Alan; Toscano, Gregorio; Barron-Zambrano, Jose Hugo; Tello-Leal, Edgar

    2016-01-01

    Multiobjective evolutionary algorithms have incorporated surrogate models in order to reduce the number of required evaluations to approximate the Pareto front of computationally expensive multiobjective optimization problems. Currently, few works have reviewed the state of the art in this topic. However, the existing reviews have focused on classifying the evolutionary multiobjective optimization algorithms with respect to the type of underlying surrogate model. In this paper, we center our focus on classifying multiobjective evolutionary algorithms with respect to their integration with surrogate models. This interaction has led us to classify similar approaches and identify advantages and disadvantages of each class.

  10. Comparing Evolutionary Strategies on a Biobjective Cultural Algorithm

    PubMed Central

    Lagos, Carolina; Crawford, Broderick; Cabrera, Enrique; Rubio, José-Miguel; Paredes, Fernando

    2014-01-01

    Evolutionary algorithms have been widely used to solve large and complex optimisation problems. Cultural algorithms (CAs) are evolutionary algorithms that have been used to solve both single and, to a lesser extent, multiobjective optimisation problems. In order to solve these optimisation problems, CAs make use of different strategies such as normative knowledge, historical knowledge, and circumstantial knowledge, among others. In this paper we present a comparison among CAs that make use of different evolutionary strategies; the first one implements historical knowledge, the second one considers circumstantial knowledge, and the third one implements normative knowledge. These CAs are applied on a biobjective uncapacitated facility location problem (BOUFLP), the biobjective version of the well-known uncapacitated facility location problem. To the best of our knowledge, only a few articles have applied evolutionary multiobjective algorithms to the BOUFLP, and none of those has focused on the impact of the evolutionary strategy on the algorithm performance. Our biobjective cultural algorithm, called BOCA, obtains important improvements when compared to other well-known evolutionary biobjective optimisation algorithms such as PAES and NSGA-II. The conflicting objective functions considered in this study are cost minimisation and coverage maximisation. Solutions obtained by each algorithm are compared using the hypervolume S metric. PMID:25254257
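
    For a biobjective minimisation problem, the hypervolume S metric used in the comparison reduces to summing rectangles. A minimal sketch, assuming a nondominated front and a reference point that is worse in both objectives:

        def hypervolume_2d(front, ref):
            pts = sorted(front)            # ascending in f1, so f2 descends on a true front
            hv, prev_f2 = 0.0, ref[1]
            for f1, f2 in pts:
                hv += (ref[0] - f1) * (prev_f2 - f2)   # rectangular slab dominated by this point
                prev_f2 = f2
            return hv

        # usage: three trade-off points against reference point (1, 1)
        print(hypervolume_2d([(0.1, 0.8), (0.4, 0.4), (0.8, 0.1)], ref=(1.0, 1.0)))  # 0.48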

  11. A nonlinear regression model-based predictive control algorithm.

    PubMed

    Dubay, R; Abu-Ayyad, M; Hernandez, J M

    2009-04-01

    This paper presents a unique approach for designing a nonlinear regression model-based predictive controller (NRPC) for single-input-single-output (SISO) and multi-input-multi-output (MIMO) processes that are common in industrial applications. The innovation of this strategy is that the controller structure allows nonlinear open-loop modeling to be conducted while closed-loop control is executed every sampling instant. Consequently, the system matrix is regenerated every sampling instant using a continuous function providing a more accurate prediction of the plant. Computer simulations are carried out on nonlinear plants, demonstrating that the new approach is easily implemented and provides tight control. Also, the proposed algorithm is implemented on two real time SISO applications; a DC motor, a plastic injection molding machine and a nonlinear MIMO thermal system comprising three temperature zones to be controlled with interacting effects. The experimental closed-loop responses of the proposed algorithm were compared to a multi-model dynamic matrix controller (MPC) with improved results for various set point trajectories. Good disturbance rejection was attained, resulting in improved tracking of multi-set point profiles in comparison to multi-model MPC.

  12. Development of antibiotic regimens using graph based evolutionary algorithms.

    PubMed

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

    This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use and to reduce the risk of spreading antibiotic resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram positive and Gram negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimens. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems.
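
    The core mechanism, restricting mating to neighbours on a graph to control information flow, can be shown in a few lines. The sketch below uses a ring graph and single-objective minimisation for brevity; it is not the authors' multi-objective setup, and all parameters are illustrative.

        import random

        def graph_based_ea(f, n=30, dim=8, gens=2000, seed=0):
            rng = random.Random(seed)
            pop = [[rng.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(n)]
            fit = [f(x) for x in pop]
            for _ in range(gens):
                v = rng.randrange(n)
                u = rng.choice([(v - 1) % n, (v + 1) % n])   # mate only along ring edges
                cut = rng.randrange(1, dim)
                child = pop[v][:cut] + pop[u][cut:]          # crossover with a graph neighbour
                i = rng.randrange(dim)
                child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0.0, 0.05)))
                fc = f(child)
                if fc < fit[v]:                              # local replacement at vertex v
                    pop[v], fit[v] = child, fc
            return min(fit)

        print(graph_based_ea(lambda x: sum((t - 0.3) ** 2 for t in x)))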

  13. A Hybrid Evolutionary Algorithm for Wheat Blending Problem

    PubMed Central

    Bonyadi, Mohammad Reza; Michalewicz, Zbigniew; Barone, Luigi

    2014-01-01

    This paper presents a hybrid evolutionary algorithm to deal with the wheat blending problem. The unique constraints of this problem make many existing algorithms fail: either they do not generate acceptable results or they are not able to complete optimization within the required time. The proposed algorithm starts with a filtering process that follows predefined rules to reduce the search space. Then the linear-relaxed version of the problem is solved using a standard linear programming algorithm. The result is used in conjunction with a solution generated by a heuristic method to generate an initial solution. After that, a hybrid of an evolutionary algorithm, a heuristic method, and a linear programming solver is used to improve the quality of the solution. A local search based posttuning method is also incorporated into the algorithm. The proposed algorithm has been tested on artificial test cases and also real data from past years. Results show that the algorithm is able to find quality results in all cases and outperforms the existing method in terms of both quality and speed. PMID:24707222

  14. Comparison of evolutionary algorithms for LPDA antenna optimization

    NASA Astrophysics Data System (ADS)

    Lazaridis, Pavlos I.; Tziris, Emmanouil N.; Zaharis, Zaharias D.; Xenos, Thomas D.; Cosmas, John P.; Gallion, Philippe B.; Holmes, Violeta; Glover, Ian A.

    2016-08-01

    A novel approach to broadband log-periodic antenna design is presented, where some of the most powerful evolutionary algorithms are applied and compared for the optimal design of wire log-periodic dipole arrays (LPDA) using Numerical Electromagnetics Code. The target is to achieve an optimal antenna design with respect to maximum gain, gain flatness, front-to-rear ratio (F/R) and standing wave ratio. The parameters of the LPDA optimized are the dipole lengths, the spacing between the dipoles, and the dipole wire diameters. The evolutionary algorithms compared are the Differential Evolution (DE), Particle Swarm (PSO), Taguchi, Invasive Weed (IWO), and Adaptive Invasive Weed Optimization (ADIWO). Superior performance is achieved by the IWO (best results) and PSO (fast convergence) algorithms.

  15. A survey on evolutionary algorithm based hybrid intelligence in bioinformatics.

    PubMed

    Li, Shan; Kang, Liying; Zhao, Xing-Ming

    2014-01-01

    With the rapid advance in genomics, proteomics, metabolomics, and other types of omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. It is becoming a big challenge for the bioinformatists to analyze and interpret these data with conventional intelligent techniques, for example, support vector machines. Recently, the hybrid intelligent methods, which integrate several standard intelligent approaches, are becoming more and more popular due to their robustness and efficiency. Specifically, the hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields due to the efficiency and robustness of EAs. In this review, we give an introduction about the applications of hybrid intelligent methods, in particular those based on evolutionary algorithm, in bioinformatics. In particular, we focus on their applications to three common problems that arise in bioinformatics, that is, feature selection, parameter estimation, and reconstruction of biological networks.

  16. A Survey on Evolutionary Algorithm Based Hybrid Intelligence in Bioinformatics

    PubMed Central

    Li, Shan; Zhao, Xing-Ming

    2014-01-01

    With the rapid advance in genomics, proteomics, metabolomics, and other types of omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. It is becoming a big challenge for the bioinformatists to analyze and interpret these data with conventional intelligent techniques, for example, support vector machines. Recently, the hybrid intelligent methods, which integrate several standard intelligent approaches, are becoming more and more popular due to their robustness and efficiency. Specifically, the hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields due to the efficiency and robustness of EAs. In this review, we give an introduction about the applications of hybrid intelligent methods, in particular those based on evolutionary algorithm, in bioinformatics. In particular, we focus on their applications to three common problems that arise in bioinformatics, that is, feature selection, parameter estimation, and reconstruction of biological networks. PMID:24729969

  17. Evolutionary algorithm for optimization of nonimaging Fresnel lens geometry.

    PubMed

    Yamada, N; Nishikawa, T

    2010-06-21

    In this study, an evolutionary algorithm (EA), which consists of genetic and immune algorithms, is introduced to design the optical geometry of a nonimaging Fresnel lens; this lens generates the uniform flux concentration required for a photovoltaic cell. Herein, a design procedure that incorporates a ray-tracing technique in the EA is described, and the validity of the design is demonstrated. The results show that the EA automatically generated a unique geometry of the Fresnel lens; the use of this geometry resulted in better uniform flux concentration with high optical efficiency.

  18. Supervised and unsupervised discretization methods for evolutionary algorithms

    SciTech Connect

    Cantu-Paz, E

    2001-01-24

    This paper introduces simple model-building evolutionary algorithms (EAs) that operate on continuous domains. The algorithms are based on supervised and unsupervised discretization methods that have been used as preprocessing steps in machine learning. The basic idea is to discretize the continuous variables and use the discretization as a simple model of the solutions under consideration. The model is then used to generate new solutions directly, instead of using the usual operators based on sexual recombination and mutation. The algorithms presented here have fewer parameters than traditional and other model-building EAs. We expect the proposed algorithms that use multivariate models to scale up with the dimensionality of the problem better than existing EAs.
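
    A univariate version of this model-building idea fits in a short sketch: discretise each variable, histogram the bins used by the selected solutions, and sample new solutions from those frequencies instead of applying recombination and mutation. Uniform binning on [0, 1] and all parameter values are our simplifications.

        import random

        def discretization_eda(f, dim=4, bins=8, pop=60, keep=20, gens=40, seed=0):
            rng = random.Random(seed)
            P = [[rng.random() for _ in range(dim)] for _ in range(pop)]
            for _ in range(gens):
                P.sort(key=f)
                elite = P[:keep]                              # selected solutions
                # model: per-variable bin frequencies of the elite (with +1 smoothing)
                model = [[1 + sum(int(x[d] * bins) == b for x in elite) for b in range(bins)]
                         for d in range(dim)]
                # sample new solutions directly from the model (no crossover or mutation)
                P = elite + [[(rng.choices(range(bins), weights=model[d])[0] + rng.random()) / bins
                              for d in range(dim)]
                             for _ in range(pop - keep)]
            return min(P, key=f)

        print(discretization_eda(lambda x: sum((t - 0.7) ** 2 for t in x)))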

  19. Evolutionary algorithms applied to reliable communication network design

    NASA Astrophysics Data System (ADS)

    Nesmachnow, Sergio; Cancela, Hector; Alba, Enrique

    2007-10-01

    Several evolutionary algorithms (EAs) applied to a wide class of communication network design problems modelled under the generalized Steiner problem (GSP) are evaluated. In order to provide a fault-tolerant design, a solution to this problem consists of a preset number of independent paths linking each pair of potentially communicating terminal nodes. This usually requires considering intermediate non-terminal nodes (Steiner nodes), which are used to ensure path redundancy, while trying to minimize the overall cost. The GSP is an NP-hard problem for which few algorithms have been proposed. This article presents a comparative study of pure and hybrid EAs applied to the GSP, codified over MALLBA, a general purpose library for combinatorial optimization. The algorithms were tested on several GSPs, and asset efficient numerical results are reported for both serial and distributed models of the evaluated algorithms.

  20. Efficient and scalable Pareto optimization by evolutionary local selection algorithms.

    PubMed

    Menczer, F; Degeratu, M; Street, W N

    2000-01-01

    Local selection is a simple selection scheme in evolutionary computation. Individual fitnesses are accumulated over time and compared to a fixed threshold, rather than to each other, to decide who gets to reproduce. Local selection, coupled with fitness functions stemming from the consumption of finite shared environmental resources, maintains diversity in a way similar to fitness sharing. However, it is more efficient than fitness sharing and lends itself to parallel implementations for distributed tasks. While local selection is not prone to premature convergence, it applies minimal selection pressure to the population. Local selection is, therefore, particularly suited to Pareto optimization or problem classes where diverse solutions must be covered. This paper introduces ELSA, an evolutionary algorithm employing local selection and outlines three experiments in which ELSA is applied to multiobjective problems: a multimodal graph search problem, and two Pareto optimization problems. In all these experiments, ELSA significantly outperforms other well-known evolutionary algorithms. The paper also discusses scalability, parameter dependence, and the potential distributed applications of the algorithm.
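
    The local selection scheme itself is only a few lines: each agent banks fitness drawn from a shared, finite resource, pays a constant existence cost, reproduces when its bank crosses a fixed threshold, and is never compared with any other agent. The sketch below is our single-variable reading of the idea, with resource sharing approximated by splitting fitness among agents in the same niche bin; all names and values are illustrative.

        import random
        from collections import Counter

        def local_selection(raw_fitness, n0=20, threshold=1.0, cost=0.2,
                            steps=300, bin_width=0.5, seed=0):
            rng = random.Random(seed)
            agents = [{"x": rng.uniform(-5.0, 5.0), "e": 0.5} for _ in range(n0)]
            for _ in range(steps):
                bins = Counter(round(a["x"] / bin_width) for a in agents)
                for a in list(agents):
                    a["x"] += rng.gauss(0.0, 0.1)                 # local variation
                    share = bins[round(a["x"] / bin_width)] or 1  # crowded niches yield less
                    a["e"] += raw_fitness(a["x"]) / share - cost
                    if a["e"] >= threshold:                       # threshold test, not a tournament
                        a["e"] /= 2.0
                        agents.append({"x": a["x"], "e": a["e"]})
                    elif a["e"] <= 0.0:
                        agents.remove(a)
                if not agents:
                    break
            return agents

        pop = local_selection(lambda x: 2.0 / (1.0 + x * x))
        print(len(pop))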

  1. Evolutionary Algorithms for Boolean Functions in Diverse Domains of Cryptography.

    PubMed

    Picek, Stjepan; Carlet, Claude; Guilley, Sylvain; Miller, Julian F; Jakobovic, Domagoj

    2016-01-01

    The role of Boolean functions is prominent in several areas including cryptography, sequences, and coding theory. Therefore, various methods for the construction of Boolean functions with desired properties are of direct interest. New motivations on the role of Boolean functions in cryptography with attendant new properties have emerged over the years. There are still many combinations of design criteria left unexplored and in this matter evolutionary computation can play a distinct role. This article concentrates on two scenarios for the use of Boolean functions in cryptography. The first uses Boolean functions as the source of the nonlinearity in filter and combiner generators. Although relatively well explored using evolutionary algorithms, it still presents an interesting goal in terms of the practical sizes of Boolean functions. The second scenario appeared rather recently where the objective is to find Boolean functions that have various orders of the correlation immunity and minimal Hamming weight. In both these scenarios we see that evolutionary algorithms are able to find high-quality solutions where genetic programming performs the best.

  2. Fast stochastic algorithm for simulating evolutionary population dynamics

    NASA Astrophysics Data System (ADS)

    Tsimring, Lev; Hasty, Jeff; Mather, William

    2012-02-01

    Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
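
    For reference, the baseline the new method is compared against, the direct stochastic simulation algorithm, looks as follows for a single birth/death/mutation population; its cost of one iteration per event is exactly what makes large populations with rare mutations slow. The rates and the single-population setting are illustrative, not the authors' model.

        import random

        def direct_ssa(n0=1000, birth=1.0, death=0.9, mu=1e-4, t_end=5.0, seed=0):
            rng = random.Random(seed)
            t, n, mutants = 0.0, n0, 0
            while t < t_end and n > 0:
                rates = [birth * n, death * n, mu * n]   # event propensities
                total = sum(rates)
                t += rng.expovariate(total)              # waiting time to the next event
                u, pick = rng.random() * total, 0
                while u > rates[pick]:                   # choose the event proportionally
                    u -= rates[pick]
                    pick += 1
                if pick == 0:
                    n += 1                               # birth
                elif pick == 1:
                    n -= 1                               # death
                else:
                    mutants += 1                         # mutation
            return t, n, mutants

        print(direct_ssa())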

  3. [A new algorithm for NIR modeling based on manifold learning].

    PubMed

    Hong, Ming-Jian; Wen, Zhi-Yu; Zhang, Xiao-Hong; Wen, Quan

    2009-07-01

    Manifold learning is a new kind of algorithm originating from the field of machine learning to find the intrinsic dimensionality of numerous and complex data and to extract the most important information from the raw data to develop a regression or classification model. The basic assumption of manifold learning is that high-dimensional data measured from the same object using some device must reside on a manifold of much lower dimension determined by a few properties of the object. Since NIR spectra are characterized by their high dimensions and complicated band assignment, the authors assume that the NIR spectra of the same kind of substance with different chemical concentrations should reside on a manifold of much lower dimension determined by the concentrations, according to the above assumption. As one of the best known algorithms of manifold learning, locally linear embedding (LLE) further assumes that the underlying manifold is locally linear, so every data point in the manifold should be a linear combination of its neighbors. Based on the above assumptions, the present paper proposes a new algorithm named least squares locally weighted regression (LS-LWR), which is a kind of LWR with weights determined by least squares instead of a predefined function. The NIR spectra of glucose solutions with various concentrations are measured using a NIR spectrometer and LS-LWR is verified by predicting the concentrations of the glucose solutions quantitatively. Compared with existing algorithms such as principal component regression (PCR) and partial least squares regression (PLSR), LS-LWR has better predictability, as measured by the standard error of prediction (SEP), and generates an elegant model with good stability and efficiency.
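
    The local-modeling step can be illustrated with plain locally weighted regression. The sketch below fits a small affine model on the k spectra nearest to the query; note that LS-LWR derives its weights by least squares, whereas this stand-in simply uses the unweighted nearest neighbours. The data are a synthetic one-feature substitute for real spectra.

        import numpy as np

        def lwr_predict(X, y, x_query, k=10):
            d = np.linalg.norm(X - x_query, axis=1)       # distances to the query spectrum
            idx = np.argsort(d)[:k]                       # local, LLE-style neighbourhood
            Xn = np.hstack([X[idx], np.ones((k, 1))])     # affine local model
            beta, *_ = np.linalg.lstsq(Xn, y[idx], rcond=None)
            return np.append(x_query, 1.0) @ beta

        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 10.0, (200, 1))              # stand-in "spectra"
        y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
        print(lwr_predict(X, y, np.array([3.0])))         # ~ sin(3)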

  4. The algorithmic anatomy of model-based evaluation.

    PubMed

    Daw, Nathaniel D; Dayan, Peter

    2014-11-05

    Despite many debates in the first half of the twentieth century, it is now largely a truism that humans and other animals build models of their environments and use them for prediction and control. However, model-based (MB) reasoning presents severe computational challenges. Alternative, computationally simpler, model-free (MF) schemes have been suggested in the reinforcement learning literature, and have afforded influential accounts of behavioural and neural data. Here, we study the realization of MB calculations, and the ways that this might be woven together with MF values and evaluation methods. There are as yet mostly only hints in the literature as to the resulting tapestry, so we offer more preview than review.

  5. Using Entropy for Parameter Analysis of Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Smit, Selmar K.; Eiben, Agoston E.

    Evolutionary algorithms (EA) form a rich class of stochastic search methods that share the basic principles of incrementally improving the quality of a set of candidate solutions by means of variation and selection (Eiben and Smith 2003, De Jong 2006). Such variation and selection operators often require parameters to be specified. Finding a good set of parameter values is a nontrivial problem in itself. Furthermore, some EA parameters are more relevant than others in the sense that choosing different values for them affects EA performance more than for the other parameters. In this chapter we explain the notion of entropy and discuss how entropy can disclose important information about EA parameters, in particular, about their relevance. We describe an algorithm that is able to estimate the entropy of EA parameters and we present a case study, based on extensive experimentation, to demonstrate the usefulness of this approach and some interesting insights that are gained.
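
    A minimal entropy estimate for one EA parameter can be computed by discretising the sampled parameter values and applying Shannon's formula; low entropy (values concentrated in few bins among, say, well-performing runs) flags a relevant parameter, while near-maximal entropy suggests an irrelevant one. The binning scheme here is our own, not the chapter's exact estimator.

        import math
        from collections import Counter

        def shannon_entropy(samples, bins=10, lo=0.0, hi=1.0):
            width = (hi - lo) / bins
            counts = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
            n = len(samples)
            return -sum((c / n) * math.log2(c / n) for c in counts.values())

        # a concentrated (relevant) parameter vs. a near-uniform (irrelevant) one
        print(shannon_entropy([0.31, 0.30, 0.29, 0.32, 0.30, 0.31]))  # low entropy
        print(shannon_entropy([i / 60 for i in range(60)]))           # close to log2(10)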

  6. Maximizing Submodular Functions under Matroid Constraints by Evolutionary Algorithms.

    PubMed

    Friedrich, Tobias; Neumann, Frank

    2015-01-01

    Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function f under a given set of constraints. In this paper, we investigate the runtime of a simple single objective evolutionary algorithm called (1 + 1) EA and a multiobjective evolutionary algorithm called GSEMO until they have obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints, we show that the GSEMO achieves a (1 - 1/e)-approximation in expected polynomial time. For the case of monotone functions where the constraints are given by the intersection of k ≥ 2 matroids, we show that the (1 + 1) EA achieves a 1/(k + δ)-approximation in expected polynomial time for any constant δ > 0. Turning to nonmonotone symmetric submodular functions with k ≥ 1 matroid intersection constraints, we show that the GSEMO achieves a 1/((k + 2)(1 + ε))-approximation in expected time O(n^(k+6) log(n)/ε).
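
    The (1 + 1) EA analysed here is simple to state: keep a single bit string, flip each bit independently with probability 1/n, and accept the offspring if it is at least as good. The sketch below runs it on a toy coverage-style objective without the constraint handling that the paper's analysis does include.

        import random

        def one_plus_one_ea(f, n, iters=2000, seed=0):
            rng = random.Random(seed)
            x = [rng.randrange(2) for _ in range(n)]
            fx = f(x)
            for _ in range(iters):
                y = [b ^ (rng.random() < 1.0 / n) for b in x]   # flip each bit w.p. 1/n
                fy = f(y)
                if fy >= fx:                                    # accept if not worse
                    x, fx = y, fy
            return x, fx

        # toy coverage objective: maximise the number of elements covered by chosen sets
        sets = [{0, 1}, {1, 2}, {2, 3}, {3, 4}, {0, 4}]
        cover = lambda bits: len(set().union(*(s for s, b in zip(sets, bits) if b)))
        print(one_plus_one_ea(cover, n=5))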

  7. Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2001-01-01

    A design optimization method for turbopumps of cryogenic rocket engines has been developed. Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimizations. Performances of design candidates are evaluated by using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, a single stage centrifugal pump design and multistage pump design optimizations are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.

  8. Neural-Network-Biased Genetic Algorithms for Materials Design: Evolutionary Algorithms That Learn.

    PubMed

    Patra, Tarak K; Meenakshisundaram, Venkatesh; Hung, Jui-Hsiang; Simmons, David S

    2017-02-13

    Machine learning has the potential to dramatically accelerate high-throughput approaches to materials design, as demonstrated by successes in biomolecular design and hard materials design. However, in the search for new soft materials exhibiting properties and performance beyond those previously achieved, machine learning approaches are frequently limited by two shortcomings. First, because they are intrinsically interpolative, they are better suited to the optimization of properties within the known range of accessible behavior than to the discovery of new materials with extremal behavior. Second, they require large pre-existing data sets, which are frequently unavailable and prohibitively expensive to produce. Here we describe a new strategy, the neural-network-biased genetic algorithm (NBGA), for combining genetic algorithms, machine learning, and high-throughput computation or experiment to discover materials with extremal properties in the absence of pre-existing data. Within this strategy, predictions from a progressively constructed artificial neural network are employed to bias the evolution of a genetic algorithm, with fitness evaluations performed via direct simulation or experiment. In effect, this strategy gives the evolutionary algorithm the ability to "learn" and draw inferences from its experience to accelerate the evolutionary process. We test this algorithm against several standard optimization problems and polymer design problems and demonstrate that it matches and typically exceeds the efficiency and reproducibility of standard approaches including a direct-evaluation genetic algorithm and a neural-network-evaluated genetic algorithm. The success of this algorithm in a range of test problems indicates that the NBGA provides a robust strategy for employing informatics-accelerated high-throughput methods to accelerate materials design in the absence of pre-existing data.
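
    The abstract does not give the NBGA's network architecture or bias mechanism, but the overall loop it describes can be sketched. Below, a cheap surrogate (a k-nearest-neighbor regressor standing in for the progressively trained neural network) pre-screens an oversized batch of offspring, and only the most promising candidates receive the "expensive" fitness evaluation; all names and parameters are hypothetical.

```python
import random

def expensive_fitness(x):           # stand-in for simulation or experiment
    return -sum((xi - 0.7) ** 2 for xi in x)

def surrogate(x, archive):          # k-NN regressor standing in for the ANN
    near = sorted(archive,
                  key=lambda a: sum((u - v) ** 2 for u, v in zip(a[0], x)))[:3]
    return sum(f for _, f in near) / len(near)

def nbga(dim=5, pop=20, gens=30, screen_factor=4, seed=3):
    rng = random.Random(seed)
    population = [[rng.random() for _ in range(dim)] for _ in range(pop)]
    archive = [(x, expensive_fitness(x)) for x in population]
    for _ in range(gens):
        # Generate more offspring than we can afford to evaluate...
        offspring = []
        for _ in range(pop * screen_factor):
            a, b = rng.sample(population, 2)
            offspring.append([(u + v) / 2 + rng.gauss(0, 0.05)
                              for u, v in zip(a, b)])
        # ...then let the surrogate bias which ones get real evaluations.
        offspring.sort(key=lambda x: surrogate(x, archive), reverse=True)
        archive.extend((x, expensive_fitness(x)) for x in offspring[:pop])
        population = [x for x, _ in
                      sorted(archive, key=lambda a: a[1], reverse=True)[:pop]]
    return max(archive, key=lambda a: a[1])

best, fit = nbga()
print(fit)  # approaches 0 as genes approach 0.7
```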

  9. A hierarchical evolutionary algorithm for multiobjective optimization in IMRT

    PubMed Central

    Holdsworth, Clay; Kim, Minsun; Liao, Jay; Phillips, Mark H.

    2010-01-01

    Purpose: The current inverse planning methods for intensity modulated radiation therapy (IMRT) are limited because they are not designed to explore the trade-offs between the competing objectives of tumor and normal tissues. The goal was to develop an efficient multiobjective optimization algorithm that was flexible enough to handle any form of objective function and that resulted in a set of Pareto optimal plans. Methods: A hierarchical evolutionary multiobjective algorithm designed to quickly generate a small diverse Pareto optimal set of IMRT plans that meet all clinical constraints and reflect the optimal trade-offs in any radiation therapy plan was developed. The top level of the hierarchical algorithm is a multiobjective evolutionary algorithm (MOEA). The genes of the individuals generated in the MOEA are the parameters that define the penalty function minimized during an accelerated deterministic IMRT optimization that represents the bottom level of the hierarchy. The MOEA incorporates clinical criteria to restrict the search space through protocol objectives and then uses Pareto optimality among the fitness objectives to select individuals. The population size is not fixed, but a specialized niche effect, domination advantage, is used to control the population and plan diversity. The number of fitness objectives is kept to a minimum for greater selective pressure, but the number of genes is expanded for flexibility that allows a better approximation of the Pareto front. Results: The MOEA improvements were evaluated for two example prostate cases with one target and two organs at risk (OARs). The population of plans generated by the modified MOEA was closer to the Pareto front than populations of plans generated using a standard genetic algorithm package. Statistical significance of the method was established by compiling the results of 25 multiobjective optimizations using each method. From these sets of 12–15 plans, any random plan selected from a MOEA

  10. Performance Analysis of Evolutionary Algorithms for Steiner Tree Problems.

    PubMed

    Lai, Xinsheng; Zhou, Yuren; Xia, Xiaoyun; Zhang, Qingfu

    2016-12-13

    The Steiner tree problem (STP) aims to determine some Steiner nodes such that the minimum spanning tree over these Steiner nodes and a given set of special nodes has the minimum weight, which is NP-hard. STP includes several important cases. The Steiner tree problem in graphs (GSTP) is one of them. Many heuristics have been proposed for STP, and some of them have proved to be performance guarantee approximation algorithms for this problem. Since evolutionary algorithms (EAs) are general and popular randomized heuristics, it is significant to investigate the performance of EAs for STP. Several empirical investigations have shown that EAs are efficient for STP. However, up to now, there is no theoretical work on the performance of EAs for STP. In this paper, we reveal that the (1 + 1) EA achieves 3/2-approximation ratio for STP in a special class of quasi-bipartite graphs in expected runtime O(r(r + s - 1) ⋅ wmax), where r, s and wmax are respectively the number of Steiner nodes, the number of special nodes and the largest weight among all edges in the input graph. We also show that the (1 + 1) EA is better than two other heuristics on two GSTP instances, and the (1 + 1) EA may be inefficient on a constructed GSTP instance.

  11. Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project; more specifically, he worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system combining a GA with a surrogate model. In the second stage (July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization as a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.

  12. The hierarchical fair competition (HFC) framework for sustainable evolutionary algorithms.

    PubMed

    Hu, Jianjun; Goodman, Erik; Seo, Kisung; Fan, Zhun; Rosenberg, Ronald

    2005-01-01

    Many current Evolutionary Algorithms (EAs) suffer from a tendency to converge prematurely or stagnate without progress for complex problems. This may be due to the loss of or failure to discover certain valuable genetic material or the loss of the capability to discover new genetic material before convergence has limited the algorithm's ability to search widely. In this paper, the Hierarchical Fair Competition (HFC) model, including several variants, is proposed as a generic framework for sustainable evolutionary search by transforming the convergent nature of the current EA framework into a non-convergent search process. That is, the structure of HFC does not allow the convergence of the population to the vicinity of any set of optimal or locally optimal solutions. The sustainable search capability of HFC is achieved by ensuring a continuous supply and the incorporation of genetic material in a hierarchical manner, and by culturing and maintaining, but continually renewing, populations of individuals of intermediate fitness levels. HFC employs an assembly-line structure in which subpopulations are hierarchically organized into different fitness levels, reducing the selection pressure within each subpopulation while maintaining the global selection pressure to help ensure the exploitation of the good genetic material found. Three EAs based on the HFC principle are tested - two on the even-10-parity genetic programming benchmark problem and a real-world analog circuit synthesis problem, and another on the HIFF genetic algorithm (GA) benchmark problem. The significant gain in robustness, scalability and efficiency by HFC, with little additional computing effort, and its tolerance of small population sizes, demonstrates its effectiveness on these problems and shows promise of its potential for improving other existing EAs for difficult problems. A paradigm shift from that of most EAs is proposed: rather than trying to escape from local optima or delay convergence at a
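
    A minimal sketch of the HFC idea follows: subpopulations are stratified by fitness-level admission thresholds, offspring migrate to the level their fitness qualifies them for, and the bottom level is continually re-seeded with random individuals to keep new genetic material flowing in. The thresholds, capacities, and toy objective are illustrative assumptions, not the paper's settings.

```python
import random

def fitness(x):                       # toy objective: maximize (optimum 0)
    return -sum(xi * xi for xi in x)

def hfc_step(levels, thresholds, rng, dim=10):
    """One generation of a minimal HFC: breed within levels, admit each
    offspring to the highest level whose threshold its fitness meets,
    and re-seed the bottom level with fresh random individuals."""
    for i in range(len(levels)):
        offspring = []
        for _ in range(len(levels[i])):
            a, b = rng.sample(levels[i], 2)
            offspring.append([(u + v) / 2 + rng.gauss(0, 0.1)
                              for u, v in zip(a, b)])
        for child in offspring:
            f = fitness(child)
            dest = max(j for j in range(len(levels)) if f >= thresholds[j])
            levels[dest].append(child)
        # Truncate this level back to capacity by fitness.
        levels[i] = sorted(levels[i], key=fitness, reverse=True)[:20]
    # Continuous supply of raw genetic material at the bottom.
    levels[0] = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(20)]
    return levels

rng = random.Random(4)
thresholds = [-float("inf"), -5.0, -1.0, -0.1]   # admission thresholds
levels = [[[rng.uniform(-2, 2) for _ in range(10)] for _ in range(20)]
          for _ in thresholds]
for _ in range(50):
    levels = hfc_step(levels, thresholds, rng)
print(max(map(fitness, levels[-1])))             # best of the top level
```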

  13. An Allele Real-Coded Quantum Evolutionary Algorithm Based on Hybrid Updating Strategy.

    PubMed

    Zhang, Yu-Xian; Qian, Xiao-Yi; Peng, Hui-Deng; Wang, Jian-Hui

    2016-01-01

    To improve the convergence rate and prevent premature convergence in quantum evolutionary algorithms, an allele real-coded quantum evolutionary algorithm based on a hybrid updating strategy is presented. The real variables are coded with a probability superposition of alleles. A hybrid updating strategy balancing global search and local search is presented, in which the superior allele is defined. On the basis of the superior allele and the inferior allele, a guided evolutionary process as well as allele updating with variable-scale contraction is adopted, and an Hε gate is introduced to prevent prematurity. Furthermore, the global convergence of the proposed algorithm is proved via Markov chains. Finally, the proposed algorithm is compared with a genetic algorithm, a quantum evolutionary algorithm, and a double-chain quantum genetic algorithm in solving continuous optimization problems, and the experimental results verify its advantages in convergence rate and search accuracy.

  14. Weapon Release Scheduling from Multiple-Bay Aircraft using Multi-Objective Evolutionary Algorithms

    DTIC Science & Technology

    2005-03-01

    Thesis presented to the Faculty, Department of Electrical and Computer Engineering, Graduate School.

  15. GX-Means: A model-based divide and merge algorithm for geospatial image clustering

    SciTech Connect

    Vatsavai, Raju; Symons, Christopher T; Chandola, Varun; Jun, Goo

    2011-01-01

    One of the practical issues in clustering is the specification of the appropriate number of clusters, which is not obvious when analyzing geospatial datasets, partly because they are huge (both in size and spatial extent) and high dimensional. In this paper we present a computationally efficient model-based split and merge clustering algorithm that incrementally finds model parameters and the number of clusters. Additionally, we attempt to provide insights into this problem and other data mining challenges that are encountered when clustering geospatial data. The basic algorithm we present is similar to the G-means and X-means algorithms; however, our proposed approach avoids certain limitations of these well-known clustering algorithms that are pertinent when dealing with geospatial data. We compare the performance of our approach with the G-means and X-means algorithms. Experimental evaluation on simulated data and on multispectral and hyperspectral remotely sensed image data demonstrates the effectiveness of our algorithm.
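
    The split decision at the heart of such divide-and-merge methods can be illustrated with a model-selection test: fit one Gaussian and two Gaussians to a cluster's points and split when the two-component model wins on BIC. This is an illustration in the spirit of G-/X-means, not the GX-means criterion itself; it assumes scikit-learn is available.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def should_split(points, seed=0):
    """Model-selection test: does a 2-component Gaussian mixture explain
    this cluster's points better (lower BIC) than a single Gaussian?"""
    one = GaussianMixture(n_components=1, random_state=seed).fit(points)
    two = GaussianMixture(n_components=2, random_state=seed).fit(points)
    return two.bic(points) < one.bic(points)

rng = np.random.default_rng(5)
blob = rng.normal(0, 1, size=(300, 2))                         # one cluster
pair = np.vstack([rng.normal(-3, 1, (150, 2)),                 # two clusters
                  rng.normal(3, 1, (150, 2))])
print(should_split(blob), should_split(pair))                  # False True
```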

  16. Hybrid Tuning of an Evolutionary Algorithm for Sensor Allocation

    DTIC Science & Technology

    2011-06-01

    Presented at the 2011 IEEE Conference on Evolutionary Computation, 5-8 June 2011, New Orleans, LA. The work applies evolutionary computation to sensor allocation, tuning objectives (i.e., metrics) through multi-objective optimization, with the capability to address non-linear classes of optimization problems.


  17. A multiagent evolutionary algorithm for constraint satisfaction problems.

    PubMed

    Liu, Jing; Zhong, Weicai; Jiao, Licheng

    2006-02-01

    With the intrinsic properties of constraint satisfaction problems (CSPs) in mind, we divide CSPs into two types, namely, permutation CSPs and nonpermutation CSPs. According to their characteristics, several behaviors are designed for agents by making use of the ability of agents to sense and act on the environment. These behaviors are controlled by means of evolution, so that the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs) results. To overcome the disadvantages of general encoding methods, a minimum conflict encoding is also proposed. Theoretical analyses show that MAEA-CSPs has linear space complexity and converges to the global optimum. The first part of the experiments uses 250 benchmark binary CSPs and 79 graph coloring problems from the DIMACS challenge to test the performance of MAEA-CSPs on nonpermutation CSPs. MAEA-CSPs is compared with six well-defined algorithms, and the effect of its parameters is analyzed systematically. The second part of the experiments uses a classical CSP, the n-queens problem, and a more practical case, job-shop scheduling problems (JSPs), to test the performance of MAEA-CSPs on permutation CSPs. The scalability of MAEA-CSPs with respect to n is studied with great care for n-queens problems. The results show that MAEA-CSPs achieves good performance as n increases from 10^4 to 10^7 and has a linear time complexity; even for 10^7-queens problems, MAEA-CSPs finds a solution within only 150 seconds. For JSPs, 59 benchmark problems are used, and good performance is also obtained.
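
    The minimum-conflict idea that MAEA-CSPs builds on can be illustrated with the classic min-conflicts heuristic for n-queens: repeatedly pick a conflicted queen and move it to the column that minimizes its conflicts. This is the textbook heuristic, not the paper's agent-based algorithm or its encoding.

```python
import random

def min_conflicts_queens(n, max_steps=100000, seed=6):
    rng = random.Random(seed)
    cols = [rng.randrange(n) for _ in range(n)]      # one queen per row

    def conflicts(row, col):
        # Queens in other rows attacking square (row, col).
        return sum(1 for r in range(n) if r != row and
                   (cols[r] == col or abs(cols[r] - col) == abs(r - row)))

    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(r, cols[r]) > 0]
        if not conflicted:
            return cols                               # solution found
        row = rng.choice(conflicted)
        # Move the queen in this row to its minimum-conflict column.
        cols[row] = min(range(n), key=lambda c: conflicts(row, c))
    return None

print(min_conflicts_queens(50))
```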

  18. A Novel Diversity-Based Replacement Strategy for Evolutionary Algorithms.

    PubMed

    Segura, Carlos; Coello Coello, Carlos A; Segredo, Eduardo; Aguirre, Arturo Hernandez

    2016-12-01

    Premature convergence is one of the best-known drawbacks that affects the performance of evolutionary algorithms. An alternative for dealing with this problem is to explicitly try to maintain proper diversity. In this paper, a new replacement strategy that preserves useful diversity is presented. The novelty of our method is that it combines the idea of transforming a single-objective problem into a multiobjective one, by considering diversity as an explicit objective, with the idea of adapting the balance induced between exploration and exploitation to the various optimization stages. Specifically, in the initial phases, larger amounts of diversity are accepted. The diversity measure considered in this paper is based on calculating distances to the closest surviving individual. Analyses with a multimodal function better justify the design decisions and provide greater insight into the working operation of the proposal. Computational results with a packing problem that was proposed in a popular contest illustrate the usefulness of the proposal. The new method significantly improves on the best results known to date for this problem and compares favorably against a large number of state-of-the-art schemes.
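
    The core replacement idea can be sketched as follows: survivors are chosen greedily by fitness, but a candidate is accepted only if its distance to the closest already-chosen survivor exceeds a threshold, which the algorithm would shrink over the run (exploration first, exploitation later). The acceptance rule and fallback below are a simplified reading of that idea, not the paper's exact scheme.

```python
import random

def select_survivors(candidates, fitness, mu, d_min):
    """Diversity-preserving replacement: greedily keep the best candidate
    whose distance to all current survivors exceeds d_min; if none
    qualifies, fall back to the most distant remaining candidate."""
    pool = sorted(candidates, key=fitness, reverse=True)
    survivors = [pool.pop(0)]
    while len(survivors) < mu and pool:
        def dist_to_closest(x):
            return min(sum((u - v) ** 2 for u, v in zip(x, s)) ** 0.5
                       for s in survivors)
        ok = [x for x in pool if dist_to_closest(x) >= d_min]
        chosen = max(ok, key=fitness) if ok else max(pool, key=dist_to_closest)
        pool.remove(chosen)
        survivors.append(chosen)
    return survivors

rng = random.Random(7)
pop = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(40)]
f = lambda x: -(x[0] ** 2 + x[1] ** 2)
# Early in the run d_min is large (exploration); later it shrinks toward 0.
print(len(select_survivors(pop, f, mu=10, d_min=0.5)))
```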

  19. Optimizing quantum gas production by an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Lausch, T.; Hohmann, M.; Kindermann, F.; Mayer, D.; Schmidt, F.; Widera, A.

    2016-05-01

    We report on the application of an evolutionary algorithm (EA) to enhance performance of an ultra-cold quantum gas experiment. The production of a ⁸⁷Rb Bose-Einstein condensate (BEC) can be divided into fundamental cooling steps, specifically magneto-optical trapping of cold atoms, loading of atoms to a far-detuned crossed dipole trap, and finally the process of evaporative cooling. The EA is applied separately for each of these steps with a particular definition for the feedback, the so-called fitness. We discuss the principles of an EA and implement an enhancement called differential evolution. Analyzing the reasons for the EA to improve, e.g., the atomic loading rates and increase the BEC phase-space density, yields an optimal parameter set for the BEC production and enables us to reduce the BEC production time significantly. Furthermore, we focus on how additional information about the experiment and optimization possibilities can be extracted and how the correlations revealed allow for further improvement. Our results illustrate that EAs are powerful optimization tools for complex experiments and exemplify that the application yields useful information on the dependence of these experiments on the optimized parameters.
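
    The differential evolution enhancement mentioned above can be sketched in its classic DE/rand/1/bin form, with the experimental feedback (e.g., atom number or phase-space density) abstracted into a fitness callback. The population size, F, CR, and the toy objective are illustrative.

```python
import random

def de_optimize(fitness, dim, bounds, np_=20, F=0.7, CR=0.9, gens=100, seed=8):
    """Classic DE/rand/1/bin: each parameter vector is perturbed by a
    scaled difference of two other population members, with binomial
    crossover and greedy one-to-one replacement."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    fit = [fitness(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)          # force one mutated gene
            trial = [a[j] + F * (b[j] - c[j])
                     if (rng.random() < CR or j == j_rand) else pop[i][j]
                     for j in range(dim)]
            trial = [min(max(t, lo), hi) for t in trial]   # clamp to bounds
            ft = fitness(trial)
            if ft >= fit[i]:
                pop[i], fit[i] = trial, ft
    return max(zip(fit, pop))

# Stand-in for the experimental feedback ("fitness"), e.g. condensate size.
print(de_optimize(lambda x: -sum((xi - 0.3) ** 2 for xi in x),
                  dim=4, bounds=(0, 1))[0])
```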

  20. EVO—Evolutionary algorithm for crystal structure prediction

    NASA Astrophysics Data System (ADS)

    Bahmann, Silvia; Kortus, Jens

    2013-06-01

    We present EVO—an evolution strategy designed for crystal structure search and prediction. The concept and main features of biological evolution, such as creation of diversity and survival of the fittest, have been transferred to crystal structure prediction. EVO successfully demonstrates its applicability by finding crystal structures of the elements of the 3rd main group with their different space groups. For this we used the number of atoms in the conventional cell and multiples of it. Running EVO with different numbers of carbon atoms per unit cell yields graphite as the lowest energy structure as well as a diamond-like structure, both in one run. Our implementation also supports the search for 2D structures and was able to find a boron sheet with structural features not considered in the literature so far.
    Program summary: Program title: EVO. Catalogue identifier: AEOZ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOZ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License version 3. No. of lines in distributed program, including test data, etc.: 23488. No. of bytes in distributed program, including test data, etc.: 1830122. Distribution format: tar.gz. Programming language: Python. Computer: no limitations known. Operating system: Linux. RAM: negligible compared to the requirements of the electronic structure programs used. Classification: 7.8. External routines: Quantum ESPRESSO (http://www.quantum-espresso.org/), GULP (https://projects.ivec.org/gulp/). Nature of problem: crystal structure search is a global optimisation problem in 3N+3 dimensions, where N is the number of atoms in the unit cell. The high-dimensional search space is accompanied by an unknown energy landscape. Solution method: evolutionary algorithms transfer the main features of biological evolution to use them in global searches. The combination of the "survival of the fittest" (deterministic) and the

  1. Model-based evolutionary analysis: the natural history of phage-shock stress response.

    PubMed

    Huvet, Maxime; Toni, Tina; Tan, Hui; Jovanovic, Goran; Engl, Christoph; Buck, Martin; Stumpf, Michael P H

    2009-08-01

    The evolution of proteins is inseparably linked to their function. Because most biological processes involve a number of different proteins, it may become impossible to study the evolutionary properties of proteins in isolation. In the present article, we show how simple mechanistic models of biological processes can complement conventional comparative analyses of biological traits. We use the specific example of the phage-shock stress response, which has been well characterized in Escherichia coli, to elucidate patterns of gene sharing and sequence conservation across bacterial species.

  2. Evolutionary food web model based on body masses gives realistic networks with permanent species turnover

    NASA Astrophysics Data System (ADS)

    Allhoff, K. T.; Ritterskamp, D.; Rall, B. C.; Drossel, B.; Guill, C.

    2015-06-01

    The networks of predator-prey interactions in ecological systems are remarkably complex, but nevertheless surprisingly stable in terms of long term persistence of the system as a whole. In order to understand the mechanism driving the complexity and stability of such food webs, we developed an eco-evolutionary model in which new species emerge as modifications of existing ones and dynamic ecological interactions determine which species are viable. The food-web structure thereby emerges from the dynamical interplay between speciation and trophic interactions. The proposed model is less abstract than earlier evolutionary food web models in the sense that all three evolving traits have a clear biological meaning, namely the average body mass of the individuals, the preferred prey body mass, and the width of their potential prey body mass spectrum. We observed networks with a wide range of sizes and structures and high similarity to natural food webs. The model networks exhibit a continuous species turnover, but massive extinction waves that affect more than 50% of the network are not observed.

  3. Evolutionary food web model based on body masses gives realistic networks with permanent species turnover

    PubMed Central

    Allhoff, K. T.; Ritterskamp, D.; Rall, B. C.; Drossel, B.; Guill, C.

    2015-01-01

    The networks of predator-prey interactions in ecological systems are remarkably complex, but nevertheless surprisingly stable in terms of long term persistence of the system as a whole. In order to understand the mechanism driving the complexity and stability of such food webs, we developed an eco-evolutionary model in which new species emerge as modifications of existing ones and dynamic ecological interactions determine which species are viable. The food-web structure thereby emerges from the dynamical interplay between speciation and trophic interactions. The proposed model is less abstract than earlier evolutionary food web models in the sense that all three evolving traits have a clear biological meaning, namely the average body mass of the individuals, the preferred prey body mass, and the width of their potential prey body mass spectrum. We observed networks with a wide range of sizes and structures and high similarity to natural food webs. The model networks exhibit a continuous species turnover, but massive extinction waves that affect more than 50% of the network are not observed. PMID:26042870

  4. Fuzzy evolutionary algorithm to solve chromosomes conflict and its application to lecture schedule problems

    NASA Astrophysics Data System (ADS)

    Marwati, Rini; Yulianti, Kartika; Pangestu, Herny Wulandari

    2016-02-01

    A fuzzy evolutionary algorithm is an integration of an evolutionary algorithm and a fuzzy system. In this paper, we present an application of a genetic algorithm within a fuzzy evolutionary algorithm to detect and resolve chromosome conflicts. A chromosome conflict is identified by the existence of two genes in one chromosome that have the same values as two genes in another chromosome. Based on this approach, we construct an algorithm to solve a lecture scheduling problem. Time codes, lecture codes, lecturer codes, and room codes are defined as genes and are collected to form chromosomes; a conflicting schedule thus turns into a chromosome conflict. Implemented in Delphi, the algorithm is shown to solve the conflicting lecture schedule problem.

  5. Complex Network Clustering by a Multi-objective Evolutionary Algorithm Based on Decomposition and Membrane Structure

    NASA Astrophysics Data System (ADS)

    Ju, Ying; Zhang, Songming; Ding, Ningxiang; Zeng, Xiangxiang; Zhang, Xingyi

    2016-09-01

    The field of complex network clustering has gained considerable attention in recent years. In this study, a multi-objective evolutionary algorithm based on membranes is proposed to solve the network clustering problem. The population is divided evenly among different membrane structures, the evolutionary algorithm is carried out within each membrane structure, and individuals are eliminated according to the vector of membranes. In the proposed method, two evaluation objectives, termed Kernel J-means and Ratio Cut, are minimized. Extensive experimental comparisons with state-of-the-art algorithms show that the proposed algorithm is effective and promising.

  6. Complex Network Clustering by a Multi-objective Evolutionary Algorithm Based on Decomposition and Membrane Structure

    PubMed Central

    Ju, Ying; Zhang, Songming; Ding, Ningxiang; Zeng, Xiangxiang; Zhang, Xingyi

    2016-01-01

    The field of complex network clustering has gained considerable attention in recent years. In this study, a multi-objective evolutionary algorithm based on membranes is proposed to solve the network clustering problem. The population is divided evenly among different membrane structures, the evolutionary algorithm is carried out within each membrane structure, and individuals are eliminated according to the vector of membranes. In the proposed method, two evaluation objectives, termed Kernel J-means and Ratio Cut, are minimized. Extensive experimental comparisons with state-of-the-art algorithms show that the proposed algorithm is effective and promising. PMID:27670156

  7. Can't See the Forest: Using an Evolutionary Algorithm to Produce an Animated Artwork

    NASA Astrophysics Data System (ADS)

    Trist, Karen; Ciesielski, Vic; Barile, Perry

    We describe an artist's journey of working with an evolutionary algorithm to create an artwork suitable for exhibition in a gallery. Software based on the evolutionary algorithm produces animations which engage the viewer with a target image slowly emerging from a random collection of greyscale lines. The artwork consists of a grid of movies of eucalyptus tree targets. Each movie resolves with different aesthetic qualities, tempo and energy. The artist exercises creative control by choice of target and values for evolutionary and drawing parameters.

  8. Optimization of a Quantum Cascade Laser Operating in the Terahertz Frequency Range Using a Multiobjective Evolutionary Algorithm

    DTIC Science & Technology

    2004-06-01

  9. Why don’t you use Evolutionary Algorithms in Big Data?

    NASA Astrophysics Data System (ADS)

    Stanovov, Vladimir; Brester, Christina; Kolehmainen, Mikko; Semenkina, Olga

    2017-02-01

    In this paper we raise the question of using evolutionary algorithms in the area of Big Data processing. We show that evolutionary algorithms provide evident advantages due to their high scalability and flexibility, their ability to solve global optimization problems and optimize several criteria at the same time for feature selection, instance selection and other data reduction problems. In particular, we consider the usage of evolutionary algorithms with all kinds of machine learning tools, such as neural networks and fuzzy systems. All our examples prove that Evolutionary Machine Learning is becoming more and more important in data analysis and we expect to see the further development of this field especially in respect to Big Data.

  10. Performance comparison of some evolutionary algorithms on job shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Rao, C. S. P.

    2016-09-01

    Job shop scheduling is a state-space search problem belonging to the NP-hard category, owing to its complexity and the combinatorial explosion of states. Several nature-inspired evolutionary methods have been developed to solve job shop scheduling problems. In this paper, the evolutionary methods Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization, and Music-Based Harmony Search are applied and fine-tuned to model and solve job shop scheduling problems. About 250 benchmark instances have been used to evaluate and compare the performance of these algorithms, and the capabilities of each algorithm in solving job shop scheduling problems are outlined.

  11. Model-based spectral estimation of Doppler signals using parallel genetic algorithms.

    PubMed

    Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F

    2000-05-01

    Conventional spectral analysis methods apply a fast Fourier transform (FFT) to consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the short time segment duration and the non-stationary character of the signals. Parametric, or model-based, estimators can give significant improvements in time-frequency resolution at the expense of higher computational complexity. This work describes an approach which implements a parametric spectral estimation method in real time, using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by exploiting the simplicity and parallel characteristics of GAs. This allows the implementation of higher-order filters, increasing the spectral resolution, and opens greater scope for using more complex methods.

  12. Hybrid Robust Multi-Objective Evolutionary Optimization Algorithm

    DTIC Science & Technology

    2009-03-10

  13. Model-based neural network algorithm for coffee ripeness prediction using Helios UAV aerial images

    NASA Astrophysics Data System (ADS)

    Furfaro, R.; Ganapol, B. D.; Johnson, L. F.; Herwitz, S.

    2005-10-01

    Over the past few years, NASA has had great interest in exploring the feasibility of using Unmanned Aerial Vehicles (UAVs), equipped with multi-spectral imaging systems, as long-duration platforms for crop monitoring. To address the problem of predicting the ripeness level of the Kauai coffee plantation field from UAV aerial images, we propose a neural network algorithm based on a nested Leaf-Canopy radiative transport Model (LCM2). A model-based, multi-layer neural network using backpropagation has been designed and trained to learn the functional relationship between the airborne reflectance and the percentages of ripe, over-ripe and under-ripe cherries present in the field. LCM2 was used to generate samples of the desired map. Post-processing analysis and tests on synthetic coffee field data showed that the network has accurately learned the map. A new Domain Projection Technique (DPT) was developed to deal with situations where the measured reflectance falls outside the training set; DPT projects the reflectance back into the training domain, forcing the network to provide a physical solution. Tests were conducted to estimate the error bound. The synergistic combination of neural network algorithms and DPT lies at the core of a more complex algorithm designed to process UAV images. The application of the algorithm to real airborne images shows predictions consistent with post-harvesting data and highlights the potential of the overall methodology.

  14. Estimating the ratios of the stationary distribution values for Markov chains modeling evolutionary algorithms.

    PubMed

    Mitavskiy, Boris; Cannings, Chris

    2009-01-01

    The evolutionary algorithm stochastic process is well known to be Markovian, and such Markov chains have been under investigation in much of the theoretical evolutionary computing research. When the mutation rate is positive, the Markov chain modeling an evolutionary algorithm is irreducible and therefore has a unique stationary distribution, yet rather little is known about that distribution. In fact, the only quantitative facts established so far tell us that the stationary distributions of Markov chains modeling evolutionary algorithms concentrate on uniform populations (i.e., those populations consisting of repeated copies of the same individual). At the same time, knowing the stationary distribution may provide some information about the expected time it takes for the algorithm to reach a certain solution, allow assessment of the biases due to recombination and selection, and is of importance in population genetics for assessing what is called the "genetic load" (see the introduction for more details). In recent joint work of the first author, some bounds have been established on the rates at which the stationary distribution concentrates on the uniform populations. The primary tool used in these papers is the "quotient construction" method. It turns out that the quotient construction method can be exploited to derive much more informative bounds on ratios of the stationary distribution values of various subsets of the state space. In fact, some of the bounds obtained in the current work are expressed in terms of the parameters involved in all three main stages of an evolutionary algorithm: namely, selection, recombination, and mutation.

  15. Modeling the performance of evolutionary algorithms on the root identification problem: a case study with PBIL and CHC algorithms.

    PubMed

    Yeguas, Enrique; Joan-Arinyo, Robert; Luzón, María Victoria

    2011-01-01

    The availability of a model to measure the performance of evolutionary algorithms is very important, especially when these algorithms are applied to solve problems with high computational requirements. That model would compute an index of the quality of the solution reached by the algorithm as a function of run-time. Conversely, if we fix an index of quality for the solution, the model would give the number of iterations to be expected. In this work, we develop a statistical model to describe the performance of PBIL and CHC evolutionary algorithms applied to solve the root identification problem. This problem is basic in constraint-based, geometric parametric modeling, as an instance of general constraint-satisfaction problems. The performance model is empirically validated over a benchmark with very large search spaces.

  16. Parameterized runtime analyses of evolutionary algorithms for the planar euclidean traveling salesperson problem.

    PubMed

    Sutton, Andrew M; Neumann, Frank; Nallaperuma, Samadhi

    2014-01-01

    Parameterized runtime analysis seeks to understand the influence of problem structure on algorithmic runtime. In this paper, we contribute to the theoretical understanding of evolutionary algorithms and carry out a parameterized analysis of evolutionary algorithms for the Euclidean traveling salesperson problem (Euclidean TSP). We investigate the structural properties in TSP instances that influence the optimization process of evolutionary algorithms and use this information to bound their runtime. We analyze the runtime as a function of the number of inner points k. In the first part of the paper, we study a [Formula: see text] EA in a strictly black box setting and show that it can solve the Euclidean TSP in expected time [Formula: see text], where A is a function of the minimum angle [Formula: see text] between any three points. Based on insights provided by the analysis, we improve this upper bound by introducing a mixed mutation strategy that incorporates both 2-opt moves and permutation jumps; this strategy improves the upper bound to [Formula: see text]. In the second part of the paper, we use the information gained in the analysis to incorporate domain knowledge and design two fixed-parameter tractable (FPT) evolutionary algorithms for the planar Euclidean TSP. We first develop a [Formula: see text] EA, based on an analysis by M. Theile, 2009, "Exact solutions to the traveling salesperson problem by a population-based evolutionary algorithm," Lecture Notes in Computer Science, Vol. 5482 (pp. 145-155), that solves the TSP with k inner points in [Formula: see text] generations with probability [Formula: see text]. We then design a [Formula: see text] EA that incorporates a dynamic programming step into the fitness evaluation. We prove that a variant of this evolutionary algorithm using 2-opt mutation solves the problem after [Formula: see text] steps in expectation with a cost of [Formula: see text] for each fitness evaluation.

  17. A multiobjective evolutionary algorithm to find community structures based on affinity propagation

    NASA Astrophysics Data System (ADS)

    Shang, Ronghua; Luo, Shuang; Zhang, Weitong; Stolkin, Rustam; Jiao, Licheng

    2016-07-01

    Community detection plays an important role in reflecting and understanding the topological structure of complex networks, and can be used to help mine the potential information in networks. This paper presents a Multiobjective Evolutionary Algorithm based on Affinity Propagation (APMOEA) which improves the accuracy of community detection. Firstly, APMOEA takes the method of affinity propagation (AP) to initially divide the network. To accelerate its convergence, the multiobjective evolutionary algorithm selects nondominated solutions from the preliminary partitioning results as its initial population. Secondly, the multiobjective evolutionary algorithm finds solutions approximating the true Pareto optimal front through constantly selecting nondominated solutions from the population after crossover and mutation in iterations, which overcomes the tendency of data clustering methods to fall into local optima. Finally, APMOEA uses an elitist strategy, called "external archive", to prevent degeneration during the process of searching using the multiobjective evolutionary algorithm. According to this strategy, the preliminary partitioning results obtained by AP will be archived and participate in the final selection of Pareto-optimal solutions. Experiments on benchmark test data, including both computer-generated networks and eight real-world networks, show that the proposed algorithm achieves more accurate results and has faster convergence speed compared with seven other state-of-the-art algorithms.

  18. DOPGA: a new fitness assignment scheme for multi-objective evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Ufuk Ergul, Engin; Eminoglu, Ilyas

    2014-03-01

    In this article, a new fitness assignment scheme to evaluate the Pareto-optimal solutions for multi-objective evolutionary algorithms is proposed. The proposed DOmination Power of an individual Genetic Algorithm (DOPGA) method can order the individuals in a form in which each individual (the so-called solution) could have a unique rank. With this new method, a multi-objective problem can be treated as if it were a single-objective problem without drastically deviating from the Pareto definition. In DOPGA, relative position of a solution is embedded into the fitness assignment procedures. We compare the performance of the algorithm with two benchmark evolutionary algorithms (Strength Pareto Evolutionary Algorithm (SPEA) and Strength Pareto Evolutionary Algorithm 2 (SPEA2)) on 12 unconstrained bi-objective and one tri-objective test problems. DOPGA significantly outperforms SPEA on all test problems. DOPGA performs better than SPEA2 in terms of convergence metric on all test problems. Also, Pareto-optimal solutions found by DOPGA spread better than SPEA2 on eight of 13 test problems.

  19. A quantum-behaved evolutionary algorithm based on the Bloch spherical search

    NASA Astrophysics Data System (ADS)

    Li, Panchi

    2014-04-01

    In order to enhance the optimization ability of the quantum evolutionary algorithms, a new quantum-behaved evolutionary algorithm is proposed. In this algorithm, the search mechanism is established based on the Bloch sphere. First, the individuals are expressed by qubits described on the Bloch sphere, then the rotation axis is established by Pauli matrixes, and the evolution search is realized by rotating qubits on the Bloch sphere about the rotating axis. In order to avoid premature convergence, the mutation of individuals is achieved by the Hadamard gates. Such rotation can make the current qubit approximate the target qubit along with the great circle on the Bloch sphere, which can accelerate optimization process. Taking the function extreme value optimization as an example, the experimental results show that the proposed algorithm is obviously superior to other similar algorithms.

  20. Scheduling Earth Observing Fleets Using Evolutionary Algorithms: Problem Description and Approach

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Morris, Robert; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We describe work in progress concerning multi-instrument, multi-satellite scheduling. Most, although not all, Earth observing instruments currently in orbit are unique. In the relatively near future, however, we expect to see fleets of Earth observing spacecraft, many carrying nearly identical instruments. This presents a substantially new scheduling challenge. Inspired by successful commercial applications of evolutionary algorithms in scheduling domains, this paper presents work in progress regarding the use of evolutionary algorithms to solve a set of Earth observing related model problems. Both the model problems and the software are described. Since the larger problems will require substantial computation and evolutionary algorithms are embarrassingly parallel, we discuss our parallelization techniques using dedicated and cycle-scavenged workstations.

  1. Evolutionary Algorithms Approach to the Solution of Damage Detection Problems

    NASA Astrophysics Data System (ADS)

    Salazar Pinto, Pedro Yoajim; Begambre, Oscar

    2010-09-01

    In this work, a new self-configured hybrid algorithm is proposed, combining Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA). The aim of the proposed strategy is to increase the stability and accuracy of the search. The central idea is the concept of the guide particle: this particle (the best PSO global in each generation) transmits its information to a particle of the following PSO generation, which is controlled by the GA. Thus, the proposed hybrid has an elitism feature that improves its performance and guarantees the convergence of the procedure. In different tests carried out on benchmark functions reported in the international literature, better performance in stability and accuracy was observed; therefore, the new algorithm was used to identify damage in a simply supported beam using modal data. Finally, it is worth noting that the algorithm is independent of the initial definition of the heuristic parameters.

  2. The new image segmentation algorithm using adaptive evolutionary programming and fuzzy c-means clustering

    NASA Astrophysics Data System (ADS)

    Liu, Fang

    2011-06-01

    Image segmentation remains one of the major challenges in image analysis and computer vision. Fuzzy clustering, as a soft segmentation method, has been widely studied and successfully applied in image clustering and segmentation, and the fuzzy c-means (FCM) algorithm is the most popular method used in image segmentation. However, most clustering algorithms, such as the k-means and FCM algorithms, search for the final cluster values based on predetermined initial centers, and FCM does not consider the spatial information of pixels and is sensitive to noise. This paper presents a new FCM algorithm with adaptive evolutionary programming for image clustering. The features of this algorithm are: first, it does not require predetermined initial centers, since evolutionary programming helps FCM search for better centers and escape bad centers at local minima; second, the spatial distance as well as the Euclidean distance is considered in the FCM clustering, making the algorithm more robust to noise; third, an adaptive evolutionary programming scheme is proposed, in which the mutation rule adapts by learning useful knowledge during the evolving process. Experimental results show that the new image segmentation algorithm is effective and robust to noisy images.
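
    The standard FCM updates that the paper builds on alternate between memberships and centers: u_ik = d_ik^(-2/(m-1)) / Σ_j d_jk^(-2/(m-1)) and c_i = Σ_k u_ik^m x_k / Σ_k u_ik^m. The sketch below implements only this baseline; the paper's spatial-distance term and adaptive evolutionary programming for the centers are deliberately omitted.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=9):
    """Standard fuzzy c-means: alternate membership and center updates.
    (The paper adds evolutionary programming for the centers and a
    spatial-distance term; both are omitted in this minimal sketch.)"""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))          # unnormalized memberships
        u /= u.sum(axis=1, keepdims=True)       # normalize over clusters
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return u, centers

X = np.vstack([np.random.default_rng(1).normal(0, 0.5, (100, 2)),
               np.random.default_rng(2).normal(4, 0.5, (100, 2))])
u, centers = fcm(X, c=2)
print(centers)   # close to the two blob centers (0, 0) and (4, 4)
```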

  3. An Evolutionary Algorithm to Generate Ellipsoid Detectors for Negative Selection

    DTIC Science & Technology

    2005-03-21

    De Castro and Von Zuben implemented an AIS using the network immune model; Timmis and Neal applied the model to unsupervised machine learning; and de Castro and Von Zuben applied it to clustering and filtering unlabelled numerical data sets. De Castro and Von Zuben also produced a clonal selection algorithm.

  4. A kinetic model-based algorithm to classify NGS short reads by their allele origin.

    PubMed

    Marinoni, Andrea; Rizzo, Ettore; Limongelli, Ivan; Gamba, Paolo; Bellazzi, Riccardo

    2015-02-01

    Genotyping Next Generation Sequencing (NGS) data of a diploid genome aims to assign the zygosity of identified variants through comparison with a reference genome. Current methods typically employ probabilistic models that rely on the pileup of bases at each locus and on a priori knowledge. We present a new algorithm, called Kimimila (KInetic Modeling based on InforMation theory to Infer Labels of Alleles), which is able to assign reads to alleles by using a distance geometry approach and to infer the variant genotypes accurately, without any kind of assumption. The performance of the model has been assessed on simulated and real data of the 1000 Genomes Project and the results have been compared with several commonly used genotyping methods, i.e., GATK, Samtools, VarScan, FreeBayes and Atlas2. Although our algorithm does not make use of a priori knowledge, the percentage of correctly genotyped variants is comparable to that of these algorithms. Furthermore, our method allows the user to split the read pool depending on the inferred allele origin.

  5. Model based on a quantum algorithm to study the evolution of an epidemics.

    PubMed

    León, A; Pozo, J

    2007-03-01

    A model based on a quantum algorithm is used to study the spread of the HIV virus and to predict infection rates among individuals who are not aware of their particular condition. The model makes an analogy between quantum systems and individuals who are infected by the disease. Individuals are represented by two-level quantum systems (quantum "bits"), and the interactions among individuals that cause infection are represented by unitary transformations. The population is divided into categories according to behaviour, and the interactions among individuals within the same category and across different categories are simulated. The objective is to obtain statistical data on the number of infected individuals as a function of time for every category and for the entire population. In addition, we analyse the impact of the evolution of the disease on individuals who have no knowledge of their specific health condition.

  6. Evolutionary Processes in the Development of Errors in Subtraction Algorithms

    ERIC Educational Resources Information Center

    Fernandez, Ricardo Lopez; Garcia, Ana B. Sanchez

    2008-01-01

    The study of errors made in subtraction is a research subject approached from different theoretical premises that affect different components of the algorithmic process as triggers of their generation. In the following research an attempt has been made to investigate the typology and nature of errors which occur in subtractions and their evolution…

  7. First principles prediction of amorphous phases using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Nahas, Suhas; Gaur, Anshu; Bhowmick, Somnath

    2016-07-01

    We discuss the efficacy of an evolutionary method for the structural analysis of amorphous solids. At present, the ab initio molecular dynamics (MD) based melt-quench technique is used, and this deterministic approach has proven successful for studying amorphous materials. We show that a stochastic approach motivated by Darwinian evolution can also be used to simulate amorphous structures. Applying this method, in conjunction with density functional theory based electronic, ionic and cell relaxation, we re-investigate two well-known amorphous semiconductors, namely silicon and indium gallium zinc oxide. We find that characteristic structural parameters like the average bond length and bond angle are within ~2% of those reported by ab initio MD calculations and experimental studies.

  8. Experiments with a Parallel Multi-Objective Evolutionary Algorithm for Scheduling

    NASA Technical Reports Server (NTRS)

    Brown, Matthew; Johnston, Mark D.

    2013-01-01

    Evolutionary multi-objective algorithms have great potential for scheduling in those situations where tradeoffs among competing objectives represent a key requirement. One challenge, however, is runtime performance, as a consequence of evolving not just a single schedule, but an entire population, while attempting to sample the Pareto frontier as accurately and uniformly as possible. The growing availability of multi-core processors in end user workstations, and even laptops, has raised the question of the extent to which such hardware can be used to speed up evolutionary algorithms. In this paper we report on early experiments in parallelizing a Generalized Differential Evolution (GDE) algorithm for scheduling long-range activities on NASA's Deep Space Network. Initial results show that significant speedups can be achieved, but that performance does not necessarily improve as more cores are utilized. We describe our preliminary results and some initial suggestions from parallelizing the GDE algorithm. Directions for future work are outlined.

  9. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    PubMed Central

    2014-01-01

    Background To improve the tedious task of reconstructing gene networks through testing experimentally the possible interactions between genes, it becomes a trend to adopt the automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by the evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt the mechanism of cloud computing as a promising solution: most popular is the method of MapReduce programming model, a fault-tolerant framework to implement parallel algorithms for inferring large gene networks. Results This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use a well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and the computation time can be largely reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel

  10. Predicting patchy particle crystals: Variable box shape simulations and evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard

    2012-06-01

    We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.

  11. Model-based Layer Estimation using a Hybrid Genetic/Gradient Search Optimization Algorithm

    SciTech Connect

    Chambers, D; Lehman, S; Dowla, F

    2007-05-17

    A particle swarm optimization (PSO) algorithm is combined with a gradient search method in a model-based approach for extracting interface positions in a one-dimensional multilayer structure from acoustic or radar reflections. The basic approach is to predict the reflection measurement using a simulation of one-dimensional wave propagation in a multilayer structure, evaluate the error between prediction and measurement, and then update the simulation parameters to minimize the error. Gradient search methods alone fail due to the number of local minima in the error surface close to the desired global minimum. The PSO approach avoids this problem by randomly sampling the region of the error surface around the global minimum, but at the cost of a large number of evaluations of the simulator. The hybrid approach uses the PSO at the beginning to locate the general area around the global minimum, then switches to the gradient search method to zero in on it. Examples of the algorithm applied to the detection of interior walls of a building from reflected ultra-wideband radar signals are shown. Other possible applications are optical inspection of coatings and ultrasonic measurement of multilayer structures.
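
    The two-phase strategy described above is easy to sketch on a synthetic multimodal error surface: a global PSO phase samples broadly to find the basin of the global minimum, then a finite-difference gradient descent zeroes in. The error function, swarm parameters, and switch point below are illustrative; the paper's simulator-based error evaluation is replaced by a toy function.

```python
import numpy as np

def pso_then_gradient(err, dim, n=30, pso_iters=60, gd_iters=200, seed=10):
    """Global PSO phase to find the basin of the global minimum,
    then a local gradient-descent phase to zero in on it."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), np.array([err(p) for p in x])
    for _ in range(pso_iters):
        g = pbest[np.argmin(pbest_f)]            # global best so far
        v = (0.7 * v + 1.5 * rng.random((n, dim)) * (pbest - x)
                     + 1.5 * rng.random((n, dim)) * (g - x))
        x += v
        f = np.array([err(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
    # Switch to gradient search near the global minimum.
    z, h, lr = pbest[np.argmin(pbest_f)].copy(), 1e-5, 0.01
    for _ in range(gd_iters):
        grad = np.array([(err(z + h * e) - err(z - h * e)) / (2 * h)
                         for e in np.eye(dim)])  # central differences
        z -= lr * grad
    return z, err(z)

# Multimodal toy error surface (Rastrigin-like), many local minima.
err = lambda p: float(np.sum(p ** 2 - 3 * np.cos(2 * np.pi * p) + 3))
print(pso_then_gradient(err, dim=2))
```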

  12. Simulation of Biochemical Pathway Adaptability Using Evolutionary Algorithms

    SciTech Connect

    Bosl, W J

    2005-01-26

    The systems approach to genomics seeks quantitative and predictive descriptions of cells and organisms. However, both the theoretical and experimental methods necessary for such studies still need to be developed. We are far from understanding even the simplest collective behavior of biomolecules, cells or organisms. A key aspect of all biological problems, including environmental microbiology, evolution of infectious diseases, and the adaptation of cancer cells, is the evolvability of genomes. This is particularly important for Genomes to Life missions, which tend to focus on the prospect of engineering microorganisms to achieve desired goals in environmental remediation, climate change mitigation, and energy production. All of these will require quantitative tools for understanding the evolvability of organisms. Laboratory biodefense goals will need quantitative tools for predicting complicated host-pathogen interactions and finding counter-measures. In this project, we seek to develop methods to simulate how external and internal signals cause the genetic apparatus to adapt and organize to produce complex biochemical systems to achieve survival. This project is specifically directed toward building a computational methodology for simulating the adaptability of genomes. This project investigated the feasibility of using a novel quantitative approach to studying the adaptability of genomes and biochemical pathways. This effort was intended to be the preliminary part of a larger, long-term effort between key leaders in computational and systems biology at Harvard University and LLNL, with Dr. Bosl as the lead PI. Scientific goals for the long-term project include the development and testing of new hypotheses to explain the observed adaptability of yeast biochemical pathways when the myosin-II gene is deleted and the development of a novel data-driven evolutionary computation as a way to connect exploratory computational simulation with hypothesis

  13. On Polymorphic Circuits and Their Design Using Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Zebulum, Ricardo; Keymeulen, Didier; Lohn, Jason; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper introduces the concept of polymorphic electronics (polytronics) - referring to electronics with superimposed built-in functionality. A function change does not require switches/reconfiguration as in traditional approaches. Instead the change comes from modifications in the characteristics of devices involved in the circuit, in response to controls such as temperature, power supply voltage (VDD), control signals, light, etc. The paper illustrates polytronic circuits in which the control is done by temperature, morphing signals, and VDD respectively. Polytronic circuits are obtained by evolutionary design/evolvable hardware techniques. These techniques are ideal for polytronics design, a new area that lacks design guidelines and know-how, yet whose requirements/objectives are easy to specify and test. The circuits are evolved/synthesized in two different modes. The first mode explores an unstructured space, in which transistors can be interconnected freely in any arrangement (in simulations only). The second mode uses a Field Programmable Transistor Array (FPTA) model, and the circuit topology is sought as a mapping onto a programmable architecture (these experiments are performed both in simulations and on FPTA chips). The experiments demonstrated the synthesis of polytronic circuits by evolution. The capacity of storing/hiding "extra" functions provides for watermark/invisible functionality, thus polytronics may find uses in intelligence/security applications.

  14. Towards unbiased benchmarking of evolutionary and hybrid algorithms for real-valued optimisation

    NASA Astrophysics Data System (ADS)

    MacNish, Cara

    2007-12-01

    Randomised population-based algorithms, such as evolutionary, genetic and swarm-based algorithms, and their hybrids with traditional search techniques, have proven successful and robust on many difficult real-valued optimisation problems. This success, along with the readily applicable nature of these techniques, has led to an explosion in the number of algorithms and variants proposed. In order for the field to advance it is necessary to carry out effective comparative evaluations of these algorithms, and thereby better identify and understand those properties that lead to better performance. This paper discusses the difficulties of providing benchmarking of evolutionary and allied algorithms that is both meaningful and logistically viable. To be meaningful the benchmarking test must give a fair comparison that is free, as far as possible, from biases that favour one style of algorithm over another. To be logistically viable it must overcome the need for pairwise comparison between all the proposed algorithms. To address the first problem, we begin by attempting to identify the biases that are inherent in commonly used benchmarking functions. We then describe a suite of test problems, generated recursively as self-similar or fractal landscapes, designed to overcome these biases. For the second, we describe a server that uses web services to allow researchers to 'plug in' their algorithms, running on their local machines, to a central benchmarking repository.
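
    The abstract does not give the recursive construction, but the flavor of a self-similar (fractal) benchmark landscape can be sketched by summing a fixed motif over successively finer scales; everything below (the motif, the gain, and the lacunarity values) is an illustrative assumption, not the paper's generator.

    ```python
    import numpy as np

    def base(x):
        # A smooth pseudo-random motif, periodic on [0, 1).
        return np.sin(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x + 1.3)

    def fractal_landscape(x, depth=8, lacunarity=2.0, gain=0.5):
        """Evaluate a self-similar test function at point x (a 1-D array)."""
        x = np.asarray(x, dtype=float)
        value, amp, freq = 0.0, 1.0, 1.0
        for _ in range(depth):
            value += amp * base((freq * x) % 1.0).sum()  # same motif at every scale
            amp *= gain                                  # smaller amplitude...
            freq *= lacunarity                           # ...at finer scales
        return value
    ```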

  15. On source models for (192)Ir HDR brachytherapy dosimetry using model based algorithms.

    PubMed

    Pantelis, Evaggelos; Zourari, Kyveli; Zoros, Emmanouil; Lahanas, Vasileios; Karaiskos, Pantelis; Papagiannis, Panagiotis

    2016-06-07

    A source model is a prerequisite of all model based dose calculation algorithms. Besides direct simulation, the use of pre-calculated phase space files (phsp source models) and parameterized phsp source models has been proposed for Monte Carlo (MC) to promote efficiency and ease of implementation in obtaining photon energy, position and direction. In this work, a phsp file for a generic (192)Ir source design (Ballester et al 2015) is obtained from MC simulation. This is used to configure a parameterized phsp source model comprising appropriate probability density functions (PDFs) and a sampling procedure. According to phsp data analysis 15.6% of the generated photons are absorbed within the source, and 90.4% of the emergent photons are primary. The PDFs for sampling photon energy and direction relative to the source long axis depend on the position of photon emergence. Photons emerge mainly from the cylindrical source surface with a constant probability over ±0.1 cm from the center of the 0.35 cm long source core, and only 1.7% and 0.2% emerge from the source tip and drive wire, respectively. Based on these findings, an analytical parameterized source model is prepared for the calculation of the PDFs from data of source geometry and materials, without the need for a phsp file. The PDFs from the analytical parameterized source model are in close agreement with those employed in the parameterized phsp source model. This agreement prompted the proposal of a purely analytical source model based on isotropic emission of photons generated homogeneously within the source core with energy sampled from the (192)Ir spectrum, and the assignment of a weight according to attenuation within the source. Comparison of single source dosimetry data obtained from detailed MC simulation and the proposed analytical source model show agreement better than 2% except for points lying close to the source longitudinal axis.

  16. Hybridization of evolutionary algorithms and local search by means of a clustering method.

    PubMed

    Martínez-Estudillo, Alfonso C; Hervás-Martínez, César; Martínez-Estudillo, Francisco J; García-Pedrajas, Nicolás

    2006-06-01

    This paper presents a hybrid evolutionary algorithm (EA) to solve nonlinear regression problems. Although EAs have proven their ability to explore large search spaces, they are comparatively inefficient in fine-tuning the solution. This drawback is usually avoided by means of local optimization algorithms that are applied to the individuals of the population. The algorithms that use local optimization procedures are usually called hybrid algorithms. On the other hand, it is well known that the clustering process enables the creation of groups (clusters) with mutually close points that hopefully correspond to relevant regions of attraction. Local-search procedures can then be started once in every such region. This paper proposes combining an EA, a clustering process, and a local-search procedure for the evolutionary design of product-unit neural networks. In the methodology presented, only a few individuals are subject to local optimization. Moreover, the local optimization algorithm is only applied at specific stages of the evolutionary process. Our results show a favorable performance when the regression method proposed is compared to other standard methods.
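
    A minimal sketch of this hybrid scheme, assuming k-means as the clustering process and L-BFGS-B as the local optimizer (the paper's concrete choices may differ): the expensive local search is applied only to the best individual of each cluster, keeping the number of local optimizations small.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.optimize import minimize

    def cluster_local_search(population, fitness, n_clusters=5):
        """Refine the best individual of each population cluster with local search."""
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(population)
        refined = population.copy()
        for c in range(n_clusters):
            idx = np.where(labels == c)[0]
            best = idx[np.argmin([fitness(population[i]) for i in idx])]
            res = minimize(fitness, population[best], method="L-BFGS-B")
            refined[best] = res.x      # memetic replacement of the cluster's best
        return refined
    ```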

  17. Evolutionary algorithms with segment-based search for multiobjective optimization problems.

    PubMed

    Li, Miqing; Yang, Shengxiang; Li, Ke; Liu, Xiaohui

    2014-08-01

    This paper proposes a variation operator, called segment-based search (SBS), to improve the performance of evolutionary algorithms on continuous multiobjective optimization problems. SBS divides the search space into many small segments according to the evolutionary information feedback from the set of current optimal solutions. Two operations, micro-jumping and macro-jumping, are implemented upon these segments in order to guide an efficient information exchange among "good" individuals. Moreover, SBS runs adaptively according to the current evolutionary status: it is activated only when the population, evolving under general genetic operators (e.g., mutation and crossover), progresses slowly. A comprehensive set of 36 test problems is employed for experimental verification. The influence of two algorithm settings (i.e., the dimensionality and boundary relaxation strategy) and two probability parameters in SBS (i.e., the SBS rate and micro-jumping proportion) are investigated in detail. Moreover, an empirical comparative study with three representative variation operators is carried out. Experimental results show that the incorporation of SBS into the optimization process can improve the performance of evolutionary algorithms for multiobjective optimization problems.

  18. Runtime analysis of an evolutionary algorithm for stochastic multi-objective combinatorial optimization.

    PubMed

    Gutjahr, Walter J

    2012-01-01

    For stochastic multi-objective combinatorial optimization (SMOCO) problems, the adaptive Pareto sampling (APS) framework has been proposed, which is based on sampling and on the solution of deterministic multi-objective subproblems. We show that when plugging the well-known simple evolutionary multi-objective optimizer (SEMO) into APS as a subprocedure, ε-dominance has to be used to achieve fast convergence to the Pareto front. Two general theorems are presented indicating how runtime complexity results for APS can be derived from corresponding results for SEMO. This may be a starting point for the runtime analysis of evolutionary SMOCO algorithms.

  19. Application of an evolutionary algorithm in the optimal design of micro-sensor.

    PubMed

    Lu, Qibing; Wang, Pan; Guo, Sihai; Sheng, Buyun; Liu, Xingxing; Fan, Zhun

    2015-01-01

    This paper introduces an automatic bond graph design method based on genetic programming for the evolutionary design of micro-resonators. First, the system-level behavioral model, based on genetic programming and bond graphs, is discussed. Then, the geometry parameters of components are automatically optimized using a genetic algorithm with constraints. To illustrate this approach, a typical device, a micro-resonator, is designed as a biomedical example. This paper provides a new idea for the automatic optimization design of biomedical sensors by evolutionary computation.

  20. Progress implementing a model-based iterative reconstruction algorithm for ultrasound imaging of thick concrete

    NASA Astrophysics Data System (ADS)

    Almansouri, Hani; Johnson, Christi; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2017-02-01

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures are fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for the direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.
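
    The MBIR cost function described here can be sketched as a data-fidelity term from the forward model plus a regularizing prior term. The quadratic Gaussian-noise and smoothness choices below are generic stand-ins for the paper's probabilistic models, and the plain gradient-descent optimizer is illustrative only.

    ```python
    import numpy as np

    def mbir_cost(x, A, y, lam):
        """Negative log-posterior: forward (data) model plus prior model."""
        data_term = 0.5 * np.sum((y - A @ x) ** 2)        # -log P(y|x), Gaussian noise
        prior_term = 0.5 * lam * np.sum(np.diff(x) ** 2)  # -log P(x), smoothness prior
        return data_term + prior_term

    def mbir_reconstruct(A, y, lam=0.1, step=1e-3, n_iters=500):
        n = A.shape[1]
        D = np.diff(np.eye(n), axis=0)                    # first-difference operator
        x = np.zeros(n)
        for _ in range(n_iters):                          # plain gradient descent
            grad = A.T @ (A @ x - y) + lam * (D.T @ (D @ x))
            x -= step * grad
        return x
    ```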

  1. Progress Implementing a Model-Based Iterative Reconstruction Algorithm for Ultrasound Imaging of Thick Concrete

    SciTech Connect

    Almansouri, Hani; Johnson, Christi R; Clayton, Dwight A; Polsky, Yarom; Bouman, Charlie; Santos-Villalobos, Hector J

    2017-01-01

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures are fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for the direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.

  2. A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems.

    PubMed

    Cao, Leilei; Xu, Lihong; Goodman, Erik D

    2016-01-01

    A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual served as a guide to attract offspring to its region of genotype space. Mutation was added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism was applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared.

  3. A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems

    PubMed Central

    Cao, Leilei; Xu, Lihong; Goodman, Erik D.

    2016-01-01

    A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual served as a guide to attract offspring to its region of genotype space. Mutation was added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism was applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared. PMID:27293421

  4. Learning deterministic finite automata with a smart state labeling evolutionary algorithm.

    PubMed

    Lucas, Simon M; Reynolds, T Jeff

    2005-07-01

    Learning a Deterministic Finite Automaton (DFA) from a training set of labeled strings is a hard task that has been much studied within the machine learning community. It is equivalent to learning a regular language by example and has applications in language modeling. In this paper, we describe a novel evolutionary method for learning DFA that evolves only the transition matrix and uses a simple deterministic procedure to optimally assign state labels. We compare its performance with the Evidence Driven State Merging (EDSM) algorithm, one of the most powerful known DFA learning algorithms. We present results on random DFA induction problems of varying target size and training set density. We also study the effects of noisy training data on the evolutionary approach and on EDSM. On noise-free data, we find that our evolutionary method outperforms EDSM on small sparse data sets. In the case of noisy training data, we find that our evolutionary method consistently outperforms EDSM, as well as other significant methods submitted to two recent competitions.
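
    A hedged sketch of the smart state-labeling idea: the EA evolves only the transition matrix, and each state's accept/reject label is then assigned optimally by majority vote over the training strings that terminate in it. Strings are assumed here to be sequences of integer symbols; the names are illustrative, not the authors' exact implementation.

    ```python
    import numpy as np

    def dfa_fitness(trans, strings, labels, n_states):
        """Training accuracy of a transition matrix under optimal state labeling.

        trans[state, symbol] -> next state; the start state is 0.
        labels is a 0/1 array giving each training string's class.
        """
        end_states = []
        for s in strings:
            q = 0
            for sym in s:
                q = trans[q, sym]
            end_states.append(q)
        end_states = np.array(end_states)
        correct = 0
        for q in range(n_states):
            here = labels[end_states == q]
            if len(here):
                # Optimal label for this state = majority class among its strings.
                correct += max(np.sum(here == 1), np.sum(here == 0))
        return correct / len(strings)
    ```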

  5. THE APPLICATION OF AN EVOLUTIONARY ALGORITHM TO THE OPTIMIZATION OF A MESOSCALE METEOROLOGICAL MODEL

    SciTech Connect

    Werth, D.; O'Steen, L.

    2008-02-11

    We show that a simple evolutionary algorithm can optimize a set of mesoscale atmospheric model parameters with respect to agreement between the mesoscale simulation and a limited set of synthetic observations. This is illustrated using the Regional Atmospheric Modeling System (RAMS). A set of 23 RAMS parameters is optimized by minimizing a cost function based on the root mean square (rms) error between the RAMS simulation and synthetic data (observations derived from a separate RAMS simulation). We find that the optimization can be efficient with relatively modest computer resources, so operational implementation is possible. The optimization efficiency, however, is found to depend strongly on the procedure used to perturb the 'child' parameters relative to their 'parents' within the evolutionary algorithm. In addition, the meteorological variables included in the rms error and their weighting are found to be an important factor with respect to finding the global optimum.

  6. Models of performance of evolutionary program induction algorithms based on indicators of problem difficulty.

    PubMed

    Graff, Mario; Poli, Riccardo; Flores, Juan J

    2013-01-01

    Modeling the behavior of algorithms is the realm of evolutionary algorithm theory. From a practitioner's point of view, theory must provide some guidelines regarding which algorithm/parameters to use in order to solve a particular problem. Unfortunately, most theoretical models of evolutionary algorithms are difficult to apply to realistic situations. However, in recent work (Graff and Poli, 2008, 2010), where we developed a method to practically estimate the performance of evolutionary program-induction algorithms (EPAs), we started addressing this issue. The method was quite general; however, it suffered from some limitations: it required the identification of a set of reference problems, it required hand picking a distance measure in each particular domain, and the resulting models were opaque, typically being linear combinations of 100 features or more. In this paper, we propose a significant improvement of this technique that overcomes the three limitations of our previous method. We achieve this through the use of a novel set of features for assessing problem difficulty for EPAs which are very general, essentially based on the notion of finite difference. To show the capabilities of our technique and to compare it with our previous performance models, we create models for the same two important classes of problems (symbolic regression on rational functions and Boolean function induction) used in our previous work. We model a variety of EPAs. The comparison showed that for the majority of the algorithms and problem classes, the new method produced much simpler and more accurate models than before. To further illustrate the practicality of the technique and its generality (beyond EPAs), we have also used it to predict the performance of both autoregressive models and EPAs on the problem of wind speed forecasting, obtaining simpler and more accurate models that outperform in all cases our previous performance models.

  7. Scheduling for the National Hockey League Using a Multi-objective Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Craig, Sam; While, Lyndon; Barone, Luigi

    We describe a multi-objective evolutionary algorithm that derives schedules for the National Hockey League according to three objectives: minimising the teams' total travel, promoting equity in rest time between games, and minimising long streaks of home or away games. Experiments show that the system is able to derive schedules that beat the 2008-9 NHL schedule in all objectives simultaneously, and that it returns a set of schedules that offer a range of trade-offs across the objectives.

  8. XTALOPT version r9: An open-source evolutionary algorithm for crystal structure prediction

    NASA Astrophysics Data System (ADS)

    Falls, Zackary; Lonie, David C.; Avery, Patrick; Shamp, Andrew; Zurek, Eva

    2016-02-01

    A new version of XTALOPT, an evolutionary algorithm for crystal structure prediction, is available for download from the CPC library or the XTALOPT website, http://xtalopt.github.io. XTALOPT is published under the GNU General Public License (GPL), an open-source license recognized by the Open Source Initiative. The new version incorporates many bug-fixes and new features, as detailed below.

  9. Optimal Wavelengths Selection Using Hierarchical Evolutionary Algorithm for Prediction of Firmness and Soluble Solids Content in Apples

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral scattering is a promising technique for rapid and noninvasive measurement of multiple quality attributes of apple fruit. A hierarchical evolutionary algorithm (HEA) approach, in combination with subspace decomposition and partial least squares (PLS) regression, was proposed to select o...

  10. A diagnostic assessment of evolutionary algorithms for multi-objective surface water reservoir control

    NASA Astrophysics Data System (ADS)

    Zatarain Salazar, Jazmin; Reed, Patrick M.; Herman, Jonathan D.; Giuliani, Matteo; Castelletti, Andrea

    2016-06-01

    Globally, the pressures of expanding populations, climate change, and increased energy demands are motivating significant investments in re-operationalizing existing reservoirs or designing operating policies for new ones. These challenges require an understanding of the tradeoffs that emerge across the complex suite of multi-sector demands in river basin systems. This study benchmarks our current capabilities to use Evolutionary Multi-Objective Direct Policy Search (EMODPS), a decision analytic framework in which reservoirs' candidate operating policies are represented using parameterized global approximators (e.g., radial basis functions), and those parameterized functions are then optimized using multi-objective evolutionary algorithms to discover the Pareto-approximate operating policies. We contribute a comprehensive diagnostic assessment of modern MOEAs' abilities to support EMODPS using the Conowingo reservoir in the Lower Susquehanna River Basin, Pennsylvania, USA. Our diagnostic results highlight that EMODPS can be very challenging for some modern MOEAs and that epsilon dominance, time-continuation, and auto-adaptive search are helpful for attaining high levels of performance. The ɛ-MOEA, the auto-adaptive Borg MOEA, and ɛ-NSGAII all yielded superior results for the six-objective Lower Susquehanna benchmarking test case. The top algorithms show low sensitivity to different MOEA parameterization choices and high algorithmic reliability in attaining consistent results for different random MOEA trials. Overall, EMODPS poses a promising method for discovering key reservoir management tradeoffs; however, algorithmic choice remains a key concern for problems of increasing complexity.

  11. Evolutionary algorithm based offline/online path planner for UAV navigation.

    PubMed

    Nikolos, I K; Valavanis, K P; Tsourveloudis, N C; Kostaras, A N

    2003-01-01

    An evolutionary algorithm based framework, a combination of modified breeder genetic algorithms incorporating characteristics of classic genetic algorithms, is utilized to design an offline/online path planner for autonomous unmanned aerial vehicle (UAV) navigation. The path planner calculates a curved path line with desired characteristics in a three-dimensional (3-D) rough terrain environment, represented using B-spline curves, with the coordinates of its control points serving as the genes of the evolutionary algorithm's artificial chromosome. Given a 3-D rough environment and assuming flight envelope restrictions, two problems are solved: i) UAV navigation using an offline planner in a known environment, and ii) UAV navigation using an online planner in a completely unknown environment. The offline planner produces a single B-spline curve that connects the starting and target points with a predefined initial direction. The online planner, based on the offline one, uses on-board radar readings to gradually produce a smooth 3-D trajectory aiming at reaching a predetermined target in an unknown environment; the produced trajectory consists of smaller B-spline curves smoothly connected with each other. Both planners have been tested under different scenarios, and they have been proven effective in guiding a UAV to its final destination, providing near-optimal curved paths quickly and efficiently.
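
    A minimal sketch of the chromosome-to-path decoding just described, assuming the genes are the 3-D coordinates of the interior control points and a clamped cubic B-spline so the curve starts and ends exactly at the given points; the helper names are illustrative.

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    def decode_path(genes, start, target, k=3, n_samples=200):
        """Turn a flat gene vector (len divisible by 3) into sampled path points."""
        ctrl = np.vstack([start, genes.reshape(-1, 3), target])  # control polygon
        n = len(ctrl)                                            # needs n >= k + 1
        # Clamped uniform knot vector: the curve interpolates both endpoints.
        t = np.concatenate((np.zeros(k), np.linspace(0, 1, n - k + 1), np.ones(k)))
        spline = BSpline(t, ctrl, k)
        u = np.linspace(0, 1, n_samples)
        return spline(u)                      # (n_samples, 3) array of path points
    ```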

  12. ETEA: a Euclidean minimum spanning tree-based evolutionary algorithm for multi-objective optimization.

    PubMed

    Li, Miqing; Yang, Shengxiang; Zheng, Jinhua; Liu, Xiaohui

    2014-01-01

    The Euclidean minimum spanning tree (EMST), widely used in a variety of domains, is a minimum spanning tree of a set of points in space where the edge weight between each pair of points is their Euclidean distance. Since the generation of an EMST is entirely determined by the Euclidean distance between solutions (points), the properties of EMSTs have a close relation with the distribution and position information of solutions. This paper explores the properties of EMSTs and proposes an EMST-based evolutionary algorithm (ETEA) to solve multi-objective optimization problems (MOPs). Unlike most evolutionary multi-objective optimization (EMO) algorithms that focus on the Pareto dominance relation, the proposed algorithm mainly considers distance-based measures to evaluate and compare individuals during the evolutionary search. Specifically, in ETEA, four strategies are introduced: (1) An EMST-based crowding distance (ETCD) is presented to estimate the density of individuals in the population; (2) A distance comparison approach incorporating ETCD is used to assign the fitness value for individuals; (3) A fitness adjustment technique is designed to avoid the partial overcrowding in environmental selection; (4) Three diversity indicators (the minimum edge, degree, and ETCD) with regard to EMSTs are applied to determine the survival of individuals in archive truncation. From a series of extensive experiments on 32 test instances with different characteristics, ETEA is found to be competitive against five state-of-the-art algorithms and its predecessor in providing a good balance among convergence, uniformity, and spread.
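
    A sketch in the spirit of strategy (1): build the Euclidean minimum spanning tree over the individuals' objective vectors and use the lengths of each individual's incident edges as a crowding measure (longer edges mean a less crowded region). ETEA's exact ETCD definition may differ; this is only an assumption-laden illustration.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import minimum_spanning_tree

    def emst_crowding(objs):
        """Per-individual density estimate from the EMST over objective vectors."""
        dist = squareform(pdist(objs))                 # pairwise Euclidean distances
        mst = minimum_spanning_tree(dist).toarray()
        mst = mst + mst.T                              # symmetrize the edge matrix
        crowd = np.zeros(len(objs))
        for i in range(len(objs)):
            edges = mst[i][mst[i] > 0]                 # edges incident to individual i
            crowd[i] = edges.mean() if len(edges) else np.inf
        return crowd                                   # larger = more isolated
    ```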

  13. Cubic time algorithms of amalgamating gene trees and building evolutionary scenarios

    PubMed Central

    2012-01-01

    Background A long recognized problem is the inference of the supertree S that amalgamates a given set {Gj} of trees Gj, with leaves in each Gj being assigned homologous elements. We build on an approach that finds the tree S by minimizing the total cost of mappings αj of individual gene trees Gj into S. Traditionally, this cost is defined basically as a sum of duplications and gaps in each αj. The classical problem is to minimize the total cost, where S runs over the set of all trees that contain an exhaustive non-redundant set of species from all input Gj. Results We suggest a reformulation of the classical NP-hard problem of building a supertree in terms of the global minimization of the same cost functional but only over species trees S that consist of clades belonging to a fixed set P (e.g., an exhaustive set of clades in all Gj). We developed a deterministic solving algorithm with a low degree polynomial (typically cubic) time complexity with respect to the size of the input data. We define an extensive set of elementary evolutionary events and suggest an original definition of mapping β of tree G into tree S. We introduce the cost functional c(G, S, f) and define the mapping β as the global minimum of this functional with respect to the variable f, in which sense it is a generalization of the classical mapping α. We suggest a reformulation of the classical NP-hard mapping (reconciliation) problem by introducing time slices into the species tree S and present a cubic time solving algorithm to compute the mapping β. We introduce two novel definitions of the evolutionary scenario based on mapping β or a random process of gene evolution along a species tree. Conclusions The developed algorithms are mathematically proven, which justifies the following statements. The supertree building algorithm finds exactly the global minimum of the total cost if only gene duplications and losses are allowed and the given sets of gene trees satisfy a certain condition. The mapping

  14. Design and Optimization of Low-thrust Orbit Transfers Using Q-law and Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; vonAllmen, Paul; Fink, Wolfgang; Petropoulos, Anastassios; Terrile, Richard

    2005-01-01

    Future space missions will depend more on low-thrust propulsion (such as ion engines) thanks to its high specific impulse. Yet, the design of low-thrust trajectories is complex and challenging. Third-body perturbations often dominate the thrust, and a significant change to the orbit requires a long duration of thrust. In order to guide the early design phases, we have developed an efficient and efficacious method to obtain approximate propellant and flight-time requirements (i.e., the Pareto front) for orbit transfers. A search for the Pareto-optimal trajectories is done in two levels: optimal thrust angles and locations are determined by Q-law, while the Q-law is optimized with two evolutionary algorithms: a genetic algorithm and a simulated-annealing-related algorithm. The examples considered are several types of orbit transfers around the Earth and the asteroid Vesta.

  15. Classification of Medical Datasets Using SVMs with Hybrid Evolutionary Algorithms Based on Endocrine-Based Particle Swarm Optimization and Artificial Bee Colony Algorithms.

    PubMed

    Lin, Kuan-Cheng; Hsieh, Yi-Hsiu

    2015-10-01

    The classification and analysis of data is an important issue in today's research. Selecting a suitable set of features makes it possible to classify an enormous quantity of data quickly and efficiently. Feature selection is generally viewed as a problem of feature subset selection, akin to combinatorial optimization problems. Evolutionary algorithms using random search methods have proven highly effective in obtaining solutions to optimization problems in a diversity of applications. In this study, we developed a hybrid evolutionary algorithm based on endocrine-based particle swarm optimization (EPSO) and artificial bee colony (ABC) algorithms in conjunction with a support vector machine (SVM) for the selection of optimal feature subsets for the classification of datasets. The results of experiments using specific UCI medical datasets demonstrate that the accuracy of the proposed hybrid evolutionary algorithm is superior to that of basic PSO, EPSO and ABC algorithms, with regard to classification accuracy using subsets with a reduced number of features.
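
    The wrapper evaluation at the core of such methods can be sketched as follows: a candidate feature subset (the binary mask an EPSO/ABC individual encodes) is scored by cross-validated SVM accuracy, optionally penalized by subset size. The swarm and colony update rules are omitted, and all names and the penalty weight are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def subset_fitness(mask, X, y, penalty=0.01):
        """Score a binary feature mask by cross-validated SVM accuracy."""
        if not mask.any():
            return 0.0                                  # empty subsets are invalid
        acc = cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y,
                              cv=5).mean()
        return acc - penalty * mask.sum() / len(mask)   # reward smaller subsets
    ```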

  16. Introducing Elitist Black-Box Models: When Does Elitist Behavior Weaken the Performance of Evolutionary Algorithms?

    PubMed

    Doerr, Carola; Lengler, Johannes

    2016-10-04

    Black-box complexity theory provides lower bounds for the runtime of black-box optimizers like evolutionary algorithms and other search heuristics and serves as an inspiration for the design of new genetic algorithms. Several black-box models covering different classes of algorithms exist, each highlighting a different aspect of the algorithms under consideration. In this work we add to the existing black-box notions a new elitist black-box model, in which algorithms are required to base all decisions solely on (the relative performance of) a fixed number of the best search points sampled so far. Our elitist model thus combines features of the ranking-based and the memory-restricted black-box models with an enforced usage of truncation selection. We provide several examples for which the elitist black-box complexity is exponentially larger than that of the respective complexities in all previous black-box models, thus showing that the elitist black-box complexity can be much closer to the runtime of typical evolutionary algorithms. We also introduce the concept of p-Monte Carlo black-box complexity, which measures the time it takes to optimize a problem with failure probability at most p. Even for small p, the p-Monte Carlo black-box complexity of a function class F can be smaller by an exponential factor than its typically regarded Las Vegas complexity (which measures the expected time it takes to optimize F).

  17. Tuning of MEMS Gyroscope using Evolutionary Algorithm and "Switched Drive-Angle" Method

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Ferguson, Michael I.; Breuer, Luke; Peay, Chris; Oks, Boris; Cheng, Yen; Kim, Dennis; MacDonald, Eric; Foor, David; Terrile, Rich; Yee, Karl

    2006-01-01

    We propose a tuning method for Micro-Electro-Mechanical Systems (MEMS) gyroscopes based on evolutionary computation that has the capacity to efficiently increase the sensitivity of MEMS gyroscopes through tuning and, furthermore, to find the optimally tuned configuration for this state of increased sensitivity. We present the results of an experiment to determine the speed and efficiency of an evolutionary algorithm applied to electrostatic tuning of MEMS micro gyros. The MEMS gyro used in this experiment is a Pyrex post resonator gyro (PRG) in a closed-loop control system. A measure of the quality of tuning is given by the difference in resonant frequencies, or frequency split, for the two orthogonal rocking axes. The current implementation of the closed-loop platform is able to measure and attain a relative stability in the sub-millihertz range, leading to a reduction of the frequency split to less than 100 mHz.

  18. A Gaze-Driven Evolutionary Algorithm to Study Aesthetic Evaluation of Visual Symmetry.

    PubMed

    Makin, Alexis D J; Bertamini, Marco; Jones, Andrew; Holmes, Tim; Zanker, Johannes M

    2016-03-01

    Empirical work has shown that people like visual symmetry. We used a gaze-driven evolutionary algorithm technique to answer three questions about symmetry preference. First, do people automatically evaluate symmetry without explicit instruction? Second, is perfect symmetry the best stimulus, or do people prefer a degree of imperfection? Third, does initial preference for symmetry diminish after familiarity sets in? Stimuli were generated as phenotypes from an algorithmic genotype, with genes for symmetry (coded as deviation from a symmetrical template, deviation-symmetry, DS gene) and orientation (0° to 90°, orientation, ORI gene). An eye tracker identified phenotypes that were good at attracting and retaining the gaze of the observer. Resulting fitness scores determined the genotypes that passed to the next generation. We recorded changes to the distribution of DS and ORI genes over 20 generations. When participants looked for symmetry, there was an increase in high-symmetry genes. When participants looked for the patterns they preferred, there was a smaller increase in symmetry, indicating that people tolerated some imperfection. Conversely, there was no increase in symmetry during free viewing, and no effect of familiarity or orientation. This work demonstrates the viability of the evolutionary algorithm approach as a quantitative measure of aesthetic preference.

  19. A multi-objective evolutionary algorithm for protein structure prediction with immune operators.

    PubMed

    Judy, M V; Ravichandran, K S; Murugesan, K

    2009-08-01

    Genetic algorithms (GA) are often well suited for optimisation problems involving several conflicting objectives. It is more suitable to model the protein structure prediction problem as a multi-objective optimisation problem, since the potential energy functions used in the literature to evaluate the conformation of a protein, such as Chemistry at Harvard Macromolecular Mechanics (CHARMM), are based on the calculation of two different interaction energies: local (bonded atoms) and non-local (non-bonded atoms), and experiments have shown that those two types of interactions are in conflict. In this paper, we have modified the immune inspired Pareto archived evolutionary strategy (I-PAES) algorithm and denoted it as MI-PAES. It can effectively exploit some prior knowledge about the hydrophobic interactions, one of the most important driving forces in protein folding, to construct vaccines. The proposed MI-PAES is comparable with other evolutionary algorithms proposed in the literature, both in terms of the best solution found and the computational time, and often exhibits much better search ability than the canonical GA.

  20. [In Silico Drug Design Using an Evolutionary Algorithm and Compound Database].

    PubMed

    Kawai, Kentaro; Takahashi, Yoshimasa

    2016-01-01

      Computational drug design plays an important role in the discovery of new drugs. Recently, we proposed an algorithm for designing new drug-like molecules utilizing the structure of a known active molecule. To design molecules, three types of fragments (ring, linker, and side-chain fragments) were defined as building blocks, and a fragment library was prepared from molecules listed in G protein-coupled receptor (GPCR)-SARfari database. An evolutionary algorithm which executes evolutionary operations, such as crossover, mutation, and selection, was implemented to evolve the molecules. As a case study, some GPCRs were selected for computational experiments in which we tried to design ligands from simple seed fragments using the Tanimoto coefficient as a fitness function. The results showed that the algorithm could be used successfully to design new molecules with structural similarity, scaffold variety, and chemical validity. In addition, a docking study revealed that these designed molecules also exhibited shape complementarity with the binding site of the target protein. Therefore, this is expected to become a powerful tool for designing new drug-like molecules in drug discovery projects.
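
    The Tanimoto fitness mentioned above can be sketched without a chemistry toolkit by treating each molecule's fingerprint as a set of on-bits; in practice the fingerprints would come from the fragment-assembled candidate molecules, and everything here is an illustrative assumption rather than the authors' implementation.

    ```python
    def tanimoto(fp_a, fp_b):
        """Tanimoto coefficient of two fingerprints given as sets of on-bits."""
        inter = len(fp_a & fp_b)
        union = len(fp_a) + len(fp_b) - inter
        return inter / union if union else 0.0

    def similarity_fitness(candidate_fp, reference_fp):
        # Drive the evolution toward structural similarity with the known active.
        return tanimoto(candidate_fp, reference_fp)

    # e.g. tanimoto({1, 4, 9}, {1, 4, 7}) == 2 / 4 == 0.5
    ```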

  1. Optimising operational amplifiers by evolutionary algorithms and gm/Id method

    NASA Astrophysics Data System (ADS)

    Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.

    2016-10-01

    The evolutionary algorithm called the non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step for rounding off their values to multiples of the integrated circuit fabrication technology grid. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L sizes remain robust under process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of the evolutionary algorithm NSGA-II, while the second optimisation stage guarantees robustness of the feasible solutions to PVT variations.
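
    The integer encoding can be sketched as genes that count multiples of a technology grid, so decoded W/L values are legal by construction and never need post-hoc rounding; the 0.18 um grid below is an assumed example value, not taken from the paper.

    ```python
    GRID = 0.18e-6  # assumed fabrication grid in metres (illustrative)

    def decode_sizes(genes):
        """Map integer genes directly to legal MOSFET widths/lengths."""
        return [g * GRID for g in genes]

    # e.g. decode_sizes([2, 5, 11]) -> [0.36e-6, 0.9e-6, 1.98e-6]
    ```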

  2. A Gaze-Driven Evolutionary Algorithm to Study Aesthetic Evaluation of Visual Symmetry

    PubMed Central

    Bertamini, Marco; Jones, Andrew; Holmes, Tim; Zanker, Johannes M.

    2016-01-01

    Empirical work has shown that people like visual symmetry. We used a gaze-driven evolutionary algorithm technique to answer three questions about symmetry preference. First, do people automatically evaluate symmetry without explicit instruction? Second, is perfect symmetry the best stimulus, or do people prefer a degree of imperfection? Third, does initial preference for symmetry diminish after familiarity sets in? Stimuli were generated as phenotypes from an algorithmic genotype, with genes for symmetry (coded as deviation from a symmetrical template, deviation–symmetry, DS gene) and orientation (0° to 90°, orientation, ORI gene). An eye tracker identified phenotypes that were good at attracting and retaining the gaze of the observer. Resulting fitness scores determined the genotypes that passed to the next generation. We recorded changes to the distribution of DS and ORI genes over 20 generations. When participants looked for symmetry, there was an increase in high-symmetry genes. When participants looked for the patterns they preferred, there was a smaller increase in symmetry, indicating that people tolerated some imperfection. Conversely, there was no increase in symmetry during free viewing, and no effect of familiarity or orientation. This work demonstrates the viability of the evolutionary algorithm approach as a quantitative measure of aesthetic preference. PMID:27433324

  3. A multiobjective evolutionary algorithm based on similarity for community detection from signed social networks.

    PubMed

    Liu, Chenlong; Liu, Jing; Jiang, Zhongzhou

    2014-12-01

    Various types of social relationships, such as friends and foes, can be represented as signed social networks (SNs) that contain both positive and negative links. Although many community detection (CD) algorithms have been proposed, most of them were designed primarily for networks containing only positive links. Thus, it is important to design CD algorithms that can handle large-scale signed networks. To this end, we first extend the original similarity to the signed similarity based on the social balance theory. Then, based on the signed similarity and the natural contradiction between positive and negative links, two objective functions are designed to model the problem of detecting communities in SNs as a multiobjective problem. Afterward, we propose a multiobjective evolutionary algorithm, called MEAs-SN. In MEAs-SN, to overcome the defects of direct and indirect representations for communities, a direct and indirect combined representation is designed. Attributing to this representation, MEAs-SN can switch between different representations during the evolutionary process. As a result, MEAs-SN can benefit from both representations. Moreover, owing to this representation, MEAs-SN can also detect overlapping communities directly. In the experiments, both benchmark problems and large-scale synthetic networks generated by various parameter settings are used to validate the performance of MEAs-SN. The experimental results show the effectiveness and efficacy of MEAs-SN on networks with 1000, 5000, and 10,000 nodes and also in various noisy situations. A thorough comparison is also made between MEAs-SN and three existing algorithms, and the results show that MEAs-SN outperforms other algorithms.

  4. An evolutionary algorithm technique for intelligence, surveillance, and reconnaissance plan optimization

    NASA Astrophysics Data System (ADS)

    Langton, John T.; Caroli, Joseph A.; Rosenberg, Brad

    2008-04-01

    To support an Effects Based Approach to Operations (EBAO), Intelligence, Surveillance, and Reconnaissance (ISR) planners must optimize collection plans within an evolving battlespace. A need exists for a decision support tool that allows ISR planners to rapidly generate and rehearse high-performing ISR plans that balance multiple objectives and constraints to address dynamic collection requirements for assessment. To meet this need we have designed an evolutionary algorithm (EA)-based "Integrated ISR Plan Analysis and Rehearsal System" (I2PARS) to support Effects-based Assessment (EBA). I2PARS supports ISR mission planning and dynamic replanning to coordinate assets and optimize their routes, allocation and tasking. It uses an evolutionary algorithm to address the large parametric space of route-finding problems which is sometimes discontinuous in the ISR domain because of conflicting objectives such as minimizing asset utilization yet maximizing ISR coverage. EAs are uniquely suited for generating solutions in dynamic environments and also allow user feedback. They are therefore ideal for "streaming optimization" and dynamic replanning of ISR mission plans. I2PARS uses the Non-dominated Sorting Genetic Algorithm (NSGA-II) to automatically generate a diverse set of high performing collection plans given multiple objectives, constraints, and assets. Intended end users of I2PARS include ISR planners in the Combined Air Operations Centers and Joint Intelligence Centers. Here we show the feasibility of applying the NSGA-II algorithm and EAs in general to the ISR planning domain. Unique genetic representations and operators for optimization within the ISR domain are presented along with multi-objective optimization criteria for ISR planning. Promising results of the I2PARS architecture design, early software prototype, and limited domain testing of the new algorithm are discussed. We also present plans for future research and development, as well as technology

  5. Creating ensembles of oblique decision trees with evolutionary algorithms and sampling

    DOEpatents

    Cantu-Paz, Erick; Kamath, Chandrika

    2006-06-13

    A decision tree system is described that is part of a parallel object-oriented pattern recognition system, which in turn is part of an object-oriented data mining system. A decision tree process includes the step of reading the data. If necessary, the data is sorted. A potential split of the data is evaluated according to some criterion. An initial split of the data is determined. The final split of the data is determined using evolutionary algorithms and statistical sampling techniques. The data is split. Multiple decision trees are combined in ensembles.

  6. Searching for the Optimal Working Point of the MEIC at JLab Using an Evolutionary Algorithm

    SciTech Connect

    Balsa Terzic, Matthew Kramer, Colin Jarvis

    2011-03-01

    The Medium-energy Electron Ion Collider (MEIC) is a proposed medium-energy ring-ring electron-ion collider based on CEBAF at Jefferson Lab. The collider luminosity and stability are sensitive to the choice of a working point - the betatron and synchrotron tunes of the two colliding beams. Therefore, a careful selection of the working point is essential for stable operation of the collider, as well as for achieving high luminosity. Here we describe a novel approach for locating an optimal working point based on evolutionary algorithm techniques.

  7. A Data-Driven Evolutionary Algorithm for Mapping Multibasin Protein Energy Landscapes.

    PubMed

    Clausen, Rudy; Shehu, Amarda

    2015-09-01

    Evidence is emerging that many proteins involved in proteinopathies are dynamic molecules switching between stable and semistable structures to modulate their function. A detailed understanding of the relationship between structure and function in such molecules demands a comprehensive characterization of their conformation space. Currently, only stochastic optimization methods are capable of exploring conformation spaces to obtain sample-based representations of associated energy surfaces. These methods have to address the fundamental but challenging issue of balancing computational resources between exploration (obtaining a broad view of the space) and exploitation (going deep in the energy surface). We propose a novel algorithm that strikes an effective balance by employing concepts from evolutionary computation. The algorithm leverages deposited crystal structures of wildtype and variant sequences of a protein to define a reduced, low-dimensional search space from where to rapidly draw samples. A multiscale technique maps samples to local minima of the all-atom energy surface of a protein under investigation. Several novel algorithmic strategies are employed to avoid premature convergence to particular minima and obtain a broad view of a possibly multibasin energy surface. Analysis of applications on different proteins demonstrates the broad utility of the algorithm to map multibasin energy landscapes and advance modeling of multibasin proteins. In particular, applications on wildtype and variant sequences of proteins involved in proteinopathies demonstrate that the algorithm makes an important first step toward understanding the impact of sequence mutations on misfunction by providing the energy landscape as the intermediate explanatory link between protein sequence and function.

  8. Decomposition-Based Multiobjective Evolutionary Algorithm for Community Detection in Dynamic Social Networks

    PubMed Central

    Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng

    2014-01-01

    Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms. PMID:24723806

  9. Environment Sensitivity-based Cooperative Co-evolutionary Algorithms for Dynamic Multi-objective Optimization.

    PubMed

    Xu, Biao; Zhang, Yong; Gong, Dunwei; Guo, Yinan; Rong, Miao

    2017-01-16

    Dynamic multi-objective optimization problems (DMOPs) not only involve multiple conflicting objectives, but these objectives may also vary with time, raising a challenge for researchers to solve them. This paper presents a cooperative co-evolutionary strategy based on environment sensitivities for solving DMOPs. In this strategy, a new method that groups decision variables is first proposed, in which all the decision variables are partitioned into two subcomponents according to their interrelation with the environment. Adopting two populations to cooperatively optimize the two subcomponents, two prediction methods, i.e., differential prediction and Cauchy mutation, are then employed respectively to speed up their responses to changes in the environment. Furthermore, two improved dynamic multi-objective optimization algorithms, i.e., DNSGAII-CO and DMOPSO-CO, are proposed by incorporating the above strategy into NSGA-II and multi-objective particle swarm optimization, respectively. The proposed algorithms are compared with three state-of-the-art algorithms by applying them to seven benchmark DMOPs. Experimental results reveal that the proposed algorithms significantly outperform the compared algorithms in terms of convergence and distribution on most DMOPs.
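
    The two response mechanisms named above can be sketched as follows, with the variable grouping omitted: differential prediction extrapolates the movement of past optima after an environment change, while heavy-tailed Cauchy mutation re-diversifies the other subpopulation. Both functions are illustrative sketches, not the paper's exact operators.

    ```python
    import numpy as np

    def differential_prediction(prev_opt, curr_opt):
        # Extrapolate the optimum's motion: next ~ current + (current - previous).
        return curr_opt + (curr_opt - prev_opt)

    def cauchy_mutation(x, scale=0.1):
        # Heavy tails give occasional long jumps, helping escape outdated optima.
        return x + scale * np.random.standard_cauchy(size=x.shape)
    ```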

  10. Improved evolutionary algorithm for the global optimization of clusters with competing attractive and repulsive interactions

    NASA Astrophysics Data System (ADS)

    Cruz, S. M. A.; Marques, J. M. C.; Pereira, F. B.

    2016-10-01

    We propose improvements to our evolutionary algorithm (EA) [J. M. C. Marques and F. B. Pereira, J. Mol. Liq. 210, 51 (2015)] in order to avoid dissociative solutions in the global optimization of clusters with competing attractive and repulsive interactions. The improved EA outperforms the original version of the method for charged colloidal clusters in the size range 3 ≤ N ≤ 25, which is a very stringent test for global optimization algorithms. While the Bernal spiral is the global minimum for clusters in the interval 13 ≤ N ≤ 18, the lowest-energy structure is a peculiar, so-called beaded-necklace, motif for 19 ≤ N ≤ 25. We have also applied the method for larger sizes and unusual quasi-linear and branched clusters arise as low-energy structures.

  11. Comparison of Multiobjective Evolutionary Algorithms for Operations Scheduling under Machine Availability Constraints

    PubMed Central

    Frutos, M.; Méndez, M.; Tohmé, F.; Broz, D.

    2013-01-01

    Many of the problems that arise in production systems can be handled with multiobjective techniques. One of those problems is that of scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real world Job-Shop contexts using three algorithms, NSGAII, SPEA2, and IBEA. Using two performance indexes, Hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be a better choice of tool since it yields more solutions in the approximate Pareto frontier.

  12. A novel evolutionary algorithm applied to algebraic modifications of the RANS stress-strain relationship

    NASA Astrophysics Data System (ADS)

    Weatheritt, Jack; Sandberg, Richard

    2016-11-01

    This paper presents a novel and promising approach to turbulence model formulation, rather than putting forward a particular new model. Evolutionary computation has brought symbolic regression of scalar fields into the domain of algorithms, and this paper describes a novel expansion of Gene Expression Programming for the purpose of tensor modeling. By utilizing high-fidelity data and uncertainty measures, mathematical models for tensors are created. The philosophy behind the framework is to give the algorithm freedom to produce a constraint-free model, with its own functional form rather than one imposed in advance. Turbulence modeling is the target application, specifically the improvement of separated flow prediction. Models are created by considering the anisotropy of the turbulent stress tensor and formulating non-linear constitutive stress-strain relationships. A previously unseen flow field is computed and compared to the baseline linear model and an established non-linear model of comparable complexity. The results are highly encouraging.

  13. a Model-Based Autofocus Algorithm for Ultrasonic Imaging Using a Flexible Array

    NASA Astrophysics Data System (ADS)

    Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.

    2010-02-01

    Autofocus is a methodology for estimating and correcting errors in the assumed parameters of an imaging algorithm. It provides improved image quality and, therefore, better defect detection and characterization capabilities. In this paper, we present a new autofocus algorithm developed specifically for ultrasonic non-destructive testing and evaluation (NDE). We consider the estimation and correction of errors in the assumed element positions for a flexible ultrasonic array coupled to a specimen with an unknown surface profile. The algorithm performs a weighted least-squares minimization of the time-of-arrival errors in the echo data using assumed models for known features in the specimen. The algorithm is described for point and planar specimen features and demonstrated using experimental data from a flexible array prototype.
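
    The core computation reduces to a weighted least-squares solve. The sketch below recovers parameter corrections from noisy time-of-arrival residuals; the Jacobian, weights, and problem sizes are invented placeholders, not the authors' forward model of the flexible array.

```python
# Hedged sketch of a weighted least-squares autofocus step: solve for
# parameter corrections delta minimising ||W (J delta - r)||^2.
import numpy as np

rng = np.random.default_rng(7)
n_meas, n_par = 40, 6
J = rng.normal(size=(n_meas, n_par))             # TOA sensitivity to params
true_delta = rng.normal(scale=0.1, size=n_par)   # true element-position errors
r = J @ true_delta + 0.01 * rng.normal(size=n_meas)  # measured TOA residuals
W = np.diag(rng.uniform(0.5, 1.0, size=n_meas))  # per-echo confidence weights

delta, *_ = np.linalg.lstsq(W @ J, W @ r, rcond=None)
print(np.round(delta - true_delta, 3))           # estimation error, near zero
```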

  14. Hybrid model based on Genetic Algorithms and SVM applied to variable selection within fruit juice classification.

    PubMed

    Fernandez-Lozano, C; Canto, C; Gestal, M; Andrade-Garda, J M; Rabuñal, J R; Dorado, J; Pazos, A

    2013-01-01

    Given the background of the use of neural networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: the Support Vector Machine (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.

  15. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    PubMed Central

    Fernandez-Lozano, C.; Canto, C.; Gestal, M.; Andrade-Garda, J. M.; Rabuñal, J. R.; Dorado, J.; Pazos, A.

    2013-01-01

    Given the background of the use of neural networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: the Support Vector Machine (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected. PMID:24453933
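
    The wrapper scheme described in this abstract can be sketched in a few lines: a GA evolves binary masks over the input variables, and each mask is scored by the cross-validated accuracy of an SVM trained on the selected variables. The dataset and GA settings below are illustrative stand-ins, not those of the paper.

```python
# Hedged sketch of GA-based variable selection with an SVM fitness.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X, y = load_wine(return_X_y=True)        # stand-in for the juice data
n_vars = X.shape[1]

def fitness(mask):
    # Cross-validated SVM accuracy on the selected variables only.
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=5).mean()

pop = rng.integers(0, 2, size=(20, n_vars)).astype(bool)
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]            # truncation selection
    moms = parents[rng.integers(0, 10, size=10)]
    dads = parents[rng.integers(0, 10, size=10)]
    cross = rng.random((10, n_vars)) < 0.5             # uniform crossover
    children = np.where(cross, moms, dads)
    flips = rng.random(children.shape) < 1.0 / n_vars  # bit-flip mutation
    children = children ^ flips
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected variables:", np.flatnonzero(best))
```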

  16. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    PubMed

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, including its global exploration ability and convergence speed, so how to construct and select the behaviors of fishes is an important task. To address these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization show that the improved algorithm has more powerful global exploration ability and a reasonable convergence speed compared with the standard artificial fish swarm algorithm.
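
    A log-linear behavior selector is essentially a softmax over weighted behavior features. The sketch below uses hypothetical behaviors, weights, and features; the paper's feature design is richer than this illustration.

```python
# Minimal sketch of log-linear (softmax) behavior selection.
import numpy as np

rng = np.random.default_rng(2)

def select_behavior(weights, features):
    # Log-linear model: P(b) proportional to exp(w_b . features).
    logits = weights @ features
    probs = np.exp(logits - logits.max())    # subtract max for stability
    probs /= probs.sum()
    return rng.choice(len(weights), p=probs)

behaviors = ["prey", "swarm", "follow", "random"]   # hypothetical set
weights = rng.normal(size=(4, 3))        # one weight vector per behavior
features = np.array([0.4, 0.1, 0.5])     # e.g. local density, fitness gap
print(behaviors[select_behavior(weights, features)])
```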

  17. Evolutionary Connectionism: Algorithmic Principles Underlying the Evolution of Biological Organisation in Evo-Devo, Evo-Eco and Evolutionary Transitions.

    PubMed

    Watson, Richard A; Mills, Rob; Buckley, C L; Kouvaris, Kostas; Jackson, Adam; Powers, Simon T; Cox, Chris; Tudge, Simon; Davies, Adam; Kounios, Loizos; Power, Daniel

    2016-01-01

    The mechanisms of variation, selection and inheritance, on which evolution by natural selection depends, are not fixed over evolutionary time. Current evolutionary biology is increasingly focussed on understanding how the evolution of developmental organisations modifies the distribution of phenotypic variation, the evolution of ecological relationships modifies the selective environment, and the evolution of reproductive relationships modifies the heritability of the evolutionary unit. The major transitions in evolution, in particular, involve radical changes in developmental, ecological and reproductive organisations that instantiate variation, selection and inheritance at a higher level of biological organisation. However, current evolutionary theory is poorly equipped to describe how these organisations change over evolutionary time and especially how that results in adaptive complexes at successive scales of organisation (the key problem is that evolution is self-referential, i.e. the products of evolution change the parameters of the evolutionary process). Here we first reinterpret the central open questions in these domains from a perspective that emphasises the common underlying themes. We then synthesise the findings from a developing body of work that is building a new theoretical approach to these questions by converting well-understood theory and results from models of cognitive learning. Specifically, connectionist models of memory and learning demonstrate how simple incremental mechanisms, adjusting the relationships between individually-simple components, can produce organisations that exhibit complex system-level behaviours and improve the adaptive capabilities of the system. We use the term "evolutionary connectionism" to recognise that, by functionally equivalent processes, natural selection acting on the relationships within and between evolutionary entities can result in organisations that produce complex system-level behaviours in evolutionary

  18. Spatial multiobjective optimization of agricultural conservation practices using a SWAT model and an evolutionary algorithm.

    PubMed

    Rabotyagov, Sergey; Campbell, Todd; Valcu, Adriana; Gassman, Philip; Jha, Manoj; Schilling, Keith; Wolter, Calvin; Kling, Catherine

    2012-12-09

    Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g., refs. 5, 12, 20) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework where the model becomes an integral part of optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding body of research attempts to use similar methods, integrating water quality models with broadly defined evolutionary optimization methods (refs. 3, 4, 9, 10, 13-15, 17-19, 22, 23, 25). In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates a modern and commonly used SWAT water quality model (ref. 7) with a

  19. Spatial Multiobjective Optimization of Agricultural Conservation Practices using a SWAT Model and an Evolutionary Algorithm

    PubMed Central

    Rabotyagov, Sergey; Campbell, Todd; Valcu, Adriana; Gassman, Philip; Jha, Manoj; Schilling, Keith; Wolter, Calvin; Kling, Catherine

    2012-01-01

    Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g., refs. 5, 12, 20) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework where the model becomes an integral part of optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding body of research attempts to use similar methods, integrating water quality models with broadly defined evolutionary optimization methods (refs. 3, 4, 9, 10, 13-15, 17-19, 22, 23, 25). In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates a modern and commonly used SWAT water quality model (ref. 7) with a
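
    The representation the authors describe can be illustrated compactly: a candidate solution assigns one conservation practice to each field, and each candidate is scored on two objectives. The simulate_pollution() stub below merely stands in for a full SWAT run, which the paper couples directly to the optimizer; all numbers are invented.

```python
# Sketch of a spatial-allocation candidate and its two objective values.
import numpy as np

rng = np.random.default_rng(3)
N_FIELDS, PRACTICES = 50, [0, 1, 2, 3]        # 0 = no practice
COST = np.array([0.0, 10.0, 25.0, 40.0])      # $/field, illustrative

def simulate_pollution(allocation):
    # Placeholder for the watershed model: pretend each practice cuts a
    # field's baseline load by a fixed fraction.
    reduction = np.array([0.0, 0.3, 0.5, 0.7])[allocation]
    baseline = np.linspace(1.0, 2.0, N_FIELDS)
    return float(np.sum(baseline * (1.0 - reduction)))

def objectives(allocation):
    return simulate_pollution(allocation), float(COST[allocation].sum())

candidate = rng.choice(PRACTICES, size=N_FIELDS)
print(objectives(candidate))    # (pollution, cost) -- both minimized
```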

  20. An improved algorithm for model-based analysis of evoked skin conductance responses☆

    PubMed Central

    Bach, Dominik R.; Friston, Karl J.; Dolan, Raymond J.

    2013-01-01

    Model-based analysis of psychophysiological signals is more robust to noise – compared to standard approaches – and may furnish better predictors of psychological state, given a physiological signal. We have previously established the improved predictive validity of model-based analysis of evoked skin conductance responses to brief stimuli, relative to standard approaches. Here, we consider some technical aspects of the underlying generative model and demonstrate further improvements. Most importantly, harvesting between-subject variability in response shape can improve predictive validity, but only under constraints on plausible response forms. A further improvement is achieved by conditioning the physiological signal with high pass filtering. A general conclusion is that precise modelling of physiological time series does not markedly increase predictive validity; instead, it appears that a more constrained model and optimised data features provide better results, probably through a suppression of physiological fluctuation that is not caused by the experiment. PMID:24063955

  1. A standard deviation selection in evolutionary algorithm for grouper fish feed formulation

    NASA Astrophysics Data System (ADS)

    Cai-Juan, Soong; Ramli, Razamin; Rahman, Rosshairy Abdul

    2016-10-01

    Malaysia is one of the major producers of fishery products due to its location in the equatorial environment. Grouper fish is a potentially lucrative market that can contribute to the income of the country owing to its desirable taste, high demand, and high price. However, the supply of grouper fish from the wild catch is still insufficient to meet demand, so there is a need to farm grouper fish. Farming grouper fish requires prior knowledge of the proper nutrients needed, because no exact data are available. Therefore, in this study, primary and secondary data are collected, despite the limited related literature, and 30 samples are investigated using standard deviation selection in an evolutionary algorithm. Thus, this study would unlock frontiers for extensive research on grouper fish feed formulation. Results show that standard deviation selection in an evolutionary algorithm is applicable: feasible, low-fitness solutions can be obtained quickly, and these solutions can in turn be used to minimize the cost of farming grouper fish.

  2. AI-BL1.0: a program for automatic on-line beamline optimization using the evolutionary algorithm.

    PubMed

    Xi, Shibo; Borgna, Lucas Santiago; Zheng, Lirong; Du, Yonghua; Hu, Tiandou

    2017-01-01

    In this report, AI-BL1.0, an open-source LabVIEW-based program for automatic on-line beamline optimization, is presented. The optimization algorithms used in the program are the Genetic Algorithm and Differential Evolution. Efficiency was improved by use of a strategy known as Observer Mode for the Evolutionary Algorithm. The program was constructed and validated at the XAFCA beamline of the Singapore Synchrotron Light Source and the 1W1B beamline of the Beijing Synchrotron Radiation Facility.
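
    As a point of reference, a generic Differential Evolution step looks like the following; the toy objective stands in for a beam-quality measure, and the paper's Observer Mode strategy is not modelled here.

```python
# Generic DE mutation/crossover/selection step (illustrative only).
import numpy as np

rng = np.random.default_rng(8)
pop = rng.uniform(-1, 1, size=(12, 4))       # e.g. motor setpoints (toy)
f = lambda x: np.sum(x ** 2, axis=-1)        # stand-in objective to minimize

F, CR = 0.6, 0.9                             # scale factor, crossover rate
for i in range(len(pop)):
    a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
    mutant = a + F * (b - c)                 # differential mutation
    cross = rng.random(pop.shape[1]) < CR
    trial = np.where(cross, mutant, pop[i])  # binomial crossover
    if f(trial) < f(pop[i]):                 # greedy one-to-one selection
        pop[i] = trial
print(pop[np.argmin(f(pop))])
```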

  3. Developing Multiple Diverse Potential Designs for Heat Transfer Utilizing Graph Based Evolutionary Algorithms

    SciTech Connect

    David J. Muth Jr.

    2006-09-01

    This paper examines the use of graph based evolutionary algorithms (GBEAs) to find multiple acceptable solutions for heat transfer in engineering systems during the optimization process. GBEAs are a type of evolutionary algorithm (EA) in which a topology, or geography, is imposed on an evolving population of solutions. The rates at which solutions can spread within the population are controlled by the choice of topology. As in nature, geography can be used to develop and sustain diversity within the solution population. Altering the choice of graph can create a more or less diverse population of potential solutions. The choice of graph can also affect the convergence rate for the EA and the number of mating events required for convergence. The engineering system examined in this paper is a biomass fueled cookstove used in developing nations for household cooking. In this cookstove wood is combusted in a small combustion chamber and the resulting hot gases are utilized to heat the stove's cooking surface. The spatial temperature profile of the cooking surface is determined by a series of baffles that direct the flow of hot gases. The optimization goal is to find baffle configurations that provide an even temperature distribution on the cooking surface. Often in engineering, the goal of optimization is not to find the single optimum solution but rather to identify a number of good solutions that can be used as a starting point for detailed engineering design. Because of this a key aspect of evolutionary optimization is the diversity of the solutions found. The key conclusion in this paper is that GBEAs can be used to create the multiple good solutions needed to support engineering design.
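
    The graph restriction on mating can be illustrated with a ring topology: an individual may only recombine with its neighbours on the imposed graph, which slows the spread of genes and helps sustain diversity. The fitness function below is a placeholder, not the cookstove model.

```python
# Toy graph-based EA: mating is restricted to ring neighbours.
import random

random.seed(4)
POP, GENES = 16, 8
population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

def fitness(ind):
    return sum(ind)                          # placeholder objective

def ring_neighbors(i):
    return [(i - 1) % POP, (i + 1) % POP]    # the imposed topology

for event in range(200):                     # mating events
    i = random.randrange(POP)
    j = random.choice(ring_neighbors(i))     # partner must be adjacent
    cut = random.randrange(1, GENES)
    child = population[i][:cut] + population[j][cut:]  # one-point crossover
    if random.random() < 1 / GENES:          # light bit-flip mutation
        k = random.randrange(GENES)
        child[k] ^= 1
    # Child replaces the worse of its two parents (local replacement).
    worse = min((i, j), key=lambda idx: fitness(population[idx]))
    if fitness(child) >= fitness(population[worse]):
        population[worse] = child

print(max(fitness(ind) for ind in population))
```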

  4. XTALOPT: An open-source evolutionary algorithm for crystal structure prediction

    NASA Astrophysics Data System (ADS)

    Lonie, David C.; Zurek, Eva

    2011-02-01

    The implementation and testing of XTALOPT, an evolutionary algorithm for crystal structure prediction, is outlined. We present our new periodic displacement (ripple) operator which is ideally suited to extended systems. It is demonstrated that hybrid operators, which combine two pure operators, reduce the number of duplicate structures in the search. This allows for better exploration of the potential energy surface of the system in question, while simultaneously zooming in on the most promising regions. A continuous workflow, which makes better use of computational resources as compared to traditional generation based algorithms, is employed. Various parameters in XTALOPT are optimized using a novel benchmarking scheme. XTALOPT is available under the GNU Public License, has been interfaced with various codes commonly used to study extended systems, and has an easy to use, intuitive graphical interface.

    Program summary
    Program title: XTALOPT
    Catalogue identifier: AEGX_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGX_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GPL v2.1 or later [1]
    No. of lines in distributed program, including test data, etc.: 36 849
    No. of bytes in distributed program, including test data, etc.: 1 149 399
    Distribution format: tar.gz
    Programming language: C++
    Computer: PCs, workstations, or clusters
    Operating system: Linux
    Classification: 7.7
    External routines: QT [2], OpenBabel [3], AVOGADRO [4], SPGLIB [8] and one of: VASP [5], PWSCF [6], GULP [7]
    Nature of problem: Predicting the crystal structure of a system from its stoichiometry alone remains a grand challenge in computational materials science, chemistry, and physics.
    Solution method: Evolutionary algorithms are stochastic search techniques which use concepts from biological evolution in order to locate the global minimum on their potential energy surface. Our evolutionary algorithm, XTALOPT, is freely

  5. Microcellular propagation prediction model based on an improved ray tracing algorithm.

    PubMed

    Liu, Z-Y; Guo, L-X; Fan, T-Q

    2013-11-01

    Two-dimensional (2D) and two-and-one-half-dimensional ray tracing (RT) algorithms employing the uniform theory of diffraction and geometrical optics are widely used for channel prediction in urban microcellular environments because of their high efficiency and reliable prediction accuracy. In this study, an improved RT algorithm based on the "orientation face set" concept and on the improved 2D polar sweep algorithm is proposed. The goal is to accelerate point-to-point prediction, thereby making RT prediction attractive and convenient. In addition, the use of threshold control of each ray path and the handling of visible grid points for reflection and diffraction sources are adopted, resulting in an improved efficiency of coverage prediction over large areas. Measured results and computed predictions are also compared for urban scenarios. The results indicate that the proposed prediction model works well and is a useful tool for microcellular communication applications.

  6. Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms Based on Kalman Filter Estimation

    NASA Technical Reports Server (NTRS)

    Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining-useful-life prediction as a stochastic process and how this relates to uncertainty representation, uncertainty management, and the role of prognostics in decision-making. A distinction between two interpretations of the estimated remaining-useful-life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when using prognostics to make critical decisions.
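
    A toy scalar example of the general methodology: a Kalman filter tracks a degrading health index, and the filtered state is extrapolated to a failure threshold to produce a remaining-useful-life estimate. The model, noise levels, and threshold are invented for the sketch and are not the paper's component models.

```python
# Scalar Kalman filter tracking degradation, then a naive RUL extrapolation.
import numpy as np

rng = np.random.default_rng(9)
drift, q, r = -0.01, 1e-6, 1e-4      # per-step degradation, noise variances
x, P = 1.0, 1.0                      # state estimate and its variance
truth = 1.0
for step in range(40):
    truth += drift + rng.normal(scale=q ** 0.5)
    z = truth + rng.normal(scale=r ** 0.5)   # noisy health measurement
    x, P = x + drift, P + q                  # predict
    K = P / (P + r)                          # Kalman gain
    x, P = x + K * (z - x), (1 - K) * P      # update

threshold = 0.5
rul = max((x - threshold) / -drift, 0.0)     # steps until threshold crossing
print(f"health={x:.3f}, RUL~{rul:.0f} steps")
```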

  7. A Parallel EM Algorithm for Model-Based Clustering Applied to the Exploration of Large Spatio-Temporal Data

    SciTech Connect

    Chen, Wei-Chen; Ostrouchov, George; Pugmire, Dave; Prabhat; Wehner, Michael

    2013-01-01

    We develop a parallel EM algorithm for multivariate Gaussian mixture models and use it to perform model-based clustering of a large climate data set. Three variants of the EM algorithm are reformulated in parallel and a new variant that is faster is presented. All are implemented using the single program, multiple data (SPMD) programming model, which is able to take advantage of the combined collective memory of large distributed computer architectures to process larger data sets. Displays of the estimated mixture model rather than the data allow us to explore multivariate relationships in a way that scales to arbitrary size data. We study the performance of our methodology on simulated data and apply our methodology to a high resolution climate dataset produced by the community atmosphere model (CAM5). This article has supplementary material online.
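
    One EM iteration organised the SPMD way is sketched below in plain numpy: every rank would compute sufficient statistics on its local block, and an all-reduce (indicated in the comments) would sum them across ranks before the M-step. Spherical covariances and the synthetic data are simplifications; this is not the paper's implementation.

```python
# One EM step for a spherical Gaussian mixture, SPMD-style bookkeeping.
import numpy as np

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(loc=0.0, size=(500, 2)),
               rng.normal(loc=3.0, size=(500, 2))])  # local data block
K = 2
w, mu, var = np.full(K, 1.0 / K), rng.normal(size=(K, 2)), np.ones(K)

def em_step(X, w, mu, var):
    n, d = X.shape
    # E-step: responsibilities of each component for each local point.
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    logp = np.log(w) - 0.5 * d2 / var - 0.5 * d * np.log(2 * np.pi * var)
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # Local sufficient statistics; in SPMD these would be summed across
    # ranks (e.g. an MPI all-reduce) before the M-step uses them.
    Nk, Sx = r.sum(0), r.T @ X
    mu_new = Sx / Nk[:, None]
    d2_new = ((X[:, None, :] - mu_new[None, :, :]) ** 2).sum(-1)
    var_new = (r * d2_new).sum(0) / (d * Nk)
    return Nk / n, mu_new, var_new

for _ in range(20):
    w, mu, var = em_step(X, w, mu, var)
print(np.round(mu, 2))      # component means near (0,0) and (3,3)
```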

  8. [3-D endocardial surface modelling based on the convex hull algorithm].

    PubMed

    Lu, Ying; Xi, Ri-hui; Shen, Hai-dong; Ye, You-li; Zhang, Yong

    2006-11-01

    In this paper, a method based on the convex hull algorithm is presented for extracting modelling data from the locations of catheter electrodes within a cardiac chamber, so as to create a 3-D model of the heart chamber during diastole and to obtain a good result in the 3-D reconstruction of the chamber based on VTK.

  9. A Data Preprocessing Algorithm for Classification Model Based On Rough Sets

    NASA Astrophysics Data System (ADS)

    Li, Xiang-wei; Qi, Yian-fang

    To address the difficulty that overly abundant data poses for constructing classification models in data mining, this paper proposes a novel, effective preprocessing algorithm based on rough sets. First, the relational information system is constructed from the original data sets. Second, the attribute reduction theory of rough sets is used to produce the core of the information system; the core is the essential information that cannot be reduced from the original information system, so it supports the same analysis as the original data and can be used to construct the classification model. Third, the indiscernibility matrix is constructed from the reduced information system, and finally the classification of the original data sets is obtained. Compared with existing techniques, the developed algorithm enjoys the following advantages: (1) it avoids carrying abundant data into the follow-up processing, (2) it avoids a large amount of computation in the whole data-mining process, and (3) the results are more effective because the attribute reduction theory of rough sets is introduced.

  10. A model-based spike sorting algorithm for removing correlation artifacts in multi-neuron recordings.

    PubMed

    Pillow, Jonathan W; Shlens, Jonathon; Chichilnisky, E J; Simoncelli, Eero P

    2013-01-01

    We examine the problem of estimating the spike trains of multiple neurons from voltage traces recorded on one or more extracellular electrodes. Traditional spike-sorting methods rely on thresholding or clustering of recorded signals to identify spikes. While these methods can detect a large fraction of the spikes from a recording, they generally fail to identify synchronous or near-synchronous spikes: cases in which multiple spikes overlap. Here we investigate the geometry of failures in traditional sorting algorithms, and document the prevalence of such errors in multi-electrode recordings from primate retina. We then develop a method for multi-neuron spike sorting using a model that explicitly accounts for the superposition of spike waveforms. We model the recorded voltage traces as a linear combination of spike waveforms plus a stochastic background component of correlated Gaussian noise. Combining this measurement model with a Bernoulli prior over binary spike trains yields a posterior distribution for spikes given the recorded data. We introduce a greedy algorithm to maximize this posterior that we call "binary pursuit". The algorithm allows modest variability in spike waveforms and recovers spike times with higher precision than the voltage sampling rate. This method substantially corrects cross-correlation artifacts that arise with conventional methods, and substantially outperforms clustering methods on both real and simulated data. Finally, we develop diagnostic tools that can be used to assess errors in spike sorting in the absence of ground truth.

  11. A Model-Based Spike Sorting Algorithm for Removing Correlation Artifacts in Multi-Neuron Recordings

    PubMed Central

    Pillow, Jonathan W.; Shlens, Jonathon; Chichilnisky, E. J.; Simoncelli, Eero P.

    2013-01-01

    We examine the problem of estimating the spike trains of multiple neurons from voltage traces recorded on one or more extracellular electrodes. Traditional spike-sorting methods rely on thresholding or clustering of recorded signals to identify spikes. While these methods can detect a large fraction of the spikes from a recording, they generally fail to identify synchronous or near-synchronous spikes: cases in which multiple spikes overlap. Here we investigate the geometry of failures in traditional sorting algorithms, and document the prevalence of such errors in multi-electrode recordings from primate retina. We then develop a method for multi-neuron spike sorting using a model that explicitly accounts for the superposition of spike waveforms. We model the recorded voltage traces as a linear combination of spike waveforms plus a stochastic background component of correlated Gaussian noise. Combining this measurement model with a Bernoulli prior over binary spike trains yields a posterior distribution for spikes given the recorded data. We introduce a greedy algorithm to maximize this posterior that we call “binary pursuit”. The algorithm allows modest variability in spike waveforms and recovers spike times with higher precision than the voltage sampling rate. This method substantially corrects cross-correlation artifacts that arise with conventional methods, and substantially outperforms clustering methods on both real and simulated data. Finally, we develop diagnostic tools that can be used to assess errors in spike sorting in the absence of ground truth. PMID:23671583
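
    The greedy core of binary pursuit can be conveyed with a toy single-electrode example: repeatedly add the (neuron, time) spike whose template most reduces the squared residual of the trace, and stop when no insertion helps. The Bernoulli prior, waveform variability, and sub-sample timing from the paper are omitted, and the templates are synthetic.

```python
# Toy greedy "binary pursuit": insert template spikes while they reduce
# the squared error of the residual trace.
import numpy as np

rng = np.random.default_rng(6)
T, L = 200, 11
templates = np.stack([np.hanning(L), -np.hanning(L)])   # two fake units
trace = np.zeros(T)
trace[40:40 + L] += templates[0]                        # true spike of unit 0
trace[90:90 + L] += templates[1]                        # true spike of unit 1
trace += 0.05 * rng.normal(size=T)

residual, spikes = trace.copy(), []
while True:
    best = None
    for k, w in enumerate(templates):
        for t in range(T - L):
            seg = residual[t:t + L]
            gain = 2 * seg @ w - w @ w     # decrease in squared error
            if gain > 0 and (best is None or gain > best[0]):
                best = (gain, k, t)
    if best is None:                       # no insertion helps: stop
        break
    _, k, t = best
    residual[t:t + L] -= templates[k]
    spikes.append((k, t))
print(sorted(spikes))                      # recovers [(0, 40), (1, 90)]
```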

  12. Vertex shading of the three-dimensional model based on ray-tracing algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoming; Sang, Xinzhu; Xing, Shujun; Yan, Binbin; Wang, Kuiru; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    The ray tracing algorithm is one of the research hotspots in photorealistic graphics. It is an important light-and-shadow technology in many industries that work with three-dimensional (3D) structures, such as aerospace, games, and video. Unlike the traditional method of pixel shading based on ray tracing, a novel ray tracing algorithm is presented to color and render the vertices of the 3D model directly. Rendering results depend on the degree of subdivision of the 3D model. A good light-and-shade effect is achieved by using a quad-tree data structure to adaptively subdivide a triangle according to the brightness difference of its vertices. The uniform grid algorithm is adopted to improve the rendering efficiency. Besides, the rendering time is independent of the screen resolution. In theory, as long as the subdivision of a model is adequate, effects equivalent to pixel shading will be obtained. In practice, a compromise can be struck between efficiency and effectiveness.

  13. Attribute index and uniform design based multiobjective association rule mining with evolutionary algorithm.

    PubMed

    Zhang, Jie; Wang, Yuping; Feng, Junhong

    2013-01-01

    In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare the whole database with the antecedent, the consequent, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It needs to scan the database only once, to create the attribute index of each attribute. All the metric values needed to evaluate an association rule then require no further database scans, since the data are acquired solely by means of the attribute indices. The paper views association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents an attribute-index and uniform-design based multiobjective association rule mining algorithm with an evolutionary algorithm, abbreviated as IUARMMEA. It no longer requires a user-specified minimum support and minimum confidence, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance and can significantly reduce the number of comparisons and the time consumption.
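
    The attribute-index idea is simple to sketch: one database scan maps each (attribute, value) pair to the set of record ids containing it, after which support and confidence for any rule follow from set intersections. The toy transactions below are invented, not the paper's encoding.

```python
# Attribute index: scan once, then evaluate rules by set intersection.
records = [
    {"outlook": "sunny", "windy": "no", "play": "yes"},
    {"outlook": "sunny", "windy": "yes", "play": "no"},
    {"outlook": "rain", "windy": "no", "play": "yes"},
    {"outlook": "rain", "windy": "yes", "play": "no"},
]

index = {}                                   # (attribute, value) -> row ids
for rid, rec in enumerate(records):          # the single database scan
    for attr, val in rec.items():
        index.setdefault((attr, val), set()).add(rid)

def support(items):
    rows = set.intersection(*(index[item] for item in items))
    return len(rows) / len(records)

antecedent = [("windy", "no")]
rule = antecedent + [("play", "yes")]
print("support:", support(rule),
      "confidence:", support(rule) / support(antecedent))
```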

  14. Combined evolutionary algorithm and minimum classification error training for DHMM-based land mine detection

    NASA Astrophysics Data System (ADS)

    Zhao, Yunxin; Chen, Ping; Gader, Paul D.; Zhang, Yue

    2002-08-01

    Minimum classification error (MCE) training is proposed to improve performance of a discrete hidden Markov model (DHMM) based landmine detection system. The system (baseline) was proposed previously for detection of both metal and nonmetal mines from ground penetrating radar signatures collected by moving vehicles. An initial DHMM model is trained by conventional methods of vector quantization and Baum-Welch algorithm. A sequential generalized probabilistic descent (GPD) algorithm that minimizes an empirical loss function is then used to estimate the landmine/background DHMM parameters, and an evolutionary algorithm based on fitness score of classification accuracy is used to generate and select codebooks. The landmine data of one geographical site was used for model training, and those of two different sites were used for evaluation of system performance. Three scenarios were studied: apply MCE/GPD alone to DHMM estimation, apply EA alone to codebook generation, first apply EA to codebook generation and then apply MCE/GPD to DHMM estimation. Overall, the combined EA and MCE/GPD training led to the best performance. At the same level of detection rate as the baseline DHMM system, the proposed training reduced false alarm rate by a factor of two, indicating significant performance improvement.

  15. An evolutionary algorithm for the segmentation of muscles and bones of the lower limb.

    NASA Astrophysics Data System (ADS)

    López, Marco A.; Braidot, A.; Sattler, Aníbal; Schira, Claudia; Uriburu, E.

    2016-04-01

    In the field of medical image segmentation, muscle segmentation is a problem that has not been fully resolved yet. This is because the basic assumption of image segmentation, which asserts that a visual distinction should exist between the different structures to be identified, is infringed: as the tissue composition of two different muscles is the same, it becomes extremely difficult to distinguish one from another when they are adjacent. We have developed an evolutionary algorithm which selects the set and the sequence of morphological operators that best segments muscles and bones from an MRI image. The achieved results show that the developed algorithm presents average sensitivity values close to 75% in the segmentation of the different processed muscles and bones. It also presents average specificity values close to 93% for the same structures. Furthermore, the algorithm can identify muscles that are closely located along the path from their origin point to their insertions, with very low error values (below 7%).

  16. Human creativity, evolutionary algorithms, and predictive representations: The mechanics of thought trials.

    PubMed

    Dietrich, Arne; Haider, Hilde

    2015-08-01

    Creative thinking is arguably the pinnacle of cerebral functionality. Like no other mental faculty, it has been omnipotent in transforming human civilizations. Probing the neural basis of this most extraordinary capacity, however, has been doggedly frustrated. Despite a flurry of activity in cognitive neuroscience, recent reviews have shown that there is no coherent picture emerging from the neuroimaging work. Based on this, we take a different route and apply two well-established paradigms to the problem. First is the evolutionary framework that, despite being part and parcel of creativity research, has not informed experimental work in cognitive neuroscience. Second is the emerging prediction framework that recognizes predictive representations as an integrating principle of all cognition. We show here how the prediction imperative revealingly synthesizes a host of new insights into the way brains process variation-selection thought trials and present a new neural mechanism for the partial sightedness in human creativity. Our ability to run offline simulations of expected future environments and action outcomes can account for some of the characteristic properties of cultural evolutionary algorithms running in brains, such as degrees of sightedness, the formation of scaffolds to jump over unviable intermediate forms, or how fitness criteria are set for a selection process that is necessarily hypothetical. Prospective processing in the brain also sheds light on how human creating and designing - as opposed to biological creativity - can be accompanied by intentions and foresight. This paper raises questions about the nature of creative thought that, as far as we know, have never been asked before.

  17. A hybrid artificial bee colony optimization and quantum evolutionary algorithm for continuous optimization problems.

    PubMed

    Duan, Hai-Bin; Xu, Chun-Fang; Xing, Zhi-Hui

    2010-02-01

    In this paper, a novel hybrid Artificial Bee Colony (ABC) and Quantum Evolutionary Algorithm (QEA) is proposed for solving continuous optimization problems. ABC is adopted to increase the local search capacity as well as the randomness of the populations. In this way, the improved QEA can jump out of the premature convergence and find the optimal value. To show the performance of our proposed hybrid QEA with ABC, a number of experiments are carried out on a set of well-known Benchmark continuous optimization problems and the related results are compared with two other QEAs: the QEA with classical crossover operation, and the QEA with 2-crossover strategy. The experimental comparison results demonstrate that the proposed hybrid ABC and QEA approach is feasible and effective in solving complex continuous optimization problems.

  18. Signal design using nonlinear oscillators and evolutionary algorithms: application to phase-locked loop disruption.

    PubMed

    Olson, C C; Nichols, J M; Michalowicz, J V; Bucholtz, F

    2011-06-01

    This work describes an approach for efficiently shaping the response characteristics of a fixed dynamical system by forcing with a designed input. We obtain improved inputs by using an evolutionary algorithm to search a space of possible waveforms generated by a set of nonlinear, ordinary differential equations (ODEs). Good solutions are those that result in a desired system response subject to some input efficiency constraint, such as signal power. In particular, we seek to find inputs that best disrupt a phase-locked loop (PLL). Three sets of nonlinear ODEs are investigated and found to have different disruption capabilities against a model PLL. These differences are explored and implications for their use as input signal models are discussed. The PLL was chosen here as an archetypal example but the approach has broad applicability to any input/output system for which a desired input cannot be obtained analytically.

  19. Evolutionary algorithm based uniform received power and illumination rendering for indoor visible light communication.

    PubMed

    Ding, Jupeng; Huang, Zhitong; Ji, Yuefeng

    2012-06-01

    In this paper, an evolutionary algorithm based optimization scheme is proposed to realize uniform received power and illumination distribution on the communication floor for fully diffuse indoor visible light communication. Simulation results show that in three distributed lighting configurations, by dynamically modifying the relative optical intensity of transmitters the dynamic range of the received power, referenced against the peak received power, can be reduced to about 40.0% while the uniformity illuminance ratio can be improved up to about 0.70 with the impact to the root mean square delay spread and bandwidth being negligible. Furthermore, the relationship between the field of view of the receivers and the optimization performance is presented as well.

  20. Comparison of Evolutionary (Genetic) Algorithm and Adjoint Methods for Multi-Objective Viscous Airfoil Optimizations

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) Method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code, ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which, together with a common gridding approach, restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy, and design consistency.

  1. Identifying irregularly shaped crime hot-spots using a multiobjective evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Xiaolan; Grubesic, Tony H.

    2010-12-01

    Spatial cluster detection techniques are widely used in criminology, geography, epidemiology, and other fields. In particular, spatial scan statistics are popular and efficient techniques for detecting areas of elevated crime or disease events. The majority of spatial scan approaches attempt to delineate geographic zones by evaluating the significance of clusters using likelihood ratio statistics tested with the Poisson distribution. While this can be effective, many scan statistics give preference to circular clusters, diminishing their ability to identify elongated and/or irregular shaped clusters. Although adjusting the shape of the scan window can mitigate some of these problems, both the significance of irregular clusters and their spatial structure must be accounted for in a meaningful way. This paper utilizes a multiobjective evolutionary algorithm to find clusters with maximum significance while quantitatively tracking their geographic structure. Crime data for the city of Cincinnati are utilized to demonstrate the advantages of the new approach and highlight its benefits versus more traditional scan statistics.

  2. Correcting encoder interpolation error on the Green Bank Telescope using an iterative model based identification algorithm

    NASA Astrophysics Data System (ADS)

    Franke, Timothy; Weadon, Tim; Ford, John; Garcia-Sanz, Mario

    2015-10-01

    Various forms of measurement errors limit telescope tracking performance in practice. A new method for identifying the correcting coefficients for encoder interpolation error is developed. The algorithm corrects the encoder measurement by identifying a harmonic model of the system and using that model to compute the necessary correction parameters. The approach improves upon others by explicitly modeling the unknown dynamics of the structure and controller and by not requiring a separate system identification to be performed. Experience gained from pinpointing the source of encoder error on the Green Bank Radio Telescope (GBT) is presented. Several tell-tale indicators of encoder error are discussed. Experimental data from the telescope, tested with two different encoders, are presented. Demonstration of the identification methodology on the GBT, as well as details of its implementation, is discussed. A root mean square tracking error reduction from 0.68 arc sec to 0.21 arc sec was achieved by changing encoders, and the error was further reduced to 0.10 arc sec with the calibration algorithm. In particular, the ubiquity of this error source is shown, as is how, by careful correction, it is possible to go beyond the advertised accuracy of an encoder.

  3. How Do Severe Constraints Affect the Search Ability of Multiobjective Evolutionary Algorithms in Water Resources?

    NASA Astrophysics Data System (ADS)

    Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.

    2015-12-01

    This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints and with a problem whose constraint thresholds are less severe. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single-year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine whether the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or

  4. Comparing State-of-the-Art Evolutionary Multi-Objective Algorithms for Long-Term Groundwater Monitoring Design

    NASA Astrophysics Data System (ADS)

    Reed, P. M.; Kollat, J. B.

    2005-12-01

    This study demonstrates the effectiveness of a modified version of Deb's Non-Dominated Sorted Genetic Algorithm II (NSGAII), which the authors have named the Epsilon-Dominance Non-Dominated Sorted Genetic Algorithm II (Epsilon-NSGAII), at solving a four objective long-term groundwater monitoring (LTM) design test case. The Epsilon-NSGAII incorporates prior theoretical competent evolutionary algorithm (EA) design concepts and epsilon-dominance archiving to improve the original NSGAII's efficiency, reliability, and ease-of-use. This algorithm eliminates much of the traditional trial-and-error parameterization associated with evolutionary multi-objective optimization (EMO) through epsilon-dominance archiving, dynamic population sizing, and automatic termination. The effectiveness and reliability of the new algorithm are compared to the original NSGAII as well as two other benchmark multi-objective evolutionary algorithms (MOEAs), the Epsilon-Dominance Multi-Objective Evolutionary Algorithm (Epsilon-MOEA) and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). These MOEAs have been selected because they have been demonstrated to be highly effective at solving numerous multi-objective problems. The results presented in this study indicate superior performance of the Epsilon-NSGAII in terms of the hypervolume indicator, unary Epsilon-indicator, and first-order empirical attainment function metrics. In addition, the runtime metric results indicate that the diversity and convergence dynamics of the Epsilon-NSGAII are competitive to superior relative to the SPEA2, with both algorithms greatly outperforming the NSGAII and Epsilon-MOEA in terms of these metrics. The improvements in performance of the Epsilon-NSGAII over its parent algorithm the NSGAII demonstrate that the application of Epsilon-dominance archiving, dynamic population sizing with archive injection, and automatic termination greatly improve algorithm efficiency and reliability. In addition, the usability of
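
    The epsilon-dominance archiving mentioned above discretises objective space into boxes of width epsilon and keeps at most one solution per box, which bounds archive size and enforces a minimum spread among archived solutions. The sketch below (minimization assumed, invented points) shows only the per-box bookkeeping; the full method also prunes solutions whose boxes are dominated by other boxes, which is omitted here.

```python
# Simplified epsilon-box archive: at most one representative per box.
import numpy as np

def box(f, eps):
    return tuple(np.floor(np.asarray(f) / eps).astype(int))

def update_archive(archive, f, eps):
    b = box(f, eps)
    best = archive.get(b)
    # Within a box, keep the point closest to the box's lower corner.
    corner = np.asarray(b) * eps
    if best is None or (np.linalg.norm(np.asarray(f) - corner)
                        < np.linalg.norm(np.asarray(best) - corner)):
        archive[b] = f

archive = {}
for f in [(0.12, 0.95), (0.11, 0.97), (0.55, 0.40), (0.58, 0.42)]:
    update_archive(archive, f, eps=0.1)
print(list(archive.values()))    # one representative per epsilon-box
```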

  5. On-line experimental validation of a model-based diagnostic algorithm dedicated to a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas

    2016-02-01

    In the current energetic scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmental-friendly power production, especially for stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gases emissions. However, the main limitations to their diffusion into the mass market consist in high maintenance and production costs and short lifetime. To improve these aspects, the current research activity focuses on the development of robust and generalizable diagnostic techniques, aimed at detecting and isolating faults within the entire system (i.e. SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with consequent lifetime increase and maintenance costs reduction. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and it is validated by means of experimental induction of faulty states in controlled conditions.

  6. [Research on optimization of lower limb parameters of cardiopulmonary resuscitation simulation model based on genetic algorithm].

    PubMed

    Xu, Lin

    2014-10-01

    Sudden cardiac arrest is one of the critical clinical syndromes in emergency situations, and cardiopulmonary resuscitation (CPR) is a necessary treatment for patients with sudden cardiac arrest. To effectively simulate the hemodynamic effects on humans under AEI-CPR (active compression-decompression CPR coupled with enhanced external counterpulsation and an inspiratory impedance threshold valve), and to study the physiological parameters of each part of the lower limbs in more detail, the CPR simulation model established by Babbs was refined. The lower limbs were divided into iliac, thigh, and calf segments, described by 15 physiological parameters. These 15 physiological parameters were then optimized with a genetic algorithm, and ideal simulation results were finally obtained.

  7. Nonlinear systems modeling based on self-organizing fuzzy-neural-network with adaptive computation algorithm.

    PubMed

    Han, Honggui; Wu, Xiao-Long; Qiao, Jun-Fei

    2014-04-01

    In this paper, a self-organizing fuzzy-neural-network with adaptive computation algorithm (SOFNN-ACA) is proposed for modeling a class of nonlinear systems. The SOFNN-ACA is constructed online via simultaneous structure and parameter learning processes. In structure learning, a set of fuzzy rules can be self-designed using an information-theoretic methodology: fuzzy rules with high spiking intensities (SI) are divided into new ones, while fuzzy rules with a small relative mutual information (RMI) value are pruned in order to simplify the FNN structure. In parameter learning, the consequent-part parameters are learned using an ACA that incorporates an adaptive learning rate strategy into the learning process to accelerate the convergence speed. The convergence of SOFNN-ACA is then analyzed. Finally, the proposed SOFNN-ACA is used to model nonlinear systems, and the modeling results demonstrate that it can do so effectively.

  8. Geometric model-based fitting algorithm for orientation-selective PELDOR data

    NASA Astrophysics Data System (ADS)

    Abdullin, Dinar; Hagelueken, Gregor; Hunter, Robert I.; Smith, Graham M.; Schiemann, Olav

    2015-03-01

    Pulsed electron-electron double resonance (PELDOR or DEER) spectroscopy is frequently used to determine distances between spin centres in biomacromolecular systems. Experiments where mutual orientations of the spin pair are selectively excited provide the so-called orientation-selective PELDOR data. This data is characterised by the orientation dependence of the modulation depth parameter and of the dipolar frequencies. This dependence has to be taken into account in the data analysis in order to extract distance distributions accurately from the experimental time traces. In this work, a fitting algorithm for such data analysis is discussed. The approach is tested on PELDOR data-sets from the literature and is compared with the previous results.

  9. Constraint satisfaction using a hybrid evolutionary hill-climbing algorithm that performs opportunistic arc and path revision

    SciTech Connect

    Bowen, J.; Dozier, G.

    1996-12-31

    This paper introduces a hybrid evolutionary hill-climbing algorithm that quickly solves constraint satisfaction problems (CSPs). This hybrid uses opportunistic arc and path revision in an interleaved fashion to reduce the size of the search space and to realize when to quit if a CSP is based on an inconsistent constraint network. This hybrid outperforms a well-known hill-climbing algorithm, the Iterative Descent Method, on a test suite of 750 randomly generated CSPs.

  10. Model-based analyses of bioequivalence crossover trials using the stochastic approximation expectation maximisation algorithm.

    PubMed

    Dubois, Anne; Lavielle, Marc; Gsteiger, Sandro; Pigeolet, Etienne; Mentré, France

    2011-09-20

    In this work, we develop a bioequivalence analysis using nonlinear mixed effects models (NLMEM) that mimics the standard noncompartmental analysis (NCA). We estimate NLMEM parameters, including between-subject and within-subject variability and treatment, period and sequence effects. We explain how to perform a Wald test on a secondary parameter, and we propose an extension of the likelihood ratio test for bioequivalence. We compare these NLMEM-based bioequivalence tests with standard NCA-based tests. We evaluate by simulation the NCA and NLMEM estimates and the type I error of the bioequivalence tests. For NLMEM, we use the stochastic approximation expectation maximisation (SAEM) algorithm implemented in Monolix. We simulate crossover trials under H0 using different numbers of subjects and of samples per subject. We simulate with different settings for between-subject and within-subject variability and for the residual error variance. The simulation study illustrates the accuracy of NLMEM-based geometric means estimated with the SAEM algorithm, whereas the NCA estimates are biased for sparse design. NCA-based bioequivalence tests show good type I error except for high variability. For a rich design, type I errors of NLMEM-based bioequivalence tests (Wald test and likelihood ratio test) do not differ from the nominal level of 5%. Type I errors are inflated for sparse design. We apply the bioequivalence Wald test based on NCA and NLMEM estimates to a three-way crossover trial, showing that Omnitrope® (Sandoz GmbH, Kundl, Austria) powder and solution are bioequivalent to Genotropin® (Pfizer Pharma GmbH, Karlsruhe, Germany). NLMEM-based bioequivalence tests are an alternative to standard NCA-based tests. However, caution is needed for small sample size and highly variable drug.

  11. Design Optimization of an Axial Fan Blade Through Multi-Objective Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Hyuk; Choi, Jae-Ho; Husain, Afzal; Kim, Kwang-Yong

    2010-06-01

    This paper presents design optimization of an axial fan blade with a hybrid multi-objective evolutionary algorithm (hybrid MOEA). Reynolds-averaged Navier-Stokes equations with the shear stress transport turbulence model are discretized by finite volume approximations and solved on hexahedral grids for the flow analyses. The validation of the numerical results was performed with the experimental data for the axial and tangential velocities. Six design variables related to the blade lean angle and blade profile are selected, and Latin hypercube sampling of design of experiments is used to generate design points within the selected design space. Two objective functions, namely total efficiency and torque, are employed, and the multi-objective optimization is carried out to enhance total efficiency and to reduce the torque. The flow analyses are performed numerically at the design points to obtain values of the objective functions. The Non-dominated Sorting Genetic Algorithm (NSGA-II) with an ɛ-constraint strategy for local search, coupled with a surrogate model, is used for multi-objective optimization. The Pareto-optimal solutions are presented and trade-off analysis is performed between the two competing objectives in view of the design and flow constraints. It is observed that total efficiency is enhanced and torque is decreased as compared to the reference design by the process of multi-objective optimization. The Pareto-optimal solutions are analyzed to understand the mechanism of the improvement in the total efficiency and reduction in torque.

  12. A graph-based evolutionary algorithm: Genetic Network Programming (GNP) and its extension using reinforcement learning.

    PubMed

    Mabu, Shingo; Hirasawa, Kotaro; Hu, Jinglu

    2007-01-01

    This paper proposes a graph-based evolutionary algorithm called Genetic Network Programming (GNP). Our goal is to develop GNP, which can deal with dynamic environments efficiently and effectively, based on the distinguished expression ability of the graph (network) structure. The characteristics of GNP are as follows. 1) GNP programs are composed of a number of nodes which execute simple judgment/processing, and these nodes are connected by directed links to each other. 2) The graph structure enables GNP to re-use nodes, thus the structure can be very compact. 3) The node transition of GNP is executed according to its node connections without any terminal nodes, thus the past history of the node transition affects the current node to be used and this characteristic works as an implicit memory function. These structural characteristics are useful for dealing with dynamic environments. Furthermore, we propose an extended algorithm, "GNP with Reinforcement Learning (GNPRL)" which combines evolution and reinforcement learning in order to create effective graph structures and obtain better results in dynamic environments. In this paper, we applied GNP to the problem of determining agents' behavior to evaluate its effectiveness. Tileworld was used as the simulation environment. The results show some advantages for GNP over conventional methods.

  13. MrsRF: an efficient MapReduce algorithm for analyzing large collections of evolutionary trees

    PubMed Central

    2010-01-01

    Background MapReduce is a parallel framework that has been used effectively to design large-scale parallel applications for large computing clusters. In this paper, we evaluate the viability of the MapReduce framework for designing phylogenetic applications. The problem of interest is generating the all-to-all Robinson-Foulds distance matrix, which has many applications for visualizing and clustering large collections of evolutionary trees. We introduce MrsRF (MapReduce Speeds up RF), a multi-core algorithm to generate a t × t Robinson-Foulds distance matrix between t trees using the MapReduce paradigm. Results We studied the performance of our MrsRF algorithm on two large biological tree sets consisting of 20,000 trees of 150 taxa each and 33,306 trees of 567 taxa each. Our experiments show that MrsRF is a scalable approach, reaching a speedup of over 18 on 32 total cores. Our results also show that achieving top speedup on a multi-core cluster requires different cluster configurations. Finally, we show how to use an RF matrix to summarize collections of phylogenetic trees visually. Conclusion Our results show that MapReduce is a promising paradigm for developing multi-core phylogenetic applications. The results also demonstrate that different multi-core configurations must be tested in order to obtain optimum performance. We conclude that RF matrices play a critical role in developing techniques to summarize large collections of trees. PMID:20122186
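
    The per-pair kernel behind the RF matrix is simple once each tree is reduced to its set of non-trivial bipartitions: the RF distance is the size of the symmetric difference of the two sets. The sketch below shows only this kernel; MrsRF's contribution is distributing the t × t grid of such comparisons across map and reduce workers (the representation here is an assumption, not the authors' data structures).

    ```python
    def rf_distance(biparts_a, biparts_b):
        """RF distance between two trees given as sets of bipartitions,
        each bipartition encoded as a frozenset of the taxa on one side."""
        return len(biparts_a ^ biparts_b)

    # conceptually: map tasks each compute one block of (i, j) entries,
    # and the reduce phase assembles the blocks into the full t x t matrix.
    ```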

  14. Projector Augmented Wave (PAW) Datasets for Multi-Mbar Simulations: An Evolutionary Algorithm Based Recipe

    NASA Astrophysics Data System (ADS)

    Sarkar, K.; Topsakal, M.; Wentzcovitch, R. M.

    2015-12-01

    We attempt to achieve the accuracy of the full-potential linearized augmented-plane-wave (FLAPW) method, as implemented in the WIEN2k code, at the favorable computational efficiency of the projector augmented wave (PAW) method for ab initio calculations of solids. For decades, PAW datasets have been generated by manually choosing their parameters and by visually inspecting the logarithmic derivatives, partial waves, and projector basis sets. In addition to being tedious and error-prone, this procedure is inadequate because it is impractical to manually explore the full parameter space: an infinite number of PAW parameter sets for a given augmentation radius can be generated while maintaining all the constraints on logarithmic derivatives and basis sets. Performance verification of all plausible solutions against FLAPW is also impractical. Here we report the development of a hybrid algorithm to construct optimized PAW basis sets that can closely reproduce FLAPW results from zero to ultra-high pressures. The approach applies evolutionary computing (EC) to generate optimum PAW parameter sets using the ATOMPAW code. We use the Quantum ESPRESSO distribution to generate equations of state (EOS), which are compared against target EOSs computed with WIEN2k. Softer PAW potentials that reproduce FLAPW EOSs even more closely can be found with this method. We demonstrate its working principles and workability by optimizing PAW basis functions for carbon, magnesium, aluminum, silicon, calcium, and iron atoms. The algorithm requires minimal user intervention, in the sense that no visual inspection of logarithmic derivatives or projector functions is required.
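
    The evolutionary layer can be pictured as an ordinary real-valued EA whose fitness is the mismatch between a candidate dataset's EOS and the FLAPW target. The loop below is a generic sketch under that assumption; `eos_error` stands in for the expensive ATOMPAW/Quantum ESPRESSO pipeline and is given a placeholder body.

    ```python
    import random

    def eos_error(params):
        """Placeholder: build a PAW dataset from `params`, compute an EOS,
        and return its deviation from the FLAPW target."""
        return sum((p - 1.0) ** 2 for p in params)  # dummy objective

    def evolve(bounds, pop_size=20, generations=50):
        pop = [[random.uniform(lo, hi) for lo, hi in bounds]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=eos_error)
            parents = pop[: pop_size // 2]           # truncation selection
            children = []
            for _ in range(pop_size - len(parents)):
                a, b = random.sample(parents, 2)
                children.append([
                    min(max(random.gauss((x + y) / 2, 0.05 * (hi - lo)), lo), hi)
                    for x, y, (lo, hi) in zip(a, b, bounds)  # blend + mutate
                ])
            pop = parents + children
        return min(pop, key=eos_error)
    ```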

  15. Application of a multi-objective evolutionary algorithm to the spacecraft stationkeeping problem

    NASA Astrophysics Data System (ADS)

    Myers, Philip L.; Spencer, David B.

    2016-10-01

    Satellite operations are becoming an increasingly private industry, requiring increased profitability. Efficient and safe operation of satellites in orbit will ensure longer lasting and more profitable satellite services. This paper focuses on the use of a multi-objective evolutionary algorithm to schedule the maneuvers of a hypothetical satellite operating at geosynchronous altitude, by seeking to minimize both the propellant consumed through the execution of stationkeeping maneuvers and the time the satellite is displaced from its desired orbital plane. Minimizing the time out of place increases operational availability, and minimizing propellant usage allows the spacecraft to operate longer. North-South stationkeeping was studied in this paper through the use of a set of orbit inclination change maneuvers each year. Two cases for the maximum number of maneuvers to be executed were considered, with four and five maneuvers per year. The results delivered by the algorithm provide maneuver schedules which require 40-100 m/s of total Δv for two years of operation, with the satellite's orbital plane maintained to within 0.1° for between 84% and 96% of the two years modeled.

  16. Double-layer evolutionary algorithm for distributed optimization of particle detection on the Grid

    NASA Astrophysics Data System (ADS)

    Padée, Adam; Kurek, Krzysztof; Zaremba, Krzysztof

    2013-08-01

    Reconstruction of particle tracks from information collected by position-sensitive detectors is an important procedure in HEP experiments. It is usually controlled by a set of numerical parameters which have to be manually optimized. This paper proposes an automatic approach to this task by utilizing an evolutionary algorithm (EA) operating on both real-valued and binary representations. Because of the computational complexity of the task, a special distributed architecture of the algorithm is proposed, designed to be run in a grid environment. It is a two-level hierarchical hybrid utilizing an asynchronous master-slave EA at the level of clusters and an island-model EA at the level of the grid. The technical aspects of using production grid infrastructure are covered, including the communication protocols on both levels. The paper also deals with the problem of heterogeneity of the resources, presenting efficiency tests on a benchmark function. These tests confirm that even relatively small islands (clusters) can be beneficial to the optimization process when connected to larger ones. Finally a real-life usage example is presented: an optimization of track reconstruction in the Large Angle Spectrometer of the NA-58 COMPASS experiment at CERN, using a sample of Monte Carlo simulated data. The overall reconstruction efficiency gain achieved by the proposed method is more than 4% compared to the manually optimized parameters.
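
    The grid-level layer can be sketched as a classic island model: each island evolves its own population and periodically passes its best individual to a ring neighbour. On the real infrastructure each island is a cluster running an asynchronous master-slave EA; the in-process version below only sketches the migration logic, with assumed operators and rates.

    ```python
    import random

    def island_ea(objective, bounds, n_islands=4, pop=20, gens=100, every=10):
        rand = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
        islands = [[rand() for _ in range(pop)] for _ in range(n_islands)]
        for g in range(gens):
            for isl in islands:                       # local evolution step
                isl.sort(key=objective)
                parents = isl[: pop // 2]
                isl[pop // 2:] = [
                    [min(max(x + random.gauss(0, 0.1), lo), hi)
                     for x, (lo, hi) in zip(random.choice(parents), bounds)]
                    for _ in range(pop - pop // 2)
                ]
            if g % every == 0:                        # ring migration of elites
                elites = [min(isl, key=objective) for isl in islands]
                for i, isl in enumerate(islands):
                    isl[-1] = elites[(i - 1) % n_islands]
        return min((min(isl, key=objective) for isl in islands), key=objective)
    ```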

  17. Multi-objective control optimization for greenhouse environment using evolutionary algorithms.

    PubMed

    Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun

    2011-01-01

    This paper investigates the issue of tuning the Proportional Integral and Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures, such as static and dynamic performance specifications and smoothness of the control action. A model of the nonlinear thermodynamic laws between the numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment or rate in a simulation experiment. The results show that by tuning the gain parameters the controllers can achieve good control performance in step responses, such as small overshoot, fast settling time, short rise time and small steady-state error. Moreover, the method can be applied to tuning systems with different properties, such as strong interactions among variables, nonlinearities and conflicting performance criteria. The results indicate that multi-objective optimization is an effective and promising approach to controller tuning for complex greenhouse production.
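
    The first tuning objective has a direct numerical form: ITSE weights the squared error by elapsed time, so late-persisting error is penalized more heavily than an initial transient. A minimal sampled-data version, with illustrative names:

    ```python
    import numpy as np

    def itse(t, e):
        """ITSE = integral of t * e(t)^2 dt, via the trapezoidal rule."""
        t, e = np.asarray(t), np.asarray(e)
        return np.trapz(t * e**2, t)

    # example: an exponentially decaying step-response error
    t = np.linspace(0.0, 10.0, 1001)
    print(itse(t, np.exp(-t)))
    ```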

  18. Solving Large-scale Spatial Optimization Problems in Water Resources Management through Spatial Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Wang, J.; Cai, X.

    2007-12-01

    A water resources system can be defined as a large-scale spatial system within which a distributed ecological system interacts with the stream network and the groundwater system. In water resources management, the causative factors, and hence the solutions to be developed, have a significant spatial dimension. This motivates a modeling analysis of water resources management within a spatial analytical framework, where data are usually geo-referenced and in the form of a map. One of the important functions of geographic information systems (GIS) is to identify spatial patterns of environmental variables. The role of spatial patterns in water resources management has been well established in the literature, particularly regarding how to design spatial patterns that better satisfy the designated management objectives. Evolutionary algorithms (EAs) have been demonstrated to be successful in solving complex optimization models for water resources management because of their flexibility to incorporate complex simulation models in the optimal search procedure. The idea of combining GIS and EA motivates the development and application of spatial evolutionary algorithms (SEAs). An SEA assimilates spatial information into the EA, and even changes the representation and operators of the EA. In an EA used for water resources management, the mathematical optimization model should be modified to account for spatial patterns; however, spatial patterns are usually implicit, and it is difficult to impose appropriate patterns on spatial data. It is also difficult to express complex spatial patterns by explicit constraints included in the EA. GIS can help identify spatial linkages and correlations based on spatial knowledge of the problem. These linkages are incorporated in the fitness function as a preference for compatible vegetation distributions. Unlike a regular GA for spatial models, the SEA employs a special hierarchical hyper-population and spatial genetic operators

  19. A possibilistic approach to rotorcraft design through a multi-objective evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Chae, Han Gil

    Most of the engineering design processes in use today in the field may be considered as a series of successive decision-making steps. The decision maker uses the information at hand, determines the direction of the procedure, and generates information for the next step and/or other decision makers. However, the information is often incomplete, especially in the early stages of the design process of a complex system. As the complexity of the system increases, uncertainties eventually become unmanageable using traditional tools. In such a case, the tools and analysis values need to be "softened" to account for the designer's intuition. One of the methods that deals with issues of intuition and incompleteness is possibility theory. Through the use of possibility theory coupled with fuzzy inference, the uncertainties estimated by the intuition of the designer are quantified for design problems. By involving quantified uncertainties in the tools, the solutions can represent a possible set, instead of a crisp spot, for predefined levels of certainty. From a different point of view, it is a well-known fact that engineering design is a multi-objective problem, or a set of such problems. The decision maker aims to find satisfactory solutions, sometimes compromising between objectives that conflict with each other. Once the candidates of possible solutions are generated, a satisfactory solution can be found by various decision-making techniques. A number of multi-objective evolutionary algorithms (MOEAs), capable of generating alternative solutions and evaluating multiple sets of solutions in a single execution of the algorithm, have been developed and can be found in the literature. One of the MOEA techniques that has proven to be very successful for this class of problems is the strength Pareto evolutionary algorithm (SPEA), which falls under the dominance-based category of methods. The Pareto dominance that is used in SPEA, however, is not enough to account for the

  20. A simple model based magnet sorting algorithm for planar hybrid undulators

    SciTech Connect

    Rakowsky, G.

    2010-05-23

    Various magnet sorting strategies have been used to optimize undulator performance, ranging from intuitive pairing of high- and low-strength magnets, to full 3D FEM simulation with 3-axis Helmholtz coil magnet data. In the extreme, swapping magnets in a full field model to minimize trajectory wander and rms phase error can be time-consuming. This paper presents a simpler approach, extending the field error signature concept to obtain trajectory displacement, kick angle and phase error signatures for each component of magnetization error from a Radia model of a short hybrid-PM undulator. We demonstrate that steering errors and phase errors are essentially decoupled and scalable from the measured X, Y and Z components of magnetization. Then, for any given sequence of magnets, rms trajectory and phase errors are obtained from simple cumulative sums of the scaled displacements and phase errors. The cost function (a weighted sum of these errors) is then minimized by swapping magnets, using one's favorite optimization algorithm. This approach was applied recently at NSLS to a short in-vacuum undulator, which required no subsequent trajectory or phase shimming. Trajectory and phase signatures are also obtained for some mechanical errors, to guide 'virtual shimming' and the specification of mechanical tolerances. Some simple inhomogeneities are modeled to assess their error contributions.
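
    The sorting idea reduces to a cheap cost function over a magnet ordering: cumulative sums of per-magnet signatures give the trajectory and phase errors, and a swap-based search minimizes their weighted rms. The sketch below uses random pairwise swaps with greedy acceptance; the signature arrays are assumed precomputed (e.g. from a Radia model) and all names are illustrative.

    ```python
    import random

    def rms(xs):
        return (sum(x * x for x in xs) / len(xs)) ** 0.5

    def cost(order, kick_sig, phase_sig, w_traj=1.0, w_phase=1.0):
        traj, phase, t, p = [], [], 0.0, 0.0
        for m in order:                     # cumulative-sum error build-up
            t += kick_sig[m]; traj.append(t)
            p += phase_sig[m]; phase.append(p)
        return w_traj * rms(traj) + w_phase * rms(phase)

    def sort_magnets(kick_sig, phase_sig, iters=20000):
        order = list(range(len(kick_sig)))
        best = cost(order, kick_sig, phase_sig)
        for _ in range(iters):
            i, j = random.sample(range(len(order)), 2)
            order[i], order[j] = order[j], order[i]
            c = cost(order, kick_sig, phase_sig)
            if c < best:
                best = c                    # keep the improving swap
            else:
                order[i], order[j] = order[j], order[i]
        return order, best
    ```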

  1. Energy efficient model based algorithm for control of building HVAC systems.

    PubMed

    Kirubakaran, V; Sahu, Chinmay; Radhakrishnan, T K; Sivakumaran, N

    2015-11-01

    Energy efficient designs are receiving increasing attention in various fields of engineering. Heating, ventilation and air conditioning (HVAC) control system designs involve improved energy usage with an acceptable relaxation in thermal comfort. In this paper, real time data from a building HVAC system provided by BuildingLAB is considered. A resistor-capacitor (RC) framework representing the thermal dynamics of the building is estimated using a particle swarm optimization (PSO) algorithm. With thermal comfort (deviation of the room temperature from the required temperature) and an energy measure (Ecm) as objective costs, an explicit MPC design for this building model is executed based on a state-space representation of the supply water temperature (input)/room temperature (output) dynamics. The controllers are subjected to servo tracking, and the external disturbance (ambient temperature) is provided from the real time data during closed-loop control. The control strategies are ported to a PIC32mx series microcontroller platform. The building model is implemented in MATLAB, and hardware-in-the-loop (HIL) testing of the strategies is executed over a USB port. Results indicate that, compared to traditional proportional integral (PI) controllers, the explicit MPCs improve both energy efficiency and thermal comfort significantly.
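
    The RC idea can be illustrated with a single-state lumped model: the room temperature is pulled toward the supply-water and ambient temperatures through two thermal resistances, buffered by one capacitance. Structure, parameter values, and names below are illustrative, not BuildingLAB's identified model.

    ```python
    def simulate_rc(T0, supply, ambient, R_s=0.5, R_a=2.0, C=1e4, dt=60.0):
        """Forward-Euler simulation of C*dT/dt = (Ts-T)/R_s + (Ta-T)/R_a."""
        T, out = T0, []
        for ts, ta in zip(supply, ambient):
            T += dt * ((ts - T) / R_s + (ta - T) / R_a) / C
            out.append(T)
        return out
    ```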

  2. Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms

    PubMed Central

    Helms, Lucas; Clune, Jeff

    2017-01-01

    Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular, but contain some irregularities, which describes most real-world problems, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding. PMID:28334002
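
    The Auto-Switch rule needs nothing more than a stagnation detector on the best-fitness trace. A minimal version, with an assumed window and tolerance (the paper's actual trigger settings are not reproduced here):

    ```python
    def should_switch(best_fitness_history, window=50, tol=1e-6):
        """True when best fitness improved by less than `tol` over the
        last `window` generations -- time to go from indirect to direct."""
        if len(best_fitness_history) < window:
            return False
        return best_fitness_history[-1] - best_fitness_history[-window] < tol
    ```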

  3. A Hybrid Evolutionary Algorithm to Quadratic Three-Dimensional Assignment Problem with Local Search for Many-Core Graphics Processors

    NASA Astrophysics Data System (ADS)

    Lipinski, Piotr

    This paper concerns the quadratic three-dimensional assignment problem (Q3AP), an extension of the quadratic assignment problem (QAP), and proposes an efficient hybrid evolutionary algorithm combining stochastic optimization and local search with a number of crossover operators, a number of mutation operators, and an auto-adaptation mechanism. Auto-adaptation manages the pool of evolutionary operators, applying different operators in different computation phases to better explore the search space and to avoid premature convergence. Local search additionally optimizes populations of candidate solutions and accelerates the evolutionary search. It uses a many-core graphics processor to optimize a number of solutions in parallel, which enables its incorporation into the evolutionary algorithm without excessive increases in the computation time. Experiments performed on benchmark Q3AP instances derived from the classic QAP instances proposed by Nugent et al. confirmed that the proposed algorithm is able to find optimal solutions to Q3AP in a reasonable time and outperforms the best known results found in the literature.

  4. Multi-objective evolutionary algorithm for investigating the trade-off between pleiotropy and redundancy

    NASA Astrophysics Data System (ADS)

    Ong, Zhiyang; Lo, Andy Hao-Wei; Berryman, Matthew; Abbott, Derek

    2005-12-01

    The trade-off between pleiotropy and redundancy in telecommunications networks is analyzed in this paper. The networks are optimized to reduce installation costs and propagation delays. The pleiotropy of a server in a telecommunications network is defined as the number of clients and servers that it can service, whilst redundancy is described as the number of servers servicing a client. Telecommunications networks containing many servers with large pleiotropy are cost-effective but vulnerable to network failures and attacks. Conversely, networks containing many servers with high redundancy are reliable but costly. Several key issues regarding the choice of cost functions and techniques in evolutionary computation (such as the modeling of Darwinian evolution, and mutualism and commensalism) are discussed, and a future research agenda is outlined. Experimental results indicate that the pleiotropy of servers in the optimum network does improve, whilst the redundancy of clients does not vary significantly, as expected, with evolving networks. This is due to the controlled evolution of networks that is modeled by the steady-state genetic algorithm; changes in telecommunications networks that occur drastically over a very short period of time are rare.
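
    The two metrics have a direct graph reading if the network is taken as a set of server-to-client service links; the toy example below (hypothetical data, not from the paper) shows both counts.

    ```python
    links = {("s1", "c1"), ("s1", "c2"), ("s2", "c1"), ("s2", "c3")}

    def pleiotropy(server):
        """Number of clients (and, in general, downstream servers) serviced."""
        return sum(1 for s, _ in links if s == server)

    def redundancy(client):
        """Number of servers servicing a client."""
        return sum(1 for _, c in links if c == client)

    assert pleiotropy("s1") == 2 and redundancy("c1") == 2
    ```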

  5. Enhancements of evolutionary algorithm for the complex requirements of a nurse scheduling problem

    NASA Astrophysics Data System (ADS)

    Tein, Lim Huai; Ramli, Razamin

    2014-12-01

    Over the years, nurse scheduling has been a noticeable problem, aggravated by the global nurse turnover crisis. The more dissatisfied nurses are with their working environment, the more likely they are to leave, and current undesirable work schedules are partly responsible for that working condition. Basically, there is a lack of complementarity between the head nurse's responsibilities and the nurses' needs. In particular, given highly diverse nurse preferences, the sophisticated challenge in nurse scheduling is the failure to encourage tolerance between both parties during shift assignment in real working scenarios. Inevitably, flexibility in shift assignment is hard to achieve when trying to satisfy diverse nurse requests while upholding the imperative of nurse ward coverage. Hence, an Evolutionary Algorithm (EA) is proposed to cater for this complexity in a nurse scheduling problem (NSP). The restrictions of the EA are discussed, and enhancements to the EA operators are suggested so that the EA has the characteristics of a flexible search. This paper considers three types of constraints, namely hard, semi-hard and soft constraints, which are handled by the EA with an enhanced parent selection and specialized mutation operators. These operators, and the EA as a whole, contribute to the efficiency of constraint handling and fitness computation, as well as to flexibility in the search, corresponding to the employment of exploration and exploitation principles.

  6. Constructing Robust Cooperative Networks using a Multi-Objective Evolutionary Algorithm

    PubMed Central

    Wang, Shuai; Liu, Jing

    2017-01-01

    The design and construction of network structures oriented towards different applications has attracted much attention recently. The existing studies indicated that structural heterogeneity plays different roles in promoting cooperation and robustness. Compared with rewiring a predefined network, it is more flexible and practical to construct new networks that satisfy the desired properties. Therefore, in this paper, we study a method for constructing robust cooperative networks where the only constraint is that the number of nodes and links is predefined. We model this network construction problem as a multi-objective optimization problem and propose a multi-objective evolutionary algorithm, named MOEA-Netrc, to generate the desired networks from arbitrary initializations. The performance of MOEA-Netrc is validated on several synthetic and real-world networks. The results show that MOEA-Netrc can construct balanced candidates and is insensitive to the initializations. MOEA-Netrc can find the Pareto fronts for networks with different levels of cooperation and robustness. In addition, further investigation of the robustness of the constructed networks revealed the impact on other aspects of robustness during the construction process. PMID:28134314

  8. Modeling an aquatic ecosystem: application of an evolutionary algorithm with genetic doping to reduce prediction uncertainty

    NASA Astrophysics Data System (ADS)

    Friedel, Michael; Buscema, Massimo

    2016-04-01

    Aquatic ecosystem models can potentially be used to understand the influence of stresses on catchment resource quality. Given that catchment responses are functions of natural and anthropogenic stresses reflected in sparse and spatiotemporal biological, physical, and chemical measurements, an ecosystem is difficult to model using statistical or numerical methods. We propose an artificial adaptive systems approach to model ecosystems. First, an unsupervised machine-learning (ML) network is trained using the set of available sparse and disparate data variables. Second, an evolutionary algorithm with genetic doping is applied to reduce the number of ecosystem variables to an optimal set. Third, the optimal set of ecosystem variables is used to retrain the ML network. Fourth, a stochastic cross-validation approach is applied to quantify and compare the nonlinear uncertainty in selected predictions of the original and reduced models. Results are presented for aquatic ecosystems (tens of thousands of square kilometers) undergoing landscape change: the Upper Illinois River Basin and the Central Colorado Assessment Project Area in the USA, and the Southland region in New Zealand.

  9. Multi-criteria optimal pole assignment robust controller design for uncertainty systems using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Sarjaš, Andrej; Chowdhury, Amor; Svečko, Rajko

    2016-09-01

    This paper presents the synthesis of an optimal robust controller design using the polynomial pole placement technique and a multi-criteria optimisation procedure via an evolutionary computation algorithm - differential evolution. The main idea of the design is to provide a reliable fixed-order robust controller structure and an efficient closed-loop performance with a preselected nominal characteristic polynomial. The multi-criteria objective functions have quasi-convex properties that significantly improve convergence and the regularity of the optimal/sub-optimal solution. The fundamental aim of the proposed design is to optimise those quasi-convex functions with fixed closed-loop characteristic polynomials, the properties of which are unrelated and hard to present within formal algebraic frameworks. The objective functions are derived from different closed-loop criteria, such as robustness with the H∞ metric, time performance indices, controller structures, stability properties, etc. Finally, the design results from the example verify the efficiency of the controller design and also indicate broader possibilities for different optimisation criteria and control structures.
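
    For reference, the optimisation engine here is standard differential evolution; a compact DE/rand/1/bin loop is sketched below. The objective would wrap the closed-loop criteria; this generic loop is not the authors' implementation.

    ```python
    import random

    def differential_evolution(objective, bounds, n=30, f=0.8, cr=0.9, gens=200):
        dim = len(bounds)
        pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
        fit = [objective(x) for x in pop]
        for _ in range(gens):
            for i in range(n):
                a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
                j_rand = random.randrange(dim)
                trial = [
                    min(max(a[j] + f * (b[j] - c[j]), bounds[j][0]), bounds[j][1])
                    if (random.random() < cr or j == j_rand) else pop[i][j]
                    for j in range(dim)
                ]
                f_trial = objective(trial)
                if f_trial < fit[i]:          # greedy one-to-one replacement
                    pop[i], fit[i] = trial, f_trial
        return pop[fit.index(min(fit))]
    ```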

  10. Evolutionary Algorithm Based Feature Optimization for Multi-Channel EEG Classification

    PubMed Central

    Wang, Yubo; Veluvolu, Kalyana C.

    2017-01-01

    Most BCI systems that rely on EEG signals employ Fourier-based methods of time-frequency decomposition for feature extraction. The band-limited multiple Fourier linear combiner is well-suited for such band-limited signals due to its real-time applicability. Despite the improved performance of these techniques in two-channel settings, their application to multiple-channel EEG is not straightforward and remains challenging. As more channels become available, a spatial filter is required to eliminate noise and preserve the useful information. Moreover, multiple-channel EEG also adds high dimensionality to the frequency feature space, so feature selection is required to stabilize the performance of the classifier. In this paper, we develop a new method based on an Evolutionary Algorithm (EA) to solve these two problems simultaneously. The real-valued EA encodes both the spatial filter estimates and the feature selection into its solution and optimizes them with respect to the classification error. Three Fourier-based designs are tested in this paper. Our results show that the combination of the Fourier-based method with the covariance matrix adaptation evolution strategy (CMA-ES) has the best overall performance. PMID:28203141
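
    The joint encoding can be pictured as one real-valued genome split into a spatial-filter part and a feature-mask part; the decoder below is a sketch of that idea with an assumed layout and threshold, not the paper's exact representation.

    ```python
    import numpy as np

    def decode(genome, n_channels, n_features, thresh=0.5):
        """Split a genome into spatial filter weights and a boolean
        feature-selection mask."""
        w = np.asarray(genome[:n_channels])
        mask = np.asarray(genome[n_channels:n_channels + n_features]) > thresh
        return w, mask

    # fitness(genome): apply w across channels, keep features where mask is
    # True, and return the classifier's validation error to be minimized.
    ```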

  11. Combining Interactive Infrastructure Modeling and Evolutionary Algorithm Optimization for Sustainable Water Resources Design

    NASA Astrophysics Data System (ADS)

    Smith, R.; Kasprzyk, J. R.; Zagona, E. A.

    2013-12-01

    Population growth and climate change, combined with difficulties in building new infrastructure, motivate portfolio-based solutions to ensuring sufficient water supply. Powerful simulation models with graphical user interfaces (GUIs) are often used to evaluate infrastructure portfolios; these GUI-based models require manual modification of the system parameters, such as reservoir operation rules, water transfer schemes, or system capacities. Multiobjective evolutionary algorithm (MOEA) based optimization can be employed to balance multiple objectives and automatically suggest designs for infrastructure systems, but MOEA-based decision support typically uses a fixed problem formulation (i.e., a single set of objectives, decisions, and constraints). This presentation suggests a dynamic framework for linking GUI-based infrastructure models with MOEA search. The framework begins with an initial formulation which is solved using a MOEA. Then, stakeholders can interact with candidate solutions, viewing their properties in the GUI model. This is followed by changes in the formulation which represent users' evolving understanding of exigent system properties. Our case study is built using RiverWare, an object-oriented, data-centered model that facilitates the representation of a diverse array of water resources systems. Results suggest that assumptions within the initial MOEA search are violated after investigating tradeoffs, and reveal how formulations should be modified to better capture stakeholders' preferences.

  12. Implementation and comparative analysis of the optimisations produced by evolutionary algorithms for the parameter extraction of PSP MOSFET model

    NASA Astrophysics Data System (ADS)

    Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.

    2016-05-01

    The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using the PSP surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method inspired by bird flocking. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and the basic ABC algorithm for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
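
    Of the compared methods, PSO is the simplest to state; a textbook global-best loop is sketched below, where the objective would be the misfit between measured and modelled MOSFET characteristics (a generic sketch, not the study's code).

    ```python
    import random

    def pso(objective, bounds, n=30, w=0.7, c1=1.5, c2=1.5, iters=200):
        dim = len(bounds)
        x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
        v = [[0.0] * dim for _ in range(n)]
        pbest = [p[:] for p in x]                 # personal bests
        pfit = [objective(p) for p in x]
        gfit = min(pfit)
        g = pbest[pfit.index(gfit)][:]            # global best
        for _ in range(iters):
            for i in range(n):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    v[i][d] = (w * v[i][d] + c1 * r1 * (pbest[i][d] - x[i][d])
                               + c2 * r2 * (g[d] - x[i][d]))
                    x[i][d] = min(max(x[i][d] + v[i][d], bounds[d][0]), bounds[d][1])
                fi = objective(x[i])
                if fi < pfit[i]:
                    pbest[i], pfit[i] = x[i][:], fi
                    if fi < gfit:
                        gfit, g = fi, x[i][:]
        return g, gfit
    ```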

  13. Enhancements to commissioning techniques and quality assurance of brachytherapy treatment planning systems that use model-based dose calculation algorithms.

    PubMed

    Rivard, Mark J; Beaulieu, Luc; Mourtada, Firas

    2010-06-01

    The current standard for brachytherapy dose calculations is based on the AAPM TG-43 formalism. Simplifications used in the TG-43 formalism have been challenged by many publications over the past decade. With the continuous increase in computing power, approaches based on fundamental physics processes or physics models such as the linear Boltzmann transport equation are now applicable in a clinical setting. Thus, model-based dose calculation algorithms (MBDCAs) have been introduced to address the TG-43 limitations for brachytherapy. The MBDCA approach results in a paradigm shift, which will require a concerted effort to integrate these algorithms properly into the radiation therapy community. MBDCAs will improve treatment planning relative to the implementation of the traditional TG-43 formalism by accounting for individualized, patient-specific radiation scatter conditions and the radiological effect of material heterogeneities differing from water. A snapshot of the current status of MBDCAs and the AAPM Task Group reports related to QA recommendations for brachytherapy treatment planning is presented. Some simplified Monte Carlo simulation results are also presented to delineate the effects MBDCAs are called to account for and to facilitate the discussion of suggestions for (i) new QA standards to augment current societal recommendations, (ii) consideration of dose specification such as dose to medium in medium, collisional kerma to medium in medium, or collisional kerma to water in medium, and (iii) the infrastructure needed to uniformly introduce these new algorithms. Suggestions in this Vision 20/20 article may serve as a basis for developing future standards to be recommended by professional societies such as the AAPM, ESTRO, and ABS toward providing consistent clinical implementation throughout the brachytherapy community and rigorous quality management of MBDCA-based treatment planning systems.

  14. Optimisation of groundwater level monitoring networks using geostatistical modelling based on the Spartan family variogram and a genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2016-04-01

    Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, the development of tools that regulators can use for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method, and is applied to the Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates, followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (the error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the

  15. A two-level hybrid evolutionary algorithm for modeling one-dimensional dynamic systems by higher-order ODE models.

    PubMed

    Cao, H Q; Kang, L S; Guo, T; Chen, Y P; de Garis, H

    2000-01-01

    This paper presents a new algorithm for modeling one-dimensional (1-D) dynamic systems by higher-order ordinary differential equation (HODE) models instead of the ARMA models used in traditional time series analysis. A two-level hybrid evolutionary modeling algorithm (THEMA) is used to approach the modeling problem of HODEs for dynamic systems. The main idea of this modeling algorithm is to embed a genetic algorithm (GA) into genetic programming (GP), where GP is employed to optimize the structure of a model (the upper level), while the GA is employed to optimize the parameters of the model (the lower level). In the GA, we use a novel crossover operator based on a nonconvex linear combination of multiple parents, which works efficiently and quickly in parameter optimization tasks. Two practical time series examples are used to demonstrate THEMA's effectiveness and advantages.
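
    The crossover in question can be sketched directly: draw weights that sum to one but are allowed to be negative, so offspring may lie outside the parents' convex hull. The weight distribution below is an assumption in the spirit of the description, not the authors' exact operator.

    ```python
    import random

    def multi_parent_crossover(parents, spread=0.5):
        """Nonconvex linear combination of k parent vectors."""
        k = len(parents)
        w = [random.gauss(1.0 / k, spread) for _ in range(k)]
        s = sum(w) or 1e-9                  # guard against a zero sum
        w = [wi / s for wi in w]            # normalize so sum(w) == 1
        return [sum(wi * p[d] for wi, p in zip(w, parents))
                for d in range(len(parents[0]))]
    ```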

  16. Handling time-expensive global optimization problems through the surrogate-enhanced evolutionary annealing-simplex algorithm

    NASA Astrophysics Data System (ADS)

    Tsoukalas, Ioannis; Kossieris, Panagiotis; Efstratiadis, Andreas; Makropoulos, Christos

    2015-04-01

    In water resources optimization problems, the calculation of the objective function usually requires first running a simulation model and then evaluating its outputs. In several cases, however, long simulation times may pose significant barriers to the optimization procedure. Often, to obtain a solution within a reasonable time, the user has to substantially restrict the allowable number of function evaluations, thus terminating the search much earlier than the problem's complexity requires. A promising novel strategy to address these shortcomings is the use of surrogate modelling techniques within global optimization algorithms. Here we introduce the Surrogate-Enhanced Evolutionary Annealing-Simplex (SE-EAS) algorithm, which couples the strengths of surrogate modelling with the effectiveness and efficiency of the EAS method. The algorithm combines three different optimization approaches (evolutionary search, simulated annealing and the downhill simplex search scheme), in which key decisions are partially guided by numerical approximations of the objective function. The performance of the proposed algorithm is benchmarked against other surrogate-assisted algorithms in both theoretical and practical applications (i.e. test functions and hydrological calibration problems, respectively), within a limited budget of trials (from 100 to 1000). Results reveal the significant potential of using SE-EAS in challenging optimization problems involving time-consuming simulations.

  17. Global WASF-GA: An Evolutionary Algorithm in Multiobjective Optimization to Approximate the Whole Pareto Optimal Front.

    PubMed

    Saborido, Rubén; Ruiz, Ana B; Luque, Mariano

    2016-02-08

    In this article, we propose a new evolutionary algorithm for multiobjective optimization called Global WASF-GA (global weighting achievement scalarizing function genetic algorithm), which falls within the family of aggregation-based evolutionary algorithms. The main purpose of Global WASF-GA is to approximate the whole Pareto optimal front. Its fitness function is defined by an achievement scalarizing function (ASF) based on the Tchebychev distance, in which two reference points are considered (the utopian and the nadir objective vectors) and the weight vector used is taken from a set of weight vectors whose inverses are well-distributed. At each iteration, all individuals are classified into different fronts. Each front is formed by the solutions with the lowest values of the ASF for the different weight vectors in the set, using the utopian vector and the nadir vector as reference points simultaneously. Varying the weight vector in the ASF while considering the utopian and the nadir vectors at the same time enables the algorithm to obtain a final set of nondominated solutions that approximates the whole Pareto optimal front. We compared Global WASF-GA to MOEA/D (different versions) and NSGA-II on two-, three-, and five-objective problems. The computational results obtained permit us to conclude that Global WASF-GA achieves better performance, regarding the hypervolume metric and the epsilon indicator, than the other two algorithms in many cases, especially in three- and five-objective problems.
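
    The fitness kernel is a weighted Tchebychev-type achievement scalarizing function, ASF(f; q, w) = max_i w_i (f_i - q_i) for a reference point q; Global WASF-GA evaluates it against both the utopian and the nadir vector. The function below is the standard ASF, shown for orientation rather than as the paper's exact classification code.

    ```python
    def asf(f, q, w):
        """Weighted Tchebychev achievement scalarizing function
        (minimization: smaller means closer to the reference point q)."""
        return max(wi * (fi - qi) for fi, qi, wi in zip(f, q, w))

    # per the paper's idea, a solution is scored with asf(f, utopian, w)
    # and asf(f, nadir, w) simultaneously for each weight vector w.
    ```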

  18. Investigating preferences for color-shape combinations with gaze driven optimization method based on evolutionary algorithms

    PubMed Central

    Holmes, Tim; Zanker, Johannes M.

    2013-01-01

    Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioral measures that directly reflect subjective choice. To determine individual preferences for simple composition rules, we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been demonstrated as a tool to identify aesthetic preferences (Holmes and Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combinations of color and shape which have been promoted in the Bauhaus arts school. We used the same three shapes (square, circle, triangle) used by Kandinsky (1923), with the three-color palette from the original experiment (A), an extended seven-color palette (B), and eight different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with eight stimuli of different shapes, colors and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in an unbiased experimental design and with an extended set of possible combinations. We tested six participants extensively on the different conditions and found consistent preferences for color-shape combinations for individuals, but little evidence at the group level for a clear color/shape preference consistent with Kandinsky's claims, apart from some weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of color and shapes, but also that these associations are robust within a single individual. These individual differences go some way toward challenging the claims of a universal preference for color/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the

  20. Specification of absorbed dose to water using model-based dose calculation algorithms for treatment planning in brachytherapy

    NASA Astrophysics Data System (ADS)

    Carlsson Tedgren, Åsa; Alm Carlsson, Gudrun

    2013-04-01

    Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have hitherto been reported as dose to water, and water may still be preferred as a dose specification medium. The dose to tissue medium Dmed then needs to be converted into the dose to water in tissue Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, infinitesimal (used for the definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate the photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50 and 300 keV photons and for photons from 125I, 169Yb and 192Ir sources; ratios of mass collision stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. The choice of an appropriate conversion coefficient Dw,med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA-bound water of nanometre dimensions contribute to DNA damage and cell killing, and may be the most important water compartment in cells, implying the use of ratios of mass collision stopping powers for converting Dmed into Dw,med.

  2. Model-based x-ray energy spectrum estimation algorithm from CT scanning data with spectrum filter

    NASA Astrophysics Data System (ADS)

    Li, Lei; Wang, Lin-Yuan; Yan, Bin

    2016-10-01

    With the development of technology, traditional X-ray CT cannot meet modern medical and industrial needs for material discrimination and identification. This is due to the inconsistency between the X-ray imaging system and the reconstruction algorithm. In current CT systems, the X-ray spectrum produced by the X-ray source is continuous over an energy range determined by the tube voltage and the energy filter, and the attenuation coefficient of the object varies with the X-ray energy. The distribution of the X-ray energy spectrum therefore plays an important role in beam-hardening correction, dual-energy CT image reconstruction and dose calculation. However, owing to the highly ill-conditioned and ill-posed nature of the system equations of the transmission measurement data, statistical fluctuations of the X-ray quanta and noise pollution, it is very hard to obtain a stable and accurate spectrum estimate using existing methods. In this paper, a model-based X-ray energy spectrum estimation method from CT scanning data with an energy spectrum filter is proposed. First, transmission measurement data were accurately acquired by CT scans and measurements using phantoms with different energy spectrum filters. Second, a physically meaningful X-ray tube spectrum model was established with weighted Gaussian functions and a priori information, such as the continuity of bremsstrahlung, the specificity of characteristic emission, and an estimate of the average attenuation coefficient. The model parameters were optimized to obtain the best estimate of the filtered spectrum. Finally, the original energy spectrum was reconstructed from the filtered spectrum estimate using the filter's a priori information. Experimental results demonstrate that the stability and accuracy of X-ray energy spectrum estimation using the proposed method are improved significantly.
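
    The spectrum model itself is easy to state: a weighted sum of Gaussians on an energy grid, with the weights (and possibly centers) as the optimization variables. The sketch below is a stand-in under that assumption; it omits the characteristic-emission priors and the filter physics.

    ```python
    import numpy as np

    def spectrum_model(energies, weights, centers, sigma=2.0):
        """Spectrum as a weighted Gaussian mixture, normalized to unit area."""
        e = np.asarray(energies, dtype=float)[:, None]
        g = np.exp(-0.5 * ((e - np.asarray(centers)[None, :]) / sigma) ** 2)
        s = g @ np.asarray(weights)
        return s / np.trapz(s, np.asarray(energies, dtype=float))
    ```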

  3. Project scheduling: A multi-objective evolutionary algorithm that optimizes the effectiveness of human resources and the project makespan

    NASA Astrophysics Data System (ADS)

    Yannibelli, Virginia; Amandi, Analía

    2013-01-01

    In this article, the project scheduling problem is addressed in order to assist project managers at the early stage of scheduling. Thus, as part of the problem, two priority optimization objectives for managers at that stage are considered. One of these objectives is to assign the most effective set of human resources to each project activity. The effectiveness of a human resource is considered to depend on its work context. The other objective is to minimize the project makespan. To solve the problem, a multi-objective evolutionary algorithm is proposed. This algorithm designs feasible schedules for a given project and evaluates the designed schedules in relation to each objective. The algorithm generates an approximation to the Pareto set as a solution to the problem. The computational experiments carried out on nine different instance sets are reported.

  4. Towards an Extended Evolutionary Game Theory with Survival Analysis and Agreement Algorithms for Modeling Uncertainty, Vulnerability, and Deception

    NASA Astrophysics Data System (ADS)

    Ma, Zhanshan (Sam)

    Competition, cooperation and communication are the three fundamental relationships upon which natural selection acts in the evolution of life. Evolutionary game theory (EGT) is a 'marriage' between game theory and Darwin's theory of evolution; it gains additional modeling power and flexibility by adopting population dynamics theory. In EGT, natural selection acts as an optimization agent and produces inherent strategies, which eliminates some essential assumptions of traditional game theory, such as rationality, and allows more realistic modeling of many problems. The Prisoner's Dilemma (PD) and Sir Philip Sidney (SPS) games are two well-known examples in EGT, formulated to study cooperation and communication, respectively. Despite its huge success, EGT exposes a certain degree of weakness in dealing with time-, space- and covariate-dependent (i.e., dynamic) uncertainty, vulnerability and deception. In this paper, I propose to extend EGT in two ways to overcome this weakness. First, I introduce survival analysis modeling to describe the lifetime or fitness of game players. This extension allows more flexible and powerful modeling of dynamic uncertainty and vulnerability (collectively equivalent to the dynamic frailty in survival analysis). Second, I introduce agreement algorithms, which can be the agreement algorithms of distributed computing (e.g., the Byzantine Generals Problem [6][8], Dynamic Hybrid Fault Models [12]) or any algorithms that set and enforce the rules by which players determine their consensus. The second extension is particularly useful for modeling dynamic deception (e.g., asymmetric faults in fault tolerance and deception in animal communication). From a computational perspective, extended evolutionary game theory (EEGT) modeling, when implemented in simulation, is equivalent to an optimization methodology similar to evolutionary computing approaches such as genetic algorithms with dynamic populations [15][17].

  5. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis applications. The proposed tool, denoted the PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.

  6. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    SciTech Connect

    Yi, Jianbing; Yang, Xuan; Li, Yan-Ran; Chen, Guoliang

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at the points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and the shifted target point set are used to estimate the transformation function between the source image and the target image. Results: The performance of the authors' method is evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors' method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points in each case, the mean and standard deviation of target registration errors on the

  7. Capability of the Maximax&Maximin selection operator in the evolutionary algorithm for a nurse scheduling problem

    NASA Astrophysics Data System (ADS)

    Ramli, Razamin; Tein, Lim Huai

    2016-08-01

    A good work schedule can improve hospital operations by providing better coverage with appropriate staffing levels in managing nurse personnel. Hence, constructing the best possible nurse work schedule is a worthwhile effort. To this end, an improved selection operator in the Evolutionary Algorithm (EA) strategy for a nurse scheduling problem (NSP) is proposed. Smart and efficient scheduling procedures were considered. The performance of each potential solution or schedule was computed through fitness evaluation. The best-so-far solution was obtained via a special Maximax&Maximin (MM) parent selection operator embedded in the EA, which fulfilled all constraints considered in the NSP.
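
    The abstract does not define the MM operator precisely; one plausible reading, offered purely as an illustrative sketch, selects one parent by the maximax rule (best best-case criterion) and one by the maximin rule (best worst-case criterion) over a vector of fitness criteria.

```python
import random

def maximax_maximin_select(population, fitness):
    """Select two parents from a population of candidate schedules.

    `fitness(ind)` returns a tuple of criterion scores (higher is better),
    e.g. coverage, fairness and preference satisfaction for a nurse roster.
    This pairing is an illustrative reading of an MM operator, not the
    paper's exact definition.
    """
    pool = random.sample(population, k=min(5, len(population)))  # tournament pool
    parent_a = max(pool, key=lambda ind: max(fitness(ind)))      # maximax: optimistic
    parent_b = max(pool, key=lambda ind: min(fitness(ind)))      # maximin: cautious
    return parent_a, parent_b

# Toy usage: individuals are just ids; criteria are random scores.
random.seed(1)
pop = list(range(20))
scores = {i: (random.random(), random.random(), random.random()) for i in pop}
print(maximax_maximin_select(pop, lambda i: scores[i]))
```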

  8. Multidimensional scaling for evolutionary algorithms--visualization of the path through search space and solution space using Sammon mapping.

    PubMed

    Pohlheim, Hartmut

    2006-01-01

    Multidimensional scaling as a technique for the presentation of high-dimensional data with standard visualization techniques is presented. The technique used is often known as Sammon mapping. We explain the mathematical foundations of multidimensional scaling and its robust calculation. We also demonstrate the use of this technique in the area of evolutionary algorithms. First, we present the visualization of the path through the search space of the best individuals during an optimization run. We then apply multidimensional scaling to the comparison of multiple runs regarding the variables of individuals and multi-criteria objective values (path through the solution space).
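
    A minimal sketch of Sammon mapping by plain gradient descent on the Sammon stress is given below; robust implementations, as the paper discusses, use safeguarded step-size schemes rather than the fixed learning rate assumed here.

```python
import numpy as np

def sammon(X, dim=2, iters=500, lr=0.3, eps=1e-9):
    """Project X (n x p) to `dim` dimensions by gradient descent on Sammon stress.

    E = (1/c) * sum_{i<j} (D_ij - d_ij)^2 / D_ij, with c = sum_{i<j} D_ij.
    """
    rng = np.random.default_rng(0)
    n = X.shape[0]
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1) + eps  # input distances
    c = D[np.triu_indices(n, 1)].sum()
    Y = 1e-2 * rng.standard_normal((n, dim))                    # initial layout
    for _ in range(iters):
        d = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
        np.fill_diagonal(d, 1.0)                                # avoid divide-by-zero
        W = (D - d) / (d * D)
        np.fill_diagonal(W, 0.0)
        grad = (-2.0 / c) * (W[:, :, None] * (Y[:, None] - Y[None, :])).sum(axis=1)
        Y -= lr * grad
    return Y

# E.g., embed the best individual of each generation to draw the search path.
X = np.random.default_rng(1).random((30, 5))
print(sammon(X).shape)   # (30, 2) coordinates ready for a standard 2D plot
```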

  9. Restart Operator Meta-heuristics for a Problem-Oriented Evolutionary Strategies Algorithm in Inverse Mathematical MISO Modelling Problem Solving

    NASA Astrophysics Data System (ADS)

    Ryzhikov, I. S.; Semenkin, E. S.

    2017-02-01

    This study is focused on solving an inverse mathematical modelling problem for dynamical systems based on observation data and control inputs. The mathematical model is sought in the form of a linear differential equation, which determines a system with multiple inputs and a single output, together with a vector of initial point coordinates. The described problem is complex and multimodal, and for this reason the proposed evolutionary optimization technique, which is oriented towards dynamical system identification problems, was applied. To improve its performance, an algorithm restart operator was implemented.
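
    The restart meta-heuristic can be illustrated with a simple wrapper around a basic evolution strategy; the stall-based restart rule and all parameter values below are illustrative assumptions, not the authors' exact operator.

```python
import numpy as np

def es_with_restarts(objective, dim, budget=20000, pop=40, stall_limit=25):
    """Truncation-selection ES wrapped with a simple restart meta-heuristic.

    Restart rule (an illustrative choice): if the best fitness has not
    improved for `stall_limit` generations, reinitialize the population
    while keeping only the best-so-far solution.
    """
    rng = np.random.default_rng(0)
    best_x, best_f = None, np.inf
    evals = 0
    while evals < budget:
        x = rng.uniform(-5, 5, (pop, dim))     # (re)start from a fresh population
        sigma, stall, local_best = 0.5, 0, np.inf
        while stall < stall_limit and evals < budget:
            f = np.array([objective(xi) for xi in x]); evals += pop
            order = np.argsort(f)
            if f[order[0]] < local_best - 1e-12:
                local_best, stall = f[order[0]], 0
            else:
                stall += 1
            if f[order[0]] < best_f:
                best_f, best_x = f[order[0]], x[order[0]].copy()
            elite = x[order[:pop // 4]]        # truncation selection
            x = elite[rng.integers(0, len(elite), pop)] \
                + sigma * rng.standard_normal((pop, dim))
    return best_x, best_f

# Multimodal test: Rastrigin-like function with many local optima.
f = lambda z: 10 * len(z) + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
print(es_with_restarts(f, dim=4)[1])
```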

  10. CCS Site Optimization by Applying a Multi-objective Evolutionary Algorithm to Semi-Analytical Leakage Models

    NASA Astrophysics Data System (ADS)

    Cody, B. M.; Gonzalez-Nicolas, A.; Bau, D. A.

    2011-12-01

    Carbon capture and storage (CCS) has been proposed as a method of reducing global carbon dioxide (CO2) emissions. Although CCS has the potential to greatly retard greenhouse gas loading to the atmosphere while cleaner, more sustainable energy solutions are developed, there is a possibility that sequestered CO2 may leak and intrude into and adversely affect groundwater resources. It has been reported [1] that, while CO2 intrusion typically does not directly threaten underground drinking water resources, it may cause secondary effects, such as the mobilization of hazardous inorganic constituents present in aquifer minerals and changes in pH values. These risks must be fully understood and minimized before CCS project implementation. Combined management of project resources and leakage risk is crucial for the implementation of CCS. In this work, we present a method of: (a) minimizing the total CCS cost, the summation of major project costs with the cost associated with CO2 leakage; and (b) maximizing the mass of injected CO2, for a given proposed sequestration site. Optimization decision variables include the number of CO2 injection wells, injection rates, and injection well locations. The capital and operational costs of injection wells are directly related to injection well depth, location, injection flow rate, and injection duration. The cost of leakage is directly related to the mass of CO2 leaked through weak areas, such as abandoned oil wells, in the cap rock layers overlying the injected formation. Additional constraints on fluid overpressure caused by CO2 injection are imposed to maintain predefined effective stress levels that prevent cap rock fracturing. Here, both mass leakage and fluid overpressure are estimated using two semi-analytical models based upon work by [2,3]. A multi-objective evolutionary algorithm coupled with these semi-analytical leakage flow models is used to determine Pareto-optimal trade-off sets giving minimum total cost vs. maximum mass
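
    The Pareto logic behind such trade-off sets can be stated in a few lines; the sketch below is a generic dominance filter for the two objectives named above (minimize total cost, maximize injected mass), not the specific MOEA used in the study.

```python
def dominates(a, b):
    """True if solution a dominates b for (minimize cost, maximize injected mass).

    Each solution is a tuple (total_cost, injected_mass): a must be at least
    as good in both objectives and strictly better in one.
    """
    cost_a, mass_a = a
    cost_b, mass_b = b
    return (cost_a <= cost_b and mass_a >= mass_b) and (cost_a < cost_b or mass_a > mass_b)

def pareto_front(solutions):
    """Keep only the nondominated solutions (the trade-off set)."""
    return [s for s in solutions if not any(dominates(t, s) for t in solutions)]

candidates = [(10.0, 5.0), (12.0, 9.0), (11.0, 4.0), (15.0, 9.5), (12.5, 8.0)]
print(pareto_front(candidates))   # cheaper projects vs. more CO2 stored
```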

  11. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures

    PubMed Central

    Arana-Daniel, Nancy; Gallegos, Alberto A.; López-Franco, Carlos; Alanís, Alma Y.; Morales, Jacob; López-Franco, Adriana

    2016-01-01

    With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification, etc. Most state of the art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement based on evolutionary algorithms and Kernel-Adatron for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels. PMID:27980384
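
    The classical Kernel-Adatron that the evolutionary training replaces can be sketched compactly; the RBF kernel and all parameter values here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def kernel_adatron(X, y, gamma=0.5, eta=0.1, epochs=200, C=10.0):
    """Classic Kernel-Adatron update with an RBF kernel (bias term omitted).

    alpha_i <- clip(alpha_i + eta * (1 - y_i * z_i), 0, C), where
    z_i = sum_j alpha_j y_j K(x_j, x_i). This is the gradient-style baseline
    that the paper replaces with evolutionary search over the multipliers.
    """
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                    # RBF Gram matrix
    alpha = np.zeros(len(y))
    for _ in range(epochs):
        margins = (alpha * y) @ K
        alpha = np.clip(alpha + eta * (1.0 - y * margins), 0.0, C)
    return alpha

def predict(X_train, y_train, alpha, X_new, gamma=0.5):
    sq = ((X_new[:, None] - X_train[None, :]) ** 2).sum(-1)
    return np.sign(np.exp(-gamma * sq) @ (alpha * y_train))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])
alpha = kernel_adatron(X, y)
print((predict(X, y, alpha, X) == y).mean())   # training accuracy on toy data
```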

  12. Multi-objective entropy evolutionary algorithm for marine oil spill detection using cosmo-skymed satellite data

    NASA Astrophysics Data System (ADS)

    Marghany, M.

    2015-06-01

    Oil spill pollution has a substantial role in damaging the marine ecosystem. An oil spill that floats on top of the water affects the food chain in the ecosystem as well as decreasing fauna populations. In fact, an oil spill reduces the sunlight that penetrates the water, limiting the photosynthesis of marine plants and phytoplankton. Moreover, marine mammals exposed to oil spills have reduced insulating capacity, making them more vulnerable to temperature variations and much less buoyant in seawater. This study demonstrates a design tool for oil spill detection in SAR satellite data using an optimized entropy-based Multi-Objective Evolutionary Algorithm (E-MMGA), which is based on Pareto-optimal solutions. The study also shows that the optimized entropy-based Multi-Objective Evolutionary Algorithm provides an accurate pattern of the oil slick in SAR data. This is shown by 85% for oil spill, 10% for look-alikes and 5% for sea roughness using the receiver operating characteristic (ROC) curve. The E-MMGA also shows excellent performance in SAR data. In conclusion, E-MMGA can be used as an entropy optimizer to perform automatic detection of oil spills in SAR satellite data.

  13. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures.

    PubMed

    Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana

    2016-01-01

    With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification, etc. Most state of the art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement based on evolutionary algorithms and Kernel-Adatron for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.

  14. An evolutionary algorithm for global optimization based on self-organizing maps

    NASA Astrophysics Data System (ADS)

    Barmada, Sami; Raugi, Marco; Tucci, Mauro

    2016-10-01

    In this article, a new population-based algorithm for real-parameter global optimization is presented, denoted self-organizing centroids optimization (SOC-opt). The proposed method uses a stochastic approach based on the sequential learning paradigm for self-organizing maps (SOMs). A modified version of the SOM is proposed where each cell contains an individual, which performs a search for a locally optimal solution and is affected by the search for a global optimum. The movement of the individuals in the search space is based on a discrete-time dynamic filter, and various choices of this filter are possible to obtain different dynamics of the centroids. In this way, a general framework is defined in which well-known algorithms represent particular cases. The proposed algorithm is validated through a set of problems, which include non-separable problems, and compared with state-of-the-art algorithms for global optimization.
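
    A loose sketch of the idea, a SOM grid whose centroids drift toward promising samples, is given below; the real SOC-opt moves each centroid through a discrete-time dynamic filter, so this only illustrates the sequential SOM learning paradigm it builds on.

```python
import numpy as np

def soc_opt_sketch(f, dim, grid=6, iters=400, batch=20, lr0=0.5, sigma0=2.0):
    """Elite-driven SOM optimizer sketch in the spirit of SOC-opt.

    Candidates are sampled around the centroids; the best candidate of each
    batch acts as the training input of a sequential SOM update, so the grid
    drifts toward promising regions of the search space.
    """
    rng = np.random.default_rng(0)
    W = rng.uniform(-5, 5, (grid, grid, dim))                # one centroid per cell
    coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                                  indexing="ij"), axis=-1).astype(float)
    best_x, best_f = None, np.inf
    for t in range(iters):
        lr = lr0 * (1.0 - t / iters) + 0.01                  # decaying learning rate
        sigma = sigma0 * (1.0 - t / iters) + 0.3             # shrinking neighbourhood
        cells = rng.integers(0, grid, size=(batch, 2))
        cand = W[cells[:, 0], cells[:, 1]] + 0.3 * rng.standard_normal((batch, dim))
        vals = np.array([f(c) for c in cand])
        elite = cand[vals.argmin()]                          # best sample this step
        if vals.min() < best_f:
            best_x, best_f = elite.copy(), vals.min()
        win = np.unravel_index(((W - elite) ** 2).sum(-1).argmin(), (grid, grid))
        h = np.exp(-((coords - np.array(win, float)) ** 2).sum(-1) / (2 * sigma**2))
        W += lr * h[..., None] * (elite - W)                 # SOM neighbourhood update
    return best_x, best_f

print(soc_opt_sketch(lambda z: float(np.sum(z**2)), dim=3)[1])
```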

  15. A Novel Automatic Detection System for ECG Arrhythmias Using Maximum Margin Clustering with Immune Evolutionary Algorithm

    PubMed Central

    Zhu, Bohui; Ding, Yongsheng; Hao, Kuangrong

    2013-01-01

    This paper presents a novel maximum margin clustering method with immune evolution (IEMMC) for the automatic diagnosis of electrocardiogram (ECG) arrhythmias. This diagnostic system consists of signal processing, feature extraction, and the IEMMC algorithm for clustering of ECG arrhythmias. First, the raw ECG signal is processed by an adaptive ECG filter based on wavelet transforms, and the waveform of the ECG signal is detected; then, features are extracted from the ECG signal to cluster different types of arrhythmias by the IEMMC algorithm. Three performance evaluation indicators, sensitivity, specificity, and accuracy, are used to assess the effect of the IEMMC method for ECG arrhythmias. Compared with K-means and iterSVR algorithms, the IEMMC algorithm shows better performance not only in clustering results but also in terms of global search ability and convergence, which proves its effectiveness for the detection of ECG arrhythmias. PMID:23690875

  16. Comparison of Algorithms for Prediction of Protein Structural Features from Evolutionary Data.

    PubMed

    Bywater, Robert P

    2016-01-01

    Proteins have many functions and predicting these is still one of the major challenges in theoretical biophysics and bioinformatics. Foremost amongst these functions is the need to fold correctly, thereby allowing the other genetically dictated tasks that the protein has to carry out to proceed efficiently. In this work, some earlier algorithms for predicting protein domain folds are revisited and compared with more recently developed methods. In dealing with intractable problems such as fold prediction, when different algorithms show convergence onto the same result there is every reason to take all algorithms into account so that a consensus result can be arrived at. In this work it is shown that the application of different algorithms in protein structure prediction leads to results that do not converge as such, but rather collude in a striking and useful way that has never been considered before.

  17. Percentage depth dose calculation accuracy of model based algorithms in high energy photon small fields through heterogeneous media and comparison with plastic scintillator dosimetry.

    PubMed

    Alagar, Ananda Giri Babu; Kadirampatti Mani, Ganesh; Karunakaran, Kaviarasu

    2016-01-08

    Small fields (smaller than 4 × 4 cm2) are used in stereotactic and conformal treatments where heterogeneity is normally present. Since dose calculation in both small fields and heterogeneity often involves larger discrepancies, the algorithms used by treatment planning systems (TPS) should be evaluated for achieving better treatment results. This report aims at evaluating the accuracy of four model-based algorithms: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-Xio, and AcurosXB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse, tested against measurement. Measurements are done using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities like air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons of square field sizes ranging from 1 × 1 to 4 × 4 cm2. Each heterogeneity is introduced individually at two different depths from the depth of dose maximum (Dmax), one setup being nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup is measured separately and compared with that calculated by the TPS algorithm for the same setup. The percentage normalized root mean squared deviation (%NRMSD), which represents the whole CADD curve's deviation from the measured one, is calculated. It is found that for air and lung heterogeneity, for both 6 and 15 MV, all algorithms show maximum deviation for the 1 × 1 cm2 field size, and the deviation gradually reduces as field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller for 15 MV irrespective of setup. In all heterogeneity setups, the 1 × 1 cm2 field showed maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when a heterogeneity is nearer to Dmax than when the same heterogeneity is far from it. Also, all algorithms show maximum deviation in lower-density materials compared to high-density materials.
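
    A possible implementation of the %NRMSD figure of merit is sketched below; normalizing by the maximum of the measured curve is one common convention and is an assumption here, since the abstract does not spell out the paper's exact normalization.

```python
import numpy as np

def percent_nrmsd(calculated, measured):
    """Percentage normalized RMS deviation between two depth-dose curves.

    Both curves must be sampled at the same depths; normalization by the
    maximum of the measured curve is an assumed convention.
    """
    calculated = np.asarray(calculated, float)
    measured = np.asarray(measured, float)
    rmsd = np.sqrt(np.mean((calculated - measured) ** 2))
    return 100.0 * rmsd / measured.max()

depth = np.linspace(0, 20, 50)                    # cm
measured = 100 * np.exp(-0.05 * depth)            # toy PDD curve
calculated = measured * 1.02                      # TPS curve with a 2% offset
print(percent_nrmsd(calculated, measured))        # ~1.3% for this toy example
```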

  18. A master-slave parallel hybrid multi-objective evolutionary algorithm for groundwater remediation design under general hydrogeological conditions

    NASA Astrophysics Data System (ADS)

    Wu, J.; Yang, Y.; Luo, Q.; Wu, J.

    2012-12-01

    This study presents a new hybrid multi-objective evolutionary algorithm, the niched Pareto tabu search combined with a genetic algorithm (NPTSGA), whereby the global search ability of the niched Pareto tabu search (NPTS) is improved by the diversification of candidate solutions arising from the evolving nondominated sorting genetic algorithm II (NSGA-II) population. The NPTSGA, coupled with the commonly used groundwater flow and transport codes MODFLOW and MT3DMS, is developed for the multi-objective optimal design of groundwater remediation systems. The proposed methodology is then applied to a large-scale field groundwater remediation system for cleanup of a large trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. Furthermore, a master-slave (MS) parallelization scheme based on the Message Passing Interface (MPI) is incorporated into the NPTSGA to implement objective function evaluations in a distributed processor environment, which can greatly improve the efficiency of the NPTSGA in finding Pareto-optimal solutions for the real-world application. This study shows that the MS parallel NPTSGA, in comparison with the original NPTS and NSGA-II, can balance the tradeoff between diversity and optimality of solutions during the search process and is an efficient and effective tool for optimizing the multi-objective design of groundwater remediation systems under complicated hydrogeologic conditions.

  19. A model-based parallel origin and orientation refinement algorithm for cryoTEM and its application to the study of virus structures

    PubMed Central

    Ji, Yongchang; Marinescu, Dan C.; Zhang, Wei; Zhang, Xing; Yan, Xiaodong; Baker, Timothy S.

    2014-01-01

    We present a model-based parallel algorithm for origin and orientation refinement for 3D reconstruction in cryoTEM. The algorithm is based upon the Projection Theorem of the Fourier Transform. Rather than projecting the current 3D model and searching for the best match between an experimental view and the calculated projections, the algorithm computes the Discrete Fourier Transform (DFT) of each projection and searches for the central section (“cut”) of the 3D DFT that best matches the DFT of the projection. Factors that affect the efficiency of a parallel program are first reviewed and then the performance and limitations of the proposed algorithm are discussed. The parallel program that implements this algorithm, called PO2R, has been used for the refinement of several virus structures, including those of the 500 Å diameter dengue virus (to 9.5 Å resolution), the 850 Å mammalian reovirus (to better than 7 Å), and the 1800 Å paramecium bursaria chlorella virus (to 15 Å). PMID:16459100

  20. Artificial Neural Networks, and Evolutionary Algorithms as a systems biology approach to a data-base on fetal growth restriction.

    PubMed

    Street, Maria E; Buscema, Massimo; Smerieri, Arianna; Montanini, Luisa; Grossi, Enzo

    2013-12-01

    One of the specific aims of systems biology is to model and discover properties of the functioning of cells, tissues and organisms. A systems biology approach was undertaken to investigate, as far as possible, the entire system of intra-uterine growth we had available, to assess the variables of interest, discriminate those which were effectively related to appropriate or restricted intrauterine growth, and achieve an understanding of the system in these two conditions. The Artificial Adaptive Systems, which include Artificial Neural Networks and Evolutionary Algorithms, led us to the first analyses. These analyses identified the importance of the biochemical variables IL-6, IGF-II and IGFBP-2 protein concentrations in placental lysates, offered a new insight into placental markers of fetal growth within the IGF and cytokine systems, confirmed their interrelationships, and offered a critical assessment of previously performed studies.

  1. Multi-objective optimization using evolutionary algorithms for qualitative and quantitative control of urban runoff

    NASA Astrophysics Data System (ADS)

    Oraei Zare, S.; Saghafian, B.; Shamsai, A.; Nazif, S.

    2012-01-01

    Urban development affects the quantity and quality of urban floods. Generally, flood management comprises planning and management activities to reduce the harmful effects of floods on people, the environment and the economy of a region. In recent years, a concept called Best Management Practices (BMPs) has been widely used for urban flood control from both quality and quantity aspects. In this paper, three objective functions relating to the quality of runoff (including BOD5 and TSS parameters), the quantity of runoff (including runoff volume produced at each sub-basin) and expenses (including construction and maintenance costs of BMPs) were employed in an optimization algorithm aimed at finding the optimal solution. The MOPSO and NSGAII optimization methods were coupled with the SWMM urban runoff simulation model. In the proposed structure for the NSGAII algorithm, a continuous structure and intermediate crossover were used because they perform better in improving the optimization model's efficiency. To compare the performance of the two optimization algorithms, a number of statistical indicators were computed for the last generation of solutions. Comparing the Pareto solutions resulting from each of the optimization algorithms indicated that the NSGAII solutions were more optimal. Moreover, the standard deviation of solutions in the last generation showed no significant differences in comparison with MOPSO.

  2. A two-dimensional coupled flow-mass transport model based on an improved unstructured finite volume algorithm.

    PubMed

    Zhou, Jianzhong; Song, Lixiang; Kursan, Suncana; Liu, Yi

    2015-05-01

    A two-dimensional coupled water quality model is developed for modeling the flow-mass transport in shallow water. To simulate shallow flows on complex topography with wetting and drying, an unstructured grid, well-balanced, finite volume algorithm is proposed for numerical resolution of a modified formulation of two-dimensional shallow water equations. The slope-limited linear reconstruction method is used to achieve second-order accuracy in space. The algorithm adopts a HLLC-based integrated solver to compute the flow and mass transport fluxes simultaneously, and uses Hancock's predictor-corrector scheme for efficient time stepping as well as second-order temporal accuracy. The continuity and momentum equations are updated in both wet and dry cells. A new hybrid method, which can preserve the well-balanced property of the algorithm for simulations involving flooding and recession, is proposed for bed slope terms approximation. The effectiveness and robustness of the proposed algorithm are validated by the reasonably good agreement between numerical and reference results of several benchmark test cases. Results show that the proposed coupled flow-mass transport model can simulate complex flows and mass transport in shallow water.

  3. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    PubMed

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-04

    Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms give different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and thus enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than these algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
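
    The binomial idea can be illustrated with a minimal peptide-spectrum match score; this mirrors the general tail-probability reasoning only, since the published ProVerB scoring function additionally weights peak intensities.

```python
from math import comb, log10

def binomial_match_score(n_theoretical, k_matched, p_match):
    """Score a peptide-spectrum match by how unlikely k matches are by chance.

    Computes P(X >= k) for X ~ Binomial(n, p) and returns -log10 of that
    tail probability, so higher scores mean less probable (more significant)
    matches. p_match is the assumed chance of a random peak match.
    """
    tail = sum(comb(n_theoretical, i) * p_match**i * (1 - p_match)**(n_theoretical - i)
               for i in range(k_matched, n_theoretical + 1))
    return -log10(max(tail, 1e-300))

# 12 of 20 theoretical fragment ions matched, 10% chance of a random peak match:
print(binomial_match_score(20, 12, 0.10))   # large score => confident identification
```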

  4. Constructing large-scale genetic maps using an evolutionary strategy algorithm.

    PubMed Central

    Mester, D; Ronin, Y; Minkov, D; Nevo, E; Korol, A

    2003-01-01

    This article is devoted to the problem of ordering in linkage groups with many dozens or even hundreds of markers. The ordering problem belongs to the field of discrete optimization on a set of all possible orders, amounting to n!/2 for n loci; hence it is considered an NP-hard problem. Several authors attempted to employ the methods developed in the well-known traveling salesman problem (TSP) for multilocus ordering, using the assumption that for a set of linked loci the true order will be the one that minimizes the total length of the linkage group. A novel, fast, and reliable algorithm developed for the TSP and based on evolution-strategy discrete optimization was applied in this study for multilocus ordering on the basis of pairwise recombination frequencies. The quality of derived maps under various complications (dominant vs. codominant markers, marker misclassification, negative and positive interference, and missing data) was analyzed using simulated data with approximately 50-400 markers. High performance of the employed algorithm allows systematic treatment of the problem of verification of the obtained multilocus orders on the basis of computing-intensive bootstrap and/or jackknife approaches for detecting and removing questionable marker scores, thereby stabilizing the resulting maps. Parallel calculation technology can easily be adopted for further acceleration of the proposed algorithm. Real data analysis (on maize chromosome 1 with 230 markers) is provided to illustrate the proposed methodology. PMID:14704202
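
    A minimal (1+1) evolution strategy with inversion mutation conveys the TSP-style ordering idea; the toy distance matrix, acceptance rule and iteration budget below are illustrative assumptions, not the authors' full algorithm.

```python
import random

def tour_length(order, dist):
    """Total adjacent-pair distance of a locus order (open path, not a cycle)."""
    return sum(dist[order[i]][order[i + 1]] for i in range(len(order) - 1))

def es_order_loci(dist, iters=20000, seed=0):
    """(1+1) evolution strategy with inversion mutation for multilocus ordering.

    `dist[i][j]` would hold pairwise recombination frequencies; the accepted
    order minimizes the summed adjacent distances, per the TSP analogy.
    """
    rng = random.Random(seed)
    n = len(dist)
    order = list(range(n))
    rng.shuffle(order)
    best = tour_length(order, dist)
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]   # inversion move
        c = tour_length(cand, dist)
        if c <= best:
            order, best = cand, c
    return order, best

# Toy data: loci truly ordered 0..9 with distance = |i - j|.
dist = [[abs(i - j) for j in range(10)] for i in range(10)]
order, total = es_order_loci(dist)
print(order, total)   # recovers 0..9 (or its reverse), total = 9
```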

  5. Model-based surface soil moisture (SSM) retrieval algorithm using multi-temporal RISAT-1 C-band SAR data

    NASA Astrophysics Data System (ADS)

    Pandey, Dharmendra K.; Maity, Saroj; Bhattacharya, Bimal; Misra, Arundhati

    2016-05-01

    Accurate measurement of the surface soil moisture of bare and vegetation-covered soil over agricultural fields, and monitoring of its changes, is vital for managing and mitigating risk to agricultural crops; it requires information and knowledge to assess risk potential, implement risk reduction strategies and deliver essential responses. The empirical and semi-empirical model-based soil moisture inversion approaches developed in the past are either sensor or region specific, vegetation type specific, or have a limited validity range, and have limited scope to explain physical scattering processes. Hence, there is a need for more robust, physical, polarimetric radar backscatter model-based retrieval methods, which are sensor and location independent and have a wide range of validity over soil properties. In the present study, the Integral Equation Model (IEM) and the Vector Radiative Transfer (VRT) model were used to simulate averaged backscatter coefficients for various soil moisture (dry, moist and wet soil), soil roughness (smooth to very rough) and crop conditions (low to high vegetation water content) over selected regions of Gujarat state, India, and the results were compared with multi-temporal Radar Imaging Satellite-1 (RISAT-1) C-band Synthetic Aperture Radar (SAR) data in σ°HH and σ°HV polarizations, in sync with field-measured soil and crop conditions. High correlations were observed between RISAT-1 σ°HH and σ°HV and the model-simulated σ°HH and σ°HV based on field-measured soil conditions, with the coefficient of determination R2 varying from 0.84 to 0.77 and RMSE varying from 0.94 dB to 2.1 dB for bare soil. In the case of the winter wheat crop, R2 varied from 0.84 to 0.79 and RMSE from 0.87 dB to 1.34 dB, corresponding to vegetation water content values up to 3.4 kg/m2. Artificial Neural Network (ANN) methods were adopted for model-based soil moisture inversion. The training datasets for the NNs were

  6. Comparative Local Quality Assessment of 3D Medical Image Segmentations with Focus on Statistical Shape Model-Based Algorithms.

    PubMed

    Landesberger, Tatiana von; Basgier, Dennis; Becker, Meike

    2016-12-01

    The quality of automatic 3D medical segmentation algorithms needs to be assessed on test datasets comprising several 3D images (i.e., instances of an organ). The experts need to compare the segmentation quality across the dataset in order to detect systematic segmentation problems. However, such comparative evaluation is not supported well by current methods. We present a novel system for assessing and comparing segmentation quality in a dataset with multiple 3D images. The data is analyzed and visualized in several views. We detect and show regions with systematic segmentation quality characteristics. For this purpose, we extended a hierarchical clustering algorithm with a connectivity criterion. We combine quality values across the dataset for determining regions with characteristic segmentation quality across instances. Using our system, the experts can also identify 3D segmentations with extraordinary quality characteristics. While we focus on algorithms based on statistical shape models, our approach can also be applied to cases, where landmark correspondences among instances can be established. We applied our approach to three real datasets: liver, cochlea and facial nerve. The segmentation experts were able to identify organ regions with systematic segmentation characteristics as well as to detect outlier instances.

  7. A COMPARISON OF MODEL BASED AND DIRECT OPTIMIZATION BASED FILTERING ALGORITHMS FOR SHEARWAVE VELOCITY RECONSTRUCTION FOR ELECTRODE VIBRATION ELASTOGRAPHY.

    PubMed

    Ingle, Atul; Varghese, Tomy

    2013-04-01

    Tissue stiffness estimation plays an important role in cancer detection and treatment. The presence of stiffer regions in healthy tissue can be used as an indicator of the possibility of pathological changes. Electrode vibration elastography involves tracking of a mechanical shear wave in tissue using radio-frequency ultrasound echoes. Based on appropriate assumptions on tissue elasticity, this approach provides a direct way of measuring tissue stiffness from shear wave velocity, and enables visualization in the form of tissue stiffness maps. In this study, two algorithms for shear wave velocity reconstruction in an electrode vibration setup are presented. The first method models the wave arrival time data using a hidden Markov model whose hidden states are local wave velocities that are estimated using a particle filter implementation. This is compared to a direct optimization-based function fitting approach that uses sequential quadratic programming to estimate the unknown velocities and locations of interfaces. The mean shear wave velocities obtained using the two algorithms are within 10% of each other. Moreover, the Young's modulus estimates obtained from an incompressibility assumption are within 15 kPa of the true stiffness values obtained from mechanical testing. Based on visual inspection, the particle filtering method produces smoother velocity maps.
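
    A bootstrap particle filter over a random-walk velocity state conveys the first method's filtering idea; the state and noise models below are generic assumptions, not the paper's exact hidden Markov model.

```python
import numpy as np

def particle_filter_velocity(arrival_times, dx, n_particles=2000, seed=0):
    """Bootstrap particle filter for local shear wave velocity estimation.

    State: local wave speed v_k with a random-walk prior; observation: wave
    arrival time t_k = t_{k-1} + dx / v_k plus Gaussian noise.
    """
    rng = np.random.default_rng(seed)
    v = rng.uniform(1.0, 6.0, n_particles)          # m/s, plausible tissue range
    t = np.full(n_particles, arrival_times[0])
    estimates = []
    for obs in arrival_times[1:]:
        v = np.abs(v + 0.3 * rng.standard_normal(n_particles))  # random-walk drift
        t = t + dx / v
        w = np.exp(-0.5 * ((obs - t) / 1e-4) ** 2)  # arrival-time noise ~0.1 ms
        w /= w.sum()
        estimates.append(float(np.sum(w * v)))      # posterior-mean local speed
        idx = rng.choice(n_particles, n_particles, p=w)          # resample
        v = v[idx]
        t = np.full(n_particles, obs)               # anchor on the measured arrival
    return estimates

# Toy medium: 2 m/s in the first half, 4 m/s in the second, probed at 1 mm steps.
true_v = [2.0] * 10 + [4.0] * 10
times = np.cumsum([0.0] + [0.001 / v for v in true_v])
print(np.round(particle_filter_velocity(times, 0.001), 2))
```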

  8. C16S - a Hidden Markov Model based algorithm for taxonomic classification of 16S rRNA gene sequences.

    PubMed

    Ghosh, Tarini Shankar; Gajjalla, Purnachander; Mohammed, Monzoorul Haque; Mande, Sharmila S

    2012-04-01

    Recent advances in high throughput sequencing technologies and concurrent refinements in 16S rDNA isolation techniques have facilitated the rapid extraction and sequencing of 16S rDNA content of microbial communities. The taxonomic affiliation of these 16S rDNA fragments is subsequently obtained using either BLAST-based or word frequency based approaches. However, the classification accuracy of such methods is observed to be limited in typical metagenomic scenarios, wherein a majority of organisms are hitherto unknown. In this study, we present a 16S rDNA classification algorithm, called C16S, that uses genus-specific Hidden Markov Models for taxonomic classification of 16S rDNA sequences. Results obtained using C16S have been compared with the widely used RDP classifier. The performance of C16S algorithm was observed to be consistently higher than the RDP classifier. In some scenarios, this increase in accuracy is as high as 34%. A web-server for the C16S algorithm is available at http://metagenomics.atc.tcs.com/C16S/.

  9. Improvement of fluorescence-enhanced optical tomography with improved optical filtering and accurate model-based reconstruction algorithms.

    PubMed

    Lu, Yujie; Zhu, Banghe; Darne, Chinmay; Tan, I-Chih; Rasmussen, John C; Sevick-Muraca, Eva M

    2011-12-01

    The goal of preclinical fluorescence-enhanced optical tomography (FEOT) is to provide three-dimensional fluorophore distribution for a myriad of drug and disease discovery studies in small animals. Effective measurements, as well as fast and robust image reconstruction, are necessary for extensive applications. Compared to bioluminescence tomography (BLT), FEOT may result in improved image quality through higher detected photon count rates. However, background signals that arise from excitation illumination affect the reconstruction quality, especially when tissue fluorophore concentration is low and/or fluorescent target is located deeply in tissues. We show that near-infrared fluorescence (NIRF) imaging with an optimized filter configuration significantly reduces the background noise. Model-based reconstruction with a high-order approximation to the radiative transfer equation further improves the reconstruction quality compared to the diffusion approximation. Improvements in FEOT are demonstrated experimentally using a mouse-shaped phantom with targets of pico- and subpico-mole NIR fluorescent dye.

  10. Improvement of fluorescence-enhanced optical tomography with improved optical filtering and accurate model-based reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Lu, Yujie; Zhu, Banghe; Darne, Chinmay; Tan, I.-Chih; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-12-01

    The goal of preclinical fluorescence-enhanced optical tomography (FEOT) is to provide three-dimensional fluorophore distribution for a myriad of drug and disease discovery studies in small animals. Effective measurements, as well as fast and robust image reconstruction, are necessary for extensive applications. Compared to bioluminescence tomography (BLT), FEOT may result in improved image quality through higher detected photon count rates. However, background signals that arise from excitation illumination affect the reconstruction quality, especially when tissue fluorophore concentration is low and/or fluorescent target is located deeply in tissues. We show that near-infrared fluorescence (NIRF) imaging with an optimized filter configuration significantly reduces the background noise. Model-based reconstruction with a high-order approximation to the radiative transfer equation further improves the reconstruction quality compared to the diffusion approximation. Improvements in FEOT are demonstrated experimentally using a mouse-shaped phantom with targets of pico- and subpico-mole NIR fluorescent dye.

  11. Crossover versus mutation: a comparative analysis of the evolutionary strategy of genetic algorithms applied to combinatorial optimization problems.

    PubMed

    Osaba, E; Carballedo, R; Diaz, F; Onieva, E; de la Iglesia, I; Perallos, A

    2014-01-01

    Since their first formulation, genetic algorithms (GAs) have been one of the most widely used techniques to solve combinatorial optimization problems. The basic structure of the GAs is known by the scientific community, and thanks to their easy application and good performance, GAs are the focus of a lot of research works annually. Although throughout history there have been many studies analyzing various concepts of GAs, in the literature there are few studies that analyze objectively the influence of using blind crossover operators for combinatorial optimization problems. For this reason, in this paper a deep study on the influence of using them is conducted. The study is based on a comparison of nine techniques applied to four well-known combinatorial optimization problems. Six of the techniques are GAs with different configurations, and the remaining three are evolutionary algorithms that focus exclusively on the mutation process. Finally, to perform a reliable comparison of these results, a statistical study of them is made, performing the normal distribution z-test.

  12. Classification of heavy metal ions present in multi-frequency multi-electrode potable water data using evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Karkra, Rashmi; Kumar, Prashant; Bansod, Baban K. S.; Bagchi, Sudeshna; Sharma, Pooja; Krishna, C. Rama

    2016-12-01

    Access to potable water for the common people is one of the most challenging tasks of the present era. Contamination of drinking water has become a serious problem due to various anthropogenic and geogenic events. This paper demonstrates the application of evolutionary algorithms, viz., particle swarm optimization and the genetic algorithm, to 24 water samples containing eight different heavy metal ions (Cd, Cu, Co, Pb, Zn, Ar, Cr and Ni) for the optimal estimation of electrode and frequency to classify the heavy metal ions. The work has been carried out on multi-variate data, viz., single-electrode multi-frequency, single-frequency multi-electrode and multi-frequency multi-electrode water samples. The electrodes used are platinum, gold, silver nanoparticle and glassy carbon electrodes. The various hazardous metal ions present in the water samples have been optimally classified and validated by the application of the Davies-Bouldin index. Such studies are useful in the segregation of hazardous heavy metal ions found in water resources, thereby quantifying the degree of water quality.
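
    The cluster validity measure used for validation can be computed from scratch in a few lines; this is the standard Davies-Bouldin definition, applied here to toy data rather than the paper's sensor measurements.

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index: lower values indicate better-separated clusters."""
    classes = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in classes])
    scatter = np.array([np.mean(np.linalg.norm(X[labels == c] - centroids[i], axis=1))
                        for i, c in enumerate(classes)])
    k = len(classes)
    db = 0.0
    for i in range(k):
        ratios = [(scatter[i] + scatter[j]) / np.linalg.norm(centroids[i] - centroids[j])
                  for j in range(k) if j != i]
        db += max(ratios)                      # worst-case similarity per cluster
    return db / k

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.2, (30, 4)), rng.normal(2, 0.2, (30, 4))])
labels = np.array([0] * 30 + [1] * 30)
print(davies_bouldin(X, labels))   # small value for well-separated ion clusters
```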

  13. Crossover versus Mutation: A Comparative Analysis of the Evolutionary Strategy of Genetic Algorithms Applied to Combinatorial Optimization Problems

    PubMed Central

    Osaba, E.; Carballedo, R.; Diaz, F.; Onieva, E.; de la Iglesia, I.; Perallos, A.

    2014-01-01

    Since their first formulation, genetic algorithms (GAs) have been one of the most widely used techniques to solve combinatorial optimization problems. The basic structure of the GAs is known by the scientific community, and thanks to their easy application and good performance, GAs are the focus of a lot of research works annually. Although throughout history there have been many studies analyzing various concepts of GAs, in the literature there are few studies that analyze objectively the influence of using blind crossover operators for combinatorial optimization problems. For this reason, in this paper a deep study on the influence of using them is conducted. The study is based on a comparison of nine techniques applied to four well-known combinatorial optimization problems. Six of the techniques are GAs with different configurations, and the remaining three are evolutionary algorithms that focus exclusively on the mutation process. Finally, to perform a reliable comparison of these results, a statistical study of them is made, performing the normal distribution z-test. PMID:25165731

  14. Model-based analyses of whole-genome data reveal a complex evolutionary history involving archaic introgression in Central African Pygmies.

    PubMed

    Hsieh, PingHsun; Woerner, August E; Wall, Jeffrey D; Lachance, Joseph; Tishkoff, Sarah A; Gutenkunst, Ryan N; Hammer, Michael F

    2016-03-01

    Comparisons of whole-genome sequences from ancient and contemporary samples have pointed to several instances of archaic admixture through interbreeding between the ancestors of modern non-Africans and now extinct hominids such as Neanderthals and Denisovans. One implication of these findings is that some adaptive features in contemporary humans may have entered the population via gene flow with archaic forms in Eurasia. Within Africa, fossil evidence suggests that anatomically modern humans (AMH) and various archaic forms coexisted for much of the last 200,000 yr; however, the absence of ancient DNA in Africa has limited our ability to make a direct comparison between archaic and modern human genomes. Here, we use statistical inference based on high coverage whole-genome data (greater than 60×) from contemporary African Pygmy hunter-gatherers as an alternative means to study the evolutionary history of the genus Homo. Using whole-genome simulations that consider demographic histories that include both isolation and gene flow with neighboring farming populations, our inference method rejects the hypothesis that the ancestors of AMH were genetically isolated in Africa, thus providing the first whole genome-level evidence of African archaic admixture. Our inferences also suggest a complex human evolutionary history in Africa, which involves at least a single admixture event from an unknown archaic population into the ancestors of AMH, likely within the last 30,000 yr.

  15. Model-based analyses of whole-genome data reveal a complex evolutionary history involving archaic introgression in Central African Pygmies

    PubMed Central

    Hsieh, PingHsun; Woerner, August E.; Wall, Jeffrey D.; Lachance, Joseph; Tishkoff, Sarah A.; Gutenkunst, Ryan N.; Hammer, Michael F.

    2016-01-01

    Comparisons of whole-genome sequences from ancient and contemporary samples have pointed to several instances of archaic admixture through interbreeding between the ancestors of modern non-Africans and now extinct hominids such as Neanderthals and Denisovans. One implication of these findings is that some adaptive features in contemporary humans may have entered the population via gene flow with archaic forms in Eurasia. Within Africa, fossil evidence suggests that anatomically modern humans (AMH) and various archaic forms coexisted for much of the last 200,000 yr; however, the absence of ancient DNA in Africa has limited our ability to make a direct comparison between archaic and modern human genomes. Here, we use statistical inference based on high coverage whole-genome data (greater than 60×) from contemporary African Pygmy hunter-gatherers as an alternative means to study the evolutionary history of the genus Homo. Using whole-genome simulations that consider demographic histories that include both isolation and gene flow with neighboring farming populations, our inference method rejects the hypothesis that the ancestors of AMH were genetically isolated in Africa, thus providing the first whole genome-level evidence of African archaic admixture. Our inferences also suggest a complex human evolutionary history in Africa, which involves at least a single admixture event from an unknown archaic population into the ancestors of AMH, likely within the last 30,000 yr. PMID:26888264

  16. 5D respiratory motion model based image reconstruction algorithm for 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Liu, Jiulong; Zhang, Xue; Zhang, Xiaoqun; Zhao, Hongkai; Gao, Yu; Thomas, David; Low, Daniel A.; Gao, Hao

    2015-11-01

    4D cone-beam computed tomography (4DCBCT) reconstructs a temporal sequence of CBCT images for the purpose of motion management or 4D treatment in radiotherapy. However, the image reconstruction often involves the binning of projection data to each temporal phase, and therefore suffers from deteriorated image quality due to inaccurate or uneven binning in phase, e.g., under non-periodic breathing. A 5D model has been developed as an accurate model of (periodic and non-periodic) respiratory motion. That is, given the measurements of breathing amplitude and its time derivative, the 5D model parametrizes the respiratory motion by three time-independent variables, i.e., one reference image and two vector fields. In this work we aim to develop a new 4DCBCT reconstruction method based on the 5D model. Instead of reconstructing a temporal sequence of images after projection binning, the new method reconstructs the time-independent reference image and vector fields with no requirement of binning. The image reconstruction is formulated as an optimization problem with total-variation regularization on both the reference image and the vector fields, and the problem is solved by the proximal alternating minimization algorithm, during which the split Bregman method is used to reconstruct the reference image, and Chambolle's duality-based algorithm is used to reconstruct the vector fields. A convergence analysis of the proposed algorithm is provided for this nonconvex problem. Validated by simulation studies, the new method has significantly improved image reconstruction accuracy, owing to the absence of binning and the reduced number of unknowns via the use of the 5D model.
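
    In the notation of this abstract, the 5D parametrization can be written compactly as below; the symbols are a reading of the published 5D motion model literature rather than a quotation of this paper:

\[
\mathbf{x}(\mathbf{r}, t) \;=\; \mathbf{r} \;+\; \boldsymbol{\alpha}(\mathbf{r})\, v(t) \;+\; \boldsymbol{\beta}(\mathbf{r})\, f(t),
\]

    where \(\mathbf{r}\) is a voxel position in the reference image, \(v(t)\) is the breathing amplitude, \(f(t)\) its time derivative, and \(\boldsymbol{\alpha}, \boldsymbol{\beta}\) are the two time-independent vector fields; the reference image together with \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\) are the three unknowns reconstructed without binning.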

  17. Confronting Decision Cliffs: Diagnostic Assessment of Multi-Objective Evolutionary Algorithms' Performance for Addressing Uncertain Environmental Thresholds

    NASA Astrophysics Data System (ADS)

    Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.

    2014-12-01

    As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish whether an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using 6 algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a

  18. Bands selection and classification of hyperspectral images based on hybrid kernels SVM by evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Yan-Yan; Li, Dong-Sheng

    2016-01-01

    Hyperspectral images (HSI) consist of many closely spaced bands carrying the most object information. However, due to their high dimensionality and high volume, it is hard to obtain satisfactory classification performance. In order to reduce HSI data dimensionality in preparation for high classification accuracy, it is proposed to combine a band selection method based on artificial immune systems (AIS) with a hybrid-kernel support vector machine (SVM-HK) algorithm. After comparing different kernels for hyperspectral analysis, the approach mixes the radial basis function kernel (RBF-K) with the sigmoid kernel (Sig-K) and applies the optimized hybrid kernels in SVM classifiers. The SVM-HK algorithm is then used to guide the band selection of an improved version of the AIS. The AIS is composed of clonal selection and elite antibody mutation, including an evaluation process with an optional index factor (OIF). Experimental classification performance was evaluated on a San Diego Naval Base scene acquired by AVIRIS; the HRS dataset shows that the method is able to efficiently remove band redundancy while outperforming the traditional SVM classifier.
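
    A convex mixture of the two kernels named above gives a concrete picture of a hybrid kernel; the mixing weight and kernel parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hybrid_kernel(X1, X2, lam=0.7, gamma=0.5, a=0.01, b=0.0):
    """Convex mixture of an RBF kernel and a sigmoid kernel.

    K = lam * K_rbf + (1 - lam) * K_sig, the kind of combination an SVM-HK
    approach tunes. Note the sigmoid kernel is only conditionally positive
    definite, which is one reason the mixing weight must be chosen carefully.
    """
    sq = ((X1[:, None] - X2[None, :]) ** 2).sum(-1)
    k_rbf = np.exp(-gamma * sq)
    k_sig = np.tanh(a * (X1 @ X2.T) + b)
    return lam * k_rbf + (1 - lam) * k_sig

# Precomputed-kernel usage, e.g. with scikit-learn's SVC(kernel="precomputed"):
rng = np.random.default_rng(0)
X_train = rng.random((50, 8))      # 50 pixels, 8 selected bands
gram = hybrid_kernel(X_train, X_train)
print(gram.shape)                  # (50, 50) Gram matrix for the SVM solver
```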

  19. An ARMA model based motion artifact reduction algorithm in fNIRS data through a Kalman filtering approach

    NASA Astrophysics Data System (ADS)

    Amian, M.; Setarehdan, S. Kamaledin; Yousefi, H.

    2014-09-01

    Functional near-infrared spectroscopy (fNIRS) is a new noninvasive way to measure oxy-hemoglobin and deoxy-hemoglobin concentration changes in the human brain. Relatively safer and more affordable than other functional imaging techniques such as fMRI, it is widely used for special applications such as infant examinations and pilot brain monitoring. In such applications, fNIRS data sometimes suffer from undesirable movements of the subject's head, called motion artifacts, which lead to signal corruption. Motion artifacts in fNIRS data may result in erroneous conclusions or diagnoses. In this work we try to reduce these artifacts with a novel Kalman filtering algorithm that is based on an autoregressive moving average (ARMA) model of the fNIRS system. Our proposed method requires neither additional hardware and sensors nor the whole data set at once, requirements that were unavoidable in older algorithms such as adaptive filtering and Wiener filtering. Results show that our approach is successful in cleaning contaminated fNIRS data.
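
    To make the state-space idea concrete, the sketch below runs a Kalman filter whose signal model is an AR(2) process, a simpler stand-in for the paper's ARMA model; the coefficients and noise levels are illustrative.

```python
import numpy as np

def kalman_ar2(y, phi1=1.6, phi2=-0.64, q=1e-4, r=1e-2):
    """Kalman filter with an AR(2) signal model in state-space form.

    State x_k = [s_k, s_{k-1}]; s_k = phi1*s_{k-1} + phi2*s_{k-2} + w_k,
    observation y_k = s_k + v_k. An AR(2) stands in here for the ARMA model.
    """
    F = np.array([[phi1, phi2], [1.0, 0.0]])
    H = np.array([[1.0, 0.0]])
    Q = np.array([[q, 0.0], [0.0, 0.0]])
    x, P = np.zeros(2), np.eye(2)
    out = []
    for yk in y:
        x = F @ x                       # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r             # innovation variance
        K = (P @ H.T) / S               # Kalman gain (2x1)
        x = x + (K * (yk - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

t = np.linspace(0, 10, 500)
clean = np.sin(2 * np.pi * 0.3 * t)                     # slow hemodynamic wave
y = clean + 0.1 * np.random.default_rng(0).standard_normal(500)
y[200:210] += 2.0                                       # head-movement spike
print(np.abs(kalman_ar2(y) - clean).mean())             # compare with raw error
```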

  20. Registration of the Cone Beam CT and Blue-Ray Scanned Dental Model Based on the Improved ICP Algorithm

    PubMed Central

    Li, Zhenhua; Xu, Songsong; Guo, Xiaoyan

    2014-01-01

    Multimodality image registration and fusion have complementary significance for guiding dental implant surgery. To meet the needs of registering images of different resolutions, we develop an improved Iterative Closest Point (ICP) algorithm that focuses on the registration of Cone Beam Computed Tomography (CT) images and high-resolution blue-light scanner images. The proposed algorithm includes two major phases, coarse and precise registration. First, to reduce matching interference from subjective human factors, we extract feature points based on curvature characteristics and use an improved three-point translational transformation method to realize coarse registration. Then, the feature point set and the reference point set, obtained by the initial registered transformation, are processed in the precise registration step. Even with unsatisfactory initial values, this two-step registration method can guarantee global convergence and convergence precision. Experimental results demonstrate that the method has successfully realized the registration of the Cone Beam CT dental model and the blue-light scanner model with high accuracy. The method could thus provide a research foundation for the development of software concerned with the registration of multi-modality medical data. PMID:24511309
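
    A basic point-to-point ICP iteration (nearest-neighbour matching plus an SVD rigid fit) is sketched below as the baseline the paper's coarse-to-fine improvement builds on; the curvature-based feature extraction and three-point coarse alignment are not reproduced here.

```python
import numpy as np

def icp(source, target, iters=50):
    """Basic point-to-point ICP: nearest neighbours + Kabsch (SVD) rigid fit."""
    src = source.copy()
    R_total, t_total = np.eye(src.shape[1]), np.zeros(src.shape[1])
    for _ in range(iters):
        d = ((src[:, None] - target[None, :]) ** 2).sum(-1)
        matched = target[d.argmin(axis=1)]              # nearest-neighbour pairs
        mu_s, mu_m = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                        # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t                             # apply incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src

rng = np.random.default_rng(0)
pts = rng.random((100, 3))                              # e.g. dental surface samples
theta = 0.2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta), np.cos(theta), 0],
               [0, 0, 1]])
target = pts @ Rz.T + np.array([0.1, -0.05, 0.02])
R, t, aligned = icp(pts, target)
print(np.abs(aligned - target).max())                   # small residual at convergence
```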

  1. P-RnaPredict--a parallel evolutionary algorithm for RNA folding: effects of pseudorandom number quality.

    PubMed

    Wiese, Kay C; Hendriks, Andrew; Deschênes, Alain; Ben Youssef, Belgacem

    2005-09-01

    This paper presents a fully parallel version of RnaPredict, a genetic algorithm (GA) for RNA secondary structure prediction. The research presented here builds on previous work and examines the impact of three different pseudorandom number generators (PRNGs) on the GA's performance. The three generators tested are the C standard library PRNG RAND, a parallelized multiplicative congruential generator (MCG), and a parallelized Mersenne Twister (MT). A fully parallel version of RnaPredict using the Message Passing Interface (MPI) was implemented on a 128-node Beowulf cluster. The PRNG comparison tests were performed with known structures whose sequences are 118, 122, 468, 543, and 556 nucleotides in length. The effects of the PRNGs are investigated and the predicted structures are compared to known structures. Results indicate that P-RnaPredict demonstrated good prediction accuracy, particularly so for shorter sequences.

  2. Development of a Physical Model-Based Algorithm for the Detection of Single-Nucleotide Substitutions by Using Tiling Microarrays

    PubMed Central

    Ono, Naoaki; Suzuki, Shingo; Furusawa, Chikara; Shimizu, Hiroshi; Yomo, Tetsuya

    2013-01-01

    High-density DNA microarrays are useful tools for analyzing sequence changes in DNA samples. Although microarray analysis provides informative signals from a large number of probes, the analysis and interpretation of these signals have certain inherent limitations, namely, complex dependency of signals on the probe sequences and the existence of false signals arising from non-specific binding between probe and target. In this study, we have developed a novel algorithm to detect the single-base substitutions by using microarray data based on a thermodynamic model of hybridization. We modified the thermodynamic model by introducing a penalty for mismatches that represent the effects of substitutions on hybridization affinity. This penalty results in significantly higher detection accuracy than other methods, indicating that the incorporation of hybridization free energy can improve the analysis of sequence variants by using microarray data. PMID:23382915

  3. Application of wavelet neural network model based on genetic algorithm in the prediction of high-speed railway settlement

    NASA Astrophysics Data System (ADS)

    Tang, Shihua; Li, Feida; Liu, Yintao; Lan, Lan; Zhou, Conglin; Huang, Qing

    2015-12-01

    With the advantages of high speed, large transport capacity, low energy consumption and good economic benefits, high-speed railway is becoming more and more popular all over the world. It can reach 350 kilometers per hour, which requires high safety performance. Research on the prediction of high-speed railway settlement, one of the important factors affecting the safety of high-speed railways, therefore becomes particularly important. This paper takes advantage of genetic algorithms to search over all the data for the best result and combines this with the strong learning ability and high accuracy of wavelet neural networks to build a genetic wavelet neural network model for the prediction of high-speed railway settlement. Experiments with a back-propagation neural network, a wavelet neural network and the genetic wavelet neural network show that the absolute values of the residual errors in the prediction of high-speed railway settlement based on the genetic algorithm are the smallest, which proves that the genetic wavelet neural network is better than the other two methods. The correlation coefficient of predicted and observed values is 99.9%. Furthermore, the maximum absolute value of the residual error, the minimum absolute value of the residual error, the mean value of the relative error and the root mean squared error (RMSE) predicted by the genetic wavelet neural network are all smaller than those of the other two methods. The genetic wavelet neural network for the prediction of high-speed railway settlement is thus both more stable and more accurate.

  4. Model-Based Hookload Monitoring and Prediction at Drilling Rigs using Neural Networks and Forward-Selection Algorithm

    NASA Astrophysics Data System (ADS)

    Arnaout, A.; Fruhwirth, R.; Winter, M.; Esmael, B.; Thonhauser, G.

    2012-04-01

    The use of neural networks and advanced machine learning techniques in the oil & gas industry is a growing trend in the market. Especially in drilling oil & gas wells, predicting and monitoring different drilling parameters is an essential task to prevent serious problems like "Kick", "Lost Circulation" or "Stuck Pipe", among others. The hookload represents the weight load of the drill string at the crane hook and is one of the most important parameters. During drilling, the parameter "Weight on Bit" is controlled by the driller, and the hookload is the only measure of how much weight is applied to the bit to generate the hole. Any change in weight on bit is directly reflected in the hookload. Furthermore, any unwanted contact between the drill string and the wellbore - potentially leading to a stuck pipe problem - appears directly in the hookload measurements. Comparing the measured to the predicted hookload therefore not only gives a clear idea of what is happening down-hole, it also enables the prediction of a number of important events that may cause problems in the borehole and lead in some - fortunately rare - cases to catastrophes like blow-outs. Heuristic models using highly sophisticated neural networks were designed for the hookload prediction; the training data sets were prepared in cooperation with drilling experts. Sensor measurements as well as a set of derived feature channels were used as input to the models. The contents of the final data set can be separated into (1) features based on rig operation states, (2) real-time sensor features and (3) features based on physics. A combination of a novel neural network architecture - the Completely Connected Perceptron - and parallel learning techniques which avoid trapping into local error minima was used for building the models. In addition, automatic network growing algorithms and highly sophisticated stopping criteria offer robust and efficient estimation of the

  5. Convergence analysis of evolutionary algorithms that are based on the paradigm of information geometry.

    PubMed

    Beyer, Hans-Georg

    2014-01-01

    The convergence behaviors of so-called natural evolution strategies (NES) and of the information-geometric optimization (IGO) approach are considered. After a review of the NES/IGO ideas, which are based on information geometry, the implications of this philosophy for optimization dynamics are investigated by considering the optimization performance on the class of positive quadratic objective functions (the ellipsoid model). Exact differential equations describing the approach to the optimizer are derived and solved. It is rigorously shown that the original NES philosophy of optimizing the expected value of the objective function leads to very slow (i.e., sublinear) convergence toward the optimizer. This is the real reason why state-of-the-art implementations of IGO algorithms optimize the expected value of transformed objective functions, for example, by utility functions based on ranking. It is shown that these utility functions are localized fitness functions that change during the IGO flow. The governing differential equations describing this flow are derived. In the case of convergence, the solutions to these equations exhibit an exponentially fast approach to the optimizer (i.e., linear convergence order). Furthermore, it is proven that the IGO philosophy leads to an adaptation of the covariance matrix that equals, in the asymptotic limit and up to a scalar factor, the inverse of the Hessian of the objective function considered.
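
    A minimal NES-style iteration on a quadratic illustrates the rank-based utility transformation the paper analyzes (an isotropic sketch with invented learning rates, not the paper's derivations): fitness values are replaced by fixed utilities that depend only on rank before the distribution parameters are updated.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 10, 20
mean, sigma = rng.normal(size=n), 1.0
f = lambda x: float(x @ x)                 # positive quadratic (ellipsoid model)

# Fixed rank-based utilities: best ranks get positive weight, sum is zero.
raw = np.maximum(0.0, np.log(lam / 2 + 1) - np.log(np.arange(1, lam + 1)))
utils = raw / raw.sum() - 1.0 / lam

for _ in range(400):
    z = rng.normal(size=(lam, n))          # standardized samples
    xs = mean + sigma * z
    order = np.argsort([f(x) for x in xs]) # rank by fitness, ascending
    u = np.empty(lam)
    u[order] = utils                       # utility depends on rank only
    mean = mean + sigma * (u @ z)          # natural-gradient step for the mean
    sigma *= np.exp((0.5 / n) * (u @ ((z ** 2).sum(axis=1) - n)))

print("f(mean) after 400 generations:", f(mean))
```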

  6. Application of an Evolutionary Algorithm for Parameter Optimization in a Gully Erosion Model

    SciTech Connect

    Rengers, Francis; Lunacek, Monte; Tucker, Gregory

    2016-06-01

    Herein we demonstrate how to use model optimization to determine a set of best-fit parameters for a landform model simulating gully incision and headcut retreat. To achieve this result we employed the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an iterative process in which samples are created based on a distribution of parameter values that evolves over time to better fit an objective function. CMA-ES efficiently finds optimal parameters, even with high-dimensional objective functions that are non-convex, multimodal, and non-separable. We ran model instances in parallel on a high-performance cluster, and from hundreds of model runs we obtained the best parameter choices. This method is far superior to brute-force search algorithms and has great potential for many applications in earth science modeling. We found that parameters representing boundary conditions tended to converge toward a single optimal value, whereas parameters controlling geomorphic processes were defined by a range of optimal values.
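
    A sketch of such a calibration loop (assuming the third-party Python package `cma`, whose documented ask/tell interface is shown; the objective below is a stand-in, not the gully erosion model): candidate parameter sets are sampled, scored, and fed back so the search distribution adapts toward the best fit.

```python
import cma

def misfit(params):
    """Stand-in objective: distance from a 'true' parameter set. In the study,
    this would run the landform model and compare simulated gully geometry
    with observations."""
    target = [0.3, 1.2, -0.7, 2.0]
    return sum((p - t) ** 2 for p, t in zip(params, target))

es = cma.CMAEvolutionStrategy(4 * [0.0], 0.5)     # initial mean, step size
while not es.stop():
    candidates = es.ask()                          # sample parameter sets
    es.tell(candidates, [misfit(c) for c in candidates])  # evaluations could run in parallel
print("best-fit parameters:", es.result.xbest)
```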

  7. Multi-objective optimization of a low specific speed centrifugal pump using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu

    2016-07-01

    This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump has already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to further improve its performance with respect to two goals. In order to limit the number of design variables without losing geometric information, the impeller is parametrized using a Bézier curve and a B-spline. Numerical simulation based on a Reynolds-averaged Navier-Stokes (RANS) turbulence model is performed in parallel to evaluate the flow field. A back-propagating neural network is constructed as a surrogate for performance prediction to save computing time, with initial samples selected according to an orthogonal array. Global Pareto-optimal solutions are then obtained and analysed. The results show that undesirable flow structures, such as the secondary flow on the meridian plane, have diminished or vanished in the optimized pump.
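
    The selection core of NSGA-II rests on non-dominated sorting, which can be sketched in a few lines (toy two-objective design points, not the pump data): a design survives the first front only if no other design is at least as good in both objectives.

```python
# Illustrative Pareto-front extraction, minimizing both objectives.
def pareto_front(points):
    """Return the non-dominated subset of (obj1, obj2) pairs."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# E.g. (head loss, 1 - efficiency) pairs from surrogate evaluations:
designs = [(1.0, 0.20), (0.8, 0.25), (1.2, 0.15), (1.1, 0.26), (0.7, 0.30)]
print(pareto_front(designs))   # (1.1, 0.26) is dominated and filtered out
```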

  8. Annual Energy Production (AEP) optimization for tidal power plants based on Evolutionary Algorithms - Swansea Bay Tidal Power Plant AEP optimization

    NASA Astrophysics Data System (ADS)

    Kontoleontos, E.; Weissenberger, S.

    2016-11-01

    In order to predict the maximum Annual Energy Production (AEP) of tidal power plants, an advanced AEP optimization procedure is required for solving the optimization problem, which consists of a high number of design variables and constraints. This efficient AEP optimization procedure requires an advanced optimization tool (the EASY software) and an AEP calculation tool that can simulate all the different operating modes of the units (bidirectional turbine, pump and sluicing modes). The EASY optimization software is a metamodel-assisted Evolutionary Algorithm (MAEA) that can be used in both single- and multi-objective optimization problems. The AEP calculation tool, developed by ANDRITZ HYDRO, is used in combination with EASY to maximize the annual tidal energy produced by optimizing the plant operation throughout the year. For the Swansea Bay Tidal Power Plant project, the AEP optimization, along with the hydraulic design optimization and the model testing, was used to evaluate all the different hydraulic and operating concepts and to define the optimal concept, which led to a significant increase in the AEP value. This new concept of a triple-regulated “bi-directional bulb pump turbine” for the Swansea Bay Tidal Power Plant (16 units, nominal power above 320 MW), along with its AEP optimization scheme, is presented in detail in the paper. Furthermore, the use of an online AEP optimization during operation of the power plant, which will provide the optimal operating points to the control system, is also presented.

  9. A simple methodology for characterization of germanium coaxial detectors by using Monte Carlo simulation and evolutionary algorithms.

    PubMed

    Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P

    2015-11-01

    Determining the activity concentration of a specific radionuclide in a sample by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) for the energy of interest. The difficulties related to experimental calibration make it advisable to have alternative methods for FEPE determination, such as simulation of the transport of photons in the crystal by the Monte Carlo method, which requires an accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the parameters characterizing the detector through a computational procedure that can be reproduced in a standard research lab. This method consists of determining the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials.

  10. A Two-Phase Multiobjective Evolutionary Algorithm for Enhancing the Robustness of Scale-Free Networks Against Multiple Malicious Attacks.

    PubMed

    Zhou, Mingxing; Liu, Jing

    2017-02-01

    Designing robust networks has attracted increasing attention in recent years. Most existing work focuses on improving the robustness of networks against a specific type of attack. However, networks which are robust against one type of attack may not be robust against another. In real-world situations, different types of attacks may happen simultaneously. Therefore, we use Pearson's correlation coefficient to analyze the correlation between different types of attacks, model the robustness measures against different types of attacks which are negatively correlated as objectives, and model the problem of optimizing the robustness of networks against multiple malicious attacks as a multiobjective optimization problem. Furthermore, to effectively solve this problem, we propose a two-phase multiobjective evolutionary algorithm, labeled MOEA-RSFMMA. In MOEA-RSFMMA, a single-objective sampling phase is first used to generate a good initial population for the later two-objective optimization phase. Such a two-phase optimization pattern balances the computational cost of the two objectives well and improves the search efficiency. In the experiments, both synthetic scale-free networks and real-world networks are used to validate the performance of MOEA-RSFMMA. Moreover, both local and global characteristics of networks in different parts of the obtained Pareto fronts are studied. The results show that the networks in different parts of the Pareto fronts reflect different properties, and provide various choices for decision makers.

  11. A new approach for analyzing average time complexity of population-based evolutionary algorithms on unimodal problems.

    PubMed

    Chen, Tianshi; He, Jun; Sun, Guangzhong; Chen, Guoliang; Yao, Xin

    2009-10-01

    In the past decades, many theoretical results related to the time complexity of evolutionary algorithms (EAs) on different problems have been obtained. However, there is no general, easy-to-apply approach designed particularly for population-based EAs on unimodal problems. In this paper, we first generalize the concept of the takeover time to EAs with mutation, then we utilize the generalized takeover time to obtain the mean first hitting time of EAs and, thus, propose a general approach for analyzing EAs on unimodal problems. As examples, we consider the so-called (N + N) EAs and show that, on two well-known unimodal problems, LeadingOnes and OneMax, the EAs with bitwise mutation and two commonly used selection schemes need O(n ln n + n²/N) and O(n ln ln n + n ln n/N) generations, respectively, to find the global optimum. Beyond these new results, our approach can also be applied directly to obtain results for some population-based EAs on other unimodal problems. Moreover, we discuss when the general approach is valid for providing tight bounds on the mean first hitting times and when it should be combined with problem-specific knowledge to obtain tight bounds. This is the first time a general idea for analyzing population-based EAs on unimodal problems has been discussed theoretically.
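
    The flavor of the result can be checked empirically with a toy experiment (a sketch only; the paper's contribution is the analytical bound, not this simulation): an (N + N) EA with bitwise mutation and truncation selection on OneMax, recording the first hitting time of the optimum for several population sizes N.

```python
import random

def hitting_time(n=40, N=8, seed=0):
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in range(n)] for _ in range(N)]
    gen = 0
    while max(sum(ind) for ind in pop) < n:       # until the optimum is hit
        gen += 1
        offspring = [[not b if rng.random() < 1.0 / n else b for b in parent]
                     for parent in pop]           # bitwise mutation, rate 1/n
        # (N + N) truncation selection over parents plus offspring:
        pop = sorted(pop + offspring, key=sum, reverse=True)[:N]
    return gen

for N in (1, 4, 16):
    print(f"N={N:2d}: first hitting time = {hitting_time(N=N)} generations")
```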

  12. A model based iterative reconstruction algorithm for high angle annular dark field-scanning transmission electron microscope (HAADF-STEM) tomography.

    PubMed

    Venkatakrishnan, S V; Drummy, Lawrence F; Jackson, Michael A; De Graef, Marc; Simmons, Jeff; Bouman, Charles A

    2013-11-01

    High angle annular dark field (HAADF)-scanning transmission electron microscope (STEM) data is increasingly being used in the physical sciences to research materials in 3D because it reduces the effects of Bragg diffraction seen in bright field TEM data. Typically, tomographic reconstructions are performed by directly applying either filtered back projection (FBP) or the simultaneous iterative reconstruction technique (SIRT) to the data. Since HAADF-STEM tomography is a limited angle tomography modality with low signal to noise ratio, these methods can result in significant artifacts in the reconstructed volume. In this paper, we develop a model based iterative reconstruction algorithm for HAADF-STEM tomography. We combine a model for image formation in HAADF-STEM tomography along with a prior model to formulate the tomographic reconstruction as a maximum a posteriori probability (MAP) estimation problem. Our formulation also accounts for certain missing measurements by treating them as nuisance parameters in the MAP estimation framework. We adapt the iterative coordinate descent algorithm to develop an efficient method to minimize the corresponding MAP cost function. Reconstructions of simulated as well as experimental data sets show results that are superior to FBP and SIRT reconstructions, significantly suppressing artifacts and enhancing contrast.

  13. Analyzing the evolutionary mechanisms of the Air Transportation System-of-Systems using network theory and machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Kotegawa, Tatsuya

    Complexity in the Air Transportation System (ATS) arises from the intermingling of many independent physical resources, operational paradigms, and stakeholder interests, as well as the dynamic variation of these interactions over time. Currently, trade-offs and cost-benefit analyses of new ATS concepts are carried out with system-wide evaluation simulations driven by air traffic forecasts that assume fixed airline routes. However, this does not reflect reality well, as airlines regularly add and remove routes. An airline service route network evolution model that projects route addition and removal was created and combined with state-of-the-art air traffic forecast methods to better reflect the dynamic properties of the ATS in system-wide simulations. Guided by a system-of-systems framework, network theory metrics and machine learning algorithms were applied to develop the route network evolution models based on patterns extracted from historical data. Constructing the route addition section of the model posed the greatest challenge due to the large pool of new link candidates compared to the actual number of routes historically added to the network. Of the models explored, algorithms based on logistic regression, random forests, and support vector machines showed the best route addition and removal forecast accuracies, at approximately 20% and 40%, respectively, when validated against historical data. The combination of network evolution models and a system-wide evaluation tool quantified the impact of airline route network evolution on air traffic delay. The expected delay minutes, when considering network evolution, increased approximately 5% for a forecasted schedule on 3/19/2020. Performance trade-off studies between several airline route network topologies from the perspectives of passenger travel efficiency, fuel burn, and robustness were also conducted to provide bounds that could serve as targets for ATS transformation efforts. The series of analyses revealed that high

  14. From prompt gamma distribution to dose: a novel approach combining an evolutionary algorithm and filtering based on Gaussian-powerlaw convolutions.

    PubMed

    Schumann, A; Priegnitz, M; Schoene, S; Enghardt, W; Rohling, H; Fiedler, F

    2016-10-07

    Range verification and dose monitoring in proton therapy is considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range; however, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By convolving depth dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. To reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.

  15. From prompt gamma distribution to dose: a novel approach combining an evolutionary algorithm and filtering based on Gaussian-powerlaw convolutions

    NASA Astrophysics Data System (ADS)

    Schumann, A.; Priegnitz, M.; Schoene, S.; Enghardt, W.; Rohling, H.; Fiedler, F.

    2016-10-01

    Range verification and dose monitoring in proton therapy is considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range; however, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By convolving depth dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. To reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.
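
    The forward direction of the method (common to both records above) can be sketched as follows (kernel shape and parameters are invented for illustration; the paper's fitted filter kernels are not reproduced): a depth dose profile is convolved with a Gaussian-powerlaw kernel to produce a prompt γ-ray depth profile, which is the step the evolutionary algorithm must invert.

```python
import numpy as np

z = np.linspace(0, 200, 400)                              # depth (mm)
dose = np.exp(-((z - 120) ** 2) / 50) + 0.3 * (z < 120)   # toy Bragg-like profile

def gauss_powerlaw_kernel(t, sigma=4.0, alpha=1.5, cutoff=40.0):
    """Toy kernel: Gaussian core plus a one-sided power-law tail."""
    core = np.exp(-t ** 2 / (2 * sigma ** 2))
    ratio = np.maximum(np.abs(t) / sigma, 1.0)            # avoids 0 ** -alpha
    tail = np.where(t > sigma, ratio ** (-alpha), 0.0)
    k = np.where(np.abs(t) < cutoff, core + tail, 0.0)
    return k / k.sum()

t = z - z.mean()
prompt_gamma = np.convolve(dose, gauss_powerlaw_kernel(t), mode="same")
print("dose peak:", z[dose.argmax()], "mm; prompt-gamma peak:",
      z[prompt_gamma.argmax()], "mm")
```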

  16. Evolutionary model selection and parameter estimation for protein-protein interaction network based on differential evolution algorithm

    PubMed Central

    Huang, Lei; Liao, Li; Wu, Cathy H.

    2016-01-01

    Revealing the underlying evolutionary mechanism plays an important role in understanding protein interaction networks in the cell. While many evolutionary models have been proposed, applying these models to real network data, and in particular determining which model better describes the evolutionary process underlying an observed network, remains a challenge. The traditional way is to use a model with presumed parameters to generate a network and then evaluate the fit by summary statistics, which, however, cannot capture complete network structure information or estimate parameter distributions. In this work we developed a novel method based on Approximate Bayesian Computation and modified Differential Evolution (ABC-DEP) that is capable of conducting model selection and parameter estimation simultaneously and of detecting the underlying evolutionary mechanisms more accurately. We tested our method for its power in differentiating models and estimating parameters on simulated data and found significant improvement in performance benchmarks compared with a previous method. We further applied our method to real data for protein interaction networks in human and yeast. Our results show the Duplication Attachment model as the predominant evolutionary mechanism for the human PPI network and the Scale-Free model as the predominant mechanism for the yeast PPI network. PMID:26357273
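
    A highly simplified sketch of the ABC-plus-differential-evolution idea (toy growth model, a single parameter, and a single summary statistic; not the ABC-DEP implementation): candidate parameters are perturbed with DE-style moves and kept when their simulated summary statistic moves closer to the observed one.

```python
import random

def simulate_network_stat(p_dup):
    """Stand-in simulator: summary statistic (edge/node ratio) of a grown network."""
    rng, edges, nodes = random.Random(0), 1, 2
    for _ in range(200):
        if rng.random() < p_dup:
            edges += 2      # "duplication"-like step adds more links
        else:
            edges += 1      # "attachment"-like step adds one link
        nodes += 1
    return edges / nodes

observed = simulate_network_stat(0.35)            # pretend this is real data
pop = [random.uniform(0, 1) for _ in range(30)]   # candidate parameters

for _ in range(50):                               # DE proposal + ABC-style accept
    for i in range(len(pop)):
        a, b, c = random.sample(pop, 3)
        proposal = min(1.0, max(0.0, a + 0.8 * (b - c)))
        d_new = abs(simulate_network_stat(proposal) - observed)
        d_old = abs(simulate_network_stat(pop[i]) - observed)
        if d_new <= d_old:
            pop[i] = proposal                     # keep the closer-to-data particle

print("posterior-like sample mean:", sum(pop) / len(pop))
```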

  17. An impatient evolutionary algorithm with probabilistic tabu search for unified solution of some NP-hard problems in graph and set theory via clique finding.

    PubMed

    Guturu, Parthasarathy; Dantu, Ram

    2008-06-01

    Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) for the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of the maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.
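
    The unification step can be illustrated in a few lines (using networkx for the toy graph work; the IEA-PTS solver itself is not shown): a maximum independent set instance on G becomes a maximum clique instance on the complement of G, so a single clique finder serves both problems.

```python
import networkx as nx

G = nx.cycle_graph(7)                 # toy instance
H = nx.complement(G)                  # independent sets of G = cliques of H
best = max(nx.find_cliques(H), key=len)   # enumerate maximal cliques, take largest
print("maximum independent set of G:", best)
```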

  18. Larger water clusters with edges and corners on their way to ice: structural trends elucidated with an improved parallel evolutionary algorithm.

    PubMed

    Bandow, Bernhard; Hartke, Bernd

    2006-05-04

    For the difficult task of finding global minimum energy structures for molecular clusters of nontrivial size, we present a highly efficient parallel implementation of an evolutionary algorithm. By completely abandoning the traditional concept of generations and by replacing it with a less rigid pool concept, we have managed to eliminate serial bottlenecks completely and can operate the algorithm efficiently on an arbitrary number of parallel processes. Nevertheless, our new algorithm still realizes all of the main features of our old, successful implementation. First tests of the new algorithm are shown for the highly demanding problem of water clusters modeled by a potential with flexible, polarizable monomers (TTM2-F). For this problem, our new algorithm not only reproduces all of the global minima proposed previously in considerably less CPU time but also leads to improved proposals in several cases. These, in turn, qualitatively change our earlier predictions concerning the transitions from all-surface structures to cages with a single interior molecule, and from one to two interior molecules. Furthermore, we compare preliminary results up to n = 105 with locally optimized cuts from several ice modifications. This comparison indicates that relaxed ice structures may start to be competitive already at cluster sizes above n = 90.

  19. Evolutionary Computing

    SciTech Connect

    Patton, Robert M; Cui, Xiaohui; Jiao, Yu; Potok, Thomas E

    2008-01-01

    The rate at which information overwhelms humans is significantly greater than the rate at which humans have learned to process, analyze, and leverage this information. To overcome this challenge, new methods of computing must be formulated, and scientists and engineers have looked to nature for inspiration in developing these new methods. Consequently, evolutionary computing has emerged as a new paradigm for computing, and has rapidly demonstrated its ability to solve real-world problems where traditional techniques have failed. This field of work has now become quite broad and encompasses areas ranging from artificial life to neural networks. This chapter focuses specifically on two sub-areas of nature-inspired computing: Evolutionary Algorithms and Swarm Intelligence.

  20. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm.

    PubMed

    Deb, Kalyanmoy; Sinha, Ankur

    2010-01-01

    Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem-solving tasks, including optimal control, process optimization, game-playing strategy development, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable hybrid evolutionary-cum-local-search algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully it will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.

  1. Assessment of the dose reduction potential of a model-based iterative reconstruction algorithm using a task-based performance metrology

    SciTech Connect

    Samei, Ehsan; Richard, Samuel

    2015-01-15

    Purpose: Different computed tomography (CT) reconstruction techniques offer different image quality attributes of resolution and noise, challenging the ability to compare their dose reduction potential against each other. The purpose of this study was to evaluate and compare the task-based imaging performance of CT systems to enable the assessment of the dose performance of a model-based iterative reconstruction (MBIR) against that of an adaptive statistical iterative reconstruction (ASIR) and a filtered back projection (FBP) technique. Methods: The ACR CT phantom (model 464) was imaged across a wide range of mA settings on a 64-slice CT scanner (GE Discovery CT750 HD, Waukesha, WI). Based on previous work, resolution was evaluated in terms of a task-based modulation transfer function (MTF) using a circular-edge technique and images from the contrast inserts located in the ACR phantom. Noise performance was assessed in terms of the noise-power spectrum (NPS) measured from the uniform section of the phantom. The task-based MTF and NPS were combined with a task function to yield a task-based estimate of imaging performance, the detectability index (d′). The detectability index was computed as a function of dose for two imaging tasks corresponding to the detection of a relatively small and a relatively large feature (1.5 and 25 mm, respectively). The performance of MBIR in terms of d′ was compared with that of ASIR and FBP to assess its dose reduction potential. Results: Results indicated that MBIR exhibits variable spatial resolution with respect to object contrast and noise while significantly reducing image noise. The NPS measurements for MBIR indicated a noise texture with a low-pass quality compared to the typical midpass noise found in FBP-based CT images. At comparable dose, the d′ for MBIR was higher than those of FBP and ASIR by at least 61% and 19% for the small feature and the large feature tasks, respectively. Compared to FBP and ASIR, MBIR
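
    The way MTF, NPS, and a task function combine into d′ can be sketched numerically (a generic prewhitening-observer form with invented curves; the study's exact task functions and measured spectra are not reproduced here): d′² sums |W|²·MTF²/NPS over spatial frequency.

```python
import numpy as np

u = np.linspace(0.01, 1.0, 200)        # spatial frequency (cycles/mm)
mtf = np.exp(-2.0 * u)                 # toy task-based MTF
nps = 1e-3 * (0.2 + u)                 # toy noise-power spectrum
feature_size = 1.5                     # mm, "small feature" task

# Task function of a disk-like feature: contrast concentrated at low frequency.
W = np.sinc(u * feature_size)

d_prime = np.sqrt(np.sum((W * mtf) ** 2 / nps) * (u[1] - u[0]))
print(f"d' for a {feature_size} mm feature: {d_prime:.1f}")
```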

  2. Structure and stability of silicon nanoclusters passivated by hydrogen and oxygen: evolutionary algorithm and first-principles study

    NASA Astrophysics Data System (ADS)

    Baturin, V. S.; Lepeshkin, S. V.; Matsko, N. L.; Uspenskii, Yu A.

    2016-02-01

    We investigate the structural and thermodynamical properties of small silicon clusters. Using graph theory applied to previously obtained structures of Si10H2m clusters, we trace the connection between geometry and passivation degree. The existing data on these clusters, together with the structures of Si10O4n clusters obtained here by evolutionary calculations, allowed us to analyze the behavior of Si10H2m clusters in a hydrogen atmosphere and Si10O4n clusters in an oxygen atmosphere. We show the basic differences between the structures and thermodynamical properties of hydrogen-passivated silicon clusters and silicon oxide clusters.

  3. Handling packet dropouts and random delays for unstable delayed processes in NCS by optimal tuning of PIλDμ controllers with evolutionary algorithms.

    PubMed

    Pan, Indranil; Das, Saptarshi; Gupta, Amitava

    2011-10-01

    The issues of stochastically varying network delays and packet dropouts in Networked Control System (NCS) applications have been simultaneously addressed by time-domain optimal tuning of fractional-order (FO) PID controllers. Different variants of evolutionary algorithms are used for the tuning process and their performances are compared. The effectiveness of the fractional-order PIλDμ controllers over their integer-order counterparts is also examined. Two standard test-bench plants with time delay and unstable poles, as encountered in process control applications, are tuned with the proposed method to establish the validity of the tuning methodology. The proposed tuning methodology is independent of the specific choice of plant and is also applicable to less complicated systems, making it useful in a wide variety of scenarios.

  4. Practical advantages of evolutionary computation

    NASA Astrophysics Data System (ADS)

    Fogel, David B.

    1997-10-01

    Evolutionary computation is becoming a common technique for solving difficult, real-world problems in industry, medicine, and defense. This paper reviews some of the practical advantages to using evolutionary algorithms as compared with classic methods of optimization or artificial intelligence. Specific advantages include the flexibility of the procedures, as well as their ability to self-adapt the search for optimum solutions on the fly. As desktop computers increase in speed, the application of evolutionary algorithms will become routine.

  5. LEED I/V determination of the structure of a MoO3 monolayer on Au(111): Testing the performance of the CMA-ES evolutionary strategy algorithm, differential evolution, a genetic algorithm and tensor LEED based structural optimization

    NASA Astrophysics Data System (ADS)

    Primorac, E.; Kuhlenbeck, H.; Freund, H.-J.

    2016-07-01

    The structure of a thin MoO3 layer on Au(111) with a c(4 × 2) superstructure was studied with LEED I/V analysis. As proposed previously (Quek et al., Surf. Sci. 577 (2005) L71), the atomic structure of the layer is similar to that of a MoO3 single layer as found in regular α-MoO3. The layer on Au(111) has a glide plane parallel to the short unit vector of the c(4 × 2) unit cell, and the molybdenum atoms are bridge-bonded to two surface gold atoms, with the structure of the gold surface being slightly distorted. The structural refinement was performed with the CMA-ES evolutionary strategy algorithm, which reached a Pendry R-factor of ∼0.044. In the second part, the performance of CMA-ES is compared with that of the differential evolution method, a genetic algorithm and the Powell optimization algorithm, employing I/V curves calculated with tensor LEED.

  6. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

    The use of material processing numerical simulation allows a strategy of trial and error to improve virtual processes without incurring material costs or interrupting production, and can therefore save a lot of money, but it requires user time to analyze the results, adjust the operating conditions and restart the simulation. Automatic optimization is the perfect complement to simulation. An Evolutionary Algorithm coupled with metamodelling makes it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners were selected to cover the different areas of the mechanical forging industry and to provide different examples of forming simulation tools. The large computational time is handled by a metamodel approach, which interpolates the objective function over the entire parameter space while knowing the exact function values only at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel, and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization: the set of solutions corresponding to the best possible compromises between the different objectives is computed in the same way. The population-based approach exploits the parallel capabilities of the computer with high efficiency. An optimization module, fully embedded within the Forge2009 user interface, makes it possible to cover all the defined examples, and the use of new multi-core hardware to run several simulations at the same time reduces the required time dramatically. The presented examples

  7. A Discussion on Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms based on Kalman Filter Estimation Applied to Prognostics of Electronics Components

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Saxen, Abhinav; Goebel, Kai

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.

  8. A mission-oriented orbit design method of remote sensing satellite for region monitoring mission based on evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Zhang, Jing; Yao, Huang

    2015-12-01

    Remote sensing satellites play an increasingly prominent role in environmental monitoring and disaster rescue. Taking advantage of nearly identical illumination conditions over a given location and of global coverage, most of these satellites are operated in sun-synchronous orbits. However, this inevitably brings some problems, the most significant being that the temporal resolution of a sun-synchronous satellite cannot satisfy the demands of specific region monitoring missions. To overcome this disadvantage, two approaches are exploited: the first is to build a satellite constellation containing multiple sun-synchronous satellites, as the CHARTER mechanism has done; the second is to design a non-predetermined orbit based on the concrete mission demand. An effective method for remote sensing satellite orbit design based on a multi-objective evolutionary algorithm is presented in this paper. The orbit design problem is converted into a multi-objective optimization problem, and a fast and elitist multi-objective genetic algorithm is utilized to solve it. First, the demands of the mission are transformed into multiple objective functions, and the six orbital elements of the satellite are taken as genes in the design space; then a simulated evolution process is performed. An optimal solution can be obtained after a specified number of generations via the evolutionary operations (selection, crossover, and mutation). To examine the validity of the proposed method, a case study is introduced: the orbit design of an optical satellite for regional disaster monitoring, where the mission demands include minimizing the average revisit time interval as one of two objectives. The simulation results show that the solution obtained by our method meets the users' demands, and we conclude that the method presented in this paper is efficient for remote sensing orbit design.

  9. Optimal operational strategies for a day-ahead electricity market in the presence of market power using multi-objective evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Rodrigo, Deepal

    2007-12-01

    This dissertation introduces a novel approach for optimally operating a day-ahead electricity market not only by economically dispatching the generation resources but also by minimizing the influences of market manipulation attempts by the individual generator-owning companies while ensuring that the power system constraints are not violated. Since economic operation of the market conflicts with individual profit maximization tactics such as market manipulation by generator-owning companies, a methodology that is capable of simultaneously optimizing these two competing objectives has to be selected. Although numerous previous studies have been undertaken on the economic operation of day-ahead markets and other independent studies have been conducted on the mitigation of market power, the operation of a day-ahead electricity market considering these two conflicting objectives simultaneously has not been undertaken previously. These facts provided the incentive and the novelty for this study. A literature survey revealed that many of the traditional solution algorithms convert multi-objective functions into either a single-objective function using weighting schemes or undertake optimization of one function at a time. Hence, these approaches do not truly optimize multiple objectives concurrently. Due to these inherent deficiencies of the traditional algorithms, the use of alternative non-traditional solution algorithms for such problems has become popular. Of these, multi-objective evolutionary algorithms (MOEA) have received wide acceptance due to their solution quality and robustness. In the present research, three distinct algorithms were considered: a non-dominated sorting genetic algorithm II (NSGA-II), a multi-objective tabu search algorithm (MOTS) and a hybrid of multi-objective tabu search and genetic algorithm (MOTS/GA). The accuracy and quality of the results from these algorithms for applications similar to the problem investigated here

  10. Design of a flexible component gathering algorithm for converting cell-based models to graph representations for use in evolutionary search

    PubMed Central

    2014-01-01

    Background: The ability of science to produce experimental data has outpaced the ability to effectively visualize and integrate the data into a conceptual framework that can further higher-order understanding. Multidimensional and shape-based observational data of regenerative biology presents a particularly daunting challenge in this regard. Large amounts of data are available in regenerative biology, but little progress has been made in understanding how organisms such as planaria robustly achieve and maintain body form. An example of this kind of data can be found in a new repository (PlanformDB) that encodes descriptions of planaria experiments and morphological outcomes using a graph formalism. Results: We are developing a model discovery framework that uses a cell-based modeling platform combined with evolutionary search to automatically search for and identify plausible mechanisms for the biological behavior described in PlanformDB. To automate the evolutionary search we developed a way to compare the output of the modeling platform to the morphological descriptions stored in PlanformDB. We used a flexible connected component algorithm to create a graph representation of the virtual worm from the robust, cell-based simulation data. These graphs can then be validated and compared with target data from PlanformDB using the well-known graph edit distance calculation, which provides a quantitative metric of similarity between graphs. The graph edit distance calculation was integrated into a fitness function that was able to guide automated searches for unbiased models of planarian regeneration. We present a cell-based model of the planarian that can regenerate anatomical regions following bisection of the organism, and show that the automated model discovery framework is capable of searching for and finding models of planarian regeneration that match experimental data stored in PlanformDB. Conclusion: The work presented here, including our algorithm for converting cell
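
    The fitness computation at the heart of the framework can be sketched with networkx (toy graphs standing in for PlanformDB morphologies; the cell-based simulator is not shown): the graph edit distance between a simulated morphology graph and a target graph supplies a quantitative penalty for the evolutionary search.

```python
import networkx as nx

simulated = nx.path_graph(4)     # e.g., a head-trunk-trunk-tail region chain
target = nx.path_graph(3)        # target morphology from the database

# Exact graph edit distance; fine for small graphs, expensive for large ones.
distance = nx.graph_edit_distance(simulated, target)
print("graph edit distance (fitness penalty):", distance)
```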

  11. Assessing the performance of linear and non-linear soil carbon dynamics models using the Multi-Objective Evolutionary Algorithm Borg-MOEA

    NASA Astrophysics Data System (ADS)

    Ramcharan, A. M.; Kemanian, A.; Richard, T.

    2013-12-01

    The largest terrestrial carbon pool is soil, which stores more carbon than is present in aboveground biomass (Jobbagy and Jackson, 2000). In this context, soil organic carbon has gained attention as a managed sink for atmospheric CO2 emissions. The variety of models that describe soil carbon cycling reflects the relentless effort to characterize the complex nature of soil and the carbon within it. Previous works have laid out the range of mathematical approaches to soil carbon cycling, but few have compared the performance of model structures in diverse agricultural scenarios. As interest in increasing the temporal and spatial scale of models grows, assessing the performance of different model structures is essential to drawing reasonable conclusions from model outputs. This research addresses this challenge by using the evolutionary algorithm Borg-MOEA to optimize the functionality of carbon models in a multi-objective approach to parameter estimation. Model structure performance will be assessed through analysis of multi-objective trade-offs using experimental data from twenty long-term carbon experiments across the globe. Preliminary results show a successful proof-of-concept test using a non-linear soil carbon model structure. Soil carbon dynamics were based on the amount of carbon inputs to the soil and the degree of organic matter saturation of the soil, which was correlated with the soil clay content. Six parameters of the non-linear soil organic carbon model were successfully optimized to steady-state conditions using Borg-MOEA and datasets from five agricultural locations in the United States. Given that more than 50% of models rely on linear soil carbon decomposition dynamics, a linear model structure was also optimized and compared to the non-linear case. Results indicate that the linear dynamics had significantly lower optimization performance. The results show promise in using the evolutionary algorithm Borg-MOEA to assess

  12. Optimizing the distribution of resources between enzymes of carbon metabolism can dramatically increase photosynthetic rate: a numerical simulation using an evolutionary algorithm.

    PubMed

    Zhu, Xin-Guang; de Sturler, Eric; Long, Stephen P

    2007-10-01

    The distribution of resources between enzymes of photosynthetic carbon metabolism might be assumed to have been optimized by natural selection. However, natural selection for survival and fecundity does not necessarily select for maximal photosynthetic productivity. Further, the concentration of a key substrate, atmospheric CO₂, has changed more over the past 100 years than the past 25 million years, with the likelihood that natural selection has had inadequate time to reoptimize resource partitioning for this change. Could photosynthetic rate be increased by altered partitioning of resources among the enzymes of carbon metabolism? This question is addressed using an "evolutionary" algorithm to progressively search for multiple alterations in partitioning that increase photosynthetic rate. To do this, we extended existing metabolic models of C₃ photosynthesis by including the photorespiratory pathway (PCOP) and metabolism to starch and sucrose to develop a complete dynamic model of photosynthetic carbon metabolism. The model consists of linked differential equations, each representing the change of concentration of one metabolite. Initial concentrations of metabolites and maximal activities of enzymes were extracted from the literature. The dynamics of CO₂ fixation and metabolite concentrations were realistically simulated by numerical integration, such that the model could mimic well-established physiological phenomena. For example, a realistic steady-state rate of CO₂ uptake was attained and then reattained after perturbing O₂ concentration. Using an evolutionary algorithm, partitioning of a fixed total amount of protein-nitrogen between enzymes was allowed to vary. The individual with the higher light-saturated photosynthetic rate was selected and used to seed the next generation. After 1,500 generations, photosynthesis was increased substantially. This suggests that the "typical" partitioning in C₃ leaves might be suboptimal for maximizing the light

  13. Evolutionary tree reconstruction

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Kanefsky, Bob

    1990-01-01

    It is described how Minimum Description Length (MDL) can be applied to the problem of DNA and protein evolutionary tree reconstruction. If there is a set of mutations that transforms a common ancestor into the set of known sequences, and this description is shorter than encoding the known sequences directly, then strong evidence for an evolutionary relationship has been found. A heuristic algorithm is described that searches for the simplest tree (smallest MDL) and finds close-to-optimal trees on the test data. Various ways of extending the MDL theory to more complex evolutionary relationships are discussed.
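
    The MDL comparison can be made concrete with a toy two-part code (bit costs are illustrative, not the paper's coding scheme): if encoding an ancestor plus per-sequence mutations takes fewer bits than encoding each sequence directly, the evolutionary relationship is supported.

```python
import math

def direct_cost(seqs):
    """Encode every sequence directly: 2 bits per DNA base."""
    return sum(len(s) * 2 for s in seqs)

def tree_cost(ancestor, seqs):
    """Encode the ancestor once, then each sequence as a list of mutations."""
    bits = len(ancestor) * 2
    for s in seqs:
        mutations = sum(a != b for a, b in zip(ancestor, s))
        # each mutation: position (log2 L bits) plus the new base (2 bits)
        bits += mutations * (math.log2(len(ancestor)) + 2)
    return bits

seqs = ["ACGTACGTAC", "ACGTACGTCC", "ACGAACGTAC"]
anc = "ACGTACGTAC"
print("direct:", direct_cost(seqs), "bits; via ancestor:",
      round(tree_cost(anc, seqs), 1), "bits")   # shorter => evidence of relatedness
```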

  14. GUESS-ing Polygenic Associations with Multiple Phenotypes Using a GPU-Based Evolutionary Stochastic Search Algorithm

    PubMed Central

    Hastie, David I.; Zeller, Tanja; Liquet, Benoit; Newcombe, Paul; Yengo, Loic; Wild, Philipp S.; Schillert, Arne; Ziegler, Andreas; Nielsen, Sune F.; Butterworth, Adam S.; Ho, Weang Kee; Castagné, Raphaële; Munzel, Thomas; Tregouet, David; Falchi, Mario; Cambien, François; Nordestgaard, Børge G.; Fumeron, Fredéric; Tybjærg-Hansen, Anne; Froguel, Philippe; Danesh, John; Petretto, Enrico; Blankenberg, Stefan; Tiret, Laurence; Richardson, Sylvia

    2013-01-01

    Genome-wide association studies (GWAS) yielded significant advances in defining the genetic architecture of complex traits and disease. Still, a major hurdle of GWAS is narrowing down multiple genetic associations to a few causal variants for functional studies. This becomes critical in multi-phenotype GWAS where detection and interpretability of complex SNP(s)-trait(s) associations are complicated by complex Linkage Disequilibrium patterns between SNPs and correlation between traits. Here we propose a computationally efficient algorithm (GUESS) to explore complex genetic-association models and maximize genetic variant detection. We integrated our algorithm with a new Bayesian strategy for multi-phenotype analysis to identify the specific contribution of each SNP to different trait combinations and study genetic regulation of lipid metabolism in the Gutenberg Health Study (GHS). Despite the relatively small size of GHS (n = 3,175), when compared with the largest published meta-GWAS (n>100,000), GUESS recovered most of the major associations and was better at refining multi-trait associations than alternative methods. Amongst the new findings provided by GUESS, we revealed a strong association of SORT1 with TG-APOB and LIPC with TG-HDL phenotypic groups, which were overlooked in the larger meta-GWAS and not revealed by competing approaches, associations that we replicated in two independent cohorts. Moreover, we demonstrated the increased power of GUESS over alternative multi-phenotype approaches, both Bayesian and non-Bayesian, in a simulation study that mimics real-case scenarios. We showed that our parallel implementation based on Graphics Processing Units outperforms alternative multi-phenotype methods. Beyond multivariate modelling of multi-phenotypes, our Bayesian model employs a flexible hierarchical prior structure for genetic effects that adapts to any correlation structure of the predictors and increases the power to identify associated variants. This

  15. A GIS-Based Model for Post-Earthquake Personalized Route Planning Using the Integration of Evolutionary Algorithm and OWA

    NASA Astrophysics Data System (ADS)

    Moradi, M.; Delavar, M. R.; Moradi, A.

    2015-12-01

    As a natural disaster, an earthquake can seriously damage buildings and urban facilities and cause road blockage. Post-earthquake route planning is a problem that has been addressed in numerous studies. The main aim of this research is to present a route planning model for the post-earthquake situation, under the assumption that no damage data are available. The presented model tries to find the optimum route based on a number of contributing factors which mainly indicate the length, width and safety of the road. The safety of the road is represented by criteria such as distance to faults, percentage of non-standard buildings and percentage of high buildings around the route. An integration of a genetic algorithm and the ordered weighted averaging operator is employed in the model: the former searches the problem space among all alternatives, while the latter aggregates the scores of road segments to compute an overall score for each alternative. The ordered weighted averaging operator enables the users of the system to evaluate alternative routes based on their decision strategy. In the proposed model, an optimistic user tries to find the shortest path between two points, whereas a pessimistic user pays more attention to safety parameters even if this enforces a longer route. The results depict that the decision strategy can considerably alter the optimum route. Moreover, post-earthquake route planning is a function not only of the length of the route but also of the probability of road blockage.
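
    The role of the OWA operator is easy to show in miniature (criterion scores and weight vectors are invented for illustration): because weights are applied to the ranked scores rather than to fixed criteria, the same weight vector encodes an optimistic or a pessimistic decision strategy.

```python
def owa(scores, weights):
    """Ordered weighted averaging: weights apply to the ranked scores."""
    ranked = sorted(scores, reverse=True)          # best score first
    return sum(w * s for w, s in zip(weights, ranked))

# Road-segment safety scores (e.g., fault distance, building standards, width):
segment = [0.9, 0.4, 0.7]
print("optimistic :", owa(segment, [0.7, 0.2, 0.1]))   # trusts the best criteria
print("pessimistic:", owa(segment, [0.1, 0.2, 0.7]))   # dominated by the worst
```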

  16. Evolutionary Metric-Learning-Based Recognition Algorithm for Online Isolated Persian/Arabic Characters, Reconstructed Using Inertial Pen Signals.

    PubMed

    Sepahvand, Majid; Abdali-Mohammadi, Fardin; Mardukhi, Farhad

    2016-12-13

    The development of sensors with microelectromechanical systems technology has expedited the emergence of new tools for human-computer interaction, such as inertial pens. These pens, which are used as writing tools, do not depend on specific embedded hardware, and thus they are inexpensive. Most of the available inertial pen character recognition approaches use the low-level features of inertial signals. This paper introduces a Persian/Arabic handwriting character recognition system for inertial-sensor-equipped pens. First, the motion trajectory of the inertial pen is reconstructed to estimate the position signals by using the theory of inertial navigation systems. The position signals are then used to extract high-level geometrical features. A new metric learning technique is then adopted to enhance the accuracy of character classification. To this end, a characteristic function is calculated for each character using a genetic programming algorithm. These functions form a metric kernel classifying all the characters. The experimental results show that the performance of the proposed method is superior to that of one of the state-of-the-art works in terms of recognizing Persian/Arabic handwriting characters.

  17. Ligand Docking to Intermediate and Close-To-Bound Conformers Generated by an Elastic Network Model Based Algorithm for Highly Flexible Proteins

    PubMed Central

    Kurkcuoglu, Zeynep; Doruker, Pemra

    2016-01-01

    Incorporating receptor flexibility in small ligand-protein docking still poses a challenge for proteins undergoing large conformational changes. In the absence of bound structures, sampling conformers that are accessible from the apo state may facilitate docking and drug design studies. To this aim, we developed an unbiased conformational search algorithm by integrating global modes from the elastic network model, clustering, and energy minimization with implicit solvation. Our dataset consists of five diverse proteins with apo-to-complex RMSDs of 4.7–15 Å. Applying this iterative algorithm to the apo structures, conformers close to the bound state (RMSD 1.4–3.8 Å), as well as intermediate states, were generated. Dockings to a sequence of conformers consisting of a closed structure and its “parents” up to the apo were performed to compare binding poses on different states of the receptor. For two periplasmic binding proteins and biotin carboxylase, which exhibit hinge-type closure of two dynamic domains, the best pose was obtained for the conformer closest to the bound structure (ligand RMSDs 1.5–2 Å). In contrast, the best pose for adenylate kinase corresponded to an intermediate state with a partially closed LID domain and an open NMP domain, in line with recent studies (ligand RMSD 2.9 Å). The docking of a helical peptide to calmodulin was the most challenging case due to the complexity of its 15 Å transition, for which a two-stage procedure was necessary. The technique was first applied to the extended calmodulin to generate intermediate conformers; then peptide docking and a second generation stage on the complex were performed, which in turn yielded a final peptide RMSD of 2.9 Å. Our algorithm is effective in producing conformational states based on the apo state. This study underlines the importance of such intermediate states for ligand docking to proteins undergoing large transitions. PMID:27348230

  18. Optimization of bioenergy crop selection and placement based on a stream health indicator using an evolutionary algorithm.

    PubMed

    Herman, Matthew R; Nejadhashemi, A Pouyan; Daneshvar, Fariborz; Abouali, Mohammad; Ross, Dennis M; Woznicki, Sean A; Zhang, Zhen

    2016-10-01

    The emission of greenhouse gases continues to amplify the impacts of global climate change. This has led to an increased focus on using renewable energy sources, such as biofuels, due to their lower impact on the environment. However, the production of biofuels can still have negative impacts on water resources. This study introduces a new strategy to optimize bioenergy landscapes while improving stream health for the region. To accomplish this, several hydrological models, including the Soil and Water Assessment Tool, Hydrologic Integrity Tool, and Adaptive Neuro Fuzzy Inference System, were linked to develop stream health predictor models. These models are capable of estimating stream health scores based on the Index of Biological Integrity. The coupling of the aforementioned models was used to guide a genetic algorithm to design watershed-scale bioenergy landscapes. Thirteen bioenergy managements were considered based on the high probability of adoption by farmers in the study area. Results from two thousand runs identified an optimal bioenergy crop placement that maximized stream health for the Flint River Watershed in Michigan. The final overall stream health score was 50.93, improved from the current stream health score of 48.19. This was shown to be a significant improvement at the 1% significance level. For this final bioenergy landscape, the most frequently used management was miscanthus (27.07%), followed by corn-soybean-rye (19.00%), corn stover-soybean (18.09%), and corn-soybean (16.43%). The technique introduced in this study can be successfully modified for use in different regions and can be used by stakeholders and decision makers to develop bioenergy landscapes that maximize stream health in the area of interest.
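
    The optimization loop described here is a standard genetic algorithm over a discrete assignment of managements to fields. A minimal sketch follows, with a stand-in fitness in place of the SWAT/ANFIS-based stream-health predictor the study actually couples to the search; the constants and the toy scoring rule are illustrative only.

        import random

        N_FIELDS, N_MGMT, POP, GENS = 50, 13, 40, 200  # 13 candidate managements

        def stream_health(layout):
            """Stand-in for the coupled SWAT/ANFIS stream-health predictor; the
            study scores each landscape with surrogate models of the IBI score."""
            return sum((m * 7 + f) % 13 for f, m in enumerate(layout)) / (12.0 * N_FIELDS)

        def evolve():
            pop = [[random.randrange(N_MGMT) for _ in range(N_FIELDS)]
                   for _ in range(POP)]
            for _ in range(GENS):
                pop.sort(key=stream_health, reverse=True)
                survivors = pop[:POP // 2]                     # elitist truncation
                children = []
                while len(survivors) + len(children) < POP:
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, N_FIELDS)
                    child = a[:cut] + b[cut:]                  # one-point crossover
                    child[random.randrange(N_FIELDS)] = random.randrange(N_MGMT)
                    children.append(child)                     # plus point mutation
                pop = survivors + children
            return max(pop, key=stream_health)

        best_layout = evolve()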

  19. Evolving model-free scattering matrix via evolutionary algorithm: ¹⁶O-¹⁶O elastic scattering at 350 MeV

    SciTech Connect

    Korda, V.Yu.; Molev, A.S.; Korda, L.P.

    2005-07-01

    We present a new procedure that enables us to extract a scattering matrix S(l) as a complex function of angular momentum directly from the scattering data, without any a priori model assumptions. The key ingredient of the procedure is an evolutionary algorithm with diffused mutation that evolves the population of scattering matrices by means of smooth deformations, from primary arbitrary analytical S(l) shapes to final ones giving high-quality fits to the data. Because the scattering-matrix derivatives are monitored automatically, the final S(l) shapes are monotonic and free of distortions. For the ¹⁶O-¹⁶O elastic-scattering data at 350 MeV, we show that the final results are independent of the primary S(l) shapes. Contrary to other approaches, our procedure provides an excellent fit by S(l) shapes that support the 'rainbow' interpretation of the data under analysis.
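
    The idea of evolving S(l) by smooth deformations while monitoring derivatives can be sketched for a real-valued |S(l)| profile as follows; the Gaussian-bump deformation, the monotonicity test, and all parameter values are assumptions for illustration, not the authors' exact operator.

        import numpy as np

        def diffused_mutation(s, rng, amp=0.02):
            """Deform a trial |S(l)| profile with a smooth Gaussian bump, keeping
            the result only if it stays monotonic in l (the derivative monitoring);
            bump shape and bounds are illustrative, not the authors' operator."""
            l = np.arange(len(s), dtype=float)
            center = rng.uniform(0.0, len(s))
            width = rng.uniform(2.0, 10.0)
            trial = s + rng.normal(0.0, amp) * np.exp(-((l - center) / width) ** 2)
            trial = np.clip(trial, 0.0, 1.0)            # |S(l)| stays in [0, 1]
            return trial if np.all(np.diff(trial) >= 0.0) else s

        rng = np.random.default_rng(0)
        s = 1.0 / (1.0 + np.exp(-(np.arange(60) - 30.0) / 5.0))  # smooth initial shape
        for _ in range(200):
            s = diffused_mutation(s, rng)               # population loop omitted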

  20. Evolutionary algorithm based optimization of hydraulic machines utilizing a state-of-the-art block coupled CFD solver and parametric geometry and mesh generation tools

    NASA Astrophysics Data System (ADS)

    Kyriacou, S.; Kontoleontos, E.; Weissenberger, S.; Mangani, L.; Casartelli, E.; Skouteropoulou, I.; Gattringer, M.; Gehrer, A.; Buchmayr, M.

    2014-03-01

    An efficient hydraulic optimization procedure, suitable for industrial use, requires an advanced optimization tool (the EASY software), a fast solver (block-coupled CFD), and a flexible geometry generation tool. EASY is a PCA-driven metamodel-assisted Evolutionary Algorithm (MAEA(PCA)) that can be used in both single-objective (SOO) and multiobjective optimization (MOO) problems. In MAEAs, low-cost surrogate evaluation models are used to screen out non-promising individuals during the evolution and exclude them from the expensive, problem-specific evaluation, here the solution of the Navier-Stokes equations. For additional reduction of the optimization CPU cost, the PCA technique is used to identify dependences among the design variables and to exploit them in order to efficiently drive the application of the evolution operators. To further enhance the hydraulic optimization procedure, a very robust and fast Navier-Stokes solver has been developed. This incompressible CFD solver employs a pressure-based block-coupled approach, solving the governing equations simultaneously, which makes it robust and fast and yields a substantial reduction in computational cost. In order to optimize the geometry of hydraulic machines, an automatic geometry and mesh generation tool is necessary. The geometry generation tool used in this work is entirely based on b-spline curves and surfaces. In what follows, the components of the tool chain are outlined in some detail, and optimization results for hydraulic machine components are shown to demonstrate the performance of the presented optimization procedure.
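
    The screening step of a metamodel-assisted EA can be sketched in a few lines: offspring are first ranked by a cheap surrogate trained on archived evaluations, and only the most promising receive the expensive evaluation. The inverse-distance surrogate and all constants below are illustrative stand-ins for the PCA-driven metamodels and the CFD solve.

        import numpy as np

        archive_x, archive_f = [], []              # past expensive evaluations

        def expensive_eval(x):                     # stand-in for the CFD solve
            return float(np.sum((x - 0.3) ** 2))

        def surrogate(x, k=3):
            """Inverse-distance interpolation over the archive: a cheap stand-in
            for the paper's PCA-driven metamodels."""
            d = np.linalg.norm(np.array(archive_x) - x, axis=1) + 1e-9
            idx = np.argsort(d)[:k]
            w = 1.0 / d[idx]
            return float(np.dot(w, np.array(archive_f)[idx]) / w.sum())

        rng = np.random.default_rng(0)
        parents = rng.random((10, 4))
        for x in parents:                          # seed the archive
            archive_x.append(x)
            archive_f.append(expensive_eval(x))

        for _ in range(30):
            offspring = parents + rng.normal(0.0, 0.1, parents.shape)
            cheap = np.array([surrogate(x) for x in offspring])
            for x in offspring[np.argsort(cheap)[:3]]:   # only the best 3 reach CFD
                archive_x.append(x)
                archive_f.append(expensive_eval(x))
            pool = np.vstack([parents, offspring])
            parents = pool[np.argsort([surrogate(x) for x in pool])[:10]]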

  1. Integrating remotely sensed leaf area index and leaf nitrogen accumulation with RiceGrow model based on particle swarm optimization algorithm for rice grain yield assessment

    NASA Astrophysics Data System (ADS)

    Wang, Hang; Zhu, Yan; Li, Wenlong; Cao, Weixing; Tian, Yongchao

    2014-01-01

    A regional rice (Oryza sativa) grain yield prediction technique was proposed by integration of ground-based and spaceborne remote sensing (RS) data with the rice growth model (RiceGrow) through a new particle swarm optimization (PSO) algorithm. Based on an initialization/parameterization strategy (calibration), two agronomic indicators, leaf area index (LAI) and leaf nitrogen accumulation (LNA), remotely sensed by field spectra and satellite images, were combined to serve as an external assimilation parameter and integrated with the RiceGrow model for inversion of three model management parameters, namely sowing date, sowing rate, and nitrogen rate. Rice grain yield was then predicted by inputting these optimized parameters into the reinitialized model. PSO was used for the parameterization and regionalization of the integrated model and compared with the shuffled complex evolution-University of Arizona (SCE-UA) optimization algorithm. The test results showed that LAI together with LNA as the integrated parameter performed better than each alone for crop model parameter initialization. PSO also performed better than SCE-UA in terms of running efficiency and assimilation results, indicating that PSO is a reliable optimization method for assimilating RS information into the crop growth model. The integrated model also had improved precision for predicting rice grain yield.
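
    The assimilation step is, at its core, a PSO search over (sowing date, sowing rate, nitrogen rate) that minimizes the misfit between remotely sensed and simulated LAI/LNA. A minimal sketch follows; the ricegrow_stub function, bounds, and observations are hypothetical stand-ins for the RiceGrow model and the RS retrievals.

        import numpy as np

        def ricegrow_stub(params):
            """Hypothetical stand-in for RiceGrow: maps (sowing date, sowing rate,
            nitrogen rate) to simulated LAI and LNA at the observation date."""
            d, r, n = params
            return np.array([0.02 * d + 0.5 * r, 0.01 * d + 0.8 * n])

        obs = np.array([3.5, 1.2])                     # remotely sensed LAI, LNA

        def cost(p):
            return float(np.sum((ricegrow_stub(p) - obs) ** 2))

        rng = np.random.default_rng(1)
        lo, hi = np.array([100, 1, 0]), np.array([160, 5, 3])
        x = lo + rng.random((20, 3)) * (hi - lo)       # particle positions
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_f = np.array([cost(p) for p in x])
        g = pbest[np.argmin(pbest_f)]                  # global best

        for _ in range(100):
            r1, r2 = rng.random((2, 20, 3))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([cost(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[np.argmin(pbest_f)]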

  2. In Silico Calculation of Infinite Dilution Activity Coefficients of Molecular Solutes in Ionic Liquids: Critical Review of Current Methods and New Models Based on Three Machine Learning Algorithms.

    PubMed

    Paduszyński, Kamil

    2016-08-22

    The aim of the paper is to address all the disadvantages of currently available models for calculating infinite dilution activity coefficients (γ(∞)) of molecular solutes in ionic liquids (ILs), a relevant property from the point of view of many applications of ILs, particularly in separations. Three new models are proposed, each based on a distinct machine learning algorithm: stepwise multiple linear regression (SWMLR), feed-forward artificial neural network (FFANN), and least-squares support vector machine (LSSVM). The models were established based on the most comprehensive γ(∞) data bank reported so far (>34 000 data points for 188 ILs and 128 solutes). Following the paper published previously [J. Chem. Inf. Model 2014, 54, 1311-1324], the ILs were treated in terms of group contributions, whereas the Abraham solvation parameters were used to quantify the impact of solute structure. Temperature is also included in the input data of the models so that they can be utilized to obtain temperature-dependent data and thus related thermodynamic functions. Both internal and external validation techniques were applied to assess the statistical significance and explanatory power of the final correlations. A comparative study of the overall performance of the investigated SWMLR/FFANN/LSSVM approaches is presented in terms of root-mean-square error and average absolute relative deviation between calculated and experimental γ(∞), evaluated for different families of ILs and solutes, as well as between calculated and experimental infinite dilution selectivity for the separation problems of benzene from n-hexane and thiophene from n-heptane. LSSVM is shown to be the method with the lowest values of both training and generalization errors. It is finally demonstrated that the established models exhibit an improved accuracy compared to the state-of-the-art model, namely, temperature-dependent group contribution linear solvation energy relationship, published in 2011 [J. Chem

  3. Model-based tomographic reconstruction

    DOEpatents

    Chambers, David H; Lehman, Sean K; Goodman, Dennis M

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated, and each section is processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.

  4. Model-based machine learning.

    PubMed

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

  5. Model-based machine learning

    PubMed Central

    Bishop, Christopher M.

    2013-01-01

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612

  6. Validation of a method for in vivo 3D dose reconstruction for IMRT and VMAT treatments using on-treatment EPID images and a model-based forward-calculation algorithm

    SciTech Connect

    Van Uytven, Eric; Van Beek, Timothy; McCowan, Peter M.; Chytyk-Praznik, Krista; Greer, Peter B.; McCurdy, Boyd M. C.

    2015-12-15

    Purpose: Radiation treatments are trending toward delivering higher doses per fraction under stereotactic radiosurgery and hypofractionated treatment regimens. There is a need for accurate 3D in vivo patient dose verification using electronic portal imaging device (EPID) measurements. This work presents a model-based technique to compute full three-dimensional patient dose reconstructed from on-treatment EPID portal images (i.e., transmission images). Methods: EPID dose is converted to incident fluence entering the patient using a series of steps which include converting measured EPID dose to fluence at the detector plane and then back-projecting the primary source component of the EPID fluence upstream of the patient. Incident fluence is then recombined with predicted extra-focal fluence and used to calculate 3D patient dose via a collapsed-cone convolution method. This method is implemented in an iterative manner, although in practice it provides accurate results in a single iteration. The robustness of the dose reconstruction technique is demonstrated with several simple slab phantom and nine anthropomorphic phantom cases. Prostate, head and neck, and lung treatments are all included as well as a range of delivery techniques including VMAT and dynamic intensity modulated radiation therapy (IMRT). Results: Results indicate that the patient dose reconstruction algorithm compares well with treatment planning system computed doses for controlled test situations. For simple phantom and square field tests, agreement was excellent with a 2%/2 mm 3D chi pass rate ≥98.9%. On anthropomorphic phantoms, the 2%/2 mm 3D chi pass rates ranged from 79.9% to 99.9% in the planning target volume (PTV) region and 96.5% to 100% in the low dose region (>20% of prescription, excluding PTV and skin build-up region). Conclusions: An algorithm to reconstruct delivered patient 3D doses from EPID exit dosimetry measurements was presented. The method was applied to phantom and patient

  7. Incorporation of an evolutionary algorithm to estimate transfer-functions for a parameter regionalization scheme of a rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten

    2016-04-01

    This contribution presents a framework that enables the use of an Evolutionary Algorithm (EA) for the calibration and regionalization of the hydrological model COSEROreg. COSEROreg uses an updated version of the HBV-type model COSERO (Kling et al. 2014) for the modelling of hydrological processes and is embedded in a parameter regionalization scheme based on Samaniego et al. (2010). The latter uses subscale information to estimate model parameters via a priori chosen transfer functions (often derived from pedotransfer functions). However, the transferability of the regionalization scheme to different model concepts and the integration of new forms of subscale information are not straightforward: (i) the usefulness of (new) single sub-scale information layers is unknown beforehand, and (ii) the establishment of functional relationships between these (possibly meaningless) sub-scale information layers and the distributed model parameters remains a central challenge in the implementation of a regionalization procedure. The proposed method provides a framework to overcome this challenge. The implementation of the EA encompasses the following procedure: First, a formal grammar is specified (Ryan et al., 1998). The construction of the grammar defines the set of possible transfer functions and also allows hydrological domain knowledge to be incorporated into the search itself. The EA iterates over the given space by combining parameterized basic functions (e.g., linear or exponential functions) and sub-scale information layers into transfer functions, which are then used in COSEROreg; a pre-selection model is applied beforehand to sort out infeasible proposals by the EA and to reduce the necessary model runs. A second optimization routine is used to optimize the parameters of the transfer functions proposed by the EA. This concept of two nested optimization loops is inspired by the ideas of Lamarckian evolution and the Baldwin effect.
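
    The grammar-driven construction of candidate transfer functions can be sketched with a toy grammar: nonterminals expand into combinations of basic functions and sub-scale layers, and each coefficient slot is left for the inner optimization loop. The grammar, layer names, and depth bound below are illustrative assumptions, not the study's actual grammar.

        import random

        LAYERS = ["sand", "clay", "slope"]       # illustrative sub-scale layers

        GRAMMAR = {                              # tiny stand-in grammar
            "<expr>": [["<expr>", "+", "<expr>"],
                       ["<coef>", "*", "<layer>"],
                       ["exp(", "<coef>", "*", "<layer>", ")"]],
            "<layer>": [[name] for name in LAYERS],
            "<coef>": [["c"]],
        }

        def derive(symbol="<expr>", depth=0):
            """Randomly expand the grammar into a transfer-function string;
            each 'c' marks a coefficient for the inner optimization loop."""
            if symbol not in GRAMMAR:
                return [symbol]                  # terminal token
            options = GRAMMAR[symbol]
            if symbol == "<expr>" and depth > 3:
                options = options[1:]            # drop the recursive rule to bound depth
            tokens = []
            for s in random.choice(options):
                tokens += derive(s, depth + 1)
            return tokens

        print(" ".join(derive()))                # e.g. c * sand + exp( c * slope )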

  8. Evolutionary thinking

    PubMed Central

    Hunt, Tam

    2014-01-01

    Evolution as an idea has a lengthy history, even though the idea of evolution is generally associated with Darwin today. Rebecca Stott provides an engaging and thoughtful overview of this history of evolutionary thinking in her 2013 book, Darwin's Ghosts: The Secret History of Evolution. Since Darwin, the debate over evolution—both how it takes place and, in a long war of words with religiously-oriented thinkers, whether it takes place—has been sustained and heated. A growing share of this debate is now devoted to examining how evolutionary thinking affects areas outside of biology. How do our lives change when we recognize that all is in flux? What can we learn about life more generally if we study change instead of stasis? Carter Phipps' book, Evolutionaries: Unlocking the Spiritual and Cultural Potential of Science's Greatest Idea, delves deep into this relatively new development. Phipps generally takes as a given the validity of the Modern Synthesis of evolutionary biology. His story takes us into, as the subtitle suggests, the spiritual and cultural implications of evolutionary thinking. Can religion and evolution be reconciled? Can evolutionary thinking lead to a new type of spirituality? Is our culture already being changed in ways that we don't realize by evolutionary thinking? These are all important questions, and Phipps' book is a great introduction to this discussion. Phipps is an author, journalist, and contributor to the emerging “integral” or “evolutionary” cultural movement that combines the insights of Integral Philosophy, evolutionary science, developmental psychology, and the social sciences. He has served as the Executive Editor of EnlightenNext magazine (no longer published) and more recently is the co-founder of the Institute for Cultural Evolution, a public policy think tank addressing the cultural roots of America's political challenges. What follows is an email interview with Phipps. PMID:26478766

  9. Effect of Radiation Dose Reduction and Reconstruction Algorithm on Image Noise, Contrast, Resolution, and Detectability of Subtle Hypoattenuating Liver Lesions at Multidetector CT: Filtered Back Projection versus a Commercial Model-based Iterative Reconstruction Algorithm.

    PubMed

    Solomon, Justin; Marin, Daniele; Roy Choudhury, Kingshuk; Patel, Bhavik; Samei, Ehsan

    2017-02-07

    Purpose To determine the effect of radiation dose and iterative reconstruction (IR) on noise, contrast, resolution, and observer-based detectability of subtle hypoattenuating liver lesions and to estimate the dose reduction potential of the IR algorithm in question. Materials and Methods This prospective, single-center, HIPAA-compliant study was approved by the institutional review board. A dual-source computed tomography (CT) system was used to reconstruct CT projection data from 21 patients into six radiation dose levels (12.5%, 25%, 37.5%, 50%, 75%, and 100%) on the basis of two CT acquisitions. A series of virtual liver lesions (five per patient, 105 total, lesion-to-liver prereconstruction contrast of -15 HU, 12-mm diameter) were inserted into the raw CT projection data and images were reconstructed with filtered back projection (FBP) (B31f kernel) and sinogram-affirmed IR (SAFIRE) (I31f-5 kernel). Image noise (pixel standard deviation), lesion contrast (after reconstruction), lesion boundary sharpness (average normalized gradient at lesion boundary), and contrast-to-noise ratio (CNR) were compared. Next, a two-alternative forced choice perception experiment was performed (16 readers [six radiologists, 10 medical physicists]). A linear mixed-effects statistical model was used to compare detection accuracy between FBP and SAFIRE and to estimate the radiation dose reduction potential of SAFIRE. Results Compared with FBP, SAFIRE reduced noise by a mean of 53% ± 5, lesion contrast by 12% ± 4, and lesion sharpness by 13% ± 10 but increased CNR by 89% ± 19. Detection accuracy was 2% higher on average with SAFIRE than with FBP (P = .03), which translated into an estimated radiation dose reduction potential (±95% confidence interval) of 16% ± 13. Conclusion SAFIRE increases detectability at a given radiation dose (approximately 2% increase in detection accuracy) and allows for imaging at reduced radiation dose (16% ± 13), while maintaining low
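
    The noise, contrast, and CNR figures quoted above are ROI statistics. A minimal sketch of how such metrics can be computed on a synthetic hypoattenuating lesion follows; the HU values, masks, and lesion geometry are illustrative, not the study's measurement protocol.

        import numpy as np

        def lesion_metrics(img, lesion_mask, background_mask):
            """ROI statistics: noise as the pixel standard deviation in the liver
            background, lesion contrast after reconstruction, and their ratio."""
            noise = float(img[background_mask].std())
            contrast = float(img[lesion_mask].mean() - img[background_mask].mean())
            return noise, contrast, abs(contrast) / noise

        # synthetic 12 mm hypoattenuating lesion on a noisy liver-like background
        rng = np.random.default_rng(0)
        img = 90.0 + 10.0 * rng.standard_normal((128, 128))   # ~liver HU + noise
        yy, xx = np.mgrid[:128, :128]
        r2 = (yy - 64) ** 2 + (xx - 64) ** 2
        lesion = r2 < 12 ** 2
        img[lesion] -= 15.0                                   # -15 HU lesion contrast
        bg = (r2 > 20 ** 2) & (r2 < 40 ** 2)                  # annulus around lesion
        noise, contrast, cnr = lesion_metrics(img, lesion, bg)
        print(round(noise, 1), round(contrast, 1), round(cnr, 2))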

  10. Evolutionary rescue beyond the models

    PubMed Central

    Gomulkiewicz, Richard; Shaw, Ruth G.

    2013-01-01

    Laboratory model systems and mathematical models have shed considerable light on the fundamental properties and processes of evolutionary rescue. But it remains to determine the extent to which these model-based findings can help biologists predict when evolution will fail or succeed in rescuing natural populations that are facing novel conditions that threaten their persistence. In this article, we present a prospectus for transferring our basic understanding of evolutionary rescue to wild and other non-laboratory populations. Current experimental and theoretical results emphasize how the interplay between inheritance processes and absolute fitness in changed environments drive population dynamics and determine prospects of extinction. We discuss the challenge of inferring these elements of the evolutionary rescue process in field and natural settings. Addressing this challenge will contribute to a more comprehensive understanding of population persistence that combines processes of evolutionary rescue with developmental and ecological mechanisms. PMID:23209173

  11. Toward an evolutionary-predictive foundation for creativity : Commentary on "Human creativity, evolutionary algorithms, and predictive representations: The mechanics of thought trials" by Arne Dietrich and Hilde Haider, 2014 (Accepted pending minor revisions for publication in Psychonomic Bulletin & Review).

    PubMed

    Gabora, Liane; Kauffman, Stuart

    2016-04-01

    Dietrich and Haider (Psychonomic Bulletin & Review, 21 (5), 897-915, 2014) justify their integrative framework for creativity founded on evolutionary theory and prediction research on the grounds that "theories and approaches guiding empirical research on creativity have not been supported by the neuroimaging evidence." Although this justification is controversial, the general direction holds promise. This commentary clarifies points of disagreement and unresolved issues, and addresses misapplications of evolutionary theory that lead the authors to adopt a Darwinian (versus Lamarckian) approach. To say that creativity is Darwinian is not to say that it consists of variation plus selection - in the everyday sense of the term - as the authors imply; it is to say that evolution is occurring because selection is affecting the distribution of randomly generated heritable variation across generations. In creative thought the distribution of variants is not key, i.e., one is not inclined toward idea A because 60% of one's candidate ideas are variants of A while only 40% are variants of B; one is inclined toward whichever seems best. The authors concede that creative variation is partly directed; however, the greater the extent to which variants are generated non-randomly, the greater the extent to which the distribution of variants can reflect not selection but the initial generation bias. Since each thought in a creative process can alter the selective criteria against which the next is evaluated, there is no demarcation into generations as assumed in a Darwinian model. We address the authors' claim that reduced variability and individuality are more characteristic of Lamarckism than of Darwinian evolution, and note that a Lamarckian approach to creativity has addressed the challenge of modeling the emergent features associated with insight.

  12. Model based manipulator control

    NASA Technical Reports Server (NTRS)

    Petrosky, Lyman J.; Oppenheim, Irving J.

    1989-01-01

    The feasibility of using model based control (MBC) for robotic manipulators was investigated. A double inverted pendulum system was constructed as the experimental system for a general study of dynamically stable manipulation. The original interest in dynamically stable systems was driven by the objective of high vertical reach (balancing), and the planning of inertially favorable trajectories for force and payload demands. The model-based control approach is described and the results of experimental tests are summarized. Results directly demonstrate that MBC can provide stable control at all speeds of operation and support operations requiring dynamic stability such as balancing. The application of MBC to systems with flexible links is also discussed.

  13. Model Based Iterative Reconstruction for Bright Field Electron Tomography (Postprint)

    DTIC Science & Technology

    2013-02-01

    Model based iterative reconstruction (MBIR) provides a powerful framework for tomographic reconstruction, improving on the results obtained when typical algorithms such as Filtered Back Projection (FBP) and Simultaneous Iterative Reconstruction Technique (SIRT) are applied to the data.

  14. Evolutionary software for autonomous path planning

    SciTech Connect

    Couture, S; Hage, M

    1999-02-10

    This research project demonstrated the effectiveness of using evolutionary software techniques in the development of path-planning algorithms and control programs for mobile vehicles in radioactive environments. The goal was to take maximum advantage of the programmer's intelligence by tasking the programmer with encoding the measures of success for a path-planning algorithm, rather than developing the path-planning algorithms themselves. Evolutionary software development techniques could then be used to develop algorithms most suitable to the particular environments of interest. The measures of path-planning success were encoded in the form of a fitness function for an evolutionary software development engine. The task for the evolutionary software development engine was to evaluate the performance of individual algorithms, select the best performers for the population based on the fitness function, and breed them to evolve the next generation of algorithms. The process continued for a set number of generations or until the algorithm converged to an optimal solution. The task environment was the navigation of a rover from an initial location to a goal, then to a processing point, in an environment containing physical and radioactive obstacles. Genetic algorithms were developed for a variety of environmental configurations. Algorithms were simple and non-robust strings of behaviors, but they could be evolved to be nearly optimal for a given environment. In addition, a genetic program was evolved in the form of a control algorithm that operates at every motion of the robot. Programs were more complex than algorithms and less optimal in a given environment. However, after training in a variety of different environments, they were more robust and could perform acceptably in environments they were not trained in. This paper describes the evolutionary software development engine and the performance of algorithms and programs evolved by it for the chosen task.
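
    The programmer's role described here, encoding measures of success rather than the planner itself, comes down to writing a fitness function. A toy example follows, rewarding proximity to the goal and penalizing accumulated radiation dose along the path; the weights and dose model are invented for illustration.

        def fitness(path, goal, hazards):
            """Toy 'measures of success' for a rover path: reward ending near the
            goal, penalize radiation dose accumulated near hazards. Weights and
            the inverse-square dose model are invented for illustration."""
            gx, gy = goal
            ex, ey = path[-1]
            dist = abs(ex - gx) + abs(ey - gy)          # Manhattan distance to goal
            dose = sum(1.0 / (1.0 + (px - hx) ** 2 + (py - hy) ** 2)
                       for (px, py) in path for (hx, hy) in hazards)
            return -dist - 10.0 * dose                  # higher fitness is better

        path = [(0, 0), (1, 0), (2, 1), (3, 2), (4, 3), (5, 4)]
        print(fitness(path, goal=(5, 5), hazards=[(2, 2)]))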

  15. Evolutionary novelties.

    PubMed

    Wagner, Günter P; Lynch, Vincent J

    2010-01-26

    How novel traits arise in organisms has long been a major problem in biology. Indeed, the sharpest critiques of Darwin's theory of evolution by natural selection often centered on explaining how novel body parts arose. In his response to The Origin of Species, St. George J. Mivart challenged Darwin to explain the origin of evolutionary novelties such as the mammary gland, asking if it was "conceivable that the young of any animal was ever saved from destruction by accidentally sucking a drop of scarcely nutritious fluid from an accidentally hypertrophied cutaneous gland of its mother?" It is only now that modern molecular and genomic tools are being brought to bear on this question that we are finally in a position to answer Mivart's challenge and explain one of the most fundamental questions of biology: how does novelty arise in evolution?

  16. Model-based vision using geometric hashing

    NASA Astrophysics Data System (ADS)

    Akerman, Alexander, III; Patton, Ronald

    1991-04-01

    The Geometric Hashing technique developed by the NYU Courant Institute has been applied to various automatic target recognition applications. In particular, I-MATH has extended the hashing algorithm to perform automatic target recognition of synthetic aperture radar (SAR) imagery. For this application, the hashing is performed upon the geometric locations of dominant scatterers. In addition to being a robust model-based matching algorithm -- invariant under translation, scale, and 3D rotations of the target -- hashing is of particular utility because it can still perform effective matching when the target is partially obscured. Moreover, hashing is very amenable to a SIMD parallel processing architecture, and is thus potentially implementable in real time.
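
    A minimal 2-D sketch of geometric hashing conveys the idea: model points are stored in a hash table indexed by their coordinates in the frame of every ordered basis pair (which makes the coordinates invariant to translation, rotation, and uniform scale), and recognition tallies votes from the scene. The point sets and cell size below are illustrative.

        import itertools
        from collections import defaultdict

        def quantize(p, cell=0.25):
            return (round(p[0] / cell), round(p[1] / cell))

        def basis_coords(points, a, b):
            """Express all points in the similarity-invariant frame of (a, b)."""
            ax, ay = a
            bx, by = b
            ux, uy = bx - ax, by - ay
            n = ux * ux + uy * uy
            for px, py in points:
                dx, dy = px - ax, py - ay
                yield ((dx * ux + dy * uy) / n, (-dx * uy + dy * ux) / n)

        def build_table(model):
            table = defaultdict(list)
            for a, b in itertools.permutations(model, 2):
                for c in basis_coords(model, a, b):
                    table[quantize(c)].append((a, b))   # record (basis -> entry)
            return table

        def recognize(table, scene):
            votes = defaultdict(int)
            for a, b in itertools.permutations(scene, 2):
                for c in basis_coords(scene, a, b):
                    for basis in table.get(quantize(c), ()):
                        votes[basis] += 1               # vote for a model basis
            return max(votes, key=votes.get) if votes else None

        model = [(0, 0), (1, 0), (1, 1), (0, 2)]
        scene = [(2, 1), (3, 1), (3, 2), (2, 3)]        # translated copy of model
        print(recognize(build_table(model), scene))     # prints a winning basis pair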

  17. Model-Based Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Kumar, Aditya; Viassolo, Daniel

    2008-01-01

    The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in the presence of these faults. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.

  18. Evolutionary Design in Biology

    NASA Astrophysics Data System (ADS)

    Wiese, Kay C.

    Much progress has been achieved in recent years in molecular biology and genetics. The sheer volume of data in the form of biological sequences has been enormous, and efficient methods for dealing with these huge amounts of data are needed. In addition, the data alone do not provide information on the workings of biological systems; hence much research effort has focused on designing mathematical and computational models to address problems from molecular biology. Often, the terms bioinformatics and computational biology are used to refer to the research fields concerning themselves with designing solutions to molecular problems in biology. However, there is a slight distinction between bioinformatics and computational biology: the former is concerned with managing the enormous amounts of biological data and extracting information from it, while the latter is more concerned with the design and development of new algorithms to address problems such as protein or RNA folding. However, the boundary is blurry, and there is no consistent usage of the terms. We will use the term bioinformatics to encompass both fields. To cover all areas of research in bioinformatics is beyond the scope of this section, and we refer the interested reader to [2] for a general introduction. A large part of what bioinformatics is concerned with is the evolution and function of biological systems on a molecular level. Evolutionary computation and evolutionary design are concerned with developing computational systems that "mimic" certain aspects of natural evolution (mutation, crossover, selection, fitness). Many of the inner workings of natural evolutionary systems have been copied, sometimes in modified form, into evolutionary computation systems. Artificial neural networks mimic the functioning of simple brain cell clusters. Fuzzy systems are concerned with the "fuzziness" in decision making, similar to a human expert. These three computational paradigms fall into the category of

  19. Model-Based Systems

    NASA Technical Reports Server (NTRS)

    Frisch, Harold P.

    2007-01-01

    Engineers who design systems using text specification documents focus their work upon the completed system to meet performance, time, and budget goals. Consistency and integrity are difficult to maintain within text documents for a single complex system, and more difficult to maintain as several systems are combined into higher-level systems, are maintained over decades, and evolve technically and in performance through updates. This system design approach frequently results in major changes during the system integration and test phase, and in time and budget overruns. Engineers who build system specification documents within a model-based systems environment go a step further and aggregate all of the data. They interrelate all of the data to ensure consistency and integrity. After the model is constructed, the various system specification documents are prepared, all from the same database. The consistency and integrity of the model is assured; therefore, the consistency and integrity of the various specification documents is ensured. This article attempts to define model-based systems relative to such an environment. The intent is to expose the complexity of the enabling problem by outlining what is needed, why it is needed, and how needs are being addressed by international standards writing teams.

  20. Evolutionary engineering for industrial microbiology.

    PubMed

    Vanee, Niti; Fisher, Adam B; Fong, Stephen S

    2012-01-01

    Superficially, evolutionary engineering is a paradoxical field that balances competing interests. In natural settings, evolution iteratively selects and enriches subpopulations that are best adapted to a particular ecological niche using random processes such as genetic mutation. In engineering, by contrast, the desired approach utilizes rational prospective design to address targeted problems. When considering the details of evolutionary and engineering processes, more commonality can be found. Engineering relies on detailed knowledge of the problem parameters and design properties in order to predict design outcomes that would be an optimized solution. When detailed knowledge of a system is lacking, engineers often employ algorithmic search strategies to identify empirical solutions. Evolution epitomizes this iterative optimization by continuously diversifying design options from a parental design, and then selecting the progeny designs that represent satisfactory solutions. In this chapter, the technique of applying the natural principles of evolution to engineer microbes for industrial applications is discussed to highlight the challenges and principles of evolutionary engineering.

  1. Toward a unifying framework for evolutionary processes

    PubMed Central

    Paixão, Tiago; Badkobeh, Golnaz; Barton, Nick; Çörüş, Doğan; Dang, Duc-Cuong; Friedrich, Tobias; Lehre, Per Kristian; Sudholt, Dirk; Sutton, Andrew M.; Trubenová, Barbora

    2015-01-01

    The theories of population genetics and evolutionary computation have been evolving separately for nearly 30 years. Many results have been independently obtained in both fields, and many others are unique to their respective fields. We aim to bridge this gap by developing a unifying framework for evolutionary processes that allows both evolutionary algorithms and population genetics models to be cast in the same formal framework. The framework we present here decomposes the evolutionary process into its several components in order to facilitate the identification of similarities between different models. In particular, we propose a classification of evolutionary operators based on the defining properties of the different components. We cast several commonly used operators from both fields into this common framework. Using this, we map different evolutionary and genetic algorithms to different evolutionary regimes and identify candidates with the most potential for the translation of results between the fields. This provides a unified description of evolutionary processes and represents a stepping stone towards new tools and results for both fields. PMID:26215686

  2. Evolutionary Multiobjective Design Targeting a Field Programmable Transistor Array

    NASA Technical Reports Server (NTRS)

    Aguirre, Arturo Hernandez; Zebulum, Ricardo S.; Coello, Carlos Coello

    2004-01-01

    This paper introduces the ISPAES algorithm for circuit design targeting a Field Programmable Transistor Array (FPTA). The use of evolutionary algorithms is common in circuit design problems, where a single fitness function drives the evolution process. Frequently, the design problem is subject to several goals or operating constraints; thus, designing a suitable fitness function that captures all requirements becomes an issue. Such a problem is amenable to multi-objective optimization; however, evolutionary algorithms lack an inherent mechanism for constraint handling. This paper introduces ISPAES, an evolutionary optimization algorithm enhanced with a constraint handling technique. Several design problems targeting an FPTA show the potential of our approach.

  3. An inquiry into evolutionary inquiry

    NASA Astrophysics Data System (ADS)

    Donovan, Samuel S.

    2005-11-01

    While evolution education has received a great deal of attention within the science education research community, it still poses difficult teaching and learning challenges. Understanding evolutionary biology has been given high priority in national science education policy because of its role in coordinating our understanding of the life sciences, its importance in our intellectual history, its role in the perception of humans' position in nature, and its impact on our current medical, agricultural, and conservation practices. The rhetoric used in evolution education policy statements emphasizes familiarity with the nature of scientific inquiry as an important learning outcome associated with understanding evolution, but provides little guidance with respect to how one might achieve this goal. This dissertation project explores the nature of evolutionary inquiry and how understanding the details of disciplinary reasoning can inform evolution education. The first analysis involves recasting the existing evolution education research literature to assess educational outcomes related to students' ability to reason about data using evolutionary biology methods and models. This is followed in the next chapter by a detailed historical and philosophical characterization of evolutionary biology, with the goal of providing a richer context for considering what exactly it is we want students to know about evolution as a discipline. Chapter 4 describes the development and implementation of a high school evolution curriculum that engages students with many aspects of model based reasoning. The final component of this reframing of evolution education involves an empirical study characterizing students' understanding of evolutionary biology as a modeling enterprise. Each chapter addresses a different aspect of evolution education and explores the implications of foregrounding disciplinary reasoning as an educational outcome. The analyses are coordinated with one another in the sense

  4. Evolutionary dynamics of diploid populations

    NASA Astrophysics Data System (ADS)

    Desimone, Ralph; Newman, Timothy

    2003-10-01

    There has been much recent interest in constructing computer models of evolutionary dynamics. Typically these models focus on asexual population dynamics, which are appropriate for haploid organisms such as bacteria. Using a recently developed "genome template" model, we extend the algorithm to a sexual population of diploid organisms. We will present some early results showing the temporal evolution of mean fitness and genetic variation, and compare this to typical results from haploid populations.

  5. Model-based reconfiguration: Diagnosis and recovery

    NASA Technical Reports Server (NTRS)

    Crow, Judy; Rushby, John

    1994-01-01

    We extend Reiter's general theory of model-based diagnosis to a theory of fault detection, identification, and reconfiguration (FDIR). The generality of Reiter's theory readily supports an extension in which the problem of reconfiguration is viewed as a close analog of the problem of diagnosis. Using a reconfiguration predicate 'rcfg' analogous to the abnormality predicate 'ab,' we derive a strategy for reconfiguration by transforming the corresponding strategy for diagnosis. There are two obvious benefits of this approach: algorithms for diagnosis can be exploited as algorithms for reconfiguration and we have a theoretical framework for an integrated approach to FDIR. As a first step toward realizing these benefits we show that a class of diagnosis engines can be used for reconfiguration and we discuss algorithms for integrated FDIR. We argue that integrating recovery and diagnosis is an essential next step if this technology is to be useful for practical applications.

  6. Model-based target and background characterization

    NASA Astrophysics Data System (ADS)

    Mueller, Markus; Krueger, Wolfgang; Heinze, Norbert

    2000-07-01

    Up to now, most approaches to target and background characterization (and exploitation) have concentrated solely on the information given by pixels. In many cases this is a complex and unprofitable task. During the development of automatic exploitation algorithms, the main goal is the optimization of certain performance parameters. These parameters are measured during test runs while applying one algorithm with one parameter set to images that consist of image domains with very different characteristics (targets and various types of background clutter). Model-based geocoding and registration approaches provide means for utilizing the information stored in GIS (Geographical Information Systems). The geographical information stored in the various GIS layers can define ROE (Regions of Expectation) and may allow for dedicated algorithm parametrization and development. ROI (Region of Interest) detection algorithms (in most cases MMO (Man-Made Object) detection) use implicit target and/or background models. The detection algorithms for ROIs utilize gradient direction models that have to be matched with transformed image domain data. In most cases simple threshold calculations on the match results discriminate target object signatures from the background. The geocoding approaches extract line-like structures (street signatures) from the image domain and match the graph constellation against a vector model extracted from a GIS (Geographical Information System) data base. Apart from geocoding, the algorithms can also be used for image-to-image registration (multi sensor and data fusion) and may be used for creation and validation of geographical maps.

  7. Model Based Definition

    NASA Technical Reports Server (NTRS)

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.

  8. Principles of models based engineering

    SciTech Connect

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  9. Efficient Model-Based Diagnosis Engine

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Vatan, Farrokh; Barrett, Anthony; James, Mark; Mackey, Ryan; Williams, Colin

    2009-01-01

    An efficient diagnosis engine - a combination of mathematical models and algorithms - has been developed for identifying faulty components in a possibly complex engineering system. This model-based diagnosis engine embodies a twofold approach to reducing, relative to prior model-based diagnosis engines, the amount of computation needed to perform a thorough, accurate diagnosis. The first part of the approach involves a reconstruction of the general diagnostic engine to reduce the complexity of the mathematical-model calculations and of the software needed to perform them. The second part of the approach involves algorithms for computing a minimal diagnosis (the term "minimal diagnosis" is defined below). A somewhat lengthy background discussion is prerequisite to a meaningful summary of the innovative aspects of the present efficient model-based diagnosis engine. In model-based diagnosis, the function of each component and the relationships among all the components of the engineering system to be diagnosed are represented as a logical system denoted the system description (SD). Hence, the expected normal behavior of the engineering system is the set of logical consequences of the SD. Faulty components lead to inconsistencies between the observed behaviors of the system and the SD (see figure). Diagnosis - the task of finding faulty components - is reduced to finding those components, the abnormalities of which could explain all the inconsistencies. The solution of the diagnosis problem should be a minimal diagnosis, which is a minimal set of faulty components. A minimal diagnosis stands in contradistinction to the trivial solution, in which all components are deemed to be faulty, and which, therefore, always explains all inconsistencies.
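
    The notion of minimal diagnosis above can be made concrete: given the conflict sets implied by the observations, the minimal diagnoses are exactly the minimal hitting sets of those conflicts. A brute-force sketch follows, adequate for small systems; the component names and conflicts are illustrative, and practical engines use far more efficient search than this enumeration.

        from itertools import combinations

        def minimal_diagnoses(conflicts, components):
            """Enumerate minimal hitting sets of the conflict sets: every minimal
            set of components whose abnormality would explain all inconsistencies
            (a brute-force sketch of Reiter-style diagnosis for small systems)."""
            found = []
            for size in range(1, len(components) + 1):
                for cand in combinations(components, size):
                    s = set(cand)
                    if all(s & c for c in conflicts) and \
                       not any(set(f) <= s for f in found):
                        found.append(cand)       # hits every conflict, is minimal
            return found

        conflicts = [{"A1", "A2"}, {"A2", "M1"}]  # derived from observations
        print(minimal_diagnoses(conflicts, ["A1", "A2", "M1"]))
        # -> [('A2',), ('A1', 'M1')]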

  10. Evolutionary Tracks for Betelgeuse

    NASA Astrophysics Data System (ADS)

    Dolan, Michelle; Mathews, Grant; Dearborn, David

    2008-04-01

    We have constructed a series of quasi-hydrostatic evolutionary models for the M2 Iab supergiant Betelgeuse (α Orionis). Our models are constrained by the observed temperature, luminosity, surface composition, and mass loss for this star, along with recent parallax measurements and high resolution imagery which directly determine its radius. The surface convective zone obtained in our model naturally accounts for observed variations in surface luminosity and the size of detected surface bright spots. In our models these result from upflowing convective material from regions of high temperature in a surface convective zone. We also account for the observed periodic variability as the result of the effective equation of state in a simple linear pulsation model. Based upon a comparison between the accumulated mass loss in the observed circumstellar shell and the lower limit on luminosity, we suggest that this star most likely has a mass of either 16 ± 2 M⊙ if a Reimers mass loss rate applies or 20 ± 2 M⊙ for the de Jager mass loss rate. For any mass loss rate the star must be close to the tip of the first ascent up the giant branch.

  11. Hybrid Architectures for Evolutionary Computing Algorithms

    DTIC Science & Technology

    2008-01-01

    This report documents the … (abstract truncated in the source record). Figure captions recoverable from the front matter: "… operator (in Xilinx StateCad tool)"; "Partial code fragment of the declone operator, hand coded from state diagram" (Figure 6); "Histogram of number of words added by exhaustive search for the runs of Figure 12"; "Performance comparison of …" (Figure 14).

  12. Hybrid Architectures for Evolutionary Computing Algorithms

    DTIC Science & Technology

    2006-01-01

    SIMBIOSYS program we managed. We developed prototype optimization software tools in three programming environments, LabVIEW, MATLAB, and compiled C, and...Optimization Toolbox from North Carolina State University, integrated MATLAB bio-models from the Purdue University SIMBIOSYS PI, and also with a C version...That work was actually done as part of the in-house component of our involvement in the DARPA SIMBIOSYS and BIOCOMP programs. Under those

  13. Overcoming limitations of model-based diagnostic reasoning systems

    NASA Technical Reports Server (NTRS)

    Holtzblatt, Lester J.; Marcotte, Richard A.; Piazza, Richard L.

    1989-01-01

    The development of a model-based diagnostic system to overcome the limitations of model-based reasoning systems is discussed. It is noted that model-based reasoning techniques can be used to analyze the failure behavior and diagnosability of system and circuit designs as part of the design process itself. One goal of current research is the development of a diagnostic algorithm which can reason efficiently about large numbers of diagnostic suspects and can handle both combinational and sequential circuits. A second goal is to address the model-creation problem by developing an approach for using design models to construct the GMODS model in an automated fashion.

  14. Evolutionary design of corrugated horn antennas

    NASA Technical Reports Server (NTRS)

    Hoorfar, F.; Manshadi, V.; Jamnejad, A.

    2002-01-01

    An evolutionary programming (EP) algorithm is used to optimize the pattern of a corrugated circular horn subject to various constraints on return loss, antenna beamwidth, pattern circularity, and cross-polarization. The EP algorithm uses a Gaussian mutation operator. Examples of the design synthesis of a 45-section corrugated horn, with a total of 90 optimization parameters, are presented. The results show excellent and efficient optimization of the desired horn parameters.

  15. Model-based 3D SAR reconstruction

    NASA Astrophysics Data System (ADS)

    Knight, Chad; Gunther, Jake; Moon, Todd

    2014-06-01

    Three-dimensional scene reconstruction with synthetic aperture radar (SAR) is desirable for target recognition and improved scene interpretability. The vertical aperture, which is critical to reconstruct 3D SAR scenes, is almost always sparsely sampled due to practical limitations, which creates an underdetermined problem. This paper explores 3D scene reconstruction using a convex model-based approach. The approach developed is demonstrated on 3D scenes, but can be extended to SAR reconstruction of sparsely sampled signals in the spatial and/or frequency domains. The model-based approach enables knowledge-aided image formation (KAIF) by incorporating spatial, aspect, and sparsity magnitude terms into the image reconstruction. The incorporation of these terms, which are based on prior scene knowledge, demonstrates improved results compared to traditional image formation algorithms. The SAR image formation problem is formulated as a second order cone program (SOCP) and the results are demonstrated on 3D scenes using simulated data and data from the GOTCHA data collect.1 The model-based results are contrasted against traditional backprojected images.
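
    The paper formulates image formation as an SOCP; a simpler convex surrogate that still conveys the sparsity-magnitude term is l1-regularized least squares, sketched below with iterative soft thresholding on a random underdetermined system standing in for the sparse vertical aperture. The operator, sizes, and regularization weight are assumptions for illustration.

        import numpy as np

        def ista(A, y, lam=0.05, iters=300):
            """Iterative soft thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1,
            a simpler convex surrogate for the paper's SOCP that still applies a
            sparsity-magnitude prior to the scene."""
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                z = x - A.T @ (A @ x - y) / L      # gradient step
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # shrinkage
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 120))         # 40 measurements, 120 voxels:
        x_true = np.zeros(120)                     # the sparse-aperture regime
        x_true[[7, 50, 90]] = [2.0, -1.5, 1.0]     # a few dominant scatterers
        y = A @ x_true + 0.01 * rng.standard_normal(40)
        x_hat = ista(A, y)
        print(np.flatnonzero(np.abs(x_hat) > 0.5)) # typically recovers 7, 50, 90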

  16. A comparative study of corrugated horn design by evolutionary techniques

    NASA Technical Reports Server (NTRS)

    Hoorfar, A.

    2003-01-01

    Here an evolutionary programming algorithm is used to optimize the pattern of a corrugated circular horn subject to various constraints on return loss, antenna beamwidth, pattern circularity, and low cross polarization.

  17. Evolutionary Multiobjective Optimization: Principles, Procedures, and Practices

    NASA Astrophysics Data System (ADS)

    Deb, Kalyanmoy

    2010-10-01

    Multi-objective optimization problems deal with multiple conflicting objectives. In principle, they give rise to a set of trade-off Pareto-optimal solutions. Over the past decade and a half, evolutionary multi-objective optimization (EMO) has established itself as a mature field of research and application, with an extensive literature, commercial software, numerous freely downloadable codes, a dedicated biennial conference held successfully five times since 2001, special sessions and workshops at all major evolutionary computing conferences, and full-time researchers from universities and industries around the globe. This is because evolutionary algorithms (EAs) work with a population of solutions, and in solving multi-objective optimization problems EAs can be modified to find and capture multiple solutions in a single simulation run. In this article, we give a brief outline of EMO principles, discuss one specific EMO algorithm, and present some current research issues in EMO.
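
    The population-based advantage described above rests on the notion of Pareto dominance. A minimal sketch of extracting the non-dominated set from a population of objective vectors (minimization on every objective) follows; the four-point population is illustrative.

        def pareto_front(points):
            """Return the non-dominated subset (minimizing every objective):
            the trade-off set an EMO population converges toward."""
            front = []
            for p in points:
                dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                                for q in points)
                if not dominated:
                    front.append(p)
            return front

        pop = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
        print(pareto_front(pop))   # -> [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]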

  18. A tool for model based diagnostics of the AGS Booster

    SciTech Connect

    Luccio, A.

    1993-12-31

    A model-based algorithmic tool was developed to search for lattice errors by a systematic analysis of orbit data in the AGS Booster synchrotron. The algorithm employs transfer matrices calculated with MAD between points in the ring. Iterative model fitting of the data allows one to find and eventually correct magnet displacements and angles or field errors. The tool, implemented on an HP-Apollo workstation system, has proved very general, and its results lend themselves to immediate physical interpretation.
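    A hedged sketch of the underlying numerical step: if each candidate error source has a modeled orbit response (computable from the transfer matrices between ring locations), fitting the measured orbit reduces to least squares. The names below are illustrative, not taken from the paper.

        import numpy as np

        def fit_lattice_errors(R, orbit_measured):
            """Least-squares estimate of error strengths.

            R[i, j] is the modeled orbit shift at monitor i per unit of
            error source j (magnet displacement, angle, or field error),
            derived from the ring's transfer matrices."""
            errors, *_ = np.linalg.lstsq(R, orbit_measured, rcond=None)
            return errors

        # Iterative model fitting: re-derive R around the current estimate
        # and repeat until the orbit residual stops improving.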

  19. Model-based multiple patterning layout decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.

    2015-10-01

    As one of the most promising next-generation lithography technologies, multiple patterning lithography (MPL) plays an important role in the attempt to keep pace with the 10 nm technology node and beyond. As feature sizes keep shrinking, it has become impossible to print dense layouts within a single exposure. As a result, MPL techniques such as double patterning lithography (DPL) and triple patterning lithography (TPL) have been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: layout features (polygons) are represented by vertices in a graph G, and there is an edge between two vertices if and only if the distance between the two corresponding features is less than a minimum distance threshold dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which imposes a geometric distance as a minimum constraint and simply decomposes polygons within that distance into different masks. It is not desirable in practice because this criterion cannot completely capture the behavior of the optics; for example, it lacks information such as the optical source characteristics and the interactions between polygons outside the minimum distance. To remedy this deficiency, a model-based layout decomposition approach that bases the decomposition criteria on simulation results was first introduced at SPIE 2013 [1]. However, that algorithm rests on simplified assumptions about the optical simulation model, and its usage on real layouts is therefore limited. Recently, ASML [2] also proposed a model-based approach to layout decomposition that iteratively simulates the layout, which requires excessive computational resources and may lead to sub-optimal solutions; it also potentially generates too many stitches. In this
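    A minimal sketch of the rule-based baseline described above: build a conflict graph from the distance threshold and greedily k-color it. Feature coordinates and dmin are placeholders, and centroid distance is a simplification of true polygon-to-polygon spacing.

        import itertools
        import math

        def decompose(features, dmin, k=3):
            """Greedy k-coloring of the conflict graph (k=2 DPL, k=3 TPL).

            features: list of (x, y) centroids. Returns a mask index per
            feature, or None where k colors do not suffice (a stitch or
            layout change would then be needed)."""
            n = len(features)
            adj = {i: set() for i in range(n)}
            for i, j in itertools.combinations(range(n), 2):
                if math.dist(features[i], features[j]) < dmin:
                    adj[i].add(j)
                    adj[j].add(i)
            colors = [None] * n
            for v in sorted(range(n), key=lambda u: -len(adj[u])):  # high degree first
                used = {colors[u] for u in adj[v]}
                free = [c for c in range(k) if c not in used]
                colors[v] = free[0] if free else None
            return colors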

  20. Multiobjective Multifactorial Optimization in Evolutionary Multitasking.

    PubMed

    Gupta, Abhishek; Ong, Yew-Soon; Feng, Liang; Tan, Kay Chen

    2016-05-03

    In recent decades, the field of multiobjective optimization has attracted considerable interest among evolutionary computation researchers. One of the main features that makes evolutionary methods particularly appealing for multiobjective problems is the implicit parallelism offered by a population, which enables simultaneous convergence toward the entire Pareto front. While a plethora of related algorithms have been proposed to date, a common attribute among them is that they focus on efficiently solving only a single optimization problem at a time. Despite the known power of implicit parallelism, seldom has an attempt been made to multitask, i.e., to solve multiple optimization problems simultaneously. It is contended that the notion of evolutionary multitasking leads to the possibility of automated transfer of information across different optimization exercises that may share underlying similarities, thereby facilitating improved convergence characteristics. In particular, the potential for automated transfer is deemed invaluable from the standpoint of engineering design exercises where manual knowledge adaptation and reuse are routine. Accordingly, in this paper, we present a realization of the evolutionary multitasking paradigm within the domain of multiobjective optimization. The efficacy of the associated evolutionary algorithm is demonstrated on some benchmark test functions as well as on a real-world manufacturing process design problem from the composites industry.

  1. Evolutionary stability on graphs

    PubMed Central

    Ohtsuki, Hisashi; Nowak, Martin A.

    2008-01-01

    Evolutionary stability is a fundamental concept in evolutionary game theory. A strategy is called an evolutionarily stable strategy (ESS), if its monomorphic population rejects the invasion of any other mutant strategy. Recent studies have revealed that population structure can considerably affect evolutionary dynamics. Here we derive the conditions of evolutionary stability for games on graphs. We obtain analytical conditions for regular graphs of degree k > 2. Those theoretical predictions are compared with computer simulations for random regular graphs and for lattices. We study three different update rules: birth-death (BD), death-birth (DB), and imitation (IM) updating. Evolutionary stability on sparse graphs does not imply evolutionary stability in a well-mixed population, nor vice versa. We provide a geometrical interpretation of the ESS condition on graphs. PMID:18295801
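    For reference, the classical well-mixed benchmark against which the graph conditions are compared (standard background, not the paper's new result): with payoff a to A against A, b to A against B, c to B against A, and d to B against B, strategy A is an ESS against invasion by B if

        a > c \quad\text{or}\quad \bigl(a = c \ \text{and}\ b > d\bigr).

    The paper's point is that on sparse regular graphs the corresponding conditions depend on the degree k and on the update rule, so neither the graph condition nor the well-mixed condition implies the other.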

  2. Kinetic modeling based probabilistic segmentation for molecular images.

    PubMed

    Saad, Ahmed; Hamarneh, Ghassan; Möller, Torsten; Smith, Ben

    2008-01-01

    We propose a semi-supervised, kinetic modeling based segmentation technique for molecular imaging applications. It is an iterative, self-learning algorithm based on uncertainty principles, designed to alleviate low signal-to-noise ratio (SNR) and partial volume effect (PVE) problems. Synthetic fluorodeoxyglucose (FDG) and simulated Raclopride dynamic positron emission tomography (dPET) brain images with excessive noise levels are used to validate our algorithm. We show, qualitatively and quantitatively, that our algorithm outperforms state-of-the-art techniques in identifying different functional regions and recovering the kinetic parameters.

  3. Bell-Curve Based Evolutionary Strategies for Structural Optimization

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.

    2001-01-01

    Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity. However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPs) have gained a significant foothold. One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general, and ES or EP in particular, see Back, and Dasgupta and Michalewicz. We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.

  4. Bell-Curve Based Evolutionary Strategies for Structural Optimization

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.

    2000-01-01

    Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity (Reeves 1997). However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPs) have gained a significant foothold (Glover 1998). One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general, and ES or EP in particular, see Back (1996) and Dasgupta and Michalewicz (1997). We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.

  5. Adaptive evolutionary artificial neural networks for pattern classification.

    PubMed

    Oong, Tatt Hee; Isa, Nor Ashidi Mat

    2011-11-01

    This paper presents a new evolutionary approach called the hybrid evolutionary artificial neural network (HEANN) for simultaneously evolving an artificial neural network's (ANN's) topology and weights. Evolutionary algorithms (EAs) have strong global search capabilities and are likely to locate the most promising region of the search space, but they are less efficient at fine-tuning the search locally. HEANN emphasizes balancing global search and local search in the evolutionary process by adapting the mutation probability and the step size of the weight perturbation. This distinguishes it from most previous studies, which incorporate an EA to search for the network topology and gradient learning for weight updating. Four benchmark functions were used to test the evolutionary framework of HEANN. In addition, HEANN was tested on seven classification benchmark problems from the UCI machine learning repository. Experimental results show the superior performance of HEANN in fine-tuning the network complexity within a small number of generations while preserving the generalization capability, compared with other algorithms.
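    A hedged sketch of the general mechanism of adapting mutation strength during evolution; this is the classical 1/5-success rule of evolution strategies, shown purely as an illustration rather than HEANN's actual update equations, which the abstract does not reproduce.

        def adapt_sigma(sigma, success_rate, target=0.2, factor=1.22):
            """Widen the mutation step when mutations often succeed
            (global exploration); narrow it when they rarely do
            (local fine-tuning)."""
            if success_rate > target:
                return sigma * factor
            if success_rate < target:
                return sigma / factor
            return sigma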

  6. Remembering the evolutionary Freud.

    PubMed

    Young, Allan

    2006-03-01

    Throughout his career as a writer, Sigmund Freud maintained an interest in the evolutionary origins of the human mind and its neurotic and psychotic disorders. In common with many writers then and now, he believed that the evolutionary past is conserved in the mind and the brain. Today the "evolutionary Freud" is nearly forgotten. Even among Freudians, he is regarded as a red herring, relevant only to the extent that he diverts attention from the enduring achievements of the authentic Freud. There are three ways to explain these attitudes. First, the evolutionary Freud's key work is the "Overview of the Transference Neuroses" (1915). But it was published at an inopportune moment, forty years after the author's death, during the so-called "Freud wars." Second, Freud eventually lost interest in the "Overview" and the prospect of a comprehensive evolutionary theory of psychopathology. The publication of The Ego and the Id (1923), introducing Freud's structural theory of the psyche, marked the point of no return. Finally, Freud's evolutionary theory is simply not credible. It is based on just-so stories and a thoroughly discredited evolutionary mechanism, Lamarckian use-inheritance. Explanations one and two are probably correct but also uninteresting. Explanation number three assumes that there is a fundamental difference between Freud's evolutionary narratives (not credible) and the evolutionary accounts of psychopathology that currently circulate in psychiatry and mainstream journals (credible). The assumption is mistaken but worth investigating.

  7. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    NASA Astrophysics Data System (ADS)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms (EAs) are among the algorithms inspired by nature, and within little more than a decade hundreds of papers have reported their successful application. This paper reviews the Selfish Gene Algorithm (SFGA), one of the more recent EAs, inspired by the selfish gene theory, an interpretation of Darwinian ideas proposed by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to give an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history of the algorithm and the steps involved in it are discussed, and its different applications, together with an analysis of these applications, are evaluated.
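    A hedged sketch of the core SFGA idea as commonly described: the population is a virtual gene pool of allele frequencies rather than a set of explicit individuals, and tournaments between two sampled individuals shift those frequencies. The parameter values and reinforcement rule below are illustrative.

        import numpy as np

        def sfga(fitness, n_genes, steps=5000, rate=0.02, seed=0):
            """Selfish Gene Algorithm sketch for binary genomes.

            p[i] is the pool frequency of allele 1 at locus i. The winner
            of each tournament has its alleles reinforced in the pool."""
            rng = np.random.default_rng(seed)
            p = np.full(n_genes, 0.5)
            for _ in range(steps):
                a = (rng.random(n_genes) < p).astype(float)
                b = (rng.random(n_genes) < p).astype(float)
                winner = a if fitness(a) >= fitness(b) else b
                p += rate * (winner - p)        # pull the pool toward the winner
                p = np.clip(p, 0.05, 0.95)      # keep every allele reachable
            return (p > 0.5).astype(int)

        # Toy usage: OneMax (maximize the number of ones).
        best = sfga(lambda g: g.sum(), n_genes=32)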

  8. Model-Based Reasoning in Humans Becomes Automatic with Training.

    PubMed

    Economides, Marcos; Kurth-Nelson, Zeb; Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J

    2015-09-01

    Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load--a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.
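    A minimal sketch of the algorithmic distinction at issue, on a generic MDP rather than the authors' two-step task: model-free learning caches action values with cheap incremental updates, while model-based learning plans through a learned world model.

        import numpy as np

        S, A, gamma = 4, 2, 0.9
        rng = np.random.default_rng(0)
        T = rng.dirichlet(np.ones(S), size=(S, A))   # stand-in learned transition model
        R = rng.random((S, A))                       # stand-in learned rewards

        # Model-free: one temporal-difference update per experienced sample.
        def td_update(Q, s, a, r, s_next, alpha=0.1):
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

        # Model-based: deliberative planning by value iteration on (T, R);
        # flexible when the world changes, but computationally costly.
        def plan(T, R, iters=100):
            Q = np.zeros((S, A))
            for _ in range(iters):
                Q = R + gamma * T @ Q.max(axis=1)
            return Q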

  9. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  10. Expansion of biological pathways based on evolutionary inference

    PubMed Central

    Li, Yang; Calvo, Sarah E.; Gutman, Roee

    2014-01-01

    Availability of diverse genomes makes it possible to predict gene function based on shared evolutionary history. This approach can be challenging, however, for pathways whose components do not exhibit a shared history, but rather consist of distinct "evolutionary modules." We introduce a computational algorithm, CLIME (clustering by inferred models of evolution), which takes as input a eukaryotic species tree, a homology matrix, and a pathway (gene set) of interest. CLIME partitions the gene set into disjoint evolutionary modules, simultaneously learning the number of modules and a tree-based evolutionary history that defines each module. CLIME then expands each module by scanning the genome for new components that likely arose under the inferred evolutionary model. Application of CLIME to ∼1000 annotated human pathways and to the organelles and proteomes of yeast, red algae, and malaria reveals unanticipated evolutionary modularity and novel, co-evolving components. CLIME is freely available and should become increasingly powerful with the growing wealth of eukaryotic genomes. PMID:24995987

  11. Control of quantum phenomena through evolutionary engineering: let the system do the thinking

    NASA Astrophysics Data System (ADS)

    Rabitz, Herschel H.; Bosacchi, Bruno

    2002-12-01

    We discuss a successful application of evolutionary algorithms and femtosecond pulse-shaping technology to the coherent control of quantum phenomena. After a brief review of the field of quantum control, we show how evolutionary algorithms provide an effective and, so far, the only general solution to the problem of managing the complex interplay of interference effects which characterize quantum phenomena. A representative list of experimental results is presented, and some directions for future developments are discussed. The success of evolutionary algorithms in quantum control is seen as a significant step in the evolution of computational intelligence, from evolutionary algorithms, to evolutionary programming, to evolutionary engineering, whereby a hardware system organizes itself and evolves on line to achieve a desired result.

  12. Polymorphic Evolutionary Games.

    PubMed

    Fishman, Michael A

    2016-06-07

    In this paper, I present an analytical framework for polymorphic evolutionary games suitable for explicitly modeling evolutionary processes in diploid populations with sexual reproduction. The principal aspect of the proposed approach is adding diploid genetics cum sexual recombination to a traditional evolutionary game, and switching from phenotypes to haplotypes as the new game's pure strategies. The payoff of each pure strategy is derived by summing the payoffs of all the phenotypes capable of producing gametes containing that particular haplotype, weighted by the pertinent probabilities. The resulting game is structurally identical to the familiar evolutionary games with non-linear pure strategy payoffs (Hofbauer and Sigmund, 1998, Cambridge University Press), and can be analyzed in terms of an established analytical framework for such games. These results can then be translated into the terms of genotypic, and hence phenotypic, evolutionary stability pertinent to the original game.
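    In symbols (an illustrative paraphrase of the construction just described, with hypothetical notation): writing \pi(G) for the payoff of phenotype G and \Pr(h \mid G) for the probability that a gamete produced by G carries haplotype h, the pure-strategy payoff of h is

        \pi(h) \;=\; \sum_{G} \Pr(h \mid G)\, \pi(G).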

  13. Evolutionary Fingerprinting of Genes

    PubMed Central

    Kosakovsky Pond, Sergei L.; Scheffler, Konrad; Gravenor, Michael B.; Poon, Art F.Y.; Frost, Simon D.W.

    2010-01-01

    Over time, natural selection molds every gene into a unique mosaic of sites evolving rapidly or resisting change—an “evolutionary fingerprint” of the gene. Aspects of this evolutionary fingerprint, such as the site-specific ratio of nonsynonymous to synonymous substitution rates (dN/dS), are commonly used to identify genetic features of potential biological interest; however, no framework exists for comparing evolutionary fingerprints between genes. We hypothesize that protein-coding genes with similar protein structure and/or function tend to have similar evolutionary fingerprints and that comparing evolutionary fingerprints can be useful for discovering similarities between genes in a way that is analogous to, but independent of, discovery of similarity via sequence-based comparison tools such as Blast. To test this hypothesis, we develop a novel model of coding sequence evolution that uses a general bivariate discrete parameterization of the evolutionary rates. We show that this approach provides a better fit to the data using a smaller number of parameters than existing models. Next, we use the model to represent evolutionary fingerprints as probability distributions and present a methodology for comparing these distributions in a way that is robust against variations in data set size and divergence. Finally, using sequences of three rapidly evolving RNA viruses (HIV-1, hepatitis C virus, and influenza A virus), we demonstrate that genes within the same functional group tend to have similar evolutionary fingerprints. Our framework provides a sound statistical foundation for efficient inference and comparison of evolutionary rate patterns in arbitrary collections of gene alignments, clustering homologous and nonhomologous genes, and investigation of biological and functional correlates of evolutionary rates. PMID:19864470

  14. Evolutionary Optimization of a Quadrifilar Helical Antenna

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Kraus, William F.; Linden, Derek S.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Automated antenna synthesis via evolutionary design has recently garnered much attention in the research literature. Evolutionary algorithms show promise because, among search algorithms, they are able to effectively search large, unknown design spaces. NASA's Mars Odyssey spacecraft is due to reach final Martian orbit insertion in January, 2002. Onboard the spacecraft is a quadrifilar helical antenna that provides telecommunications in the UHF band with landed assets, such as robotic rovers. Each helix is driven by the same signal which is phase-delayed in 90 deg increments. A small ground plane is provided at the base. It is designed to operate in the frequency band of 400-438 MHz. Based on encouraging previous results in automated antenna design using evolutionary search, we wanted to see whether such techniques could improve upon Mars Odyssey antenna design. Specifically, a co-evolutionary genetic algorithm is applied to optimize the gain and size of the quadrifilar helical antenna. The optimization was performed in-situ in the presence of a neighboring spacecraft structure. On the spacecraft, a large aluminum fuel tank is adjacent to the antenna. Since this fuel tank can dramatically affect the antenna's performance, we leave it to the evolutionary process to see if it can exploit the fuel tank's properties advantageously. Optimizing in the presence of surrounding structures would be quite difficult for human antenna designers, and thus the actual antenna was designed for free space (with a small ground plane). In fact, when flying on the spacecraft, surrounding structures that are moveable (e.g., solar panels) may be moved during the mission in order to improve the antenna's performance.

  15. Model-Based Safety Analysis

    NASA Technical Reports Server (NTRS)

    Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.

    2006-01-01

    System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.

  16. Advanced Targeting Cost Function Design for Evolutionary Optimization of Control of Logistic Equation

    NASA Astrophysics Data System (ADS)

    Senkerik, Roman; Zelinka, Ivan; Davendra, Donald; Oplatkova, Zuzana

    2010-06-01

    This research deals with optimization of the control of chaos by means of evolutionary algorithms. The work aims to explain how to use evolutionary algorithms (EAs) and how to properly define an advanced targeting cost function (CF) that secures very fast and precise stabilization of the desired state for any initial conditions. The one-dimensional logistic equation was used as a model deterministic chaotic system. The evolutionary algorithm Self-Organizing Migrating Algorithm (SOMA) was used in four versions. For each version, repeated simulations were conducted to outline the effectiveness and robustness of the method and the targeting CF.
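    As a hedged illustration of a simple targeting CF of this general kind (the paper's advanced designs add further terms, e.g. rewarding fast arrival at the target): penalize the distance of the controlled trajectory x_t from the desired state x^* over a stabilization window,

        CF \;=\; \sum_{t=\tau}^{T} \bigl| x_t - x^{*} \bigr|,

    so that a CF near zero certifies fast and precise stabilization from the given initial condition.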

  17. Eco-evolutionary feedbacks, adaptive dynamics and evolutionary rescue theory.

    PubMed

    Ferriere, Regis; Legendre, Stéphane

    2013-01-19

    Adaptive dynamics theory has been devised to account for feedbacks between ecological and evolutionary processes. Doing so opens new dimensions to and raises new challenges about evolutionary rescue. Adaptive dynamics theory predicts that successive trait substitutions driven by eco-evolutionary feedbacks can gradually erode population size or growth rate, thus potentially raising the extinction risk. Even a single trait substitution can suffice to degrade population viability drastically at once and cause 'evolutionary suicide'. In a changing environment, a population may track a viable evolutionary attractor that leads to evolutionary suicide, a phenomenon called 'evolutionary trapping'. Evolutionary trapping and suicide are commonly observed in adaptive dynamics models in which the smooth variation of traits causes catastrophic changes in ecological state. In the face of trapping and suicide, evolutionary rescue requires that the population overcome evolutionary threats generated by the adaptive process itself. Evolutionary repellors play an important role in determining how variation in environmental conditions correlates with the occurrence of evolutionary trapping and suicide, and what evolutionary pathways rescue may follow. In contrast with standard predictions of evolutionary rescue theory, low genetic variation may attenuate the threat of evolutionary suicide and small population sizes may facilitate escape from evolutionary traps.

  18. Unifying evolutionary and network dynamics

    NASA Astrophysics Data System (ADS)

    Swarup, Samarth; Gasser, Les

    2007-06-01

    Many important real-world networks manifest small-world properties such as scale-free degree distributions, small diameters, and clustering. The most common model of growth for these networks is preferential attachment, where nodes acquire new links with probability proportional to the number of links they already have. We show that preferential attachment is a special case of the process of molecular evolution. We present a single-parameter model of network growth that unifies varieties of preferential attachment with the quasispecies equation (which models molecular evolution), and also with the Erdős-Rényi random graph model. We suggest some properties of evolutionary models that might be applied to the study of networks. We also derive the form of the degree distribution resulting from our algorithm, and we show through simulations that the process also models aspects of network growth. The unification allows mathematical machinery developed for evolutionary dynamics to be applied in the study of network dynamics, and vice versa.
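    A minimal sketch of the preferential-attachment special case (standard background; the unified single-parameter model is not reproduced here):

        import random

        def preferential_attachment(n, m=2, seed=0):
            """Grow a graph in which each new node links to m existing
            nodes chosen with probability proportional to their degree."""
            random.seed(seed)
            edges = [(0, 1)]
            stubs = [0, 1]                  # node i appears deg(i) times
            for v in range(2, n):
                targets = set()
                while len(targets) < min(m, v):
                    targets.add(random.choice(stubs))   # degree-biased pick
                for t in targets:
                    edges.append((v, t))
                    stubs += [v, t]
            return edges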

  19. Evolutionary behavioral genetics

    PubMed Central

    Zietsch, Brendan P.; de Candia, Teresa R; Keller, Matthew C.

    2014-01-01

    We describe the scientific enterprise at the intersection of evolutionary psychology and behavioral genetics—a field that could be termed Evolutionary Behavioral Genetics—and how modern genetic data is revolutionizing our ability to test questions in this field. We first explain how genetically informative data and designs can be used to investigate questions about the evolution of human behavior, and describe some of the findings arising from these approaches. Second, we explain how evolutionary theory can be applied to the investigation of behavioral genetic variation. We give examples of how new data and methods provide insight into the genetic architecture of behavioral variation and what this tells us about the evolutionary processes that acted on the underlying causal genetic variants. PMID:25587556

  20. Evolutionary Mechanisms for Loneliness

    PubMed Central

    Cacioppo, John T.; Cacioppo, Stephanie; Boomsma, Dorret I.

    2013-01-01

    Robert Weiss (1973) conceptualized loneliness as perceived social isolation, which he described as a gnawing, chronic disease without redeeming features. On the scale of everyday life, it is understandable how something as personally aversive as loneliness could be regarded as a blight on human existence. However, evolutionary time and evolutionary forces operate at such a different scale of organization than we experience in everyday life that personal experience is not sufficient to understand the role of loneliness in human existence. Research over the past decade suggests a very different view of loneliness than suggested by personal experience, one in which loneliness serves a variety of adaptive functions in specific habitats. We review evidence on the heritability of loneliness and outline an evolutionary theory of loneliness, with an emphasis on its potential adaptive value in an evolutionary timescale. PMID:24067110

  1. Evolutionary mechanisms for loneliness.

    PubMed

    Cacioppo, John T; Cacioppo, Stephanie; Boomsma, Dorret I

    2014-01-01

    Robert Weiss (1973) conceptualised loneliness as perceived social isolation, which he described as a gnawing, chronic disease without redeeming features. On the scale of everyday life, it is understandable how something as personally aversive as loneliness could be regarded as a blight on human existence. However, evolutionary time and evolutionary forces operate at such a different scale of organisation than we experience in everyday life that personal experience is not sufficient to understand the role of loneliness in human existence. Research over the past decade suggests a very different view of loneliness than suggested by personal experience, one in which loneliness serves a variety of adaptive functions in specific habitats. We review evidence on the heritability of loneliness and outline an evolutionary theory of loneliness, with an emphasis on its potential adaptive value in an evolutionary timescale.

  2. Automated Hardware Design via Evolutionary Search

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Colombano, Silvano P.

    2000-01-01

    The goal of this research is to investigate the application of evolutionary search to the process of automated engineering design. Evolutionary search techniques involve the simulation of Darwinian mechanisms by computer algorithms. In recent years, such techniques have attracted much attention because they are able to tackle a wide variety of difficult problems and frequently produce acceptable solutions. The results obtained are usually functional, often surprising, and typically "messy" because the algorithms are told to concentrate on the overriding objective and not elegance or simplicity. Automated design techniques offer several advantages. First, faster design cycles translate into time and, hence, cost savings. Second, automated design techniques can be made to scale well and hence better deal with increasing amounts of design complexity. Third, design quality can increase because design properties can be specified a priori. For example, size and weight specifications of a device, smaller and lighter than the best known design, might be optimized by the automated design technique. The domain of electronic circuit design is an advantageous platform in which to study automated design techniques because it is a rich design space that is well understood, permitting human-created designs to be compared to machine-generated designs. The approach developed for circuit design was to automatically produce high-level integrated electronic circuit designs whose properties permit physical implementation in silicon. This process entailed designing an effective evolutionary algorithm and solving a difficult multiobjective optimization problem. FY 99 saw many accomplishments in this effort.

  3. Learning Intelligent Genetic Algorithms Using Japanese Nonograms

    ERIC Educational Resources Information Center

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen

    2012-01-01

    An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…

  4. Evolutionary processes and mother-child attachment in intentional change.

    PubMed

    Ho, S Shaun; Torres-Garcia, Adrianna; Swain, James E

    2014-08-01

    Behavioral change may occur through evolutionary processes such as running stochastic evolutionary algorithms, with a fitness function to determine a winning solution from many. A science of intentional change will therefore require identification of fitness functions - causal mechanisms of adaptation - that can be acquired only with analytical approaches. Fitness functions may be subject to early-life experiences with parents, which influence some of the very same brain circuits that may mediate behavioral change through interventions.

  5. Bio-inspired algorithms applied to molecular docking simulations.

    PubMed

    Heberlé, G; de Azevedo, W F

    2011-01-01

    Nature as a source of inspiration has been shown to have a great beneficial impact on the development of new computational methodologies. In this scenario, analyses of the interactions between a protein target and a ligand can be simulated by biologically inspired algorithms (BIAs). These algorithms mimic biological systems to create new paradigms for computation, such as neural networks, evolutionary computing, and swarm intelligence. This review provides a description of the main concepts behind BIAs applied to molecular docking simulations. Special attention is devoted to evolutionary algorithms, guided-directed evolutionary algorithms, and Lamarckian genetic algorithms. Recent applications of these methodologies to protein targets identified in the Mycobacterium tuberculosis genome are described.

  6. Proteomics in evolutionary ecology.

    PubMed

    Baer, B; Millar, A H

    2016-03-01

    Evolutionary ecologists are traditionally gene-focused, as genes propagate phenotypic traits across generations and mutations and recombination in the DNA generate genetic diversity required for evolutionary processes. As a consequence, the inheritance of changed DNA provides a molecular explanation for the functional changes associated with natural selection. A direct focus on proteins on the other hand, the actual molecular agents responsible for the expression of a phenotypic trait, receives far less interest from ecologists and evolutionary biologists. This is partially due to the central dogma of molecular biology that appears to define proteins as the 'dead-end of molecular information flow' as well as technical limitations in identifying and studying proteins and their diversity in the field and in many of the more exotic genera often favored in ecological studies. Here we provide an overview of a newly forming field of research that we refer to as 'Evolutionary Proteomics'. We point out that the origins of cellular function are related to the properties of polypeptide and RNA and their interactions with the environment, rather than DNA descent, and that the critical role of horizontal gene transfer in evolution is more about coopting new proteins to impact cellular processes than it is about modifying gene function. Furthermore, post-transcriptional and post-translational processes generate a remarkable diversity of mature proteins from a single gene, and the properties of these mature proteins can also influence inheritance through genetic and perhaps epigenetic mechanisms. The influence of post-transcriptional diversification on evolutionary processes could provide a novel mechanistic underpinning for elements of rapid, directed evolutionary changes and adaptations as observed for a variety of evolutionary processes. Modern state-of the art technologies based on mass spectrometry are now available to identify and quantify peptides, proteins, protein

  7. Applying Evolutionary Anthropology

    PubMed Central

    Gibson, Mhairi A; Lawson, David W

    2015-01-01

    Evolutionary anthropology provides a powerful theoretical framework for understanding how both current environments and legacies of past selection shape human behavioral diversity. This integrative and pluralistic field, combining ethnographic, demographic, and sociological methods, has provided new insights into the ultimate forces and proximate pathways that guide human adaptation and variation. Here, we present the argument that evolutionary anthropological studies of human behavior also hold great, largely untapped, potential to guide the design, implementation, and evaluation of social and public health policy. Focusing on the key anthropological themes of reproduction, production, and distribution we highlight classic and recent research demonstrating the value of an evolutionary perspective to improving human well-being. The challenge now comes in transforming relevance into action and, for that, evolutionary behavioral anthropologists will need to forge deeper connections with other applied social scientists and policy-makers. We are hopeful that these developments are underway and that, with the current tide of enthusiasm for evidence-based approaches to policy, evolutionary anthropology is well positioned to make a strong contribution. PMID:25684561

  8. Paleoanthropology and evolutionary theory.

    PubMed

    Tattersall, Ian

    2012-01-01

    Paleoanthropologists of the first half of the twentieth century were little concerned either with evolutionary theory or with the technicalities and broader implications of zoological nomenclature. In consequence, the paleoanthropological literature of the period consisted largely of a series of descriptions accompanied by authoritative pronouncements, together with a huge excess of hominid genera and species. Given the intellectual flimsiness of the resulting paleoanthropological framework, it is hardly surprising that in 1950 the ornithologist Ernst Mayr met little resistance when he urged the new postwar generation of paleoanthropologists to accept not only the elegant reductionism of the Evolutionary Synthesis but a vast oversimplification of hominid phylogenetic history and nomenclature. Indeed, the impact of Mayr's onslaught was so great that even when developments in evolutionary biology during the last quarter of the century brought other paleontologists to the realization that much more has been involved in evolutionary histories than the simple action of natural selection within gradually transforming lineages, paleoanthropologists proved highly reluctant to follow. Even today, paleoanthropologists are struggling to reconcile an intuitive realization that the burgeoning hominid fossil record harbors a substantial diversity of species (bringing hominid evolutionary patterns into line with that of other successful mammalian families), with the desire to cram a huge variety of morphologies into an unrealistically minimalist systematic framework. As long as this theoretical ambivalence persists, our perception of events in hominid phylogeny will continue to be distorted.

  9. Applying evolutionary anthropology.

    PubMed

    Gibson, Mhairi A; Lawson, David W

    2015-01-01

    Evolutionary anthropology provides a powerful theoretical framework for understanding how both current environments and legacies of past selection shape human behavioral diversity. This integrative and pluralistic field, combining ethnographic, demographic, and sociological methods, has provided new insights into the ultimate forces and proximate pathways that guide human adaptation and variation. Here, we present the argument that evolutionary anthropological studies of human behavior also hold great, largely untapped, potential to guide the design, implementation, and evaluation of social and public health policy. Focusing on the key anthropological themes of reproduction, production, and distribution we highlight classic and recent research demonstrating the value of an evolutionary perspective to improving human well-being. The challenge now comes in transforming relevance into action and, for that, evolutionary behavioral anthropologists will need to forge deeper connections with other applied social scientists and policy-makers. We are hopeful that these developments are underway and that, with the current tide of enthusiasm for evidence-based approaches to policy, evolutionary anthropology is well positioned to make a strong contribution.

  10. Model Based Autonomy for Robust Mars Operations

    NASA Technical Reports Server (NTRS)

    Kurien, James A.; Nayak, P. Pandurang; Williams, Brian C.; Lau, Sonie (Technical Monitor)

    1998-01-01

    Space missions have historically relied upon a large ground staff, numbering in the hundreds for complex missions, to maintain routine operations. When an anomaly occurs, this small army of engineers attempts to identify and work around the problem. A piloted Mars mission, with its multiyear duration, cost pressures, half-hour communication delays and two-week blackouts, cannot be closely controlled by a battalion of engineers on Earth. Flight crew involvement in routine system operations must also be minimized to maximize science return. It may also be unrealistic to require that the crew have the expertise in each mission subsystem needed to diagnose a system failure and effect a timely repair, as engineers did for Apollo 13. Enter model-based autonomy, which allows complex systems to autonomously maintain operation despite failures or anomalous conditions, contributing to safe, robust, and minimally supervised operation of spacecraft, life support, In Situ Resource Utilization (ISRU) and power systems. Autonomous reasoning is central to the approach. A reasoning algorithm uses a logical or mathematical model of a system to infer how to operate the system, diagnose failures and generate appropriate behavior to repair or reconfigure the system in response. The 'plug and play' nature of the models enables low-cost development of autonomy for multiple platforms. Declarative, reusable models capture relevant aspects of the behavior of simple devices (e.g. valves or thrusters). Reasoning algorithms combine device models to create a model of the system-wide interactions and behavior of a complex, unique artifact such as a spacecraft. Rather than requiring engineers to anticipate all possible interactions and failures at design time or perform analysis during the mission, the reasoning engine generates the appropriate response to the current situation, taking into account its system-wide knowledge, the current state, and even sensor failures or unexpected behavior.

  11. A framework for evolutionary systems biology

    PubMed Central

    Loewe, Laurence

    2009-01-01

    Background Many difficult problems in evolutionary genomics are related to mutations that have weak effects on fitness, as the consequences of mutations with large effects are often simple to predict. Current systems biology has accumulated much data on mutations with large effects and can predict the properties of knockout mutants in some systems. However experimental methods are too insensitive to observe small effects. Results Here I propose a novel framework that brings together evolutionary theory and current systems biology approaches in order to quantify small effects of mutations and their epistatic interactions in silico. Central to this approach is the definition of fitness correlates that can be computed in some current systems biology models employing the rigorous algorithms that are at the core of much work in computational systems biology. The framework exploits synergies between the realism of such models and the need to understand real systems in evolutionary theory. This framework can address many longstanding topics in evolutionary biology by defining various 'levels' of the adaptive landscape. Addressed topics include the distribution of mutational effects on fitness, as well as the nature of advantageous mutations, epistasis and robustness. Combining corresponding parameter estimates with population genetics models raises the possibility of testing evolutionary hypotheses at a new level of realism. Conclusion EvoSysBio is expected to lead to a more detailed understanding of the fundamental principles of life by combining knowledge about well-known biological systems from several disciplines. This will benefit both evolutionary theory and current systems biology. Understanding robustness by analysing distributions of mutational effects and epistasis is pivotal for drug design, cancer research, responsible genetic engineering in synthetic biology and many other practical applications. PMID:19239699

  12. Ecological and evolutionary traps

    USGS Publications Warehouse

    Schlaepfer, Martin A.; Runge, M.C.; Sherman, P.W.

    2002-01-01

    Organisms often rely on environmental cues to make behavioral and life-history decisions. However, in environments that have been altered suddenly by humans, formerly reliable cues might no longer be associated with adaptive outcomes. In such cases, organisms can become 'trapped' by their evolutionary responses to the cues and experience reduced survival or reproduction. Ecological traps occur when organisms make poor habitat choices based on cues that correlated formerly with habitat quality. Ecological traps are part of a broader phenomenon, evolutionary traps, involving a dissociation between cues that organisms use to make any behavioral or life-history decision and outcomes normally associated with that decision. A trap can lead to extinction if a population falls below a critical size threshold before adaptation to the novel environment occurs. Conservation and management protocols must be designed in light of, rather than in spite of, the behavioral mechanisms and evolutionary history of populations and species to avoid 'trapping' them.

  13. Human nutrition: evolutionary perspectives.

    PubMed

    Barnicot, N A

    2005-01-01

    In recent decades, much new evidence relating to the ape forerunners of modern humans has come to hand and diet appears to be an important factor. At some stage, there must have been a transition from a largely vegetarian ape diet to a modern human hunting economy providing significant amounts of meat. On an even longer evolutionary time scale the change was more complex. The mechanisms of evolutionary change are now better understood than they were in Darwin's time, thanks largely to great advances in genetics, both experimental and theoretical. It is virtually certain that diet, as a major component of the human environment, must have exerted evolutionary effects, but researchers still have little good evidence.

  14. Evolutionary synthetic biology.

    PubMed

    Peisajovich, Sergio G

    2012-06-15

    Signaling networks process vast amounts of environmental information to generate specific cellular responses. As cellular environments change, signaling networks adapt accordingly. Here, I will discuss how the integration of synthetic biology and directed evolution approaches is shedding light on the molecular mechanisms that guide the evolution of signaling networks. In particular, I will review studies that demonstrate how different types of mutations, from the replacement of individual amino acids to the shuffling of modular domains, lead to markedly different evolutionary trajectories and consequently to diverse network rewiring. Moreover, I will argue that intrinsic evolutionary properties of signaling proteins, such as the robustness of wild type functions, the promiscuous nature of evolutionary intermediates, and the modular decoupling between binding and catalysis, play important roles in the evolution of signaling networks. Finally, I will argue that rapid advances in our ability to synthesize DNA will radically alter how we study signaling network evolution at the genome-wide level.

  15. Evolutionary Debunking Arguments.

    PubMed

    Kahane, Guy

    2011-03-01

    Evolutionary debunking arguments (EDAs) are arguments that appeal to the evolutionary origins of evaluative beliefs to undermine their justification. This paper aims to clarify the premises and presuppositions of EDAs-a form of argument that is increasingly put to use in normative ethics. I argue that such arguments face serious obstacles. It is often overlooked, for example, that they presuppose the truth of metaethical objectivism. More importantly, even if objectivism is assumed, the use of EDAs in normative ethics is incompatible with a parallel and more sweeping global evolutionary debunking argument that has been discussed in recent metaethics. After examining several ways of responding to this global debunking argument, I end by arguing that even if we could resist it, this would still not rehabilitate the current targeted use of EDAs in normative ethics given that, if EDAs work at all, they will in any case lead to a truly radical revision of our evaluative outlook.

  16. Evolutionary Debunking Arguments

    PubMed Central

    Kahane, Guy

    2011-01-01

    Evolutionary debunking arguments (EDAs) are arguments that appeal to the evolutionary origins of evaluative beliefs to undermine their justification. This paper aims to clarify the premises and presuppositions of EDAs—a form of argument that is increasingly put to use in normative ethics. I argue that such arguments face serious obstacles. It is often overlooked, for example, that they presuppose the truth of metaethical objectivism. More importantly, even if objectivism is assumed, the use of EDAs in normative ethics is incompatible with a parallel and more sweeping global evolutionary debunking argument that has been discussed in recent metaethics. After examining several ways of responding to this global debunking argument, I end by arguing that even if we could resist it, this would still not rehabilitate the current targeted use of EDAs in normative ethics given that, if EDAs work at all, they will in any case lead to a truly radical revision of our evaluative outlook. PMID:21949447

  17. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Such methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work is instead model-based: it provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concept of analytical redundancy relations (ARRs).
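    A minimal sketch of the ARR idea under stated assumptions (the relation, tolerance, and sensor names are invented for illustration, not taken from the paper): each ARR is a residual over a subset of sensors that is near zero when those sensors are healthy, and the logical diagnoses are the minimal sets of sensors that "hit" every violated relation.

        import itertools

        # Illustrative system: sensor s3 should read the sum of s1 and s2.
        ARRS = [
            ({"s1", "s2", "s3"}, lambda s: s["s3"] - (s["s1"] + s["s2"])),
        ]

        def diagnoses(readings, tol=1e-3):
            """Minimal-cardinality sensor sets consistent with the readings."""
            violated = [support for support, residual in ARRS
                        if abs(residual(readings)) > tol]
            if not violated:
                return [set()]                  # all observations consistent
            sensors = sorted(set().union(*violated))
            for k in range(1, len(sensors) + 1):
                hits = [set(c) for c in itertools.combinations(sensors, k)
                        if all(set(c) & v for v in violated)]
                if hits:
                    return hits                 # minimal hitting sets
            return []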

  18. 3-D model-based vehicle tracking.

    PubMed

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
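    A hedged sketch of the geometric primitive behind such a metric; the paper's full similarity measure aggregates this distance over the projected model segments, which is not reproduced here.

        import math

        def point_segment_distance(p, a, b):
            """Euclidean distance from point p to line segment ab."""
            (px, py), (ax, ay), (bx, by) = p, a, b
            dx, dy = bx - ax, by - ay
            if dx == 0 and dy == 0:
                return math.hypot(px - ax, py - ay)   # degenerate segment
            # Parameter of the projection of p onto the line, clamped to [0, 1].
            t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
            t = max(0.0, min(1.0, t))
            return math.hypot(px - (ax + t * dx), py - (ay + t * dy))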

  19. Investigating human evolutionary history

    PubMed Central

    WOOD, BERNARD

    2000-01-01

    We rely on fossils for the interpretation of more than 95% of our evolutionary history. Fieldwork resulting in the recovery of fresh fossil evidence is an important component of reconstructing human evolutionary history, but advances can also be made by extracting additional evidence from the existing fossil record and by improving the methods used to interpret the fossil evidence. This review shows how information from imaging and dental microstructure has contributed to improving our understanding of the hominin fossil record. It also surveys recent advances in the use of the fossil record for phylogenetic inference. PMID:10999269

  20. Evolutionary Design in Art

    NASA Astrophysics Data System (ADS)

    McCormack, Jon

    Evolution is one of the most interesting and creative processes we currently understand, so it should come as no surprise that artists and designers are embracing the use of evolution in problems of artistic creativity. The material in this section illustrates the diversity of approaches being used by artists and designers in relation to evolution at the boundary of art and science. While conceptualising human creativity as an evolutionary process in itself may be controversial, what is clear is that evolutionary processes can be used to complement, even enhance human creativity, as the chapters in this section aptly demonstrate.

  1. The fastest evolutionary trajectory

    PubMed Central

    Traulsen, Arne; Iwasa, Yoh; Nowak, Martin A.

    2008-01-01

    Given two mutants, A and B, separated by n mutational steps, what is the evolutionary trajectory which allows a homogeneous population of A to reach B in the shortest time? We show that the optimum evolutionary trajectory (fitness landscape) has the property that the relative fitness increase between any two consecutive steps is constant. Hence, the optimum fitness landscape between A and B is given by an exponential function. Our result is precise for small mutation rates and excluding back mutations. We discuss deviations for large mutation rates and including back mutations. For very large mutation rates, the optimum fitness landscape is flat and has a single peak at type B. PMID:17900629
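    In formulas, restating the abstract's result: with a constant relative fitness increase between consecutive steps, the optimal landscape interpolates exponentially between the endpoint fitnesses f_A and f_B,

        \frac{f_{i+1}}{f_i} \;=\; \left(\frac{f_B}{f_A}\right)^{1/n},
        \qquad
        f_i \;=\; f_A \left(\frac{f_B}{f_A}\right)^{i/n}, \quad i = 0, \dots, n.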

  2. Model-based fault diagnosis in continuous dynamic systems.

    PubMed

    Lo, C H; Wong, Y K; Rad, A B

    2004-07-01

    Traditional fault detection and isolation methods are based on quantitative models, which are sometimes difficult and costly to obtain. In this paper, qualitative bond graph (QBG) reasoning is adopted as the modeling scheme to generate a set of qualitative equations. The QBG method provides a unified approach for modeling engineering systems, in particular mechatronic systems. An input-output qualitative equation derived from the QBG formalism performs continuous system monitoring. Fault diagnosis is activated when a discrepancy is observed between measured abnormal behavior and predicted system behavior. Genetic algorithms (GAs) are then used to search for possible faulty components among the system of qualitative equations. In order to demonstrate the performance of the proposed algorithm, we have tested it on a laboratory-scale servo-tank liquid process rig. Results of the proposed model-based fault detection and diagnosis algorithm for the process rig are presented and discussed.

  3. EVOLUTIONARY FOUNDATIONS FOR MOLECULAR MEDICINE

    PubMed Central

    Nesse, Randolph M.; Ganten, Detlev; Gregory, T. Ryan; Omenn, Gilbert S.

    2015-01-01

    Evolution has long provided a foundation for population genetics, but many major advances in evolutionary biology from the 20th century are only now being applied in molecular medicine. They include the distinction between proximate and evolutionary explanations, kin selection, evolutionary models for cooperation, and new strategies for tracing phylogenies and identifying signals of selection. Recent advances in genomics are further transforming evolutionary biology and creating yet more opportunities for progress at the interface of evolution with genetics, medicine, and public health. This article reviews 15 evolutionary principles and their applications in molecular medicine in hopes that readers will use them and others to speed the development of evolutionary molecular medicine. PMID:22544168

  4. Evolutionary Optimization of Yagi-Uda Antennas

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Kraus, William F.; Linden, Derek S.; Colombano, Silvano P.

    2001-01-01

    Yagi-Uda antennas are known to be difficult to design and optimize due to their sensitivity at high gain, and the inclusion of numerous parasitic elements. We present a genetic algorithm-based automated antenna optimization system that uses a fixed Yagi-Uda topology and a byte-encoded antenna representation. The fitness calculation allows the implicit relationship between power gain and sidelobe/backlobe loss to emerge naturally, a technique that is less complex than previous approaches. The genetic operators used are also simpler. Our results include Yagi-Uda antennas that have excellent bandwidth and gain properties with very good impedance characteristics. Results exceeded previous Yagi-Uda antennas produced via evolutionary algorithms by at least 7.8% in mainlobe gain. We also present encouraging preliminary results where a coevolutionary genetic algorithm is used.
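
    The search loop described above can be sketched as follows (a hedged illustration: the gene layout, decoding ranges, and toy fitness stub are assumptions, and a real run would score candidates with an electromagnetic simulator such as NEC rather than the stand-in below):

      import random

      N_ELEMENTS = 6                  # fixed Yagi-Uda topology (size assumed here)
      GENES = 2 * N_ELEMENTS          # one byte each for element length and spacing

      def decode(chrom):
          # Map bytes (0..255) to lengths and spacings in metres (illustrative ranges).
          lengths  = [0.30 + b / 255 * 0.30 for b in chrom[:N_ELEMENTS]]
          spacings = [0.05 + b / 255 * 0.40 for b in chrom[N_ELEMENTS:]]
          return lengths, spacings

      def simulate_gain(lengths, spacings):
          # Toy stand-in for an EM solver, so the sketch runs end to end; in the
          # real system the gain/sidelobe trade-off emerges from the field pattern.
          return -sum((l - 0.45) ** 2 for l in lengths) - sum((s - 0.25) ** 2 for s in spacings)

      def fitness(chrom):
          return simulate_gain(*decode(chrom))

      def evolve(pop_size=100, generations=200, p_mut=0.1):
          pop = [[random.randrange(256) for _ in range(GENES)] for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness, reverse=True)
              survivors = pop[:pop_size // 2]
              children = []
              while len(survivors) + len(children) < pop_size:
                  a, b = random.sample(survivors, 2)
                  cut = random.randrange(1, GENES)          # one-point crossover
                  child = a[:cut] + b[cut:]
                  if random.random() < p_mut:               # single-byte mutation
                      child[random.randrange(GENES)] = random.randrange(256)
                  children.append(child)
              pop = survivors + children
          return max(pop, key=fitness)

      print(decode(evolve()))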

  5. Evolutionary Developmental Psychology.

    ERIC Educational Resources Information Center

    Geary, David C.; Bjorklund, David F.

    2000-01-01

    Describes evolutionary developmental psychology as the study of the genetic and ecological mechanisms that govern the development of social and cognitive competencies common to all human beings and the epigenetic (gene-environment interactions) processes that adapt these competencies to local conditions. Outlines basic assumptions and domains of…

  6. Evolutionary developmental psychology.

    PubMed

    King, Ashley C; Bjorklund, David F

    2010-02-01

    The field of evolutionary developmental psychology can potentially broaden the horizons of mainstream evolutionary psychology by combining the principles of Darwinian evolution by natural selection with the study of human development, focusing on the epigenetic effects that occur between humans and their environment in a way that attempts to explain how evolved psychological mechanisms become expressed in the phenotypes of adults. An evolutionary developmental perspective includes an appreciation of comparative research, and we, among others, argue that contrasting the cognition of humans with that of nonhuman primates can provide a framework with which to understand how human cognitive abilities and intelligence evolved. Furthermore, we argue that several aspects of childhood (e.g., play and immature cognition) serve both as deferred adaptations and as sources of immediate benefits. Intense selection pressure was surely exerted on childhood over human evolutionary history and, as a result, neglecting to consider the early developmental period of children when studying their later adulthood produces an incomplete picture of the evolved adaptations expressed through human behavior and cognition.

  7. Learning: An Evolutionary Analysis

    ERIC Educational Resources Information Center

    Swann, Joanna

    2009-01-01

    This paper draws on the philosophy of Karl Popper to present a descriptive evolutionary epistemology that offers philosophical solutions to the following related problems: "What happens when learning takes place?" and "What happens in human learning?" It provides a detailed analysis of how learning takes place without any direct transfer of…

  8. Evolutionary Theory under Fire.

    ERIC Educational Resources Information Center

    Lewin, Roger

    1980-01-01

    Summarizes events of a conference on evolutionary biology in Chicago entitled: "Macroevolution." Reviews the theory of modern synthesis, a term used to explain Darwinism in terms of population biology and genetics. Issues presented at the conference are discussed in detail. (CS)

  9. Evolutionary Theories of Detection

    SciTech Connect

    Fitch, J P

    2005-04-29

    Current, mid-term and long range technologies for detection of pathogens and toxins are briefly described in the context of performance metrics and operational scenarios. Predictive (evolutionary) and speculative (revolutionary) assessments are given with trade-offs identified, where possible, among competing performance goals.

  10. Model-based Utility Functions

    NASA Astrophysics Data System (ADS)

    Hibbard, Bill

    2012-05-01

    Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment, so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that, under some common assumptions, agents provided with the possibility of modifying their utility functions will not choose to do so.

  11. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Goebel, Kai

    2011-01-01

    Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction; therefore, algorithms are employed that help manage these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
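
    The core particle-filter loop behind such an approach can be sketched compactly (a minimal illustration on a toy scalar damage model; the progression law, noise levels, and failure threshold are assumptions, not the paper's valve model):

      import math, random

      N = 500                      # particles: (damage state x, wear-rate parameter w)
      particles = [(0.0, random.uniform(0.01, 0.05)) for _ in range(N)]

      def step(x, w):
          return x + w + random.gauss(0, 0.005)        # assumed damage progression law

      def update(particles, z, meas_std=0.02):
          # Weight each particle by the measurement likelihood, then resample.
          weights = [math.exp(-(z - x) ** 2 / (2 * meas_std ** 2)) for x, _ in particles]
          total = sum(weights) or 1e-12
          return random.choices(particles, [w / total for w in weights], k=len(particles))

      def predict_rul(particles, threshold=1.0):
          # Propagate each particle until damage crosses the failure threshold; the
          # spread of crossing times is the remaining-useful-life distribution.
          times = []
          for x, w in particles:
              t = 0
              while x < threshold and t < 10_000:
                  x, t = step(x, w), t + 1
              times.append(t)
          return times

      for z in [0.05, 0.11, 0.18]:                     # toy measurement sequence
          particles = [(step(x, w), w) for x, w in particles]
          particles = update(particles, z)
      rul = sorted(predict_rul(particles))
      print(rul[len(rul) // 2], "steps (median remaining useful life)")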

  12. Product Mix Selection Using AN Evolutionary Technique

    NASA Astrophysics Data System (ADS)

    Tsoulos, Ioannis G.; Vasant, Pandian

    2009-08-01

    This paper proposes an evolutionary technique for the solution of a real-life industrial problem, in particular the product mix selection problem. The evolutionary technique is a combination of a genetic algorithm, which preserves the feasibility of the trial solutions through penalties, and a local optimization method. The goal of this paper has been achieved in finding the best near-optimal solution for the profit fitness function with respect to the vagueness factor and the level of satisfaction. The profit values found will be very useful to decision makers in the industrial engineering sector for implementation purposes. It is possible to improve the solutions obtained in this study by employing other meta-heuristic methods such as simulated annealing, tabu search, ant colony optimization, particle swarm optimization and artificial immune systems.
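
    The penalty idea at the heart of such a genetic algorithm fits in a few lines (an illustrative sketch; the profit function, constraint, and penalty weight below are invented for the example, not taken from the paper):

      def penalized_profit(x, profit, constraints, penalty_weight=1e3):
          # Fitness = profit minus a penalty proportional to constraint violation, so
          # infeasible trial solutions survive briefly but are steered back to feasibility.
          violation = sum(max(0.0, g(x)) for g in constraints)   # g(x) <= 0 means feasible
          return profit(x) - penalty_weight * violation

      # Example: two products and one resource budget (all numbers illustrative).
      profit = lambda x: 5 * x[0] + 8 * x[1]
      constraints = [lambda x: 3 * x[0] + 6 * x[1] - 100]
      print(penalized_profit([10, 10], profit, constraints))   # feasible: 130.0
      print(penalized_profit([20, 10], profit, constraints))   # infeasible: heavily penalized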

  13. Evolutionary link community structure discovery in dynamic weighted networks

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Liu, Caihong; Wang, Jiajia; Wang, Xiang; Zhou, Bin; Zou, Peng

    2017-01-01

    Traditional community detection methods are often restricted to static network analysis, yet most real-world networks exhibit dynamic characteristics over time. In this paper, we design a link community structure discovery algorithm for dynamic weighted networks, which can not only reveal the evolutionary link community structure but also detect overlapping communities by mapping link communities to node communities. Meanwhile, our algorithm can also obtain the hierarchical structure of link communities by tuning a parameter. The proposed algorithm is based on weighted edge fitness and weighted partition density, which determine whether to add a link to a community and whether to merge two communities into a new link community. Experiments on both synthetic and real-world networks demonstrate that the proposed algorithm can detect evolutionary link community structure in dynamic weighted networks effectively.

  14. Expediting model-based optoacoustic reconstructions with tomographic symmetries

    SciTech Connect

    Lutzweiler, Christian; Deán-Ben, Xosé Luís; Razansky, Daniel

    2014-01-15

    Purpose: Image quantification in optoacoustic tomography implies the use of accurate forward models of excitation, propagation, and detection of optoacoustic signals, while inversions with high spatial resolution usually involve very large matrices, leading to unreasonably long computation times. The development of fast and memory-efficient model-based approaches therefore represents an important challenge for advancing the quantitative and dynamic imaging capabilities of tomographic optoacoustic imaging. Methods: Herein, a method for simplification and acceleration of model-based inversions, relying on inherent symmetries present in common tomographic acquisition geometries, is introduced. The method is showcased for the case of cylindrical symmetries by using a polar image discretization of the time-domain optoacoustic forward model combined with efficient storage and inversion strategies. Results: The suggested methodology is shown to render fast and accurate model-based inversions in both numerical simulations and post mortem small animal experiments. In the case of a full-view detection scheme, the memory requirements are reduced by one order of magnitude while high-resolution reconstructions are achieved at video rate. Conclusions: By considering the rotational symmetry present in many tomographic optoacoustic imaging systems, the proposed methodology allows exploiting the advantages of model-based algorithms with feasible computational requirements and fast reconstruction times, so that its convenience and general applicability in optoacoustic imaging systems with tomographic symmetries is anticipated.

  15. 3-D model-based tracking for UAV indoor localization.

    PubMed

    Teulière, Céline; Marchand, Eric; Eck, Laurent

    2015-05-01

    This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of standard model-based approaches lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple-hypothesis tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem, where GPS signal is not available, we validate the algorithm on real image sequences from UAV flights.

  16. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.

  17. Evolutionary lunar transportation family

    NASA Technical Reports Server (NTRS)

    Capps, Stephen

    1992-01-01

    The development of an evolutionary lunar transportation family (LTF) that can accommodate evolving human exploration goals is discussed. An evolutionary system is aimed at minimizing program costs while preserving programmatic versatility. Technical requirements that affect the design strategy for LTF include aerobraking technology and packaging constraints; mixed, unsymmetrical payload manifests; crew and payload exchange operations; crew and cargo off-loading on the lunar surface; and cryogenic lunar transfer and storage. It is concluded that the LTF is capable of meeting exploration goals, which include the provision for a significant early manned lunar surface science and exploration capability, the avoidance or reduction of some major operational and infrastructure requirements, and the incorporation of common vehicle designs and existing/near-term technology.

  18. Evolutionary Determinants of Cancer

    PubMed Central

    Greaves, Mel

    2015-01-01

    ‘Nothing in biology makes sense except in the light of evolution’ (Th. Dobzhansky, 1973). Our understanding of cancer is being transformed by exploring clonal diversity, drug resistance and causation within an evolutionary framework. The therapeutic resilience of advanced cancer is a consequence of its character as a complex, dynamic and adaptive ecosystem engendering robustness, underpinned by genetic diversity and epigenetic plasticity. The risk of mutation-driven escape by self-renewing cells is intrinsic to multicellularity but is countered by multiple restraints, facilitating increasing complexity and longevity of species. But our own species has disrupted this historical narrative by rapidly escalating intrinsic risk. Evolutionary principles illuminate these challenges and provide new avenues to explore for more effective control. PMID:26193902

  19. Gillespie eco-evolutionary models (GEMs) reveal the role of heritable trait variation in eco-evolutionary dynamics.

    PubMed

    DeLong, John P; Gibert, Jean P

    2016-02-01

    Heritable trait variation is a central and necessary ingredient of evolution. Trait variation also directly affects ecological processes, generating a clear link between evolutionary and ecological dynamics. Despite the changes in variation that occur through selection, drift, mutation, and recombination, current eco-evolutionary models usually fail to track how variation changes through time. Moreover, eco-evolutionary models assume fitness functions for each trait and each ecological context, which often do not have empirical validation. We introduce a new type of model, Gillespie eco-evolutionary models (GEMs), that resolves these concerns by tracking distributions of traits through time as eco-evolutionary dynamics progress. This is done by allowing change to be driven by the direct fitness consequences of model parameters within the context of the underlying ecological model, without having to assume a particular fitness function. GEMs work by adding a trait distribution component to the standard Gillespie algorithm - an approach that models stochastic systems in nature that are typically approximated through ordinary differential equations. We illustrate GEMs with the Rosenzweig-MacArthur consumer-resource model. We show not only how heritable trait variation fuels trait evolution and influences eco-evolutionary dynamics, but also how the erosion of variation through time may hinder eco-evolutionary dynamics in the long run. GEMs can be developed for any parameter in any ordinary differential equation model and, furthermore, can enable modeling of multiple interacting traits at the same time. We expect GEMs will open the door to a new direction in eco-evolutionary and evolutionary modeling by removing long-standing modeling barriers, simplifying the link between traits, fitness, and dynamics, and expanding eco-evolutionary treatment of a greater diversity of ecological interactions. These factors make GEMs much more than a modeling advance, but an important
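
    The essential move, running a Gillespie simulation in which each individual carries a heritable trait that sets its event rates, can be sketched on a model much simpler than the Rosenzweig-MacArthur case study (all rates, the inheritance noise, and the trait effect below are illustrative assumptions):

      import random

      def gem_birth_death(t_end=50.0, b0=1.0, d0=0.1, K=200, inherit_sd=0.05):
          # Each individual carries a trait z that scales its birth rate; the Gillespie
          # algorithm draws the next event from the individual-level rates, so the trait
          # distribution evolves as a by-product of the ecological dynamics.
          pop = [random.gauss(1.0, 0.1) for _ in range(50)]    # initial trait values
          t = 0.0
          while t < t_end and pop:
              birth_rates = [b0 * z * max(0.0, 1 - len(pop) / K) for z in pop]
              death_rate = d0 * len(pop)
              total = sum(birth_rates) + death_rate
              t += random.expovariate(total)                   # waiting time to next event
              if random.random() < sum(birth_rates) / total:   # birth: parent picked by rate
                  parent = random.choices(pop, birth_rates)[0]
                  pop.append(random.gauss(parent, inherit_sd)) # offspring inherits the trait
              else:
                  pop.pop(random.randrange(len(pop)))          # death: uniform over individuals
          return len(pop), sum(pop) / len(pop) if pop else float("nan")

      print(gem_birth_death())   # (final population size, evolved mean trait)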

  20. Predicting evolutionary dynamics

    NASA Astrophysics Data System (ADS)

    Balazsi, Gabor

    We developed an ordinary differential equation-based model to predict the evolutionary dynamics of yeast cells carrying a synthetic gene circuit. The predicted aspects included the speed at which the ancestral genotype disappears from the population, as well as the types of mutant alleles that establish in each environmental condition. We validated these predictions by experimental evolution. The agreement between our predictions and experimental findings suggests that cellular and population fitness landscapes can be useful to predict short-term evolution.
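
    A toy version of such an ODE-based prediction (two genotypes and a single selection coefficient; all parameters illustrative) shows how the disappearance time of the ancestor falls out of the model:

      # Ancestral vs. mutant competition: da/dt = -s * a * (1 - a), where a is the
      # ancestral fraction of the population and s the mutant's selective advantage.
      def ancestral_trajectory(a0=0.99, s=0.1, dt=0.1, t_end=200.0):
          a, t, out = a0, 0.0, []
          while t < t_end:
              a += dt * (-s * a * (1 - a))       # forward-Euler integration step
              t += dt
              out.append((t, a))
          return out

      # Predicted time for the ancestral genotype to fall below 1% of the population:
      print(next(t for t, a in ancestral_trajectory() if a < 0.01))   # ~92 time units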

  1. Adaptive noise cancellation based on beehive pattern evolutionary digital filter

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaojun; Shao, Yimin

    2014-01-01

    Evolutionary digital filtering (EDF) exhibits the advantage of avoiding the local optimum problem by using cloning and mating searching rules in an adaptive noise cancellation system. However, convergence performance is restricted by the large population of individuals and the low level of information communication among them. The special beehive structure enables the individuals on neighbour beehive nodes to communicate with each other and thus enhance the information spread and random search ability of the algorithm. By introducing the beehive pattern evolutionary rules into the original EDF, this paper proposes an improved beehive pattern evolutionary digital filter (BP-EDF) to overcome the defects of the original EDF. In the proposed algorithm, a new evolutionary rule which combines competing cloning, complete cloning and assistance mating methods is constructed to enable the individuals distributed on the beehive to communicate with their neighbours. Simulation results are used to demonstrate the improved performance of the proposed algorithm in terms of convergence speed to the global optimum compared with the original methods. Experimental results also verify the effectiveness of the proposed algorithm in extracting feature signals that are contaminated by significant amounts of noise during the fault diagnosis task.

  2. Evolutionary mysteries in meiosis.

    PubMed

    Lenormand, Thomas; Engelstädter, Jan; Johnston, Susan E; Wijnker, Erik; Haag, Christoph R

    2016-10-19

    Meiosis is a key event of sexual life cycles in eukaryotes. Its mechanistic details have been uncovered in several model organisms, and most of its essential features have received various and often contradictory evolutionary interpretations. In this perspective, we present an overview of these often 'weird' features. We discuss the origin of meiosis (origin of ploidy reduction and recombination, two-step meiosis), its secondary modifications (in polyploids or asexuals, inverted meiosis), its importance in punctuating life cycles (meiotic arrests, epigenetic resetting, meiotic asymmetry, meiotic fairness) and features associated with recombination (disjunction constraints, heterochiasmy, crossover interference and hotspots). We present the various evolutionary scenarios and selective pressures that have been proposed to account for these features, and we highlight that their evolutionary significance often remains largely mysterious. Resolving these mysteries will likely provide decisive steps towards understanding why sex and recombination are found in the majority of eukaryotes. This article is part of the themed issue 'Weird sex: the underappreciated diversity of sexual reproduction'.

  3. Task-Based Design of Fluence Field Modulation in CT for Model-Based Iterative Reconstruction.

    PubMed

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2016-07-01

    A task-driven imaging framework for prospective fluence field modulation (FFM) is developed in this paper. The design approach uses a system model that includes a parameterized FFM acquisition and model-based iterative reconstruction (MBIR) for image formation. Using prior anatomical knowledge (e.g. from a low-dose 3D scout image), accurate predictions of spatial resolution and noise as a function of FFM are integrated into a task-based objective function. Specifically, detectability index (d'), a common metric for task-based image quality assessment, is computed for a specific formulation of the imaging task. To optimize imaging performance across an image volume, a maximin objective function was adopted to maximize the minimum detectability index for many locations sampled throughout the volume. To reduce the dimensionality, FFM patterns were represented using wavelet bases, the coefficients of which were optimized using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The optimization was performed for a mid-frequency discrimination task involving a cluster of micro-calcifications in an abdomen phantom. The task-driven design yielded FFM patterns that were significantly different from traditional strategies proposed for FBP reconstruction. In addition to a higher minimum d' consistent with the objective function, the task-driven approach also improved d' to a greater extent over a larger area of the phantom. Results from this work suggest that FFM strategies suitable for FBP reconstruction need to be reevaluated in the context of MBIR, and that a task-driven imaging framework provides a promising approach for such optimization.
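
    The maximin optimization loop can be sketched as follows (a hedged illustration: a plain (1+1) evolution strategy stands in for full CMA-ES, and the detectability predictor is a stub; coefficient counts and names are invented):

      import random

      def maximin_objective(coeffs, locations, predict_dprime):
          # Score an FFM pattern by its worst-case detectability over sampled locations.
          return min(predict_dprime(coeffs, loc) for loc in locations)

      def optimize_ffm(n_coeffs, locations, predict_dprime, iters=2000, sigma=0.1):
          best = [random.gauss(0, 1) for _ in range(n_coeffs)]   # wavelet coefficients
          best_score = maximin_objective(best, locations, predict_dprime)
          for _ in range(iters):
              trial = [c + random.gauss(0, sigma) for c in best]
              score = maximin_objective(trial, locations, predict_dprime)
              if score > best_score:                             # (1+1)-ES acceptance rule
                  best, best_score = trial, score
          return best

      # Toy stand-in for the model-based d' predictor, just to exercise the loop.
      toy_dprime = lambda c, loc: -abs(sum(c) - loc)
      print(optimize_ffm(4, locations=[1.0, 2.0, 3.0], predict_dprime=toy_dprime))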

  4. Inferring Evolutionary Scenarios for Protein Domain Compositions

    NASA Astrophysics Data System (ADS)

    Wiedenhoeft, John; Krause, Roland; Eulenstein, Oliver

    Essential cellular processes are controlled by functional interactions of protein domains, which can be inferred from their evolutionary histories. Methods to reconstruct these histories are challenged by the complexity of reconstructing macroevolutionary events. In this work we model these events using a novel network-like structure that represents the evolution of domain combinations, called plexus. We describe an algorithm to find a plexus that represents the evolution of a given collection of domain histories as phylogenetic trees with the minimum number of macroevolutionary events, and demonstrate its effectiveness in practice.

  5. Purely optical navigation with model-based state prediction

    NASA Astrophysics Data System (ADS)

    Sendobry, Alexander; Graber, Thorsten; Klingauf, Uwe

    2010-10-01

    State-of-the-art Inertial Navigation Systems (INS) based on Micro-Electro-Mechanical Systems (MEMS) lack precision, especially in GPS-denied environments such as urban canyons or purely indoor missions. The proposed Optical Navigation System (ONS) provides bias-free ego-motion estimates using triple-redundant sensor information. In combination with a model-based state prediction, our system is able to estimate the velocity, position and attitude of an arbitrary aircraft. Simulating a high-performance flow-field estimator, the algorithm can compete with conventional low-cost INS. Because measured velocities are used instead of accelerations, the drift behavior of the system states is less pronounced than for an INS.

  6. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    A viewgraph presentation is given on developing models, from systems engineers, that accomplish mission objectives and manage the health of the system. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 11) Background: Titan Model-based Executive; 12) Model-based Execution Architecture; 13) Compatibility Analysis of MDS and Titan Architectures; 14) Integrating Model-based Programming and Execution into the Architecture; 15) State Analysis and Modeling; 16) IMU Subsystem State Effects Diagram; 17) Titan Subsystem Model: IMU Health; 18) Integrating Model-based Programming and Execution into the Software IMU; 19) Testing Program; 20) Computationally Tractable State Estimation & Fault Diagnosis; 21) Diagnostic Algorithm Performance; 22) Integration and Test Issues; 23) Demonstrated Benefits; and 24) Next Steps.

  7. Evidence for model-based computations in the human amygdala during Pavlovian conditioning.

    PubMed

    Prévost, Charlotte; McNamee, Daniel; Jessup, Ryan K; Bossaerts, Peter; O'Doherty, John P

    2013-01-01

    Contemporary computational accounts of instrumental conditioning have emphasized a role for a model-based system in which values are computed with reference to a rich model of the structure of the world, and a model-free system in which values are updated without encoding such structure. Much less studied is the possibility of a similar distinction operating at the level of Pavlovian conditioning. In the present study, we scanned human participants while they participated in a Pavlovian conditioning task with a simple structure while measuring activity in the human amygdala using a high-resolution fMRI protocol. After fitting a model-based algorithm and a variety of model-free algorithms to the fMRI data, we found evidence for the superiority of a model-based algorithm in accounting for activity in the amygdala compared to the model-free counterparts. These findings support an important role for model-based algorithms in describing the processes underpinning Pavlovian conditioning, as well as providing evidence of a role for the human amygdala in model-based inference.
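
    The contrast between the two classes of algorithm is easy to make concrete (a toy sketch; the cue names, transition probabilities, and learning rate are invented, not the study's design):

      # Model-free: cache one value per cue, nudged by reward prediction errors.
      def model_free_update(V, cue, reward, alpha=0.1):
          V[cue] += alpha * (reward - V[cue])
          return V

      # Model-based: learn cue -> outcome transition probabilities, then compute the
      # value on the fly by combining them with the current worth of each outcome.
      def model_based_value(T, outcome_value, cue):
          return sum(p * outcome_value[o] for o, p in T[cue].items())

      V = {"tone": 0.0}
      T = {"tone": {"food": 0.8, "nothing": 0.2}}
      outcome_value = {"food": 1.0, "nothing": 0.0}

      V = model_free_update(V, "tone", reward=1.0)
      print(V["tone"])                                    # 0.1 - creeps up with experience
      print(model_based_value(T, outcome_value, "tone"))  # 0.8 - tracks the model directly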

  8. Evolutionary stability and resistance to cheating in an indirect reciprocity model based on reputation

    NASA Astrophysics Data System (ADS)

    Martinez-Vaquero, Luis A.; Cuesta, José A.

    2013-05-01

    Indirect reciprocity is one of the main mechanisms to explain the emergence and sustainment of altruism in societies. The standard approach to indirect reciprocity is reputation models. These are games in which players base their decisions on their opponent's reputation gained in past interactions with other players (moral assessment). The combination of actions and moral assessment leads to a large diversity of strategies; thus determining the stability of any of them against invasions by all the others is a difficult task. We use a variant of a previously introduced reputation-based model that let us systematically analyze all these invasions and determine which ones are successful. Accordingly, we are able to identify the third-order strategies (those which, apart from the action, judge considering both the reputation of the donor and that of the recipient) that are evolutionarily stable. Our results reveal that if a strategy resists the invasion of any other one sharing its same moral assessment, it can resist the invasion of any other strategy. However, if actions are not always witnessed, cheaters (i.e., individuals with a probability of defecting regardless of the opponent's reputation) have a chance to defeat the stable strategies for some choices of the probabilities of cheating and of being witnessed. Remarkably, by analyzing this issue with adaptive dynamics we find that whether an honest population resists the invasion of cheaters is determined by a Hamilton-like rule, with the probability that the cheat is discovered playing the role of the relatedness parameter.

  9. Evolutionary stability and resistance to cheating in an indirect reciprocity model based on reputation.

    PubMed

    Martinez-Vaquero, Luis A; Cuesta, José A

    2013-05-01

    Indirect reciprocity is one of the main mechanisms to explain the emergence and sustainment of altruism in societies. The standard approach to indirect reciprocity is reputation models. These are games in which players base their decisions on their opponent's reputation gained in past interactions with other players (moral assessment). The combination of actions and moral assessment leads to a large diversity of strategies; thus determining the stability of any of them against invasions by all the others is a difficult task. We use a variant of a previously introduced reputation-based model that let us systematically analyze all these invasions and determine which ones are successful. Accordingly, we are able to identify the third-order strategies (those which, apart from the action, judge considering both the reputation of the donor and that of the recipient) that are evolutionarily stable. Our results reveal that if a strategy resists the invasion of any other one sharing its same moral assessment, it can resist the invasion of any other strategy. However, if actions are not always witnessed, cheaters (i.e., individuals with a probability of defecting regardless of the opponent's reputation) have a chance to defeat the stable strategies for some choices of the probabilities of cheating and of being witnessed. Remarkably, by analyzing this issue with adaptive dynamics we find that whether an honest population resists the invasion of cheaters is determined by a Hamilton-like rule, with the probability that the cheat is discovered playing the role of the relatedness parameter.

  10. Model-based condition monitoring for lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Kim, Taesic; Wang, Yebin; Fang, Huazhen; Sahinoglu, Zafer; Wada, Toshihiro; Hara, Satoshi; Qiao, Wei

    2015-11-01

    Condition monitoring for batteries involves tracking changes in physical parameters and operational states such as state of health (SOH) and state of charge (SOC), and is fundamentally important for building high-performance and safety-critical battery systems. A model-based condition monitoring strategy is developed in this paper for lithium-ion batteries on the basis of an electrical circuit model incorporating the hysteresis effect. It systematically integrates 1) a fast upper-triangular and diagonal recursive least squares algorithm for parameter identification of the battery model, 2) a smooth variable structure filter for SOC estimation, and 3) a recursive total least squares algorithm for estimating the maximum capacity, which indicates the SOH. The proposed solution enjoys advantages including high accuracy, low computational cost, and simple implementation, and is therefore suitable for deployment and use in real-time embedded battery management systems (BMSs). Simulations and experiments validate the effectiveness of the proposed strategy.
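
    The recursive least squares ingredient of such a pipeline can be sketched generically (a plain RLS update, not the paper's UD-factorized variant; the toy battery example and all numbers are illustrative):

      # Recursive least squares: refine the parameters theta of a linear model
      # y = phi . theta as each measurement arrives, without storing past data.
      def rls_update(theta, P, phi, y, lam=0.99):
          n = len(phi)                                     # lam: forgetting factor
          Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
          denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
          K = [p / denom for p in Pphi]                    # gain vector
          err = y - sum(phi[i] * theta[i] for i in range(n))
          theta = [theta[i] + K[i] * err for i in range(n)]
          P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(n)] for i in range(n)]
          return theta, P

      # Toy use: identify open-circuit voltage and resistance in v = ocv - R * i.
      theta, P = [0.0, 0.0], [[1e3, 0.0], [0.0, 1e3]]
      for i_k, v_k in [(1.0, 3.55), (2.0, 3.50), (0.5, 3.575)]:
          theta, P = rls_update(theta, P, phi=[1.0, -i_k], y=v_k)
      print(theta)   # converges near [3.6 (OCV), 0.05 (R)] for this noise-free toy data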

  11. Convex recoloring as an evolutionary marker.

    PubMed

    Frenkel, Zeev; Kiat, Yosef; Izhaki, Ido; Snir, Sagi

    2017-02-01

    With the availability of enormous quantities of genetic data, it has become common to construct very accurate trees describing the evolutionary history of the species under study, as well as of every single gene of these species. These trees allow us to examine the evolutionary compliance of given markers (characters). A marker compliant with the history of the species investigated has undergone mutations along the species tree branches such that every subtree of that tree exhibits a different state. Convex recoloring (CR) uses a combinatorial representation to measure the adequacy of a taxonomic classifier to a given tree. Despite its biological origins, research on CR has been almost exclusively dedicated to mathematical properties of the problem, or to variants of it with little, if any, relationship to taxonomy. In this work we return to the origins of CR. We put CR in a statistical framework, introducing the notion of the statistical significance of a character. We apply this measure to two data sets - passerine birds and prokaryotes - and four examples. These examples demonstrate various applications of CR, from evolutionary relatedness, through lateral evolution, to supertree construction. The study was carried out with new software that we provide, containing algorithmic improvements and graphical output of an (optimally) recolored tree.

  12. Evolutionary tracks of massive stars during formation

    NASA Astrophysics Data System (ADS)

    Smith, Michael D.

    2014-02-01

    A model for massive stars is constructed by piecing together evolutionary algorithms for the protostellar structure, the environment, the inflow and the radiation feedback. We investigate specified accretion histories of constant, decelerating and accelerating forms and consider both hot and cold accretion, identified with spherical free-fall and disc accretion, respectively. Diagnostic tools for the interpretation of the phases of massive star formation and testing the evolutionary models are then developed. Evolutionary tracks able to fit Herschel Space Telescope data require the generated stars to be three to four times less massive than in previous interpretations, thus being consistent with clump star formation efficiencies of 10-15 per cent. However, for these cold Herschel clumps, the bolometric temperature is not a good diagnostic to differentiate between accretion models. We also find that neither spherical nor disc accretion can explain the high radio luminosities of many protostars. Nevertheless, we discover a solution in which the extreme ultraviolet flux needed to explain the radio emission is produced if the accretion flow is via free-fall on to hotspots covering less than 10 per cent of the surface area. Moreover, the protostar must be compact, and so has formed through cold accretion. We show that these conclusions are independent of the imposed accretion history. This suggests that massive stars form via gas accretion through discs which, in the phase before the star bloats, download their mass via magnetic flux tubes on to the protostar.

  13. Evolutionary Cost-Sensitive Extreme Learning Machine.

    PubMed

    Zhang, Lei; Zhang, David

    2016-10-11

    Conventional extreme learning machines (ELMs) compute a Moore-Penrose generalized inverse of the hidden-layer activation matrix and analytically determine the output weights to achieve generalized performance, by assuming the same loss for different types of misclassification. The assumption may not hold in cost-sensitive recognition tasks, such as a face recognition-based access control system, where misclassifying a stranger as a family member may result in far more serious consequences than misclassifying a family member as a stranger. Though recent cost-sensitive learning can reduce the total loss with a given cost matrix that quantifies how severe one type of mistake is relative to another, in many realistic cases the cost matrix is unknown to users. Motivated by these concerns, this paper proposes an evolutionary cost-sensitive ELM with the following merits: 1) to the best of our knowledge, it is the first proposal of an ELM in an evolutionary cost-sensitive classification scenario; 2) it well addresses the open issue of how to define the cost matrix in cost-sensitive learning tasks; and 3) an evolutionary backtracking search algorithm is introduced for adaptive cost matrix optimization. Experiments in a variety of cost-sensitive tasks demonstrate the effectiveness of the proposed approaches, with about 5%-10% improvements.

  14. Landscape evolutionary genomics.

    PubMed

    Lowry, David B

    2010-08-23

    Tremendous advances in genetic and genomic techniques have resulted in the capacity to identify genes involved in adaptive evolution across numerous biological systems. One of the next major steps in evolutionary biology will be to determine how landscape-level geographical and environmental features are involved in the distribution of this functional adaptive genetic variation. Here, I outline how an emerging synthesis of multiple disciplines has facilitated, and will continue to facilitate, a deeper understanding of the ways in which the heterogeneity of natural landscapes moulds the genomes of organisms.

  15. Evolutionary dynamics of enzymes.

    PubMed

    Demetrius, L

    1995-08-01

    This paper codifies and rationalizes the large diversity in reaction rates and substrate specificity of enzymes in terms of a model which postulates that the kinetic properties of present-day enzymes are the consequence of the evolutionary force of mutation and selection acting on a class of primordial enzymes with poor catalytic activity and broad substrate specificity. Enzymes are classified in terms of their thermodynamic parameters, activation enthalpy delta H* and activation entropy delta S*, in their kinetically significant transition states as follows: type 1, delta H* > 0, delta S* < 0; type 2, delta H* < or = 0, delta S* < or = 0; type 3, delta H* > 0, delta S* > 0. We study the evolutionary dynamics of these three classes of enzymes subject to mutation, which acts at the level of the gene which codes for the enzyme and selection, which acts on the organism that contains the enzyme. Our model predicts the following evolutionary trends in the reaction rate and binding specificity for the three classes of molecules. In type 1 enzymes, evolution results in random, non-directional changes in the reaction rate and binding specificity. In type 2 and 3 enzymes, evolution results in a unidirectional increase in both the reaction rate and binding specificity. We exploit these results in order to codify the diversity in functional properties of present-day enzymes. Type 1 molecules will be described by intermediate reaction rates and broad substrate specificity. Type 2 enzymes will be characterized by diffusion-controlled rates and absolute substrate specificity. The type 3 catalysts can be further subdivided in terms of their activation enthalpy into two classes: type 3a (delta H* small) and type 3b (delta H* large). We show that type 3a will be represented by the same functional properties that identify type 2, namely, diffusion-controlled rates and absolute substrate specificity, whereas type 3b will be characterized by non-diffusion-controlled rates and absolute

  16. Thermodynamics and evolutionary genetics

    NASA Astrophysics Data System (ADS)

    Müller, Ingo

    2010-03-01

    Thermodynamics and evolutionary genetics have something in common. Thus, the randomness of mutation of cells may be likened to the random thermal fluctuations in a gas. And the probabilistic nature of entropy in statistical thermodynamics can be carried over to a population of haploid and diploid cells without any conceptual change. The energetic potential wells in which the atoms of a liquid are caught correspond to selective advantages for some phenotype over others. Thus, the eventual stable state in a population comes about as a compromise in the universal competition between entropy and energy.

  17. PPIevo: protein-protein interaction prediction from PSSM based evolutionary information.

    PubMed

    Zahiri, Javad; Yaghoubi, Omid; Mohammad-Noori, Morteza; Ebrahimpour, Reza; Masoudi-Nejad, Ali

    2013-10-01

    Protein-protein interactions regulate a variety of cellular processes. There is a great need for computational methods to complement experimental methods in predicting protein interactions, owing to the many limitations of experimental techniques. Here, we introduce a novel evolutionary-based feature extraction algorithm for protein-protein interaction (PPI) prediction. The algorithm, called PPIevo, extracts evolutionary features from the Position-Specific Scoring Matrix (PSSM) of a protein with known sequence. The algorithm does not depend on protein annotations, and the features are based on the evolutionary history of the proteins. This gives the algorithm more power for predicting protein-protein interactions than many sequence-based algorithms. Results on the HPRD database show better performance and robustness of the proposed method. They also reveal that negative dataset selection can lead to severe performance overestimation, which is a principal drawback of the available methods.
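
    The flavor of PSSM-based feature extraction can be shown generically (an illustrative stand-in: the actual PPIevo features are not reproduced here, and the column summaries below are assumptions):

      # A PSSM is an L x 20 matrix of position-specific substitution scores (e.g. from
      # PSI-BLAST). A fixed-length feature vector for a variable-length protein can be
      # built from per-column summaries over all L positions.
      def pssm_features(pssm):
          L = len(pssm)
          means = [sum(row[a] for row in pssm) / L for a in range(20)]
          maxes = [max(row[a] for row in pssm) for a in range(20)]
          return means + maxes            # 40-dimensional evolutionary feature vector

      toy_pssm = [[1] * 20, [0] * 20, [2] * 20]   # 3-residue PSSM with invented scores
      print(len(pssm_features(toy_pssm)))         # 40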

  18. Transition matrix model for evolutionary game dynamics

    NASA Astrophysics Data System (ADS)

    Ermentrout, G. Bard; Griffin, Christopher; Belmonte, Andrew

    2016-03-01

    We study an evolutionary game model based on a transition matrix approach, in which the total change in the proportion of a population playing a given strategy is summed directly over contributions from all other strategies. This general approach combines aspects of the traditional replicator model, such as preserving unpopulated strategies, with mutation-type dynamics, which allow for nonzero switching to unpopulated strategies, in terms of a single transition function. Under certain conditions, this model yields an endemic population playing non-Nash-equilibrium strategies. In addition, a Hopf bifurcation with a limit cycle may occur in the generalized rock-scissors-paper game, unlike the replicator equation. Nonetheless, many of the Folk Theorem results are shown to hold for this model.
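
    Such a dynamic is straightforward to integrate numerically (a minimal sketch; the rock-scissors-paper payoffs, the payoff-driven transition function, and the mutation floor eps are illustrative choices, not the paper's exact model):

      # Rock-scissors-paper payoffs (row strategy vs. column strategy).
      A = [[0, 1, -1], [-1, 0, 1], [1, -1, 0]]

      def payoffs(x):
          return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

      def transition(i, j, f, eps=0.01):
          # Rate of switching from strategy i to j: payoff-driven imitation plus a
          # small mutation floor, so unpopulated strategies can be (re)entered.
          return max(0.0, f[j] - f[i]) + eps

      def step(x, dt=0.01):
          # Total change in each strategy's share: inflows minus outflows, summed
          # directly over contributions from all other strategies.
          f = payoffs(x)
          dx = [sum(x[j] * transition(j, i, f) - x[i] * transition(i, j, f)
                    for j in range(3)) for i in range(3)]
          return [xi + dt * dxi for xi, dxi in zip(x, dx)]

      x = [0.6, 0.3, 0.1]
      for _ in range(20000):
          x = step(x)
      print([round(v, 3) for v in x])   # spirals in toward the mixed state (1/3 each)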

  19. Optimizing a reconfigurable material via evolutionary computation

    NASA Astrophysics Data System (ADS)

    Wilken, Sam; Miskin, Marc Z.; Jaeger, Heinrich M.

    2015-08-01

    Rapid prototyping by combining evolutionary computation with simulations is becoming a powerful tool for solving complex design problems in materials science. This method of optimization operates in a virtual design space that simulates potential material behaviors and, after completion, needs to be validated by experiment. However, in principle an evolutionary optimizer can also operate on an actual physical structure or laboratory experiment directly, provided the relevant material parameters can be accessed by the optimizer and information about the material's performance can be updated by direct measurements. Here we provide a proof of concept of such direct, physical optimization by showing how a reconfigurable, highly nonlinear material can be tuned to respond to impact. We report on an entirely computer-controlled laboratory experiment in which a 6 × 6 grid of electromagnets creates a magnetic field pattern that tunes the local rigidity of a concentrated suspension of ferrofluid and iron filings. A genetic algorithm is implemented and tasked to find field patterns that minimize the force transmitted through the suspension. Searching within a space of roughly 10^10 possible configurations, after testing only 1500 independent trials the algorithm identifies an optimized configuration of layered rigid and compliant regions.
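
    The control flow of such an experiment-in-the-loop optimizer is simple to sketch (the measurement function below is a stub standing in for the instrumented impact test; the grid size matches the abstract, everything else is an assumption):

      import random

      N_MAGNETS = 36    # 6 x 6 electromagnet grid; genome = one on/off bit per magnet

      def measure_transmitted_force(pattern):
          # In the real experiment this would energize the magnets, run the impact
          # test, and return the measured force. Toy stand-in so the sketch runs:
          return sum(pattern) + random.gauss(0, 0.5)

      def ga_on_hardware(pop_size=20, generations=75, p_mut=0.05):
          # pop_size * generations = 1500 fitness measurements, the scale reported above.
          pop = [[random.randint(0, 1) for _ in range(N_MAGNETS)] for _ in range(pop_size)]
          for _ in range(generations):
              ranked = sorted(pop, key=measure_transmitted_force)   # lower force is better
              parents = ranked[:pop_size // 2]
              pop = parents[:]
              while len(pop) < pop_size:
                  a, b = random.sample(parents, 2)
                  child = [random.choice(g) for g in zip(a, b)]     # uniform crossover
                  pop.append([1 - g if random.random() < p_mut else g for g in child])
          return min(pop, key=measure_transmitted_force)

      print(ga_on_hardware())   # best field pattern found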

  20. Evolutionary Computing for Low-thrust Navigation

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Fink, Wolfgang; vonAllmed, Paul; Petropoulos, Anastassios E.; Russell, Ryan P.; Terrile, Richard J.

    2005-01-01

    The development of new mission concepts requires efficient methodologies to analyze, design and simulate the concepts before implementation. New mission concepts are increasingly considering the use of ion thrusters for fuel-efficient navigation in deep space. This paper presents parallel, evolutionary computing methods to design trajectories of spacecraft propelled by ion thrusters and to assess the trade-off between delivered payload mass and required flight time. The developed methods utilize a distributed computing environment in order to speed up computation, and use evolutionary algorithms to find globally Pareto-optimal solutions. The methods are coupled with two main traditional trajectory design approaches, which are called direct and indirect. In the direct approach, thrust control is discretized in either arc time or arc length, and the resulting discrete thrust vectors are optimized. In the indirect approach, a thrust control problem is transformed into a costate control problem, and the initial values of the costate vector are optimized. The developed methods are applied to two problems: 1) an orbit transfer around the Earth and 2) a transfer between two distant retrograde orbits around Europa, the closest to Jupiter of the icy Galilean moons. The optimal solutions found with the present methods are comparable to other state-of-the-art trajectory optimizers and to analytical approximations for optimal transfers, while the required computational time is several orders of magnitude shorter than other optimizers thanks to an intelligent design of control vector discretization, advanced algorithmic parameterization, and parallel computing.

  1. Evolutionary status of Polaris

    NASA Astrophysics Data System (ADS)

    Fadeyev, Yu. A.

    2015-05-01

    Hydrodynamic models of short-period Cepheids were computed to determine the pulsation period as a function of evolutionary time during the first and third crossings of the instability strip. The equations of radiation hydrodynamics and turbulent convection for radial stellar pulsations were solved with initial conditions obtained from evolutionary models of Population I stars (X = 0.7, Z = 0.02) with masses from 5.2 to 6.5 M⊙ and convective core overshooting parameter 0.1 ≤ αov ≤ 0.3. In Cepheids with a period of 4 d, the rate of pulsation period change during the first crossing of the instability strip is over 50 times larger than during the third crossing. Polaris is shown to be crossing the instability strip for the first time and to be a fundamental-mode pulsator. The best agreement between the predicted and observed rates of period change was obtained for the model with a mass of 5.4 M⊙ and overshooting parameter αov = 0.25. The bolometric luminosity and radius are L = 1.26 × 10^3 L⊙ and R = 37.5 R⊙, respectively. In the HR diagram, Polaris is located at the red edge of the instability strip.

  2. On evolutionary systems.

    PubMed

    Alvarez de Lorenzana, J M; Ward, L M

    1987-01-01

    This paper develops a metatheoretical framework for understanding evolutionary systems (systems that develop in ways that increase their own variety). The framework addresses shortcomings seen in other popular systems theories. It concerns both living and nonliving systems, and proposes a metahierarchy of hierarchical systems. Thus, it potentially addresses systems at all descriptive levels. We restrict our definition of system to that of a core system whose parts have a different ontological status than the system, and characterize the core system in terms of five global properties: minimal length interval, minimal time interval, system cycle, total receptive capacity, and system potential. We propose two principles through the interaction of which evolutionary systems develop. The Principle of Combinatorial Expansion describes how a core system realizes its developmental potential through a process of progressive differentiation of the single primal state up to a limit stage. The Principle of Generative Condensation describes how the components of the last stage of combinatorial expansion condense and become the environment for and components of new, enriched systems. The early evolution of the Universe after the "big bang" is discussed in light of these ideas as an example of the application of the framework.

  3. Multiple Damage Progression Paths in Model-Based Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Goebel, Kai Frank

    2011-01-01

    Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in their own damage progression path, overlapping to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active.

  4. A Simple Evolutionary Model for Cancer Cell Population and Its Implications on Cancer Therapy

    NASA Astrophysics Data System (ADS)

    Yao, Peng; Wen, Shutang; Li, Baoshun; Li, Yuxiao

    We established a simple evolutionary model based on the cancer stem cell hypothesis. By taking cellular interactions into consideration, we introduced evolutionary game theory into the quasispecies model. The fitness values are determined by both genotypes and cellular interactions. In this evolutionary model, a cancer cell population can evolve in different patterns. For a single-peak intrinsic fitness landscape, the evolution pattern can shift, as the differentiation probability increases, from malignant to benign cells in four different modes. For a sufficiently large differentiation probability, the malignant cells always ultimately go extinct, which may have implications for cancer therapy.
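
    The coupling described here, quasispecies mutation combined with interaction-dependent fitness, can be written as a discrete-time replicator-mutator iteration (the genotypes, payoff matrix, and differentiation probability below are illustrative):

      # x: genotype frequencies; Q[i][j]: probability that genotype j produces i.
      def replicator_mutator_step(x, Q, payoff):
          n = len(x)
          f = [sum(payoff[i][j] * x[j] for j in range(n)) for i in range(n)]
          phi = sum(fi * xi for fi, xi in zip(f, x))        # mean population fitness
          return [sum(Q[i][j] * f[j] * x[j] for j in range(n)) / phi for i in range(n)]

      # Two types: malignant (0) and benign (1); differentiation sends 0 -> 1.
      d = 0.5                                  # differentiation probability
      Q = [[1 - d, 0.0], [d, 1.0]]
      payoff = [[1.2, 1.5], [0.9, 1.0]]        # fitness set by cellular interactions
      x = [0.5, 0.5]
      for _ in range(200):
          x = replicator_mutator_step(x, Q, payoff)
      print([round(v, 3) for v in x])          # with d this large, malignant cells die out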

  5. Specialization and evolutionary branching within migratory populations.

    PubMed

    Torney, Colin J; Levin, Simon A; Couzin, Iain D

    2010-11-23

    Understanding the mechanisms that drive specialization and speciation within initially homogeneous populations is a fundamental challenge for evolutionary theory. It is an issue of relevance for significant open questions in biology concerning the generation and maintenance of biodiversity, the origins of reciprocal cooperation, and the efficient division of labor in social or colonial organisms. Several mathematical frameworks have been developed to address this question and models based on evolutionary game theory or the adaptive dynamics of phenotypic mutation have demonstrated the emergence of polymorphic, specialized populations. Here we focus on a ubiquitous biological phenomenon, migration. Individuals in our model may evolve the capacity to detect and follow an environmental cue that indicates a preferred migration route. The strategy space is defined by the level of investment in acquiring personal information about this route or the alternative tendency to follow the direction choice of others. The result is a relation between the migratory process and a game theoretic dynamic that is generally applicable to situations where information may be considered a public good. Through the use of an approximation of social interactions, we demonstrate the emergence of a stable, polymorphic population consisting of an uninformed subpopulation that is dependent upon a specialized group of leaders. The branching process is classified using the techniques of adaptive dynamics.

  6. Kitaev models based on unitary quantum groupoids

    SciTech Connect

    Chang, Liang

    2014-04-15

    We establish a generalization of Kitaev models based on unitary quantum groupoids. In particular, when inputting a Kitaev-Kong quantum groupoid H_C, we show that the ground state manifold of the generalized model is canonically isomorphic to that of the Levin-Wen model based on a unitary fusion category C. Therefore, the generalized Kitaev models provide realizations of the target space of the Turaev-Viro topological quantum field theory based on C.

  7. Evolutionary Design of Controlled Structures

    NASA Technical Reports Server (NTRS)

    Masters, Brett P.; Crawley, Edward F.

    1997-01-01

    Basic physical concepts of structural delay and transmissibility are provided for simple rod and beam structures. Investigations show the sensitivity of these concepts to differing controlled-structures variables, and to rational system modeling effects. An evolutionary controls/structures design method is developed. The basis of the method is an accurate model formulation for dynamic compensator optimization and genetic algorithm-based updating of sensor/actuator placement and structural attributes. One- and three-dimensional examples from the literature are used to validate the method. Frequency-domain interpretation of these controlled structure systems provides physical insight as to how the objective is optimized and, consequently, what is important in the objective. Several disturbance-rejection type controls-structures systems are optimized for a stellar interferometer spacecraft application. The interferometric designs include closed-loop tracking optics. Designs are generated for differing structural aspect ratios, differing disturbance attributes, and differing sensor selections. Physical limitations in achieving performance are given in terms of average system transfer function gains and system phase loss. A spacecraft-like optical interferometry system is investigated experimentally over several different optimized controlled structures configurations. Configurations represent common and not-so-common approaches to mitigating pathlength errors induced by disturbances of two different spectra. Results show that an optimized controlled structure for low-frequency broadband disturbances achieves modest performance gains over a mass-equivalent regular structure, while an optimized structure for high-frequency narrowband disturbances is four times better in terms of root-mean-square pathlength. These results are predictable given the nature of the physical system and the optimization design variables. Fundamental limits on controlled performance are discussed.

  8. Spore: Spawning Evolutionary Misconceptions?

    NASA Astrophysics Data System (ADS)

    Bean, Thomas E.; Sinatra, Gale M.; Schrader, P. G.

    2010-10-01

    The use of computer simulations as educational tools may afford the means to develop understanding of evolution as a natural, emergent, and decentralized process. However, special consideration of developmental constraints on learning may be necessary when using these technologies. Specifically, the essentialist (biological forms possess an immutable essence), teleological (assignment of purpose to living things and/or parts of living things that may not be purposeful), and intentionality (assumption that events are caused by an intelligent agent) biases may be reinforced through the use of computer simulations, rather than addressed with instruction. We examine the video game Spore for its depiction of evolutionary content and its potential to reinforce these cognitive biases. In particular, we discuss three pedagogical strategies to mitigate weaknesses of Spore and other computer simulations: directly targeting misconceptions through refutational approaches, targeting specific principles of scientific inquiry, and directly addressing issues related to models as cognitive tools.

  9. Anxiety: an evolutionary approach.

    PubMed

    Bateson, Melissa; Brilot, Ben; Nettle, Daniel

    2011-12-01

    Anxiety disorders are among the most common mental illnesses, with huge attendant suffering. Current treatments are not universally effective, suggesting that a deeper understanding of the causes of anxiety is needed. To understand anxiety disorders better, it is first necessary to understand the normal anxiety response. This entails considering its evolutionary function as well as the mechanisms underlying it. We argue that the function of the human anxiety response, and homologues in other species, is to prepare the individual to detect and deal with threats. We use a signal detection framework to show that the threshold for expressing the anxiety response ought to vary with the probability of threats occurring, and the individual's vulnerability to them if they do occur. These predictions are consistent with major patterns in the epidemiology of anxiety. Implications for research and treatment are discussed.
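
    In the standard signal-detection formalization, this prediction is explicit (a textbook decision rule, stated here as an illustration rather than the paper's own derivation): with prior threat probability p, an observation x should trigger the anxiety response when

      \[
        \frac{f(x \mid \text{threat})}{f(x \mid \text{safe})} \;>\;
        \beta^{*} \;=\; \frac{1-p}{p}\cdot\frac{C_{\mathrm{FA}}}{C_{\mathrm{miss}}}
      \]

    where C_FA is the cost of a false alarm and C_miss the cost of an undetected threat (the individual's vulnerability). As p or C_miss rises, the criterion beta* falls, so the response is expressed more readily, consistent with the argument above.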

  10. Novel web service selection model based on discrete group search.

    PubMed

    Zhai, Jie; Shao, Zhiqing; Guo, Yi; Zhang, Haiteng

    2014-01-01

    In our earlier work, we presented a novel formal method for the semiautomatic verification of specifications and for describing web service composition components by using abstract concepts. After verification, the instantiations of components were selected to satisfy the complex service performance constraints. However, selecting an optimal instantiation, which comprises different candidate services for each generic service, from a large number of instantiations is difficult. Therefore, we present a new evolutionary approach based on the discrete group search service (D-GSS) model. In obtaining the optimal multiconstraint instantiation of the complex component, the D-GSS model has competitive performance compared with other service selection models in terms of accuracy, efficiency, and ability to solve high-dimensional service composition component problems. We propose the cost function and the discrete group search optimizer (D-GSO) algorithm and study the convergence of the D-GSS model through verification and test cases.

  11. Open Issues in Evolutionary Robotics.

    PubMed

    Silva, Fernando; Duarte, Miguel; Correia, Luís; Oliveira, Sancho Moura; Christensen, Anders Lyhne

    2016-01-01

    One of the long-term goals in evolutionary robotics is to be able to automatically synthesize controllers for real autonomous robots based only on a task specification. While a number of studies have shown the applicability of evolutionary robotics techniques for the synthesis of behavioral control, researchers have consistently been faced with a number of issues preventing the widespread adoption of evolutionary robotics for engineering purposes. In this article, we review and discuss the open issues in evolutionary robotics. First, we analyze the benefits and challenges of simulation-based evolution and subsequent deployment of controllers versus evolution on real robotic hardware. Second, we discuss specific evolutionary computation issues that have plagued evolutionary robotics: (1) the bootstrap problem, (2) deception, and (3) the role of genomic encoding and genotype-phenotype mapping in the evolution of controllers for complex tasks. Finally, we address the absence of standard research practices in the field. We also discuss promising avenues of research. Our underlying motivation is the reduction of the current gap between evolutionary robotics and mainstream robotics, and the establishment of evolutionary robotics as a canonical approach for the engineering of autonomous robots.

  12. Observability in dynamic evolutionary models.

    PubMed

    López, I; Gámez, M; Carreño, R

    2004-02-01

    In this paper, observability problems are considered in basic dynamic evolutionary models for sexual and asexual populations. Observability means that the whole evolutionary process can be uniquely recovered from (partial) knowledge of certain phenotypic characteristics. Sufficient conditions are given to guarantee observability for both sexual and asexual populations near an evolutionarily stable state.

  13. Model-Based Diagnosis in a Power Distribution Test-Bed

    NASA Technical Reports Server (NTRS)

    Scarl, E.; McCall, K.

    1998-01-01

    The Rodon model-based diagnosis shell was applied to a breadboard test-bed, modeling an automated power distribution system. The constraint-based modeling paradigm and diagnostic algorithm were found to adequately represent the selected set of test scenarios.

  14. Understanding Air Transportation Market Dynamics Using a Search Algorithm for Calibrating Travel Demand and Price

    NASA Technical Reports Server (NTRS)

    Kumar, Vivek; Horio, Brant M.; DeCicco, Anthony H.; Hasan, Shahab; Stouffer, Virginia L.; Smith, Jeremy C.; Guerreiro, Nelson M.

    2015-01-01

    This paper presents a search-algorithm-based framework to calibrate origin-destination (O-D) market-specific airline ticket demands and prices for the Air Transportation System (ATS). This framework is used for calibrating an agent-based model of the air ticket buy-sell process, the Airline Evolutionary Simulation (Airline EVOS), whose fidelity of detail accounts for airline and consumer behaviors and the interdependencies they share with each other and with the NAS. More specifically, the algorithm simultaneously calibrates demand and airfares for each O-D market, to within a specified threshold of a pre-specified target value. The proposed algorithm is illustrated with market data targets provided by the Transportation System Analysis Model (TSAM) and the Airline Origin and Destination Survey (DB1B). Although we specify these models and data sources for this calibration exercise, the methods described in this paper are applicable to calibrating any low-level model of the ATS to other demand forecast model-based data. We argue that using a calibration algorithm such as the one presented here to synchronize ATS models with specialized demand forecast models is a powerful tool for establishing credible baseline conditions in experiments analyzing the effects of proposed policy changes to the ATS.
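
    As a rough sketch of such a calibration loop for a single O-D market (the proportional fare update and all names below are assumptions, not the paper's actual search algorithm), the fare is adjusted until simulated demand falls within the threshold of the target:

        # Hypothetical per-market calibration loop: adjust fare until the
        # simulated demand matches the target within a relative tolerance.
        def calibrate_market(simulate_demand, target_demand, fare, tol=0.01, max_iter=100):
            for _ in range(max_iter):
                demand = simulate_demand(fare)
                error = (demand - target_demand) / target_demand
                if abs(error) <= tol:
                    break
                fare *= 1.0 + 0.5 * error  # excess demand raises fare, shortfall lowers it
            return fare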

  15. Evolutionary Dynamic Multiobjective Optimization Via Kalman Filter Prediction.

    PubMed

    Muruganantham, Arrchana; Tan, Kay Chen; Vadakkepat, Prahlad

    2016-12-01

    Evolutionary algorithms are effective in solving static multiobjective optimization problems resulting in the emergence of a number of state-of-the-art multiobjective evolutionary algorithms (MOEAs). Nevertheless, the interest in applying them to solve dynamic multiobjective optimization problems has only been tepid. Benchmark problems, appropriate performance metrics, as well as efficient algorithms are required to further the research in this field. One or more objectives may change with time in dynamic optimization problems. The optimization algorithm must be able to track the moving optima efficiently. A prediction model can learn the patterns from past experience and predict future changes. In this paper, a new dynamic MOEA using Kalman filter (KF) predictions in decision space is proposed to solve the aforementioned problems. The predictions help to guide the search toward the changed optima, thereby accelerating convergence. A scoring scheme is devised to hybridize the KF prediction with a random reinitialization method. Experimental results and performance comparisons with other state-of-the-art algorithms demonstrate that the proposed algorithm is capable of significantly improving the dynamic optimization performance.
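
    As a rough illustration of the prediction step (the record does not give the filter equations, so the constant-velocity model below is an assumption), one small Kalman filter per decision variable can track the moving optimum and supply a predicted location for reseeding the population after a change:

        import numpy as np

        class ScalarKF:
            """Constant-velocity Kalman filter for one decision variable."""
            def __init__(self, q=1e-3, r=1e-2):
                self.x = np.zeros(2)                         # state: [position, velocity]
                self.P = np.eye(2)
                self.F = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity transition
                self.Q = q * np.eye(2)
                self.H = np.array([[1.0, 0.0]])              # only the position is observed
                self.R = np.array([[r]])

            def predict(self):
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                return self.x[0]                             # predicted optimum location

            def update(self, z):
                y = z - self.H @ self.x                      # innovation
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
                self.x = self.x + (K @ y).ravel()
                self.P = (np.eye(2) - K @ self.H) @ self.P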

  16. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
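
    The basic concepts named above can be made concrete with a toy sketch (a generic illustration, not the software tool the record describes): tournament selection, one-point crossover, and bit-flip mutation on bit strings, with count-the-ones as the fitness function:

        import random

        def genetic_algorithm(n_bits=20, pop_size=30, generations=50, p_mut=0.02):
            fitness = sum                                    # count-the-ones toy fitness
            pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
            for _ in range(generations):
                nxt = []
                while len(nxt) < pop_size:
                    p1 = max(random.sample(pop, 3), key=fitness)   # tournament selection
                    p2 = max(random.sample(pop, 3), key=fitness)
                    cut = random.randrange(1, n_bits)              # one-point crossover
                    child = p1[:cut] + p2[cut:]
                    child = [b ^ 1 if random.random() < p_mut else b for b in child]
                    nxt.append(child)
                pop = nxt
            return max(pop, key=fitness)

        print(genetic_algorithm())  # typically evolves to all (or nearly all) ones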

  17. Evaluation of Generation Alternation Models in Evolutionary Robotics

    NASA Astrophysics Data System (ADS)

    Oiso, Masashi; Matsumura, Yoshiyuki; Yasuda, Toshiyuki; Ohkura, Kazuhiro

    For efficient implementation of Evolutionary Algorithms (EAs) in a desktop grid computing environment, we propose a new generation alternation model called Grid-Oriented-Deletion (GOD) and compare it with conventional techniques. In previous research, generation alternation models have generally been evaluated using test functions; however, their exploration performance on real problems such as Evolutionary Robotics (ER) has not yet been made clear. We therefore investigate the relationship between the exploration performance of an EA on an ER problem and its generation alternation model. We applied four generation alternation models to Evolutionary Multi-Robotics (EMR), a package-pushing problem, to investigate their exploration performance. The results show that GOD is more effective than the other conventional models.

  18. Colour spaces in ecology and evolutionary biology.

    PubMed

    Renoult, Julien P; Kelber, Almut; Schaefer, H Martin

    2017-02-01

    The recognition that animals sense the world in a different way than we do has unlocked important lines of research in ecology and evolutionary biology. In practice, the subjective study of natural stimuli has been permitted by perceptual spaces, which are graphical models of how stimuli are perceived by a given animal. Because colour vision is arguably the best-known sensory modality in most animals, a diversity of colour spaces are now available to visual ecologists, ranging from generalist and basic models allowing rough but robust predictions on colour perception, to species-specific, more complex models giving accurate but context-dependent predictions. Selecting among these models is most often influenced by historical contingencies that have associated models to specific questions and organisms; however, these associations are not always optimal. The aim of this review is to provide visual ecologists with a critical perspective on how models of colour space are built, how well they perform and where their main limitations are with regard to their most frequent uses in ecology and evolutionary biology. We propose a classification of models based on their complexity, defined as whether and how they model the mechanisms of chromatic adaptation and receptor opponency, the nonlinear association between the stimulus and its perception, and whether or not models have been fitted to experimental data. Then, we review the effect of modelling these mechanisms on predictions of colour detection and discrimination, colour conspicuousness, colour diversity and diversification, and for comparing the perception of colour traits between distinct perceivers. While a few rules emerge (e.g. opponent log-linear models should be preferred when analysing very distinct colours), in general model parameters still have poorly known effects. Colour spaces have nonetheless permitted significant advances in ecology and evolutionary biology, and more progress is expected if ecologists

  19. RNA based evolutionary optimization

    NASA Astrophysics Data System (ADS)

    Schuster, Peter

    1993-12-01

    The notion of an RNA world has been introduced for a prebiotic scenario that is dominated by RNA molecules and their properties, in particular their capabilities to act as templates for reproduction and as catalysts for several cleavage and ligation reactions of polynucleotides and polypeptides. This notion is used here also for simple experimental assays which are well suited to study evolution in the test tube. In molecular evolution experiments fitness is determined in essence by the molecular structures of RNA molecules. Evidence is presented for adaptation to environment in cell-free media. RNA based molecular evolution experiments have led to interesting spin-offs in biotechnology, commonly called ‘applied molecular evolution’, which make use of Darwinian trial-and-error strategies in order to synthesize new pharmacological compounds and other advanced materials on a biological basis. Error propagation in RNA replication leads to the formation of mutant spectra called ‘quasispecies’. An increase in the error rate broadens the mutant spectrum. There exists a sharply defined threshold beyond which heredity breaks down and evolutionary adaptation becomes impossible. Almost all RNA viruses studied so far operate at conditions close to this error threshold. Quasispecies and error thresholds are important for an understanding of RNA virus evolution, and they may help to develop novel antiviral strategies. Evolution of RNA molecules can be studied and interpreted by considering secondary structures. The notion of sequence space introduces a distance between pairs of RNA sequences which is tantamount to counting the minimal number of point mutations required to convert the sequences into each other. The mean sensitivity of RNA secondary structures to mutation depends strongly on the base-pairing alphabet: structures from sequences which contain only one base pair (GC or AU) are much less stable against mutation than those derived from the natural (AUGC) sequences.
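
    The sharply defined threshold mentioned above is Eigen's error threshold, a standard quasispecies result rather than a detail specific to this record: the master sequence is maintained only while the sequence length L stays below ln(sigma)/p, where p is the per-base error rate and sigma is the selective superiority of the master. A one-line check:

        import math

        def max_sequence_length(p, sigma):
            """Eigen error threshold: longest maintainable sequence for error rate p."""
            return math.log(sigma) / p

        print(max_sequence_length(p=1e-4, sigma=2.0))  # ~6931 bases, viral-genome scale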

  20. Model-based fault detection and identification with online aerodynamic model structure selection

    NASA Astrophysics Data System (ADS)

    Lombaerts, T.

    2013-12-01

    This publication describes a recursive algorithm for the approximation of time-varying nonlinear aerodynamic models by means of a joint adaptive selection of the model structure and parameter estimation. This procedure is called adaptive recursive orthogonal least squares (AROLS) and is an extension and modification of the previously developed ROLS procedure. The algorithm is particularly useful for model-based fault detection and identification (FDI) of aerospace systems. After a failure, a completely new aerodynamic model can be elaborated recursively with respect to structure as well as parameter values. The performance of the identification algorithm is demonstrated on a simulation data set.
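
    The AROLS structure-selection step is not reproduced in the record; as a sketch of the recursive parameter-estimation half of such a scheme, a plain recursive least-squares update with a forgetting factor re-estimates the model parameters sample by sample:

        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.99):
            """One recursive least-squares step with forgetting factor lam."""
            phi = phi.reshape(-1, 1)                       # regressor column vector
            K = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
            theta = theta + (K * (y - phi.T @ theta)).ravel()
            P = (P - K @ phi.T @ P) / lam                  # covariance update
            return theta, P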

  1. Evolutionary theory, psychiatry, and psychopharmacology.

    PubMed

    Stein, Dan J

    2006-07-01

    Darwin's seminal publications in the nineteenth century laid the foundation for an evolutionary approach to psychology and psychiatry. Advances in 20th century evolutionary theory facilitated the development of evolutionary psychology and psychiatry as recognized areas of scientific investigation. In this century, advances in understanding the molecular basis of evolution, of the mind, and of psychopathology, offer the possibility of an integrated approach to understanding the proximal (psychobiological) and distal (evolutionary) mechanisms of interest to psychiatry and psychopharmacology. There is, for example, growing interest in the question of whether specific genetic variants mediate psychobiological processes that have evolutionary value in specific contexts, and of the implications of this for understanding the vulnerability to psychopathology and for considering the advantages and limitations of pharmacotherapy. The evolutionary value, and gene-environmental mediation, of early life programming is potentially a particularly rich area of investigation. Although evolutionary approaches to psychology and to medicine face important conceptual and methodological challenges, current work is increasingly sophisticated, and may prove to be an important foundational discipline for clinicians and researchers in psychiatry and psychopharmacology.

  2. System Design under Uncertainty: Evolutionary Optimization of the Gravity Probe-B Spacecraft

    NASA Technical Reports Server (NTRS)

    Pullen, Samuel P.; Parkinson, Bradford W.

    1994-01-01

    This paper discusses the application of evolutionary random-search algorithms (Simulated Annealing and Genetic Algorithms) to the problem of spacecraft design under performance uncertainty. Traditionally, spacecraft performance uncertainty has been measured by reliability. Published algorithms for reliability optimization are seldom used in practice because they oversimplify reality. The algorithm developed here uses random-search optimization to allow us to model the problem more realistically. Monte Carlo simulations are used to evaluate the objective function for each trial design solution. These methods have been applied to the Gravity Probe-B (GP-B) spacecraft being developed at Stanford University for launch in 1999. Results of the algorithm developed here for GP-B are shown, and their implications for design optimization by evolutionary algorithms are discussed.
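
    A minimal sketch of the pattern described above (all specifics are placeholders, not the GP-B design problem): simulated annealing in which every candidate design is scored by a Monte Carlo estimate of its expected cost under uncertainty:

        import math
        import random

        def monte_carlo_cost(design, simulate_once, n_samples=100):
            """Estimate expected cost by averaging noisy simulation runs."""
            return sum(simulate_once(design) for _ in range(n_samples)) / n_samples

        def anneal(initial, neighbor, simulate_once, t0=1.0, cooling=0.95, steps=200):
            current, cost = initial, monte_carlo_cost(initial, simulate_once)
            t = t0
            for _ in range(steps):
                cand = neighbor(current)
                c = monte_carlo_cost(cand, simulate_once)
                # Accept improvements always; accept worse designs with Boltzmann probability.
                if c < cost or random.random() < math.exp((cost - c) / t):
                    current, cost = cand, c
                t *= cooling
            return current, cost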

  3. Testing Strategies for Model-Based Development

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.

  4. Model-Based Prognostics of Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil; Bregon, Anibal

    2015-01-01

    Model-based prognostics has become a popular approach to solving the prognostics problem. However, almost all work has focused on prognostics of systems with continuous dynamics. In this paper, we extend the model-based prognostics framework to hybrid systems models that combine both continuous and discrete dynamics. In general, most systems are hybrid in nature, including those that combine physical processes with software. We generalize the model-based prognostics formulation to hybrid systems, and describe the challenges involved. We present a general approach for modeling hybrid systems, and overview methods for solving estimation and prediction in hybrid systems. As a case study, we consider the problem of conflict (i.e., loss of separation) prediction in the National Airspace System, in which the aircraft models are hybrid dynamical systems.

  5. Evolutionary and Developmental Modules

    PubMed Central

    Lacquaniti, Francesco; Ivanenko, Yuri P.; d’Avella, Andrea; Zelik, Karl E.; Zago, Myrka

    2013-01-01

    The identification of biological modules at the systems level often follows top-down decomposition of a task goal, or bottom-up decomposition of multidimensional data arrays into basic elements or patterns representing shared features. These approaches traditionally have been applied to mature, fully developed systems. Here we review some results from two other perspectives on modularity, namely the developmental and evolutionary perspective. There is growing evidence that modular units of development were highly preserved and recombined during evolution. We first consider a few examples of modules well identifiable from morphology. Next we consider the more difficult issue of identifying functional developmental modules. We dwell especially on modular control of locomotion to argue that the building blocks used to construct different locomotor behaviors are similar across several animal species, presumably related to ancestral neural networks of command. A recurrent theme from comparative studies is that the developmental addition of new premotor modules underlies the postnatal acquisition and refinement of several different motor behaviors in vertebrates. PMID:23730285

  6. Evolutionary cytogenetics in salamanders.

    PubMed

    Sessions, Stanley K

    2008-01-01

    Salamanders (Amphibia: Caudata/Urodela) have been the subject of numerous cytogenetic studies, and data on karyotypes and genome sizes are available for most groups. Salamanders show a more-or-less distinct dichotomy between families with large chromosome numbers and interspecific variation in chromosome number, relative size, and shape (i.e. position of the centromere), and those that exhibit very little variation in these karyological features. This dichotomy is the basis of a major model of karyotype evolution in salamanders involving a kind of 'karyotypic orthoselection'. Salamanders are also characterized by extremely large genomes (in terms of absolute mass of nuclear DNA) and extensive variation in genome size (and overall size of the chromosomes), which transcends variation in chromosome number and shape. The biological significance and evolution of chromosome number and shape within the karyotype is not yet understood, but genome size variation has been found to have strong phenotypic, biogeographic, and phylogenetic correlates that reveal information about the biological significance of this cytogenetic variable. Urodeles also present the advantage of only 10 families and less than 600 species, which facilitates the analysis of patterns within the entire order. The purpose of this review is to present a summary of what is currently known about overall patterns of variation in karyology and genome size in salamanders. These patterns are discussed within an evolutionary context.

  7. Evolutionary financial market models

    NASA Astrophysics Data System (ADS)

    Ponzi, A.; Aizawa, Y.

    2000-12-01

    We study computer simulations of two financial market models, the second a simplified version of the first. The first is a model of the self-organized formation and breakup of crowds of traders, motivated by the dynamics of competitive evolving systems, which shows interesting self-organized critical (SOC)-type behaviour without any fine tuning of control parameters. This SOC-type avalanching and stasis appear as realistic volatility clustering in the price returns time series. The market becomes highly ordered at ‘crashes’ but gradually loses this order through randomization during the intervening stasis periods. The second model is a model of stocks interacting through a competitive evolutionary dynamic in a common stock exchange. This model shows a self-organized ‘market confidence’. When this is high the market is stable, but when it gets low the market may become highly volatile. Volatile bursts rapidly increase the market confidence again. This model shows a phase transition as a temperature parameter is varied. The price returns time series in the transition region follows a very realistic power-law truncated Lévy distribution with clustered volatility and volatility superdiffusion. This model also shows generally positive stock cross-correlations, as is observed in real markets. This model may shed some light on why such phenomena are observed.

  8. Online clustering algorithms for radar emitter classification.

    PubMed

    Liu, Jun; Lee, Jim P Y; Li, Lingjie; Luo, Zhi-Quan; Wong, K Max

    2005-08-01

    Radar emitter classification is a special application of data clustering for classifying unknown radar emitters from received radar pulse samples. The main challenges of this task are the high dimensionality of radar pulse samples, small sample group size, and closely located radar pulse clusters. In this paper, two new online clustering algorithms are developed for radar emitter classification: One is model-based using the Minimum Description Length (MDL) criterion and the other is based on competitive learning. Computational complexity is analyzed for each algorithm and then compared. Simulation results show the superior performance of the model-based algorithm over competitive learning in terms of better classification accuracy, flexibility, and stability.
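
    The record does not give the MDL formulation; a generic sketch of MDL-style selection of the number of clusters scores each candidate clustering by a penalized negative log-likelihood (spherical Gaussian clusters are an assumption here) and keeps the clustering with the smallest score:

        import math

        def mdl_score(clusters, dim):
            """Description length of a clustering: fit term plus model-complexity penalty."""
            n = sum(len(c) for c in clusters)
            neg_loglik = 0.0
            for c in clusters:
                mean = [sum(x[j] for x in c) / len(c) for j in range(dim)]
                var = sum(sum((x[j] - mean[j]) ** 2 for j in range(dim)) for x in c)
                var = max(var / (len(c) * dim), 1e-12)     # spherical Gaussian variance
                neg_loglik += 0.5 * len(c) * dim * (math.log(2 * math.pi * var) + 1)
            n_params = len(clusters) * (dim + 1)           # mean vector + variance per cluster
            return neg_loglik + 0.5 * n_params * math.log(n)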

  9. Distributed model-based nonlinear sensor fault diagnosis in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Lo, Chun; Lynch, Jerome P.; Liu, Mingyan

    2016-01-01

    Wireless sensors operating in harsh environments have the potential to be error-prone. This paper presents a distributed model-based diagnosis algorithm that identifies nonlinear sensor faults. The diagnosis algorithm has advantages over existing fault diagnosis methods such as centralized model-based and distributed model-free methods. An algorithm is presented for detecting common nonlinearity faults without using reference sensors. The study introduces a model-based fault diagnosis framework that is implemented within a pair of wireless sensors. The detection of sensor nonlinearities is shown to be equivalent to solving the largest empty rectangle (LER) problem, given a set of features extracted from an analysis of sensor outputs. A low-complexity algorithm that gives an approximate solution to the LER problem is proposed for embedment in resource-constrained wireless sensors. By solving the LER problem, sensors corrupted by nonlinearity faults can be isolated and identified. Extensive analysis evaluates the performance of the proposed algorithm through simulation.
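
    To show what the paper's low-complexity approximation is approximating (the approximation itself is not reproduced here), a brute-force largest-empty-rectangle search over the grid induced by the feature points, assumed to lie in the unit square, looks like this:

        from itertools import combinations

        def largest_empty_rectangle(points):
            """Largest axis-aligned rectangle containing no point in its interior."""
            xs = sorted({0.0, 1.0, *(p[0] for p in points)})
            ys = sorted({0.0, 1.0, *(p[1] for p in points)})
            best, best_area = None, 0.0
            for x1, x2 in combinations(xs, 2):
                for y1, y2 in combinations(ys, 2):
                    if any(x1 < px < x2 and y1 < py < y2 for px, py in points):
                        continue                           # rectangle is not empty
                    area = (x2 - x1) * (y2 - y1)
                    if area > best_area:
                        best, best_area = (x1, y1, x2, y2), area
            return best, best_area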

  10. Discovery and Optimization of Materials Using Evolutionary Approaches.

    PubMed

    Le, Tu C; Winkler, David A

    2016-05-25

    Materials science is undergoing a revolution, generating valuable new materials such as flexible solar panels, biomaterials and printable tissues, new catalysts, polymers, and porous materials with unprecedented properties. However, the number of potentially accessible materials is immense. Artificial evolutionary methods such as genetic algorithms, which explore large, complex search spaces very efficiently, can be applied to the identification and optimization of novel materials more rapidly than by physical experiments alone. Machine learning models can augment experimental measurements of materials fitness to accelerate identification of useful and novel materials in vast materials composition or property spaces. This review discusses the problems of large materials spaces, the types of evolutionary algorithms employed to identify or optimize materials, and how materials can be represented mathematically as genomes, describes fitness landscapes and mutation operators commonly employed in materials evolution, and provides a comprehensive summary of published research on the use of evolutionary methods to generate new catalysts, phosphors, and a range of other materials. The review identifies the potential for evolutionary methods to revolutionize a wide range of manufacturing, medical, and materials-based industries.

  11. Model-based online learning with kernels.

    PubMed

    Li, Guoqi; Wen, Changyun; Li, Zheng Guo; Zhang, Aimin; Yang, Feng; Mao, Kezhi

    2013-03-01

    New optimization models and algorithms for online learning with kernels (OLK) in classification, regression, and novelty detection are proposed in a reproducing kernel Hilbert space. Unlike the stochastic gradient descent algorithm, called the naive online Reg minimization algorithm (NORMA), OLK algorithms are obtained by solving a constrained optimization problem based on the proposed models. By exploiting the techniques of the Lagrange dual problem, as in Vapnik's support vector machine (SVM), the solution of the optimization problem can be obtained iteratively, and the iteration process is similar to that of the NORMA. This further strengthens the foundation of OLK and enriches the research area of SVM. We also apply the obtained OLK algorithms to problems in classification, regression, and novelty detection, including real-time background subtraction, to show their effectiveness. It is illustrated that, based on the experimental results of both classification and regression, the accuracy of OLK algorithms is comparable with traditional SVM-based algorithms, such as SVM and least squares SVM (LS-SVM), and with the state-of-the-art algorithms, such as the kernel recursive least squares (KRLS) method and the projectron method, while it is slightly higher than that of NORMA. On the other hand, the computational cost of the OLK algorithm is comparable with or slightly lower than existing online methods, such as the above-mentioned NORMA, KRLS, and projectron methods, but much lower than that of SVM-based algorithms. In addition, different from SVM and LS-SVM, it is possible for OLK algorithms to be applied to non-stationary problems. Also, the applicability of OLK in novelty detection is illustrated by simulation results.

  12. Multimode model based defect characterization in composites

    NASA Astrophysics Data System (ADS)

    Roberts, R.; Holland, S.; Gregory, E.

    2016-02-01

    A newly-initiated research program for model-based defect characterization in CFRP composites is summarized. The work utilizes computational models of the interaction of NDE probing energy fields (ultrasound and thermography), to determine 1) the measured signal dependence on material and defect properties (forward problem), and 2) an assessment of performance-critical defect properties from analysis of measured NDE signals (inverse problem). Work is reported on model implementation for inspection of CFRP laminates containing delamination and porosity. Forward predictions of measurement response are presented, as well as examples of model-based inversion of measured data for the estimation of defect parameters.

  13. Evolutionary adaptive eye tracking for low-cost human computer interaction applications

    NASA Astrophysics Data System (ADS)

    Shen, Yan; Shin, Hak Chul; Sung, Won Jun; Khim, Sarang; Kim, Honglak; Rhee, Phill Kyu

    2013-01-01

    We present an evolutionary adaptive eye-tracking framework aimed at low-cost human-computer interaction. The main focus is to guarantee eye-tracking performance without using high-cost devices and strongly controlled situations. The performance optimization of eye tracking is formulated as the dynamic control problem of deciding on an eye-tracking algorithm structure and associated thresholds/parameters, where the dynamic control space is denoted by genotype and phenotype spaces. The evolutionary algorithm is responsible for exploring the genotype control space, and the reinforcement learning algorithm organizes the evolved genotype into a reactive phenotype. The evolutionary algorithm encodes an eye-tracking scheme as a genetic code based on image variation analysis. Then, the reinforcement learning algorithm defines internal states in a phenotype control space limited by the perceived genetic code and carries out interactive adaptations. The proposed method can achieve optimal performance by balancing the difficulty of real-time execution of the evolutionary algorithm against the drawback of the huge search space of the reinforcement learning algorithm. Extensive experiments were carried out using webcam image sequences and yielded very encouraging results. The framework can be readily applied to other low-cost vision-based human-computer interactions to address their intrinsic brittleness in unstable operational environments.

  14. Evolutionary L-systems

    NASA Astrophysics Data System (ADS)

    McCormack, Jon

    The problem confronting any contemporary artist wishing to use technology is in the relationship between algorithmic and creative processes. This relationship is traditionally a conflicting one, with the artist trying to bend and adapt to the rigour and exactness of the computational process, while aspiring for an unbounded freedom of expression. Software for creative applications has typically looked to artforms and processes from non-computational media as its primary source of inspiration and metaphor (e.g. the photographic darkroom, cinema and theatre, multi-track tape recording, etc.).

  15. Evolutionary Computing Methods for Spectral Retrieval

    NASA Technical Reports Server (NTRS)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seungwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Giovanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.

  16. On the Performance of Stochastic Model-Based Image Segmentation

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Sewchand, Wilfred

    1989-11-01

    A new stochastic model-based image segmentation technique for X-ray CT images has been developed and has been extended to the more general nondiffraction CT images, which include MRI, SPECT, and certain types of ultrasound images [1,2]. The nondiffraction CT image is modeled by a finite normal mixture. The technique utilizes an information-theoretic criterion to detect the number of region images, uses the Expectation-Maximization algorithm to estimate the parameters of the image, and uses the Bayesian classifier to segment the observed image. How does this technique over- or under-estimate the number of region images? What is the probability of error in the resulting segmentation? This paper addresses these two problems and is a continuation of [1,2].
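
    A minimal sketch of the estimation step named above, EM for a one-dimensional finite normal mixture (the information-theoretic choice of the number of regions is omitted):

        import math
        import random

        def em_normal_mixture(data, k, iters=100):
            n = len(data)
            w, mu, var = [1.0 / k] * k, random.sample(data, k), [1.0] * k
            for _ in range(iters):
                # E-step: posterior responsibility of each component for each point.
                resp = []
                for x in data:
                    p = [w[j] / math.sqrt(2 * math.pi * var[j])
                         * math.exp(-(x - mu[j]) ** 2 / (2 * var[j])) for j in range(k)]
                    s = sum(p)
                    resp.append([pj / s for pj in p])
                # M-step: re-estimate weights, means, and variances.
                for j in range(k):
                    nj = sum(r[j] for r in resp)
                    w[j] = nj / n
                    mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
                    var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                                     for r, x in zip(resp, data)) / nj, 1e-6)
            return w, mu, var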

  17. Subwavelength Lattice Optics by Evolutionary Design

    PubMed Central

    2015-01-01

    This paper describes a new class of structured optical materials—lattice opto-materials—that can manipulate the flow of visible light into a wide range of three-dimensional profiles using evolutionary design principles. Lattice opto-materials are based on the discretization of a surface into a two-dimensional (2D) subwavelength lattice whose individual lattice sites can be controlled to achieve a programmed optical response. To access a desired optical property, we designed a lattice evolutionary algorithm that includes and optimizes contributions from every element in the lattice. Lattice opto-materials can exhibit simple properties, such as on- and off-axis focusing, and can also concentrate light into multiple, discrete spots. We expanded the unit cell shapes of the lattice to achieve distinct, polarization-dependent optical responses from the same 2D patterned substrate. Finally, these lattice opto-materials can also be combined into architectures that resemble a new type of compound flat lens. PMID:25380062

  18. Evolutionary Optimization of a Geometrically Refined Truss

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly, traditional optimization theory is applied to this problem, with the cross-sectional area of each member optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has previously been applied to compliant mechanism design. The technique combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation (genetic algorithms and differential evolution) to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, a multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.

  19. Army ants: an evolutionary bestseller?

    PubMed

    Berghoff, Stefanie M

    2003-09-02

    Army ants are characterized by a complex combination of behavioral and morphological traits. Molecular data now indicate that army ant behavior has a unique evolutionary origin and has been conserved for more than 100 million years.

  20. Algorithmic methods to infer the evolutionary trajectories in cancer progression

    PubMed Central

    Graudenzi, Alex; Ramazzotti, Daniele; Sanz-Pamplona, Rebeca; De Sano, Luca; Mauri, Giancarlo; Moreno, Victor; Antoniotti, Marco; Mishra, Bud

    2016-01-01

    The genomic evolution inherent to cancer relates directly to a renewed focus on the voluminous next-generation sequencing data and machine learning for the inference of explanatory models of how the (epi)genomic events are choreographed in cancer initiation and development. However, despite the increasing availability of multiple additional -omics data, this quest has been frustrated by various theoretical and technical hurdles, mostly stemming from the dramatic heterogeneity of the disease. In this paper, we build on our recent work on the “selective advantage” relation among driver mutations in cancer progression and investigate its applicability to the modeling problem at the population level. Here, we introduce PiCnIc (Pipeline for Cancer Inference), a versatile, modular, and customizable pipeline to extract ensemble-level progression models from cross-sectional sequenced cancer genomes. The pipeline has many translational implications because it combines state-of-the-art techniques for sample stratification, driver selection, identification of fitness-equivalent exclusive alterations, and progression model inference. We demonstrate PiCnIc’s ability to reproduce much of the current knowledge on colorectal cancer progression as well as to suggest novel experimentally verifiable hypotheses. PMID:27357673

  1. PARALLEL MULTIOBJECTIVE EVOLUTIONARY ALGORITHMS FOR WASTE SOLVENT RECYCLING

    EPA Science Inventory

    Waste solvents are of great concern to the chemical process industries and to the public, and many technologies have been suggested and implemented in the chemical process industries to reduce waste and associated environmental impacts. In this article we have developed a novel p...

  2. Evolutionary algorithm for the neutrino factory front end design

    SciTech Connect

    Poklonskiy, Alexey A.; Neuffer, David

    2009-01-01

    The Neutrino Factory is an important tool in the long-term neutrino physics program. Substantial effort is put internationally into designing this facility in order to achieve the desired performance within the allotted budget. This accelerator is a secondary-beam machine: neutrinos are produced by the decay of muons. Muons, in turn, are produced by the decay of pions, which are produced by hitting a target with a beam of accelerated protons. Due to the physics of this process, extra conditioning of the pion beam coming from the target is needed in order to perform the subsequent acceleration effectively. The subsystem of the Neutrino Factory that performs this conditioning is called the Front End; its main performance characteristic is the number of muons produced.

  3. UAV Swarm Mission Planning Development Using Evolutionary Algorithms - Part I

    DTIC Science & Technology

    2008-05-01

    The system probabilistically chooses between mitosis, which facilitates exploitation, and meiosis, which enables exploration and recombination.

  4. Evolutionary optimization of rotational population transfer

    SciTech Connect

    Rouzee, Arnaud; Vrakking, Marc J. J.; Ghafur, Omair; Gijsbertsen, Arjan; Vidma, Konstantin; Meijer, Afric; Zande, Wim J. van der; Parker, David; Shir, Ofer M.; Baeck, Thomas

    2011-09-15

    We present experimental and numerical studies on control of rotational population transfer of NO(J=1/2) molecules to higher rotational states. We are able to transfer 57% of the population to the J=5/2 state and 46% to J=9/2, in good agreement with quantum mechanical simulations. The optimal pulse shapes are composed of pulse sequences with delays corresponding to the beat frequencies of states on the rotational ladder. The evolutionary algorithm is limited by experimental constraints such as volume averaging and the finite laser intensity used, the latter to circumvent ionization. Without these constraints, near-perfect control (>98%) is possible. In addition, we show that downward control, moving molecules from high to low rotational states, is also possible.

  5. Evolutionary complexity for protection of critical assets.

    SciTech Connect

    Battaile, Corbett Chandler; Chandross, Michael Evan

    2005-01-01

    This report summarizes the work performed as part of a one-year LDRD project, 'Evolutionary Complexity for Protection of Critical Assets.' A brief introduction is given to the topics of genetic algorithms and genetic programming, followed by a discussion of relevant results obtained during the project's research, and finally the conclusions drawn from those results. The focus is on using genetic programming to evolve solutions for relatively simple algebraic equations as a prototype application for evolving complexity in computer codes. The results were obtained using the lil-gp genetic programming system, a C code for evolving solutions to user-defined problems and functions. These results suggest that genetic programs are not well suited to evolving complexity for critical asset protection because they cannot efficiently evolve solutions to complex problems, and they introduce unacceptable performance penalties into solutions for simple ones.

  6. Evolutionary Technique for Designing Optimized Arrays

    NASA Astrophysics Data System (ADS)

    Villazón, J.; Ibañez, A.

    2011-06-01

    Many ultrasonic inspection applications in industry could benefit from the use of phased-array distributions specifically designed for them. Some common design requirements are: to adapt the shape of the array to that of the part to be inspected, to use large apertures to increase lateral resolution, to find a layout of elements that avoids artifacts produced by lateral and/or grating lobes, and to keep the total number of independent elements (and the number of control channels) as low as possible to reduce the complexity and cost of the inspection system. Recent advances in transducer technology have made it possible to design and build arrays with non-regular layouts of elements. In this paper we propose to use Evolutionary Algorithms to find layouts of ultrasonic arrays (whether 1D or 2D) that approach a set of specified beampattern characteristics using a low number of elements.

  7. Using Evolutionary Computation on GPS Position Correction

    PubMed Central

    2014-01-01

    More and more devices are equipped with the global positioning system (GPS). However, handheld devices with consumer-grade GPS receivers usually have low positioning accuracy. A position correction algorithm is therefore useful in this case. In this paper, we propose an evolutionary computation based technique to generate a correction function using two GPS receivers and a known reference location. By locating one GPS receiver on the known location and combining its longitude and latitude information with the exact positioning information, the proposed technique is capable of evolving a correction function. The proposed technique can be implemented and executed on handheld devices without hardware reconfiguration. Experiments are conducted to demonstrate the performance of the proposed technique. Positioning error could be significantly reduced from the order of 10 m to the order of 1 m. PMID:24578657

  8. Molluscan Evolutionary Genomics

    SciTech Connect

    Simison, W. Brian; Boore, Jeffrey L.

    2005-12-01

    In the last 20 years there have been dramatic advances in techniques of high-throughput DNA sequencing, most recently accelerated by the Human Genome Project, a program that has determined the three billion base pair code on which we are based. Now this tremendous capability is being directed at other genome targets that are being sampled across the broad range of life. This opens up opportunities as never before for evolutionary and organismal biologists to address questions of both processes and patterns of organismal change. We stand at the dawn of a new 'modern synthesis' period, paralleling that of the early 20th century when the fledgling field of genetics first identified the underlying basis for Darwin's theory. We must now unite the efforts of systematists, paleontologists, mathematicians, computer programmers, molecular biologists, developmental biologists, and others in the pursuit of discovering what genomics can teach us about the diversity of life. Genome-level sampling for mollusks to date has mostly been limited to mitochondrial genomes and it is likely that these will continue to provide the best targets for broad phylogenetic sampling in the near future. However, we are just beginning to see an inroad into complete nuclear genome sequencing, with several mollusks and other eutrochozoans having been selected for work about to begin. Here, we provide an overview of the state of molluscan mitochondrial genomics, highlight a few of the discoveries from this research, outline the promise of broadening this dataset, describe upcoming projects to sequence whole mollusk nuclear genomes, and challenge the community to prepare for making the best use of these data.

  9. Evolutionary constraints or opportunities?

    PubMed Central

    Sharov, Alexei A.

    2014-01-01

    Natural selection is traditionally viewed as a leading factor of evolution, whereas variation is assumed to be random and non-directional. Any order in variation is attributed to epigenetic or developmental constraints that can hinder the action of natural selection. In contrast I consider the positive role of epigenetic mechanisms in evolution because they provide organisms with opportunities for rapid adaptive change. Because the term “constraint” has negative connotations, I use the term “regulated variation” to emphasize the adaptive nature of phenotypic variation, which helps populations and species to survive and evolve in changing environments. The capacity to produce regulated variation is a phenotypic property, which is not described in the genome. Instead, the genome acts as a switchboard, where mostly random mutations switch “on” or “off” preexisting functional capacities of organism components. Thus, there are two channels of heredity: informational (genomic) and structure-functional (phenotypic). Functional capacities of organisms most likely emerged in a chain of modifications and combinations of more simple ancestral functions. The role of DNA has been to keep records of these changes (without describing the result) so that they can be reproduced in the following generations. Evolutionary opportunities include adjustments of individual functions, multitasking, connection between various components of an organism, and interaction between organisms. The adaptive nature of regulated variation can be explained by the differential success of lineages in macro-evolution. Lineages with more advantageous patterns of regulated variation are likely to produce more species and secure more resources (i.e., long-term lineage selection). PMID:24769155

  10. Configurable pattern-based evolutionary biclustering of gene expression data

    PubMed Central

    2013-01-01

    Background: Biclustering algorithms for microarray data aim at discovering functionally related gene sets under different subsets of experimental conditions. Due to the problem complexity and the characteristics of microarray datasets, heuristic searches are usually used instead of exhaustive algorithms. Also, the comparison among different techniques is still a challenge. The obtained results vary in relevant features such as the number of genes or conditions, which makes it difficult to carry out a fair comparison. Moreover, existing approaches do not allow the user to specify any preferences on these properties. Results: Here, we present the first biclustering algorithm in which it is possible to particularize several bicluster features in terms of different objectives. This can be done by tuning the specified features in the algorithm or by incorporating new objectives into the search. Furthermore, our approach bases the bicluster evaluation on the use of expression patterns, being able to recognize both shifting and scaling patterns, either simultaneously or not. Evolutionary computation has been chosen as the search strategy, naming our proposal Evo-Bexpa (Evolutionary Biclustering based in Expression Patterns). Conclusions: We have conducted experiments on both synthetic and real datasets demonstrating Evo-Bexpa's ability to obtain meaningful biclusters. Synthetic experiments have been designed in order to compare Evo-Bexpa's performance with other approaches when looking for perfect patterns. Experiments with four different real datasets also confirm the proper performance of our algorithm, whose results have been biologically validated through Gene Ontology. PMID:23433178
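
    The two pattern notions can be illustrated with a simple check; this reflects the standard definitions, not Evo-Bexpa's actual objective functions. Rows of a bicluster follow a shifting pattern when they differ by additive constants and a scaling pattern when they differ by multiplicative constants:

        def _variance(vals):
            m = sum(vals) / len(vals)
            return sum((v - m) ** 2 for v in vals) / len(vals)

        def is_shifting(bicluster, tol=1e-9):
            """True if every row equals the first row plus a constant."""
            base = bicluster[0]
            return all(_variance([row[j] - base[j] for j in range(len(base))]) <= tol
                       for row in bicluster)

        def is_scaling(bicluster, tol=1e-9):
            """True if every row equals the first row times a constant (base nonzero)."""
            base = bicluster[0]
            return all(_variance([row[j] / base[j] for j in range(len(base))]) <= tol
                       for row in bicluster)

        print(is_shifting([[1, 2, 3], [4, 5, 6]]))  # True: second row is first row + 3
        print(is_scaling([[1, 2, 3], [2, 4, 6]]))   # True: second row is first row * 2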

  11. Evolutionary design of interfacial phase change van der Waals heterostructures.

    PubMed

    Kalikka, Janne; Zhou, Xilin; Behera, Jitendra; Nannicini, Giacomo; Simpson, Robert E

    2016-10-27

    We use an evolutionary algorithm to explore the design space of hexagonal Ge2Sb2Te5, a van der Waals layered two-dimensional crystal heterostructure. The Ge2Sb2Te5 structure is more complicated than previously thought. Predominant features include layers of Ge3Sb2Te6 and Ge1Sb2Te4 two-dimensional crystals that interact through Te-Te van der Waals bonds. Interestingly, (Ge/Sb)-Te-(Ge/Sb)-Te alternation is a common feature of the most stable structures in each generation of the evolution. This emergent rule provides an important structural motif that must be included in the design of high-performance Sb2Te3-GeTe van der Waals heterostructure superlattices with interfacial atomic switching capability. The structures predicted by the algorithm agree well with experimental measurements on highly oriented and single-crystal Ge2Sb2Te5 samples. By analysing the structures optimised by the evolutionary algorithm, we show that diffusive atomic switching is probable by Ge atoms undergoing a transition at the van der Waals interface from layers of Ge3Sb2Te6 to Ge1Sb2Te4, thus producing two blocks of Ge2Sb2Te5. Evolutionary methods present an efficient approach to exploring the enormous multi-dimensional design parameter space of van der Waals bonded heterostructure superlattices.

  12. Model-Based Diagnostics for Propellant Loading Systems

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Foygel, Michael; Smelyanskiy, Vadim N.

    2011-01-01

    The loading of spacecraft propellants is a complex, risky operation. Therefore, diagnostic solutions are necessary to quickly identify when a fault occurs, so that recovery actions can be taken or an abort procedure can be initiated. Model-based diagnosis solutions, established using an in-depth analysis and understanding of the underlying physical processes, offer the advanced capability to quickly detect and isolate faults, identify their severity, and predict their effects on system performance. We develop a physics-based model of a cryogenic propellant loading system, which describes the complex dynamics of liquid hydrogen filling from a storage tank to an external vehicle tank, as well as the influence of different faults on this process. The model takes into account the main physical processes such as highly nonequilibrium condensation and evaporation of the hydrogen vapor, pressurization, and also the dynamics of liquid hydrogen and vapor flows inside the system in the presence of helium gas. Since the model incorporates multiple faults in the system, it provides a suitable framework for model-based diagnostics and prognostics algorithms. Using this model, we analyze the effects of faults on the system, derive symbolic fault signatures for the purposes of fault isolation, and perform fault identification using a particle filter approach. We demonstrate the detection, isolation, and identification of a number of faults using simulation-based experiments.
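
    A generic bootstrap particle filter conveys the fault-identification step described above; the propagate and likelihood functions are placeholders standing in for the cryogenic loading model, not the model itself:

        import random

        def particle_filter(particles, observations, propagate, likelihood):
            for z in observations:
                # Predict: push each particle (a fault-parameter hypothesis) through the model.
                particles = [propagate(p) for p in particles]
                # Update: weight particles by how well they explain the measurement.
                weights = [likelihood(z, p) for p in particles]
                total = sum(weights)
                weights = [w / total for w in weights]
                # Resample: concentrate particles on high-likelihood hypotheses.
                particles = random.choices(particles, weights=weights, k=len(particles))
            return particles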

  13. Sandboxes for Model-Based Inquiry

    ERIC Educational Resources Information Center

    Brady, Corey; Holbert, Nathan; Soylu, Firat; Novak, Michael; Wilensky, Uri

    2015-01-01

    In this article, we introduce a class of constructionist learning environments that we call "Emergent Systems Sandboxes" ("ESSs"), which have served as a centerpiece of our recent work in developing curriculum to support scalable model-based learning in classroom settings. ESSs are a carefully specified form of virtual…

  14. Model-Based Inquiries in Chemistry

    ERIC Educational Resources Information Center

    Khan, Samia

    2007-01-01

    In this paper, instructional strategies for sustaining model-based inquiry in an undergraduate chemistry class were analyzed through data collected from classroom observations, a student survey, and in-depth problem-solving sessions with the instructor and students. Analysis of teacher-student interactions revealed a cyclical pattern in which…

  15. Model-based Training of Situated Skills.

    ERIC Educational Resources Information Center

    Khan, Tariq M.; Brown, Keith

    2000-01-01

    Addresses areas of situated knowledge (metacognitive skills and affective skills) that have been ignored in intelligent computer-aided learning systems. Focuses on model-based reasoning, including contextualized and decontextualized knowledge, and examines an instructional method that supports situated knowledge by providing opportunities for…

  16. A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics

    NASA Astrophysics Data System (ADS)

    Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.

    2015-12-01

    This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics and designs a food chain algorithm for it. Illustrative examples are selected for simulation and comparison. Numerical results show that the performance of the food chain algorithm is better than that of the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.

  17. Quantum Algorithms

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial-time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.

  18. A review of estimation of distribution algorithms in bioinformatics

    PubMed Central

    Armañanzas, Rubén; Inza, Iñaki; Santana, Roberto; Saeys, Yvan; Flores, Jose Luis; Lozano, Jose Antonio; Peer, Yves Van de; Blanco, Rosa; Robles, Víctor; Bielza, Concha; Larrañaga, Pedro

    2008-01-01

    Evolutionary search algorithms have become an essential asset in the algorithmic toolbox for solving high-dimensional optimization problems across a broad range of bioinformatics applications. Genetic algorithms, the most well-known and representative evolutionary search technique, account for the majority of such applications. Estimation of distribution algorithms (EDAs) offer a novel evolutionary paradigm that constitutes a natural and attractive alternative to genetic algorithms. They make use of a probabilistic model, learnt from the promising solutions, to guide the search process. In this paper, we set out a basic taxonomy of EDA techniques, underlining the nature and complexity of the probabilistic model of each EDA variant. We review a set of innovative works that make use of EDA techniques to solve challenging bioinformatics problems, emphasizing the EDA paradigm's potential for further research in this domain. PMID:18822112
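
    To make the paradigm concrete, the simplest EDA (the univariate marginal distribution algorithm, UMDA) models each bit of a solution independently; a minimal sketch for maximizing a binary fitness function:

      import numpy as np

      def umda(fitness, n_bits, pop_size=100, elite_frac=0.3, generations=50, seed=0):
          rng = np.random.default_rng(seed)
          p = np.full(n_bits, 0.5)                 # independent Bernoulli model
          for _ in range(generations):
              pop = rng.random((pop_size, n_bits)) < p          # sample the model
              scores = np.array([fitness(ind) for ind in pop])
              elite = pop[np.argsort(scores)[-int(elite_frac * pop_size):]]
              p = elite.mean(axis=0).clip(0.05, 0.95)  # re-estimate the marginals
          return p

      # Usage: on OneMax the marginals are driven towards 1.
      print(umda(lambda x: x.sum(), n_bits=32).round(2))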

  19. How cultural evolutionary theory can inform social psychology and vice versa.

    PubMed

    Mesoudi, Alex

    2009-10-01

    Cultural evolutionary theory is an interdisciplinary field in which human culture is viewed as a Darwinian process of variation, competition, and inheritance, and the tools, methods, and theories developed by evolutionary biologists to study genetic evolution are adapted to study cultural change. It is argued here that an integration of the theories and findings of mainstream social psychology and of cultural evolutionary theory can be mutually beneficial. Social psychology provides cultural evolution with a set of empirically verified microevolutionary cultural processes, such as conformity, model-based biases, and content biases, that are responsible for specific patterns of cultural change. Cultural evolutionary theory provides social psychology with ultimate explanations for, and an understanding of the population-level consequences of, many social psychological phenomena, such as social learning, conformity, social comparison, and intergroup processes, as well as linking social psychology with other social science disciplines such as cultural anthropology, archaeology, and sociology.

  20. An efficient binomial model-based measure for sequence comparison and its application.

    PubMed

    Liu, Xiaoqing; Dai, Qi; Li, Lihua; He, Zerong

    2011-04-01

    Sequence comparison is one of the major tasks in bioinformatics, and it can serve as evidence of structural and functional conservation, as well as of evolutionary relations. There are several similarity/dissimilarity measures for sequence comparison, but challenges remain. This paper presents a binomial model-based measure to analyze biological sequences. With the help of a random indicator, the occurrence of a word at any position of a sequence can be regarded as a Bernoulli random variable, and the total word count is then binomially distributed. Using a recursive formula, we compute the binomial probability of the word count and propose a binomial model-based measure based on relative entropy. The proposed measure was tested in extensive experiments, including classification of HEV genotypes and phylogenetic analysis, and was further compared with alignment-based and alignment-free measures. The results demonstrate that the proposed binomial model-based measure is more efficient.
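
    The paper's recursive binomial computation is not reproduced here, but the general shape of such an alignment-free comparison (word counts summarized per sequence, then compared via relative entropy) can be sketched as follows; the symmetrized Kullback-Leibler form and the add-one smoothing are illustrative choices, and sequences are assumed to contain only ACGT.

      import numpy as np
      from itertools import product

      def kmer_counts(seq, k=3, alphabet="ACGT"):
          index = {"".join(w): i for i, w in enumerate(product(alphabet, repeat=k))}
          counts = np.zeros(len(index))
          for i in range(len(seq) - k + 1):
              counts[index[seq[i:i + k]]] += 1
          return counts

      def relative_entropy_distance(s1, s2, k=3):
          # Add-one smoothing avoids log(0); smaller values mean more similar.
          p = kmer_counts(s1, k) + 1.0
          q = kmer_counts(s2, k) + 1.0
          p, q = p / p.sum(), q / q.sum()
          return 0.5 * np.sum(p * np.log(p / q)) + 0.5 * np.sum(q * np.log(q / p))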

  1. Development of X-TOOLSS: Preliminary Design of Space Systems Using Evolutionary Computation

    NASA Technical Reports Server (NTRS)

    Schnell, Andrew R.; Hull, Patrick V.; Turner, Mike L.; Dozier, Gerry; Alverson, Lauren; Garrett, Aaron; Reneau, Jarred

    2008-01-01

    Evolutionary computational (EC) techniques such as genetic algorithms (GA) have been identified as promising methods to explore the design space of mechanical and electrical systems at the earliest stages of design. In this paper the authors summarize their research in the use of evolutionary computation to develop preliminary designs for various space systems. An evolutionary computational solver developed over the course of the research, X-TOOLSS (Exploration Toolset for the Optimization of Launch and Space Systems) is discussed. With the success of early, low-fidelity example problems, an outline of work involving more computationally complex models is discussed.
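
    A generic real-coded genetic algorithm of the kind such solvers wrap is sketched below (tournament selection, blend crossover, Gaussian mutation); every parameter choice is illustrative, and nothing here is specific to X-TOOLSS.

      import numpy as np

      def real_ga(obj, bounds, pop=60, gens=120, seed=0):
          # Minimizes obj over box-bounded design variables; no elitism, for brevity.
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, float).T
          X = rng.uniform(lo, hi, (pop, len(lo)))
          for _ in range(gens):
              f = np.apply_along_axis(obj, 1, X)
              a, b = rng.integers(pop, size=(2, pop))
              parents = np.where((f[a] < f[b])[:, None], X[a], X[b])    # tournaments
              alpha = rng.random((pop, len(lo)))
              children = alpha * parents + (1 - alpha) * parents[::-1]  # blend crossover
              children += rng.normal(0.0, 0.05 * (hi - lo), children.shape)
              X = np.clip(children, lo, hi)
          f = np.apply_along_axis(obj, 1, X)
          return X[np.argmin(f)]

      # Usage: minimize a toy design objective over two bounded variables.
      print(real_ga(lambda v: (v[0] - 1) ** 2 + v[1] ** 2, [(-5, 5), (-5, 5)]))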

  2. Model-based Processing of Micro-cantilever Sensor Arrays

    SciTech Connect

    Tringe, J W; Clague, D S; Candy, J V; Lee, C L; Rudd, R E; Burnham, A K

    2004-11-17

    We develop a model-based processor (MBP) for a micro-cantilever array sensor to detect target species in solution. After discussing the generalized framework for this problem, we develop the specific model used in this study. We perform a proof-of-concept experiment, fit the model parameters to the measured data and use them to develop a Gauss-Markov simulation. We then investigate two cases of interest: (1) averaged deflection data, and (2) multi-channel data. In both cases the evaluation proceeds by first performing a model-based parameter estimation to extract the model parameters, next performing a Gauss-Markov simulation, designing the optimal MBP and finally applying it to measured experimental data. The simulation is used to evaluate the performance of the MBP in the multi-channel case and compare it to a "smoother" ("averager") typically used in this application. It was shown that the MBP not only provides a significant gain (~80 dB) in signal-to-noise ratio (SNR), but also consistently outperforms the smoother by 40-60 dB. Finally, we apply the processor to the smoothed experimental data and demonstrate its capability for chemical detection. The MBP performs quite well, though it includes a correctable systematic bias error. The project's primary accomplishment was the successful application of model-based processing to signals from micro-cantilever arrays: 40-60 dB improvement vs. the smoother algorithm was demonstrated. This result was achieved through the development of appropriate mathematical descriptions for the chemical and mechanical phenomena, and incorporation of these descriptions directly into the model-based signal processor. A significant challenge was the development of the framework which would maximize the usefulness of the signal processing algorithms while ensuring the accuracy of the mathematical description of the chemical-mechanical signal. Experimentally, the difficulty was to identify and characterize the non
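
    For a linear Gauss-Markov state-space model, the model-based processor takes the form of a Kalman filter. A minimal sketch, with A, C, Q, R standing for whatever state, measurement, and noise matrices the fitted cantilever model supplies:

      import numpy as np

      def kalman_filter(ys, A, C, Q, R, x0, P0):
          # x[k+1] = A x[k] + w,  y[k] = C x[k] + v,  w ~ N(0, Q), v ~ N(0, R).
          x, P, estimates = x0, P0, []
          for y in ys:
              x = A @ x                        # predict
              P = A @ P @ A.T + Q
              e = y - C @ x                    # innovation
              S = C @ P @ C.T + R
              K = P @ C.T @ np.linalg.inv(S)   # Kalman gain
              x = x + K @ e                    # update
              P = (np.eye(len(x)) - K @ C) @ P
              estimates.append(x.copy())
          return np.array(estimates)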

  3. Model based iterative reconstruction for Bright Field electron tomography

    NASA Astrophysics Data System (ADS)

    Venkatakrishnan, Singanallur V.; Drummy, Lawrence F.; De Graef, Marc; Simmons, Jeff P.; Bouman, Charles A.

    2013-02-01

    Bright Field (BF) electron tomography (ET) has been widely used in the life sciences to characterize biological specimens in 3D. While BF-ET is the dominant modality in the life sciences, it has generally been avoided in the physical sciences because of anomalous measurements caused by a phenomenon called "Bragg scatter", which is visible when crystalline samples are imaged. These measurements cause undesirable artifacts in the reconstruction when typical algorithms such as Filtered Back Projection (FBP) and the Simultaneous Iterative Reconstruction Technique (SIRT) are applied to the data. Model based iterative reconstruction (MBIR) provides a powerful framework for tomographic reconstruction that incorporates a model for data acquisition, noise in the measurement, and a model for the object, to obtain reconstructions that are qualitatively superior and quantitatively accurate. In this paper we present a novel MBIR algorithm for BF-ET which accounts for the presence of anomalous measurements from Bragg scatter in the data during the iterative reconstruction. Our method handles the anomalies by formulating the reconstruction as the minimization of a cost function which rejects measurements that deviate significantly from the Beer's law model widely assumed for BF-ET. Results on simulated as well as real data show that our method can dramatically improve the reconstructions compared to FBP and to MBIR without anomaly rejection, suppressing the artifacts due to the Bragg anomalies.
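
    The anomaly-rejection idea can be sketched as robust weighted least squares: measurements whose residual against the Beer's-law forward model is large are down-weighted and so contribute little to the update. The real MBIR cost uses a neighborhood prior and a more careful optimizer; this gradient-descent toy with a plain L2 prior only illustrates the rejection mechanism.

      import numpy as np

      def robust_mbir(A, y, beta=1.0, delta=0.1, iters=500, step=1e-3):
          # A: forward projection matrix, y: log-normalized measurements.
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              r = A @ x - y
              w = delta / np.maximum(np.abs(r), delta)   # Huber-type weights: 1 for
                                                         # small residuals, < 1 for outliers
              x -= step * (A.T @ (w * r) + beta * x)     # anomalies barely contribute
          return x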

  4. A Generative Control Capability for a Model-based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Nayak, P. Pandurang

    1997-01-01

    This paper describes Burton, a core element of a new generation of goal-directed model-based autonomous executives. This executive makes extensive use of component-based declarative models to analyze novel situations and generate novel control actions both at the goal and hardware levels. It uses an extremely efficient online propositional inference engine to determine likely states consistent with current observations and optimal target states that achieve high-level goals. It incorporates a flexible generative control sequencing algorithm within the reactive loop to bridge the gap between current and target states. The system is able to detect and avoid damaging and irreversible situations. After every control action it uses its model and sensors to detect anomalous situations and immediately take corrective action. Efficiency is achieved through a series of model compilation and online policy construction methods, and by exploiting general conventions of hardware design that permit a divide and conquer approach to planning. The paper presents a formal characterization of Burton's capability, develops efficient algorithms, and reports on experience with the implementation in the domain of spacecraft autonomy. Burton is being incorporated as one of the key elements of the Remote Agent core autonomy architecture for Deep Space One, the first spacecraft of NASA's New Millennium program.

  5. Symbolic Processing Combined with Model-Based Reasoning

    NASA Technical Reports Server (NTRS)

    James, Mark

    2009-01-01

    A computer program for the detection of present and prediction of future discrete states of a complex, real-time engineering system utilizes a combination of symbolic processing and numerical model-based reasoning. One of the biggest weaknesses of a purely symbolic approach is that it enables prediction of only future discrete states while missing all unmodeled states or leading to incorrect identification of an unmodeled state as a modeled one. A purely numerical approach is based on a combination of statistical methods and mathematical models of the applicable physics and necessitates development of a complete model to the level of fidelity required for prediction. In addition, a purely numerical approach does not afford the ability to qualify its results without some form of symbolic processing. The present software implements numerical algorithms to detect unmodeled events and symbolic algorithms to predict expected behavior, correlate the expected behavior with the unmodeled events, and interpret the results in order to predict future discrete states. The approach embodied in this software differs from that of the BEAM methodology (aspects of which have been discussed in several prior NASA Tech Briefs articles), which provides for prediction of future measurements in the continuous-data domain.

  6. Systems Engineering Interfaces: A Model Based Approach

    NASA Technical Reports Server (NTRS)

    Fosse, Elyse; Delp, Christopher

    2013-01-01

    Currently: Ops Rev has developed and maintains a framework that includes interface-specific language, patterns, and Viewpoints. Ops Rev implements the framework to design MOS 2.0 and its 5 Mission Services. Implementation de-couples interfaces and instances of interaction. Future: A Mission MOSE implements the approach and uses the model-based artifacts for reviews. The framework extends further into the ground data layers and provides a unified methodology.

  7. Evolutionary foundations for cancer biology

    PubMed Central

    Aktipis, C Athena; Nesse, Randolph M

    2013-01-01

    New applications of evolutionary biology are transforming our understanding of cancer. The articles in this special issue provide many specific examples, such as microorganisms inducing cancers, the significance of within-tumor heterogeneity, and the possibility that lower dose chemotherapy may sometimes promote longer survival. Underlying these specific advances is a large-scale transformation, as cancer research incorporates evolutionary methods into its toolkit, and asks new evolutionary questions about why we are vulnerable to cancer. Evolution explains why cancer exists at all, how neoplasms grow, why cancer is remarkably rare, and why it occurs despite powerful cancer suppression mechanisms. Cancer exists because of somatic selection; mutations in somatic cells result in some dividing faster than others, in some cases generating neoplasms. Neoplasms grow, or do not, in complex cellular ecosystems. Cancer is relatively rare because of natural selection; our genomes were derived disproportionally from individuals with effective mechanisms for suppressing cancer. Cancer occurs nonetheless for the same six evolutionary reasons that explain why we remain vulnerable to other diseases. These four principles—cancers evolve by somatic selection, neoplasms grow in complex ecosystems, natural selection has shaped powerful cancer defenses, and the limitations of those defenses have evolutionary explanations—provide a foundation for understanding, preventing, and treating cancer. PMID:23396885

  8. Evolutionary foundations for cancer biology.

    PubMed

    Aktipis, C Athena; Nesse, Randolph M

    2013-01-01

    New applications of evolutionary biology are transforming our understanding of cancer. The articles in this special issue provide many specific examples, such as microorganisms inducing cancers, the significance of within-tumor heterogeneity, and the possibility that lower dose chemotherapy may sometimes promote longer survival. Underlying these specific advances is a large-scale transformation, as cancer research incorporates evolutionary methods into its toolkit, and asks new evolutionary questions about why we are vulnerable to cancer. Evolution explains why cancer exists at all, how neoplasms grow, why cancer is remarkably rare, and why it occurs despite powerful cancer suppression mechanisms. Cancer exists because of somatic selection; mutations in somatic cells result in some dividing faster than others, in some cases generating neoplasms. Neoplasms grow, or do not, in complex cellular ecosystems. Cancer is relatively rare because of natural selection; our genomes were derived disproportionally from individuals with effective mechanisms for suppressing cancer. Cancer occurs nonetheless for the same six evolutionary reasons that explain why we remain vulnerable to other diseases. These four principles (cancers evolve by somatic selection, neoplasms grow in complex ecosystems, natural selection has shaped powerful cancer defenses, and the limitations of those defenses have evolutionary explanations) provide a foundation for understanding, preventing, and treating cancer.

  9. Evolutionary genetics of maternal effects

    PubMed Central

    Wolf, Jason B.; Wade, Michael J.

    2016-01-01

    Maternal genetic effects (MGEs), where genes expressed by mothers affect the phenotype of their offspring, are important sources of phenotypic diversity in a myriad of organisms. We use a single‐locus model to examine how MGEs contribute to patterns of heritable and nonheritable variation and influence evolutionary dynamics in randomly mating and inbreeding populations. We elucidate the influence of MGEs by examining the offspring genotype‐phenotype relationship, which determines how MGEs affect evolutionary dynamics in response to selection on offspring phenotypes. This approach reveals important results that are not apparent from classic quantitative genetic treatments of MGEs. We show that additive and dominance MGEs make different contributions to evolutionary dynamics and patterns of variation, which are differentially affected by inbreeding. Dominance MGEs make the offspring genotype‐phenotype relationship frequency dependent, resulting in the appearance of negative frequency‐dependent selection, while additive MGEs contribute a component of parent‐of‐origin dependent variation. Inbreeding amplifies the contribution of MGEs to the additive genetic variance and therefore enhances their evolutionary response. Considering the evolutionary dynamics of allele frequency change on an adaptive landscape, we show that this landscape differs from the mean fitness surface and that, therefore, under some conditions fitness peaks can exist but not be “available” to the evolving population. PMID:26969266

  10. The major synthetic evolutionary transitions

    PubMed Central

    Solé, Ricard

    2016-01-01

    Evolution is marked by well-defined events involving profound innovations that are known as ‘major evolutionary transitions’. They involve the integration of autonomous elements into a new, higher-level organization whereby the former isolated units interact in novel ways, losing their original autonomy. All major transitions, which include the origin of life, cells, multicellular systems, societies or language (among other examples), took place millions of years ago. Are these transitions unique, rare events? Do they instead have universal traits that make them almost inevitable once the right pieces are in place? Are there general laws of evolutionary innovation? In order to approach this problem from a novel perspective, we argue that a parallel class of evolutionary transitions can be studied through artificial evolutionary experiments in which alternative paths to innovation can be explored. These ‘synthetic’ transitions include, for example, the artificial evolution of multicellular systems or the emergence of language in evolved communicating robots. These alternative scenarios could help us to understand the underlying laws that predate the rise of major innovations and the possibility of general laws of evolved complexity. Several key examples and theoretical approaches are summarized and future challenges are outlined. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. PMID:27431528

  11. Evolutionary design of a fuzzy classifier from data.

    PubMed

    Chang, Xiaoguang; Lilly, John H

    2004-08-01

    Genetic algorithms show powerful capabilities for automatically designing fuzzy systems from data, but many proposed methods rest on minimal structural assumptions, such as a fixed rule base size. In this paper, we also address the design of fuzzy systems from data. A new evolutionary approach is proposed for deriving a compact fuzzy classification system directly from data without any a priori knowledge or assumptions about the distribution of the data. At the beginning of the algorithm, the fuzzy classifier is empty, with no rules in the rule base and no membership functions assigned to fuzzy variables. Then, rules and membership functions are automatically created and optimized in an evolutionary process. To accomplish this, parameters of the variable input spread inference training (VISIT) algorithm are used to code fuzzy systems on the training data set. Therefore, we can derive each individual fuzzy system via the VISIT algorithm and then search for the best one via genetic operations. To evaluate each fuzzy classifier, a fuzzy expert system acts as the fitness function. This fuzzy expert system can effectively evaluate accuracy and compactness at the same time. In the application section, we consider four benchmark classification problems: the iris data, wine data, Wisconsin breast cancer data, and Pima Indian diabetes data. Comparisons of our method with others in the literature show the effectiveness of the proposed method.

  12. Neuronal boost to evolutionary dynamics.

    PubMed

    de Vladar, Harold P; Szathmáry, Eörs

    2015-12-06

    Standard evolutionary dynamics is limited by the constraints of the genetic system. A central message of evolutionary neurodynamics is that evolutionary dynamics in the brain can happen in a neuronal niche in real time, despite the fact that neurons do not reproduce. We show that Hebbian learning and structural synaptic plasticity broaden the capacity for informational replication and guided variability provided a neuronally plausible mechanism of replication is in place. The synergy between learning and selection is more efficient than the equivalent search by mutation selection. We also consider asymmetric landscapes and show that the learning weights become correlated with the fitness gradient. That is, the neuronal complexes learn the local properties of the fitness landscape, resulting in the generation of variability directed towards the direction of fitness increase, as if mutations in a genetic pool were drawn such that they would increase reproductive success. Evolution might thus be more efficient within evolved brains than among organisms out in the wild.

  13. Neuronal boost to evolutionary dynamics

    PubMed Central

    de Vladar, Harold P.; Szathmáry, Eörs

    2015-01-01

    Standard evolutionary dynamics is limited by the constraints of the genetic system. A central message of evolutionary neurodynamics is that evolutionary dynamics in the brain can happen in a neuronal niche in real time, despite the fact that neurons do not reproduce. We show that Hebbian learning and structural synaptic plasticity broaden the capacity for informational replication and guided variability provided a neuronally plausible mechanism of replication is in place. The synergy between learning and selection is more efficient than the equivalent search by mutation selection. We also consider asymmetric landscapes and show that the learning weights become correlated with the fitness gradient. That is, the neuronal complexes learn the local properties of the fitness landscape, resulting in the generation of variability directed towards the direction of fitness increase, as if mutations in a genetic pool were drawn such that they would increase reproductive success. Evolution might thus be more efficient within evolved brains than among organisms out in the wild. PMID:26640653

  14. Evolutionary Aspects of Enzyme Dynamics*

    PubMed Central

    Klinman, Judith P.; Kohen, Amnon

    2014-01-01

    The role of evolutionary pressure on the chemical step catalyzed by enzymes is somewhat enigmatic, in part because chemistry is not rate-limiting for many optimized systems. Herein, we present studies that examine various aspects of the evolutionary relationship between protein dynamics and the chemical step in two paradigmatic enzyme families, dihydrofolate reductases and alcohol dehydrogenases. Molecular details of both convergent and divergent evolution are beginning to emerge. The findings suggest that protein dynamics across an entire enzyme can play a role in adaptation to differing physiological conditions. The growing tool kit of kinetics, kinetic isotope effects, molecular biology, biophysics, and bioinformatics provides means to link evolutionary changes in structure-dynamics function to the vibrational and conformational states of each protein. PMID:25210031

  15. Evolutionary psychology and intelligence research.

    PubMed

    Kanazawa, Satoshi

    2010-01-01

    This article seeks to unify two subfields of psychology that have hitherto stood separately: evolutionary psychology and intelligence research/differential psychology. I suggest that general intelligence may simultaneously be an evolved adaptation and an individual-difference variable. Tooby and Cosmides's (1990a) notion of random quantitative variation on a monomorphic design allows us to incorporate heritable individual differences in evolved adaptations. The Savanna-IQ Interaction Hypothesis, which is one consequence of the integration of evolutionary psychology and intelligence research, can potentially explain why less intelligent individuals enjoy TV more, why liberals are more intelligent than conservatives, and why night owls are more intelligent than morning larks, among many other findings. The general approach proposed here will allow us to integrate evolutionary psychology with any other aspect of differential psychology.

  16. Deep evolutionary origins of neurobiology

    PubMed Central

    Mancuso, Stefano

    2009-01-01

    It is generally assumed, both in common-sense argumentation and in scientific concepts, that brains and neurons represent late evolutionary achievements, present only in more advanced animals. Here we overview recently published data clearly revealing that our understanding of bacteria, unicellular eukaryotic organisms, plants, brains and neurons, rooted in Aristotelian philosophy, is flawed. Neural aspects of biological systems are obvious already in bacteria and in unicellular biological units such as sexual gametes and diverse unicellular eukaryotic organisms. Altogether, processes and activities thought to represent evolutionarily ‘recent’ specializations of the nervous system emerge rather as ancient and fundamental cell survival processes. PMID:19513267

  17. Comparison of the Asynchronous Differential Evolution and JADE Minimization Algorithms

    NASA Astrophysics Data System (ADS)

    Zhabitsky, Mikhail

    2016-02-01

    Differential Evolution (DE) is an efficient evolutionary algorithm for solving global optimization problems. In this work we compare the performance of the recently proposed Asynchronous Differential Evolution with Adaptive Correlation Matrix (ADEACM) to that of the widely used JADE algorithm, a DE variant with adaptive control parameters.
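
    Both contenders refine classic differential evolution; for reference, the baseline DE/rand/1/bin scheme they build on is sketched below (JADE adapts F and CR online, while ADEACM updates its population asynchronously).

      import numpy as np

      def de_rand_1_bin(obj, bounds, NP=40, F=0.8, CR=0.9, gens=200, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, float).T
          X = rng.uniform(lo, hi, (NP, len(lo)))
          f = np.apply_along_axis(obj, 1, X)
          for _ in range(gens):
              for i in range(NP):
                  r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3,
                                          replace=False)
                  v = X[r1] + F * (X[r2] - X[r3])                # differential mutation
                  cross = rng.random(len(lo)) < CR
                  cross[rng.integers(len(lo))] = True            # force one component
                  u = np.clip(np.where(cross, v, X[i]), lo, hi)  # binomial crossover
                  fu = obj(u)
                  if fu <= f[i]:                                 # greedy 1-to-1 selection
                      X[i], f[i] = u, fu
          return X[np.argmin(f)], f.min()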

  18. Improved initialisation of model-based clustering using Gaussian hierarchical partitions

    PubMed Central

    Scrucca, Luca; Raftery, Adrian E.

    2015-01-01

    Initialisation of the EM algorithm in model-based clustering is often crucial. Various starting points in the parameter space often lead to different local maxima of the likelihood function and thus to different clustering partitions. Among the several approaches available in the literature, model-based agglomerative hierarchical clustering is used to provide initial partitions in the popular mclust R package. This choice is computationally convenient and often yields good clustering partitions. However, in certain circumstances, poor initial partitions may cause the EM algorithm to converge to a local maximum of the likelihood function. We propose several simple and fast refinements based on data transformations and illustrate them through data examples. PMID:26949421
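
    In Python terms, the mclust strategy corresponds to seeding EM from an agglomerative partition. A rough analogue with scipy/scikit-learn is sketched below; the paper's actual refinements (e.g. transforming the data before the hierarchical step) are not shown.

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from sklearn.mixture import GaussianMixture

      def gmm_with_hierarchical_init(X, n_components):
          # A Ward agglomeration supplies an initial partition, whose cluster
          # means seed the EM iterations of the Gaussian mixture fit.
          labels = fcluster(linkage(X, method="ward"),
                            t=n_components, criterion="maxclust") - 1
          means = np.array([X[labels == k].mean(axis=0) for k in range(n_components)])
          return GaussianMixture(n_components=n_components, means_init=means,
                                 random_state=0).fit(X)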

  19. Model-based neuroimaging for cognitive computing.

    PubMed

    Poznanski, Roman R

    2009-09-01

    The continuity of the mind is suggested to mean the continuous spatiotemporal dynamics arising from the electrochemical signature of the neocortex: (i) globally through volume transmission in the gray matter as fields of neural activity, and (ii) locally through extrasynaptic signaling between fine distal dendrites of cortical neurons. If the continuity of dynamical systems across spatiotemporal scales defines a stream of consciousness then intentional metarepresentations as templates of dynamic continuity allow qualia to be semantically mapped during neuroimaging of specific cognitive tasks. When interfaced with a computer, such model-based neuroimaging requiring new mathematics of the brain will begin to decipher higher cognitive operations not possible with existing brain-machine interfaces.

  20. Model-based Tomographic Reconstruction Literature Search

    SciTech Connect

    Chambers, D H; Lehman, S K

    2005-11-30

    In the process of preparing a proposal for internal research funding, a literature search was conducted on the subject of model-based tomographic reconstruction (MBTR). The purpose of the search was to ensure that the proposed research would not replicate any previous work. We found that the overwhelming majority of work on MBTR which used parameterized models of the object was theoretical in nature. Only three researchers had applied the technique to actual data. In this note, we summarize the findings of the literature search.

  1. Model based defect characterization in composites

    NASA Astrophysics Data System (ADS)

    Roberts, R.; Holland, S.

    2017-02-01

    Work is reported on model-based defect characterization in CFRP composites. The work utilizes computational models of the interaction of NDE probing energy fields (ultrasound and thermography), to determine 1) the measured signal dependence on material and defect properties (forward problem), and 2) an assessment of performance-critical defect properties from analysis of measured NDE signals (inverse problem). Work is reported on model implementation for inspection of CFRP laminates containing multi-ply impact-induced delamination, with application in this paper focusing on ultrasound. A companion paper in these proceedings summarizes corresponding activity in thermography. Inversion of ultrasound data is demonstrated showing the quantitative extraction of damage properties.

  2. A comparison of model-based and hyperbolic localization techniques as applied to marine mammal calls

    NASA Astrophysics Data System (ADS)

    Tiemann, Christopher O.; Porter, Michael B.

    2003-10-01

    A common technique for the passive acoustic localization of singing marine mammals is that of hyperbolic fixing. This technique assumes straight-line, constant wave speed acoustic propagation to associate travel time with range, but in some geometries, these assumptions can lead to localization errors. A new localization algorithm based on acoustic propagation models can account for waveguide and multipath effects, and it has successfully been tested against real acoustic data from three different environments (Hawaii, California, and Bahamas) and three different species (humpback, blue, and sperm whales). Accuracy of the model-based approach has been difficult to verify given the absence of concurrent visual and acoustic observations of the same animal. However, the model-based algorithm was recently exercised against a controlled source of known position broadcasting recorded whale sounds, and location estimates were then compared to hyperbolic techniques and true source position. In geometries where direct acoustic paths exist, both model-based and hyperbolic techniques perform equally well. However, in geometries where bathymetric and refractive effects are important, such as at long range, the model-based approach shows improved accuracy.
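
    Hyperbolic fixing in its simplest form assumes a constant sound speed, so each measured time difference of arrival constrains the source to one sheet of a hyperbola; intersecting several gives a fix. A 2-D least-squares sketch with sensor 0 as the reference (the model-based alternative replaces the straight-line travel-time formula with predictions from a propagation model):

      import numpy as np
      from scipy.optimize import least_squares

      SOUND_SPEED = 1500.0    # m/s, assumed constant (the hyperbolic approximation)

      def tdoa_residuals(src, sensors, tdoas):
          d = np.linalg.norm(sensors - src, axis=1)
          return (d[1:] - d[0]) / SOUND_SPEED - tdoas   # predicted minus measured

      def localize(sensors, tdoas, guess):
          # sensors: (N, 2) positions; tdoas: N-1 delays relative to sensor 0.
          return least_squares(tdoa_residuals, guess, args=(sensors, tdoas)).x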

  3. Model-Based GN and C Simulation and Flight Software Development for Orion Missions beyond LEO

    NASA Technical Reports Server (NTRS)

    Odegard, Ryan; Milenkovic, Zoran; Henry, Joel; Buttacoli, Michael

    2014-01-01

    For Orion missions beyond low Earth orbit (LEO), the Guidance, Navigation, and Control (GN&C) system is being developed using a model-based approach for simulation and flight software. Lessons learned from the development of GN&C algorithms and flight software for the Orion Exploration Flight Test One (EFT-1) vehicle have been applied to the development of further capabilities for Orion GN&C beyond EFT-1. Continuing the use of a Model-Based Development (MBD) approach with the Matlab®/Simulink® tool suite, the process for GN&C development and analysis has been largely improved. Furthermore, a model-based simulation environment in Simulink, rather than an external C-based simulation, greatly eases the process for development of flight algorithms. The benefits seen by employing lessons learned from EFT-1 are described, as well as the approach for implementing additional MBD techniques. Also detailed are the key enablers for improvements to the MBD process, including enhanced configuration management techniques for model-based software systems, automated code and artifact generation, and automated testing and integration.

  4. MSTAR's extensible search engine and model-based inferencing toolkit

    NASA Astrophysics Data System (ADS)

    Wissinger, John; Ristroph, Robert; Diemunsch, Joseph R.; Severson, William E.; Fruedenthal, Eric

    1999-08-01

    The DARPA/AFRL 'Moving and Stationary Target Acquisition and Recognition' (MSTAR) program is developing a model-based vision approach to Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR). The motivation for this work is to develop a high performance ATR capability that can identify ground targets in highly unconstrained imaging scenarios that include variable image acquisition geometry, arbitrary target pose and configuration state, differences in target deployment situation, and strong intra-class variations. The MSTAR approach utilizes radar scattering models in an on-line hypothesize-and-test operation that compares predicted target signature statistics with features extracted from image data in an attempt to determine a 'best fit' explanation of the observed image. Central to this processing paradigm is the Search algorithm, which provides intelligent control in selecting features to measure and hypotheses to test, as well as in making the decision about when to stop processing and report a specific target type or clutter. Intelligent management of computation performed by the Search module is a key enabler to scaling the model-based approach to the large hypothesis spaces typical of realistic ATR problems. In this paper, we describe the present state of design and implementation of the MSTAR Search engine, as it has matured over the last three years of the MSTAR program. The evolution has been driven by a continually expanding problem domain that now includes 30 target types, viewed under arbitrary squint/depression, with articulations, reconfigurations, revetments, variable background, and up to 30% blocking occlusion. We believe that the research directions that have been inspired by MSTAR's challenging problem domain are leading to broadly applicable search methodologies that are relevant to computer vision systems in many areas.

  5. Evolutionary Perspective in Child Growth

    PubMed Central

    Hochberg, Ze’ev

    2011-01-01

    Hereditary, environmental, and stochastic factors determine a child’s growth in his unique environment, but their relative contribution to the phenotypic outcome and the extent of stochastic programming that is required to alter human phenotypes is not known because few data are available. This is an attempt to use evolutionary life-history theory in understanding child growth in a broad evolutionary perspective, using the data and theory of evolutionary predictive adaptive growth-related strategies. Transitions from one life-history phase to the next have inherent adaptive plasticity in their timing. Humans evolved to withstand energy crises by decreasing their body size, and evolutionary short-term adaptations to energy crises utilize a plasticity that modifies the timing of transition from infancy into childhood, culminating in short stature in times of energy crisis. Transition to juvenility is part of a strategy of conversion from a period of total dependence on the family and tribe for provision and security to self-supply, and a degree of adaptive plasticity is provided and determines body composition. Transition to adolescence entails plasticity in adapting to energy resources, other environmental cues, and the social needs of the maturing adolescent to determine life-span and the period of fecundity and fertility. Fundamental questions are raised by a life-history approach to the unique growth pattern of each child in his given genetic background and current environment. PMID:23908815

  6. Statistical methods for evolutionary trees.

    PubMed

    Edwards, A W F

    2009-09-01

    In 1963 and 1964, L. L. Cavalli-Sforza and A. W. F. Edwards introduced novel methods for computing evolutionary trees from genetical data, initially for human populations from blood-group gene frequencies. The most important development was their introduction of statistical methods of estimation applied to stochastic models of evolution.

  7. Is evolutionary biology strategic science?

    PubMed

    Meagher, Thomas R

    2007-01-01

    There is a profound need for the scientific community to be better aware of the policy context in which it operates. To address this need, Evolution has established a new Outlook feature section to include papers that explore the interface between society and evolutionary biology. This first paper in the series considers the strategic relevance of evolutionary biology. Support for scientific research in general is based on governmental or institutional expenditure that is an investment, and such investment is based on strategies designed to achieve particular outcomes, such as advance in particular areas of basic science or application. The scientific community can engage in the development of scientific strategies on a variety of levels, including workshops to explicitly develop research priorities and targeted funding initiatives to help define emerging scientific areas. Better understanding and communication of the scientific achievements of evolutionary biology, emphasizing immediate and potential societal relevance, are effective counters to challenges presented by the creationist agenda. Future papers in the Outlook feature section should assist the evolutionary biology community in achieving a better collective understanding of the societal relevance of their field.

  8. Evolutionary Psychology and Intelligence Research

    ERIC Educational Resources Information Center

    Kanazawa, Satoshi

    2010-01-01

    This article seeks to unify two subfields of psychology that have hitherto stood separately: evolutionary psychology and intelligence research/differential psychology. I suggest that general intelligence may simultaneously be an evolved adaptation and an individual-difference variable. Tooby and Cosmides's (1990a) notion of random quantitative…

  9. Cryptic eco-evolutionary dynamics.

    PubMed

    Kinnison, Michael T; Hairston, Nelson G; Hendry, Andrew P

    2015-12-01

    Natural systems harbor complex interactions that are fundamental parts of ecology and evolution. These interactions challenge our inclinations and training to seek the simplest explanations of patterns in nature. Not least is the likelihood that some complex processes might be missed when their patterns look similar to predictions for simpler mechanisms. Along these lines, theory and empirical evidence increasingly suggest that environmental, ecological, phenotypic, and genetic processes can be tightly intertwined, resulting in complex and sometimes surprising eco-evolutionary dynamics. The goal of this review is to temper inclinations to unquestioningly seek the simplest explanations in ecology and evolution, by recognizing that some eco-evolutionary outcomes may appear very similar to purely ecological, purely evolutionary, or even null expectations, and thus be cryptic. We provide theoretical and empirical evidence for observational biases and mechanisms that might operate among the various links in eco-evolutionary feedbacks to produce cryptic patterns. Recognition that cryptic dynamics can be associated with outcomes like stability, resilience, recovery, or coexistence in a dynamically changing world provides added impetus for finding ways to study them.

  10. Current Issues in Evolutionary Paleontology.

    ERIC Educational Resources Information Center

    Scully, Erik Paul

    1987-01-01

    Describes some of the contributions made by the field of paleontology to theories in geology and biology. Suggests that the two best examples of modern evolutionary paleontology relate to the theory of punctuated equilibria, and the possibility that mass extinctions may be cyclic. (TW)

  11. Euryhalinity in an evolutionary context

    USGS Publications Warehouse

    Schultz, Eric T.; McCormick, Stephen D.; McCormick, Stephen D.; Farrell, Anthony Peter; Brauner, Colin J.

    2013-01-01

    This chapter focuses on the evolutionary importance and taxonomic distribution of euryhalinity. Euryhalinity refers to broad halotolerance and broad halohabitat distribution. Salinity exposure experiments have demonstrated that species vary tenfold in their range of tolerable salinity levels, primarily because of differences in upper limits. Halotolerance breadth varies with the species’ evolutionary history, as represented by its ordinal classification, and with the species’ halohabitat. Freshwater and seawater species tolerate brackish water; their empirically-determined fundamental haloniche is broader than their realized haloniche, as revealed by the halohabitats they occupy. With respect to halohabitat distribution, a minority of species (<10%) are euryhaline. Habitat-euryhalinity is prevalent among basal actinopterygian fishes, is largely absent from orders arising from intermediate nodes, and reappears in the most derived taxa. There is pronounced family-level variability in the tendency to be halohabitat-euryhaline, which may have arisen during a burst of diversification following the Cretaceous-Palaeogene extinction. Low prevalence notwithstanding, euryhaline species are potent sources of evolutionary diversity. Euryhalinity is regarded as a key innovation trait whose evolution enables the exploitation of a new adaptive zone, triggering cladogenesis. We review phylogenetically-informed studies that demonstrate freshwater species diversifying from euryhaline ancestors through processes such as landlocking. These studies indicate that some euryhaline taxa are particularly susceptible to changes in halohabitat and subsequent diversification, and some geographic regions have been hotspots for transitions to freshwater. Comparative studies on mechanisms among multiple taxa and at multiple levels of biological integration are needed to clarify evolutionary pathways to, and from, euryhalinity.

  12. Evolutionary perspective in child growth.

    PubMed

    Hochberg, Ze'ev

    2011-07-01

    Hereditary, environmental, and stochastic factors determine a child's growth in his unique environment, but their relative contribution to the phenotypic outcome and the extent of stochastic programming that is required to alter human phenotypes is not known because few data are available. This is an attempt to use evolutionary life-history theory in understanding child growth in a broad evolutionary perspective, using the data and theory of evolutionary predictive adaptive growth-related strategies. Transitions from one life-history phase to the next have inherent adaptive plasticity in their timing. Humans evolved to withstand energy crises by decreasing their body size, and evolutionary short-term adaptations to energy crises utilize a plasticity that modifies the timing of transition from infancy into childhood, culminating in short stature in times of energy crisis. Transition to juvenility is part of a strategy of conversion from a period of total dependence on the family and tribe for provision and security to self-supply, and a degree of adaptive plasticity is provided and determines body composition. Transition to adolescence entails plasticity in adapting to energy resources, other environmental cues, and the social needs of the maturing adolescent to determine life-span and the period of fecundity and fertility. Fundamental questions are raised by a life-history approach to the unique growth pattern of each child in his given genetic background and current environment.

  13. Molecular phylogenetics: testing evolutionary hypotheses.

    PubMed

    Walsh, David A; Sharma, Adrian K

    2009-01-01

    A common approach for investigating evolutionary relationships between genes and organisms is to compare extant DNA or protein sequences and infer an evolutionary tree. This methodology is known as molecular phylogenetics and may be the most informative means for exploring phage evolution, since there are few morphological features that can be used to differentiate between these tiny biological entities. In addition, phage genomes can be mosaic, meaning different genes or genomic regions can exhibit conflicting evolutionary histories due to lateral gene transfer or homologous recombination between different phage genomes. Molecular phylogenetics can be used to identify and study such genome mosaicism. This chapter provides a general introduction to the theory and methodology used to reconstruct phylogenetic relationships from molecular data. Also included is a discussion on how the evolutionary history of different genes within the same set of genomes can be compared, using a collection of T4-type phage genomes as an example. A compilation of programs and packages that are available for conducting phylogenetic analyses is supplied as an accompanying appendix.
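
    As a toy example of distance-based tree inference, UPGMA is just average-linkage clustering on a matrix of pairwise sequence distances. The distances and taxon names below are invented for illustration, and the likelihood-based methods discussed in the chapter are generally preferred for real analyses.

      import numpy as np
      from scipy.cluster.hierarchy import dendrogram, linkage
      from scipy.spatial.distance import squareform

      taxa = ["phageA", "phageB", "phageC", "phageD"]    # hypothetical taxa
      D = np.array([[0.00, 0.10, 0.40, 0.45],
                    [0.10, 0.00, 0.42, 0.47],
                    [0.40, 0.42, 0.00, 0.12],
                    [0.45, 0.47, 0.12, 0.00]])           # pairwise distances

      tree = linkage(squareform(D), method="average")    # UPGMA
      dendrogram(tree, labels=taxa)                      # plot (needs matplotlib)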

  14. Linear antenna array optimization using flower pollination algorithm.

    PubMed

    Saxena, Prerna; Kothari, Ashwin

    2016-01-01

    Flower pollination algorithm (FPA) is a new nature-inspired evolutionary algorithm used to solve multi-objective optimization problems. The aim of this paper is to introduce FPA to the electromagnetics and antenna community for the optimization of linear antenna arrays. FPA is applied for the first time to linear array so as to obtain optimized antenna positions in order to achieve an array pattern with minimum side lobe level along with placement of deep nulls in desired directions. Various design examples are presented that illustrate the use of FPA for linear antenna array optimization, and subsequently the results are validated by benchmarking along with results obtained using other state-of-the-art, nature-inspired evolutionary algorithms such as particle swarm optimization, ant colony optimization and cat swarm optimization. The results suggest that in most cases, FPA outperforms the other evolutionary algorithms and at times it yields a similar performance.
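
    The fitness minimized in such position-optimization studies is typically the peak side-lobe level of the array factor. A sketch for a uniformly excited linear array, with element positions given in wavelengths:

      import numpy as np

      def sidelobe_level_db(positions, n_angles=4096):
          theta = np.linspace(0.0, np.pi, n_angles)
          af = np.abs(np.exp(2j * np.pi * np.outer(np.cos(theta), positions)).sum(axis=1))
          af_db = 20.0 * np.log10(af / af.max() + 1e-12)
          # Peak side lobe = second-highest local maximum (the highest is the main beam).
          interior = af_db[1:-1]
          peaks = interior[(interior > af_db[:-2]) & (interior > af_db[2:])]
          return np.sort(peaks)[-2]

      # A conventional half-wavelength-spaced 10-element array gives about -13 dB.
      print(sidelobe_level_db(np.arange(10) * 0.5))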

  15. Selective evolutionary generation systems: Theory and applications

    NASA Astrophysics Data System (ADS)

    Menezes, Amor A.

    This dissertation is devoted to the problem of behavior design, which is a generalization of the standard global optimization problem: instead of generating the optimizer, the generalization produces, on the space of candidate optimizers, a probability density function referred to as the behavior. The generalization depends on a parameter, the level of selectivity, such that as this parameter tends to infinity, the behavior becomes a delta function at the location of the global optimizer. The motivation for this generalization is that traditional off-line global optimization is non-resilient and non-opportunistic. That is, traditional global optimization is unresponsive to perturbations of the objective function. On-line optimization methods that are more resilient and opportunistic than their off-line counterparts typically consist of the computationally expensive sequential repetition of off-line techniques. A novel approach to inexpensive resilience and opportunism is to utilize the theory of Selective Evolutionary Generation Systems (SECS), which sequentially and probabilistically selects a candidate optimizer based on the ratio of the fitness values of two candidates and the level of selectivity. Using time-homogeneous, irreducible, ergodic Markov chains to model a sequence of local, and hence inexpensive, dynamic transitions, this dissertation proves that such transitions result in behavior that is called rational; such behavior is desirable because it can lead to both efficient search for an optimizer as well as resilient and opportunistic behavior. The dissertation also identifies system-theoretic properties of the proposed scheme, including equilibria, their stability and their optimality. Moreover, this dissertation demonstrates that the canonical genetic algorithm with fitness proportional selection and the (1+1) evolutionary strategy are particular cases of the scheme. Applications in three areas illustrate the versatility of the SECS theory: flight
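
    The flavor of such a selective transition can be sketched as a Markov step that accepts a locally proposed candidate with a probability governed by the fitness ratio and the selectivity parameter. The Barker-like rule below is an illustrative assumption, not the dissertation's precise transition law.

      import numpy as np

      def selective_step(x, fitness, propose, selectivity, rng):
          # Accept candidate y over x with probability f(y)^s / (f(x)^s + f(y)^s).
          # Requires positive fitness values; as s grows the rule approaches greedy
          # hill-climbing, so the chain concentrates near the global optimizer.
          y = propose(x, rng)
          fx, fy = fitness(x) ** selectivity, fitness(y) ** selectivity
          return y if rng.random() < fy / (fx + fy) else x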

  16. Feature-driven model-based segmentation

    NASA Astrophysics Data System (ADS)

    Qazi, Arish A.; Kim, John; Jaffray, David A.; Pekar, Vladimir

    2011-03-01

    The accurate delineation of anatomical structures is required in many medical image analysis applications. One example is radiation therapy planning (RTP), where traditional manual delineation is tedious, labor intensive, and can require hours of clinician's valuable time. Majority of automated segmentation methods in RTP belong to either model-based or atlas-based approaches. One substantial limitation of model-based segmentation is that its accuracy may be restricted by the uncertainties in image content, specifically when segmenting low-contrast anatomical structures, e.g. soft tissue organs in computed tomography images. In this paper, we introduce a non-parametric feature enhancement filter which replaces raw intensity image data by a high level probabilistic map which guides the deformable model to reliably segment low-contrast regions. The method is evaluated by segmenting the submandibular and parotid glands in the head and neck region and comparing the results to manual segmentations in terms of the volume overlap. Quantitative results show that we are in overall good agreement with expert segmentations, achieving volume overlap of up to 80%. Qualitatively, we demonstrate that we are able to segment low-contrast regions, which otherwise are difficult to delineate with deformable models relying on distinct object boundaries from the original image data.

  17. Sandboxes for Model-Based Inquiry

    NASA Astrophysics Data System (ADS)

    Brady, Corey; Holbert, Nathan; Soylu, Firat; Novak, Michael; Wilensky, Uri

    2015-04-01

    In this article, we introduce a class of constructionist learning environments that we call Emergent Systems Sandboxes ( ESSs), which have served as a centerpiece of our recent work in developing curriculum to support scalable model-based learning in classroom settings. ESSs are a carefully specified form of virtual construction environment that support students in creating, exploring, and sharing computational models of dynamic systems that exhibit emergent phenomena. They provide learners with "entity"-level construction primitives that reflect an underlying scientific model. These primitives can be directly "painted" into a sandbox space, where they can then be combined, arranged, and manipulated to construct complex systems and explore the emergent properties of those systems. We argue that ESSs offer a means of addressing some of the key barriers to adopting rich, constructionist model-based inquiry approaches in science classrooms at scale. Situating the ESS in a large-scale science modeling curriculum we are implementing across the USA, we describe how the unique "entity-level" primitive design of an ESS facilitates knowledge system refinement at both an individual and social level, we describe how it supports flexible modeling practices by providing both continuous and discrete modes of executability, and we illustrate how it offers students a variety of opportunities for validating their qualitative understandings of emergent systems as they develop.

  18. Towards a mechanistic foundation of evolutionary theory.

    PubMed

    Doebeli, Michael; Ispolatov, Yaroslav; Simon, Burt

    2017-02-15

    Most evolutionary thinking is based on the notion of fitness and related ideas such as fitness landscapes and evolutionary optima. Nevertheless, it is often unclear what fitness actually is, and its meaning often depends on the context. Here we argue that fitness should not be a basal ingredient in verbal or mathematical descriptions of evolution. Instead, we propose that evolutionary birth-death processes, in which individuals give birth and die at ever-changing rates, should be the basis of evolutionary theory, because such processes capture the fundamental events that generate evolutionary dynamics. In evolutionary birth-death processes, fitness is at best a derived quantity, and owing to the potential complexity of such processes, there is no guarantee that there is a simple scalar, such as fitness, that would describe long-term evolutionary outcomes. We discuss how evolutionary birth-death processes can provide useful perspectives on a number of central issues in evolution.
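
    A minimal Gillespie-style simulation of such a process is sketched below: two types give birth and die at per-capita rates that may depend on the whole population state, so any notion of fitness is emergent rather than assumed. The rate functions in the usage lines are arbitrary placeholders.

      import numpy as np

      def birth_death(n0, birth_rate, death_rate, t_max, seed=0):
          rng = np.random.default_rng(seed)
          n, t, traj = np.array(n0, float), 0.0, []
          while t < t_max and n.sum() > 0:
              rates = np.array([birth_rate(0, n) * n[0], birth_rate(1, n) * n[1],
                                death_rate(0, n) * n[0], death_rate(1, n) * n[1]])
              if rates.sum() == 0.0:
                  break
              t += rng.exponential(1.0 / rates.sum())        # time to next event
              event = rng.choice(4, p=rates / rates.sum())
              n[event % 2] += 1.0 if event < 2 else -1.0     # birth or death
              traj.append((t, n.copy()))
          return traj

      # Density-dependent death; type 1 has a slightly higher intrinsic birth rate.
      traj = birth_death([50, 5], lambda i, n: (1.0, 1.1)[i],
                         lambda i, n: 0.5 + n.sum() / 200.0, t_max=50.0)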

  19. Evolutionary Fuzzy Block-Matching-Based Camera Raw Image Denoising.

    PubMed

    Yang, Chin-Chang; Guo, Shu-Mei; Tsai, Jason Sheng-Hong

    2016-10-03

    An evolutionary fuzzy block-matching-based image denoising algorithm is proposed to remove noise from camera raw images. Recently, variance stabilization transforms have been widely used to stabilize the noise variance, so that a Gaussian denoising algorithm can be used to remove the signal-dependent noise of camera sensors. However, in the stabilized domain, existing denoising algorithms may blur too much detail. To provide a better estimate of the noise-free signal, a new block-matching approach is proposed that finds similar blocks using a type-2 fuzzy logic system (FLS). These similar blocks are then averaged with weightings determined by the FLS. Finally, an efficient differential evolution is used to further improve the performance of the proposed denoising algorithm. The experimental results show that the proposed algorithm effectively improves image denoising performance. Furthermore, the average performance of the proposed method is better than that of two state-of-the-art image denoising algorithms in both subjective and objective measures.
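
    The variance-stabilization step is classically the Anscombe transform for Poisson-dominated sensor noise; the paper does not name its exact transform, so the textbook version is shown here.

      import numpy as np

      def anscombe(x):
          # Maps Poisson counts to values with approximately unit variance, so an
          # ordinary Gaussian denoiser can operate in the stabilized domain.
          return 2.0 * np.sqrt(np.asarray(x, float) + 3.0 / 8.0)

      def inverse_anscombe(y):
          # Simple algebraic inverse; bias-corrected inverses do better in practice.
          return (np.asarray(y, float) / 2.0) ** 2 - 3.0 / 8.0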

  20. Computational complexity of ecological and evolutionary spatial dynamics.

    PubMed

    Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A

    2015-12-22

    There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP).
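
    The invasion question has a classic well-mixed baseline: in a Moran process, a single mutant of relative fitness r in a resident population of size N fixes with probability (1 - 1/r) / (1 - 1/r^N). A Monte-Carlo check of that formula is sketched below; the paper's complexity results concern the much harder spatially structured versions of the same question.

      import numpy as np

      def fixation_probability(r, N, trials=20000, seed=0):
          rng = np.random.default_rng(seed)
          fixed = 0
          for _ in range(trials):
              i = 1                                 # one initial mutant
              while 0 < i < N:
                  # reproduce proportional to fitness, replace a random individual
                  p_up = (r * i / (r * i + N - i)) * (N - i) / N
                  p_down = ((N - i) / (r * i + N - i)) * i / N
                  u = rng.random()
                  if u < p_up:
                      i += 1
                  elif u < p_up + p_down:
                      i -= 1
              fixed += (i == N)
          return fixed / trials

      print(fixation_probability(r=2.0, N=10))   # theory: ~0.5005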