Algorithmic Mechanism Design of Evolutionary Computation
Pei, Yan
2015-01-01
We consider the algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. Individuals, or groups of individuals, can be regarded as self-interested agents. Rather than following a fixed algorithmic rule, individuals in evolutionary computation can manipulate parameter settings and operations to satisfy their own preferences, which are defined by the algorithm designer. Algorithm designers, or self-adaptive methods, should therefore construct proper rules and mechanisms so that all agents (individuals) conduct their evolutionary behaviour correctly and reliably achieve the preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation that considers the Nash strategy equilibrium of a mechanism design in the search process. Evaluation results demonstrate the efficiency of the framework. The underlying principle can be implemented in any evolutionary computation algorithm that must handle strategy selection during optimization. The ultimate objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and to establish its foundations from this perspective. This paper takes a first step towards that objective by implementing a strategy equilibrium solution (such as a Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777
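The strategy-selection idea above can be made concrete with a small game-theoretic sketch. The code below is a hypothetical illustration, not the paper's implementation: it finds pure-strategy Nash equilibria of a two-player payoff matrix, where rows and columns might index, say, competing search strategies whose payoffs are their observed fitness gains.

```python
def pure_nash_equilibria(payoff_a, payoff_b):
    """Return (row, col) cells where neither player gains by deviating:
    the row is a best response to the column, and vice versa."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for r in range(rows):
        for c in range(cols):
            best_row = all(payoff_a[r][c] >= payoff_a[r2][c] for r2 in range(rows))
            best_col = all(payoff_b[r][c] >= payoff_b[r][c2] for c2 in range(cols))
            if best_row and best_col:
                equilibria.append((r, c))
    return equilibria

# Hypothetical payoffs: rows/cols index two strategies (e.g. two mutation operators).
A = [[3, 1], [2, 2]]
B = [[3, 2], [1, 2]]
print(pure_nash_equilibria(A, B))  # both (0, 0) and (1, 1) are equilibria here
```

In an evolutionary-computation setting, agents (individuals) would be steered toward such an equilibrium cell when choosing operators or parameters.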
Fast Algorithms for Model-Based Diagnosis
NASA Technical Reports Server (NTRS)
Fijany, Amir; Barrett, Anthony; Vatan, Farrokh; Mackey, Ryan
2005-01-01
Two new methods for automated diagnosis of complex engineering systems involve novel algorithms that are more efficient than prior algorithms used for the same purpose. Both the recently developed algorithms and the prior algorithms in question are instances of model-based diagnosis, which is based on exploring the logical inconsistency between an observation and a description of the system to be diagnosed. As engineering systems grow more complex and increasingly autonomous in their functions, the need for automated diagnosis increases concomitantly. In model-based diagnosis, the function of each component and the interconnections among all the components of the system to be diagnosed (for example, see figure) are represented as a logical system called the system description (SD). Hence, the expected behavior of the system is the set of logical consequences of the SD. Faulty components lead to inconsistency between the observed behavior of the system and the SD. The task of finding the faulty components (diagnosis) reduces to finding the components whose abnormality could explain all the inconsistencies. Of course, a meaningful solution should be a minimal set of faulty components (a minimal diagnosis), because the trivial solution, in which all components are assumed to be faulty, always explains all inconsistencies. Although the prior algorithms implement powerful methods of diagnosis, they are not practical: they essentially require exhaustive searches among all possible combinations of faulty components and therefore entail amounts of computation that grow exponentially with the number of components of the system.
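The notion of a minimal diagnosis can be illustrated with a brute-force sketch. This is a hypothetical teaching example of the concept (it is itself exhaustive, i.e. precisely the kind of search the improved algorithms avoid): given conflict sets, sets of components that cannot all be healthy, the minimal diagnoses are the minimal hitting sets of those conflicts.

```python
from itertools import combinations

def minimal_diagnoses(components, conflicts):
    """Brute-force minimal hitting sets of the conflict sets.
    A diagnosis must intersect every conflict; minimality means no
    proper subset is also a diagnosis."""
    diagnoses = []
    for size in range(len(components) + 1):          # smallest sets first
        for cand in combinations(sorted(components), size):
            s = set(cand)
            if all(s & c for c in conflicts):        # hits every conflict
                if not any(set(d) < s for d in diagnoses):
                    diagnoses.append(cand)           # no smaller diagnosis inside
    return diagnoses

# Toy example: three components, two conflict sets derived from inconsistencies.
conflicts = [{"A", "B"}, {"B", "C"}]
print(minimal_diagnoses({"A", "B", "C"}, conflicts))  # [('B',), ('A', 'C')]
```

Either component B alone is faulty, or A and C both are; assuming all three faulty would also explain the conflicts but is not minimal.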
Scheduling Earth Observing Satellites with Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2003-01-01
We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.
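As a rough illustration of the simulated-annealing technique that performed best in these tests, here is a generic skeleton applied to a toy ordering problem; the task data and cost function are invented stand-ins, far simpler than the real Earth-observing constraints.

```python
import math, random

def simulated_annealing(initial, neighbor, cost, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated-annealing skeleton: accept improving moves always,
    worsening moves with probability exp(-delta / temperature)."""
    rng = random.Random(seed)
    state = best = initial
    t = t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        delta = cost(cand) - cost(state)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state = cand
        if cost(state) < cost(best):
            best = state
        t *= cooling
    return best

# Hypothetical stand-in objective: order tasks to minimize total completion time
# (minimized by shortest-job-first; the real problem has far richer constraints).
durations = [3, 1, 4, 2]
def cost(order):
    t, total = 0, 0
    for task in order:
        t += durations[task]
        total += t
    return total

def neighbor(order, rng):
    i, j = rng.sample(range(len(order)), 2)   # swap two positions
    new = list(order)
    new[i], new[j] = new[j], new[i]
    return tuple(new)

best = simulated_annealing(tuple(range(4)), neighbor, cost)
print(best, cost(best))
```

The same skeleton accepts any representation, neighbor move, and fitness function, which is exactly the minimal requirement the abstract attributes to evolutionary and annealing methods.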
An Evolutionary Model Based on Bit-String with Intelligence
NASA Astrophysics Data System (ADS)
He, Mingfeng; Pan, Qiuhui; Yu, Binglin
An evolutionary model based on bit-strings with intelligence is set up in this paper. In this model, the gene is divided into two parts, relating to health and intelligence respectively. Accumulated intelligence influences the survival process through the effects of food and space restrictions; we modify the Verhulst factor to study this effect. Both asexual and sexual models are discussed. The results show that after many time steps stability is reached and the population self-organizes, just as in the standard Penna model. Intelligence raises the equilibrium population size in both the asexual and the sexual model. Compared with the asexual model, the population size fluctuates more strongly in the sexual model.
Evolutionary Algorithm for Optimal Vaccination Scheme
NASA Astrophysics Data System (ADS)
Parousis-Orthodoxou, K. J.; Vlachos, D. S.
2014-03-01
This work uses the dynamic capabilities of an evolutionary algorithm to obtain an optimal immunization strategy in a user-specified network. The algorithm is a basic genetic algorithm with crossover and mutation operators that locates certain nodes in the input network. These nodes are immunized in an SIR epidemic spreading process, and the performance of each immunization scheme is evaluated by the level of containment it provides against the spread of the disease.
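A minimal sketch of the approach might look as follows. The fitness proxy used here (size of the largest connected component left after immunization) and all parameters are illustrative assumptions; the paper evaluates each scheme with a full SIR simulation.

```python
import random

def largest_component(nodes, edges, removed):
    """Size of the biggest connected component once `removed` nodes are immunized."""
    alive = set(nodes) - set(removed)
    adj = {n: set() for n in alive}
    for u, v in edges:
        if u in alive and v in alive:
            adj[u].add(v); adj[v].add(u)
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        stack, comp = [start], 0
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n); comp += 1
            stack.extend(adj[n] - seen)
        best = max(best, comp)
    return best

def immunize_ga(nodes, edges, k, pop_size=30, gens=40, seed=1):
    """Basic GA choosing k nodes to immunize (sketch; duplicates possible for k > 1)."""
    rng = random.Random(seed)
    nodes = list(nodes)
    def fitness(ind):                      # lower = better containment proxy
        return largest_component(nodes, edges, ind)
    pop = [tuple(rng.sample(nodes, k)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            mix = list(dict.fromkeys(a + b))            # crossover: merge parents
            child = rng.sample(mix, k) if len(mix) >= k else list(a)
            if rng.random() < 0.3:                      # mutation: swap in a node
                child[rng.randrange(k)] = rng.choice(nodes)
            children.append(tuple(child))
        pop = survivors + children
    return min(pop, key=fitness)

# Toy graph: node 2 bridges two triangles; immunizing it (or 3) splits the graph.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
best = immunize_ga(range(6), edges, k=1)
print(best, largest_component(range(6), edges, best))
```

With one immunization allowed, the GA should settle on one of the two bridge nodes, leaving a largest component of size 3.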
Evolutionary development of path planning algorithms
Hage, M
1998-09-01
This paper describes the use of evolutionary software techniques for developing both genetic algorithms and genetic programs. Genetic algorithms are evolved to solve a specific problem within a fixed and known environment. While genetic algorithms can evolve to become very optimized for their task, they often are very specialized and perform poorly if the environment changes. Genetic programs are evolved through simultaneous training in a variety of environments to develop a more general controller behavior that operates in unknown environments. Performance of genetic programs is less optimal than a specially bred algorithm for an individual environment, but the controller performs acceptably under a wider variety of circumstances. The example problem addressed in this paper is evolutionary development of algorithms and programs for path planning in nuclear environments, such as Chernobyl.
Synthesis of logic circuits with evolutionary algorithms
JONES,JAKE S.; DAVIDSON,GEORGE S.
2000-01-26
In the last decade there has been interest and research in the area of designing circuits with genetic algorithms, evolutionary algorithms, and genetic programming. However, the ability to design circuits of the size and complexity required by modern engineering design problems, simply by specifying required outputs for given inputs has as yet eluded researchers. This paper describes current research in the area of designing logic circuits using an evolutionary algorithm. The goal of the research is to improve the effectiveness of this method and make it a practical aid for design engineers. A novel method of implementing the algorithm is introduced, and results are presented for various multiprocessing systems. In addition to evolving standard arithmetic circuits, work in the area of evolving circuits that perform digital signal processing tasks is described.
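As a toy instance of evolving a logic circuit purely from required outputs, the sketch below (an assumption-laden simplification, not the paper's algorithm) evolves a three-gate feed-forward netlist toward the XOR truth table.

```python
import random

GATES = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b,
         "NAND": lambda a, b: 1 - (a & b), "NOR": lambda a, b: 1 - (a | b)}

def evaluate(genome, a, b):
    """Feed-forward netlist: gene i is (gate, src1, src2), where sources index
    earlier signals (0 = input a, 1 = input b, then gate outputs).
    The last gate's output is the circuit output."""
    signals = [a, b]
    for gate, s1, s2 in genome:
        signals.append(GATES[gate](signals[s1], signals[s2]))
    return signals[-1]

def fitness(genome, truth):
    return sum(evaluate(genome, a, b) == out for (a, b), out in truth.items())

def evolve(truth, n_gates=3, pop=60, gens=200, seed=0):
    rng = random.Random(seed)
    names = list(GATES)
    def rand_gene(i):        # gate i may read the inputs or any earlier gate
        return (rng.choice(names), rng.randrange(2 + i), rng.randrange(2 + i))
    population = [[rand_gene(i) for i in range(n_gates)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda g: -fitness(g, truth))
        if fitness(population[0], truth) == len(truth):
            break
        parents = population[:pop // 2]                  # truncation selection
        population = parents + [
            [g if rng.random() < 0.8 else rand_gene(i)   # per-gene mutation
             for i, g in enumerate(rng.choice(parents))]
            for _ in range(pop - len(parents))]
    return max(population, key=lambda g: fitness(g, truth))

xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
best = evolve(xor)
print(fitness(best, xor), best)
```

Only the truth table is specified; the wiring and gate choices emerge from selection, which is the essence of the design-from-outputs goal the abstract describes.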
Evolutionary Algorithm for Calculating Available Transfer Capability
NASA Astrophysics Data System (ADS)
Šošić, Darko; Škokljev, Ivan
2013-09-01
The paper presents an evolutionary algorithm for calculating available transfer capability (ATC). ATC is a measure of the transfer capability remaining in the physical transmission network for further commercial activity, over and above already committed uses. In this paper, MATLAB software is used to determine the ATC between any pair of buses in a deregulated power system without violating system constraints such as thermal, voltage, and stability limits. The algorithm is applied to the IEEE 5-bus system and the IEEE 30-bus system.
Automated Antenna Design with Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Linden, Derek; Hornby, Greg; Lohn, Jason; Globus, Al; Krishnakumar, K.
2006-01-01
Current methods of designing and optimizing antennas by hand are time- and labor-intensive, and they limit complexity. Evolutionary design techniques can overcome these limitations by searching the design space and automatically finding effective solutions. In recent years, evolutionary algorithms have shown great promise in finding practical solutions in large, poorly understood design spaces. In particular, spacecraft antenna design has proven tractable to evolutionary design techniques. Researchers have been investigating evolutionary antenna design and optimization since the early 1990s, and the field has grown in recent years as computer speed has increased and electromagnetic simulators have improved. Two requirements-compliant antennas, one for ST5 and another for TDRS-C, have been automatically designed by evolutionary algorithms. The ST5 antenna is slated to fly this year, and a TDRS-C phased array element has been fabricated and tested. Such automated evolutionary design is enabled by medium-to-high-quality simulators and fast modern computers to evaluate computer-generated designs. Evolutionary algorithms automate cut-and-try engineering, substituting automated search through millions of potential designs for intelligent search by engineers through a much smaller number of designs. For evolutionary design, the engineer chooses the evolutionary technique, its parameters, and the basic form of the antenna, e.g., a single wire for ST5 and a crossed-element Yagi for TDRS-C. Evolutionary algorithms then search for optimal configurations in the space defined by the engineer. NASA's Space Technology 5 (ST5) mission will launch three small spacecraft to test innovative concepts and technologies. Advanced evolutionary algorithms were used to automatically design antennas for ST5. The combination of wide beamwidth for a circularly-polarized wave and wide impedance bandwidth made for a challenging antenna design problem. From past experience in designing wire antennas, we chose to
Knowledge Guided Evolutionary Algorithms in Financial Investing
ERIC Educational Resources Information Center
Wimmer, Hayden
2013-01-01
A large body of literature exists on evolutionary computing, genetic algorithms, decision trees, codified knowledge, and knowledge management systems; however, the intersection of these computing topics has not been widely researched. Moving through the set of all possible solutions--or traversing the search space--at random exhibits no control…
Bell-Curve Based Evolutionary Optimization Algorithm
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.
1998-01-01
The paper presents an optimization algorithm in the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most genetic algorithms (GAs) in research and applications in America, alternatives in the same evolutionary category, which use a direct geometrical approach, have gained popularity in Europe and Asia. The Bell-Curve Based evolutionary algorithm (BCB) belongs to this alternative category and is distinguished by its use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, with the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. Its principal advantage is controllability via the normal-distribution parameters and the variables of the geometrical construct.
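The offspring construct described above translates almost directly into code. The following sketch assumes a particular parameterization (a weight w and two standard deviations scaled by the parent distance) that the abstract does not spell out:

```python
import math, random

def bcb_child(p1, p2, w=0.5, sigma_par=0.1, sigma_orth=0.1, rng=random):
    """Bell-curve offspring: start from the weighted point p1 + w*(p2 - p1)
    on the line joining the parents, then add a normally distributed
    deviation parallel to that line and another orthogonal to it."""
    n = len(p1)
    direction = [b - a for a, b in zip(p1, p2)]
    norm = math.sqrt(sum(d * d for d in direction)) or 1.0  # guard identical parents
    unit = [d / norm for d in direction]
    base = [a + w * d for a, d in zip(p1, direction)]       # weighted point
    par = rng.gauss(0.0, sigma_par) * norm                  # parallel deviation
    child = [b + par * u for b, u in zip(base, unit)]
    # orthogonal deviation: random Gaussian vector minus its projection on `unit`
    rand = [rng.gauss(0.0, sigma_orth) * norm for _ in range(n)]
    proj = sum(r * u for r, u in zip(rand, unit))
    return [c + (r - proj * u) for c, r, u in zip(child, rand, unit)]

rng = random.Random(42)
print(bcb_child([0.0, 0.0], [1.0, 0.0], rng=rng))
```

With both deviations set to zero the child collapses to the weighted point itself, which makes the geometry easy to check.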
Protein Structure Prediction with Evolutionary Algorithms
Hart, W.E.; Krasnogor, N.; Pelta, D.A.; Smith, J.
1999-02-08
Evolutionary algorithms have been successfully applied to a variety of molecular structure prediction problems. In this paper we reconsider the design of genetic algorithms that have been applied to a simple protein structure prediction problem. Our analysis considers the impact of several algorithmic factors for this problem: the conformational representation, the energy formulation, and the way in which infeasible conformations are penalized. Further, we empirically evaluate the impact of these factors on a small set of polymer sequences. Our analysis leads to specific recommendations both for GAs and for other heuristic methods for solving PSP on the HP model.
Turbopump Performance Improved by Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Oyama, Akira; Liou, Meng-Sing
2002-01-01
The development of design optimization technology for turbomachinery has been initiated using a multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to traditional gradient-based methods, evolutionary algorithms (EAs) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EAs search from multiple points instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling with any evaluation code. Parallel efficiency also becomes very high with a simple master-slave scheme for function evaluations, since such evaluations, for example computational fluid dynamics runs, often consume the most CPU time. Applying EAs to multiobjective design problems is also straightforward because EAs maintain a population of design candidates in parallel. Because of these advantages, EAs are a unique and attractive approach to real-world design optimization problems.
Use of evolutionary algorithms for telescope scheduling
NASA Astrophysics Data System (ADS)
Grim, Ruud; Jansen, Mischa; Baan, Arno; van Hemert, Jano; de Wolf, Hans
2002-07-01
LOFAR, a new radio telescope, will be designed to observe with up to 8 independent beams, allowing several simultaneous observations. Scheduling multiple observations in parallel, each with its own constraints, requires a more intelligent and flexible scheduling function than those operated before. In support of the LOFAR radio telescope project, and in cooperation with Leiden University, Fokker Space has started a study to investigate the suitability of evolutionary algorithms applied to complex scheduling problems. After a positive familiarization phase, we now examine the potential use of evolutionary algorithms via a demonstration project. Results of the familiarization phase and the first results of the demonstration project are presented in this paper.
Evolutionary algorithms and multi-agent systems
NASA Astrophysics Data System (ADS)
Oh, Jae C.
2006-05-01
This paper discusses how evolutionary algorithms are related to multi-agent systems and the possibility of military applications using the two disciplines. In particular, we present a game theoretic model for multi-agent resource distribution and allocation where agents in the environment must help each other to survive. Each agent maintains a set of variables representing actual friendship and perceived friendship. The model directly addresses problems in reputation management schemes in multi-agent systems and Peer-to-Peer distributed systems. We present algorithms based on evolutionary game process for maintaining the friendship values as well as a utility equation used in each agent's decision making. For an application problem, we adapted our formal model to the military coalition support problem in peace-keeping missions. Simulation results show that efficient resource allocation and sharing with minimum communication cost is achieved without centralized control.
A Note on Evolutionary Algorithms and Its Applications
ERIC Educational Resources Information Center
Bhargava, Shifali
2013-01-01
This paper introduces evolutionary algorithms and their applications in multi-objective optimization. Elitist and non-elitist multiobjective evolutionary algorithms are discussed, with their advantages and disadvantages. We also discuss constrained multiobjective evolutionary algorithms and their applications in various areas.
Stochastic Evolutionary Algorithms for Planning Robot Paths
NASA Technical Reports Server (NTRS)
Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard
2006-01-01
A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
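A compressed sketch of the idea, with an invented 2-D scene and energy weights (the actual software handles jointed limbs and far richer collision models):

```python
import math, random

def plan_path(start, goal, obstacles, n_way=4, steps=4000, seed=3):
    """Anneal waypoint positions to shorten the path while keeping every
    waypoint outside circular obstacles (center, radius). The energy-like
    error measure mixes path length with a collision penalty."""
    rng = random.Random(seed)

    def energy(pts):
        full = [start] + pts + [goal]
        length = sum(math.dist(a, b) for a, b in zip(full, full[1:]))
        penalty = sum(max(0.0, r - math.dist(p, c)) * 100.0
                      for p in pts for c, r in obstacles)
        return length + penalty

    # start from waypoints evenly spaced on the straight start-goal line
    pts = [((1 - t) * start[0] + t * goal[0], (1 - t) * start[1] + t * goal[1])
           for t in [i / (n_way + 1) for i in range(1, n_way + 1)]]
    t = 1.0
    for _ in range(steps):
        i = rng.randrange(n_way)
        cand = list(pts)
        cand[i] = (pts[i][0] + rng.gauss(0, 0.3), pts[i][1] + rng.gauss(0, 0.3))
        d = energy(cand) - energy(pts)
        if d <= 0 or rng.random() < math.exp(-d / t):   # annealed acceptance
            pts = cand
        t *= 0.999                                       # cooling schedule
    return pts

# Hypothetical scene: one circular obstacle directly between start and goal.
path = plan_path((0.0, 0.0), (4.0, 0.0), obstacles=[((2.0, 0.0), 0.5)])
print(path)
```

The annealed acceptance of occasionally worse states is what lets the planner escape local minima of the error measure, as described above.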
Intervals in evolutionary algorithms for global optimization
Patil, R.B.
1995-05-01
Optimization is of central concern to a number of disciplines. Interval-arithmetic methods for global optimization provide verified (guaranteed) results. These methods are mainly restricted to classes of objective functions that are twice differentiable, and they use a simple strategy of eliminating or splitting larger regions of the search space during global optimization. We propose an efficient approach that combines this strategy from interval global optimization methods with the robustness of evolutionary algorithms. In the proposed approach, the search begins with randomly created interval vectors whose widths span the whole domain. Before the evolutionary process begins, the fitness of these interval parameter vectors is defined by evaluating the objective function at the center of each initial interval vector. In the subsequent evolutionary process, a local optimization step returns an estimate of the bounds of the objective function over the interval vectors. Although these bounds may be inaccurate at first, owing to large interval widths and complicated function properties, reducing the interval widths over time, together with a selection approach similar to simulated annealing, yields reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in search of the global optimum.
Predicting polymeric crystal structures by evolutionary algorithms
NASA Astrophysics Data System (ADS)
Zhu, Qiang; Sharma, Vinit; Oganov, Artem R.; Ramprasad, Ramamurthy
2014-10-01
The recently developed evolutionary algorithm USPEX proved to be a tool that enables accurate and reliable prediction of structures. Here we extend this method to predict the crystal structure of polymers by constrained evolutionary search, where each monomeric unit is treated as a building block with fixed connectivity. This greatly reduces the search space and allows the initial structure generation with different sequences and packings of these blocks. The new constrained evolutionary algorithm is successfully tested and validated on a diverse range of experimentally known polymers, namely, polyethylene, polyacetylene, poly(glycolic acid), poly(vinyl chloride), poly(oxymethylene), poly(phenylene oxide), and poly(p-phenylene sulfide). By fixing the orientation of polymeric chains, this method can be further extended to predict the structures of complex linear polymers, such as all polymorphs of poly(vinylidene fluoride), nylon-6 and cellulose. The excellent agreement between predicted crystal structures and experimentally known structures assures a major role of this approach in the efficient design of the future polymeric materials.
Evolutionary algorithm for metabolic pathways synthesis.
Gerard, Matias F; Stegmayer, Georgina; Milone, Diego H
2016-06-01
Metabolic pathway building is an active field of research, necessary for understanding and manipulating the metabolism of organisms. Different approaches, mainly based on classical search methods, exist to find linear sequences of reactions linking two compounds. An important limitation of these methods, however, is the exponential growth of the search tree when large numbers of compounds and reactions are considered. Moreover, such models do not take into account all substrates of each reaction during the search, often leading to solutions that lack biological feasibility. This work proposes a new evolutionary algorithm that can search not only linear but also branched metabolic pathways, formed by feasible reactions that relate multiple compounds simultaneously. Tests performed on several sets of reactions show that the algorithm finds feasible linear and branched metabolic pathways. PMID:27080162
Performance Comparison Of Evolutionary Algorithms For Image Clustering
NASA Astrophysics Data System (ADS)
Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.
2014-09-01
Evolutionary computation tools can process real-valued numerical sets to extract suboptimal solutions of a designed problem. Data clustering algorithms have been used intensively for image segmentation in remote sensing applications. Despite the wide use of evolutionary algorithms for data clustering, their clustering performance has rarely been studied with clustering validation indexes. In this paper, recently proposed evolutionary algorithms (the Artificial Bee Colony algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search algorithm (CS), Adaptive Differential Evolution algorithm (JADE), Differential Search Algorithm (DSA), and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (k-means, FCM, and SOM networks) are used to cluster images, and their performance is compared using four clustering validation indexes. Experimental results show that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.
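As an example of the validation indexes referred to above, here is one common choice, the Davies-Bouldin index, in minimal form (the specific index and the toy data are illustrative assumptions, not necessarily those used in the paper):

```python
import math

def davies_bouldin(points, labels, centers):
    """Davies-Bouldin index: lower means tighter, better-separated clusters.
    scatter[j] is the mean distance of cluster j's members to its center;
    each cluster is scored against its 'worst' neighbor."""
    k = len(centers)
    scatter = []
    for j in range(k):
        members = [p for p, l in zip(points, labels) if l == j]
        scatter.append(sum(math.dist(p, centers[j]) for p in members) / len(members))
    total = 0.0
    for i in range(k):
        total += max((scatter[i] + scatter[j]) / math.dist(centers[i], centers[j])
                     for j in range(k) if j != i)
    return total / k

# Two well-separated toy clusters with their centroids as cluster-centers.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labels = [0, 0, 0, 1, 1, 1]
centers = [(1 / 3, 1 / 3), (31 / 3, 31 / 3)]
print(round(davies_bouldin(pts, labels, centers), 4))
```

Evolved cluster-centers from different algorithms can be ranked by feeding each set into such an index on the same data.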
Wind farm optimization using evolutionary algorithms
NASA Astrophysics Data System (ADS)
Ituarte-Villarreal, Carlos M.
In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing wind-farm design and siting and in determining whether a project is economically feasible. In the present work, three different optimization methods are proposed for solving the WFLO: (i) a modified viral system algorithm applied to optimizing the placement of components in a wind farm to maximize energy output for a stated wind environment at the site. The optimization problem is formulated as the minimization of energy cost per unit produced, with a penalty for lack of system reliability; the viral system algorithm is applied to three well-known problems from the wind-energy literature. (ii) A new multiple-objective evolutionary algorithm for optimal placement of wind turbines that considers the power output, cost, and reliability of the system. The algorithm is based on evolutionary computation, and the objective functions are the maximization of power output, the minimization of wind-farm cost, and the maximization of system reliability; the final solution to this multiple-objective problem is presented as a set of Pareto solutions. (iii) A hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm, with the universal generating function (UGF) analytical approach introduced to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind-speed states. The proposed methodology considers the specific probability functions of the wind resource to describe its behavior and to account for the stochastic behavior of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a
A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis
Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano
2015-01-01
As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. How to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space described by a probability distribution whose centroid is an (m − 1)-dimensional piecewise continuous manifold. The least-squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages, and that MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA are also identified and discussed in this paper. PMID:25874246
Evolutionary algorithm for vehicle driving cycle generation.
Perhinschi, Mario G; Marlowe, Christopher; Tamayo, Sergio; Tu, Jun; Wayne, W Scott
2011-09-01
Modeling transit bus emissions and fuel economy requires a large amount of experimental data over wide ranges of operational conditions. Chassis dynamometer tests are typically performed using representative driving cycles defined based on vehicle instantaneous speed as sequences of "microtrips", which are intervals between consecutive vehicle stops. Overall significant parameters of the driving cycle, such as average speed, stops per mile, kinetic intensity, and others, are used as independent variables in the modeling process. Performing tests at all the necessary combinations of parameters is expensive and time consuming. In this paper, a methodology is proposed for building driving cycles at prescribed independent variable values using experimental data through the concatenation of "microtrips" isolated from a limited number of standard chassis dynamometer test cycles. The selection of the adequate "microtrips" is achieved through a customized evolutionary algorithm. The genetic representation uses microtrip definitions as genes. Specific mutation, crossover, and karyotype alteration operators have been defined. The Roulette-Wheel selection technique with elitist strategy drives the optimization process, which consists of minimizing the errors to desired overall cycle parameters. This utility is part of the Integrated Bus Information System developed at West Virginia University. PMID:22010377
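The selection machinery described above can be sketched as follows; the fitness values and the omission of the crossover, mutation, and karyotype-alteration operators are simplifications for illustration only:

```python
import random

def roulette_select(population, fitnesses, rng):
    """Roulette-wheel selection: draw an individual with probability
    proportional to its (positive) fitness."""
    total = sum(fitnesses)
    pick = rng.uniform(0, total)
    acc = 0.0
    for ind, f in zip(population, fitnesses):
        acc += f
        if acc >= pick:
            return ind
    return population[-1]          # guard against floating-point edge cases

def next_generation(population, fitnesses, rng, elite=1):
    """Elitist strategy: the best `elite` individuals survive unchanged;
    remaining slots are refilled by roulette-wheel draws (variation
    operators would be applied to the draws in a full GA)."""
    ranked = sorted(zip(fitnesses, range(len(population))), reverse=True)
    new = [population[i] for _, i in ranked[:elite]]
    while len(new) < len(population):
        new.append(roulette_select(population, fitnesses, rng))
    return new

# Hypothetical candidate cycles scored by inverse error to the target parameters.
pop = ["cycleA", "cycleB", "cycleC", "cycleD"]
fit = [1.0, 5.0, 2.0, 2.0]
print(next_generation(pop, fit, random.Random(7)))
```

In the paper's setting each individual would be a sequence of microtrip genes rather than a label, but the selection pressure works the same way.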
A Review of Surrogate Assisted Multiobjective Evolutionary Algorithms
Díaz-Manríquez, Alan; Toscano, Gregorio; Barron-Zambrano, Jose Hugo; Tello-Leal, Edgar
2016-01-01
Multiobjective evolutionary algorithms have incorporated surrogate models in order to reduce the number of required evaluations to approximate the Pareto front of computationally expensive multiobjective optimization problems. Currently, few works have reviewed the state of the art in this topic. However, the existing reviews have focused on classifying the evolutionary multiobjective optimization algorithms with respect to the type of underlying surrogate model. In this paper, we center our focus on classifying multiobjective evolutionary algorithms with respect to their integration with surrogate models. This interaction has led us to classify similar approaches and identify advantages and disadvantages of each class. PMID:27382366
Comparing Evolutionary Strategies on a Biobjective Cultural Algorithm
Lagos, Carolina; Crawford, Broderick; Cabrera, Enrique; Rubio, José-Miguel; Paredes, Fernando
2014-01-01
Evolutionary algorithms have been widely used to solve large and complex optimisation problems. Cultural algorithms (CAs) are evolutionary algorithms that have been used to solve both single-objective and, to a lesser extent, multiobjective optimisation problems. To solve these optimisation problems, CAs make use of different strategies such as normative knowledge, historical knowledge, and circumstantial knowledge, among others. In this paper we present a comparison among CAs that use different evolutionary strategies: the first implements historical knowledge, the second circumstantial knowledge, and the third normative knowledge. These CAs are applied to a biobjective uncapacitated facility location problem (BOUFLP), the biobjective version of the well-known uncapacitated facility location problem. To the best of our knowledge, only a few articles have applied evolutionary multiobjective algorithms to the BOUFLP, and none has focused on the impact of the evolutionary strategy on algorithm performance. Our biobjective cultural algorithm, called BOCA, obtains important improvements when compared with other well-known evolutionary biobjective optimisation algorithms such as PAES and NSGA-II. The conflicting objective functions considered in this study are cost minimisation and coverage maximisation. Solutions obtained by each algorithm are compared using the hypervolume S metric. PMID:25254257
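The hypervolume S metric used for the comparison has a simple closed form in two objectives. The sketch below assumes minimization of both objectives and a user-chosen reference point (the paper's objectives would first be cast into a common min/min form):

```python
def hypervolume_2d(front, ref):
    """Hypervolume (S metric) of a 2-D minimisation front: the area
    dominated by the front and bounded above by the reference point."""
    pts = sorted(set(front))              # ascending in the first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                    # skip dominated points
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

# Toy nondominated front of three solutions, reference point (5, 5).
front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # 12.0
```

A larger hypervolume indicates a front that is closer to the true Pareto front and better spread, which is why it serves as a single-number comparison between algorithms.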
Learning evasive maneuvers using evolutionary algorithms and neural networks
NASA Astrophysics Data System (ADS)
Kang, Moung Hung
In this research, evolutionary algorithms and recurrent neural networks are combined to evolve control knowledge to help pilots avoid being struck by a missile, based on a two-dimensional air combat simulation model. The recurrent neural network is used to represent the pilot's control knowledge, and evolutionary algorithms (i.e., Genetic Algorithms, Evolution Strategies, and Evolutionary Programming) are used to optimize the weights and/or topology of the recurrent neural network. The simulation model of the two-dimensional evasive maneuver problem is used to evaluate the performance of the evolved recurrent neural network. Five typical air combat conditions were selected to evaluate the performance of the recurrent neural networks evolved by the evolutionary algorithms. Analysis of Variance (ANOVA) tests and response graphs were used to analyze the results. Overall, there was little difference in the performance of the three evolutionary algorithms used to evolve the control knowledge. However, the number of generations each algorithm required to obtain the best performance differed significantly: ES converged the fastest, followed by EP and then GA. The recurrent neural networks evolved by the evolutionary algorithms outperformed the traditional recommendation for evasive maneuvers, the maximum gravitational turn, in each air combat condition. Furthermore, the recommended actions of the recurrent neural networks are reasonable and can be used for pilot training.
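As an illustrative sketch of this neuro-evolutionary setup (not the dissertation's actual simulation), the snippet below evolves the weights of a tiny network with a (1+lambda) evolution strategy on a toy XOR task. The architecture, offspring count, and fixed mutation step are assumptions made for brevity, and a feedforward net stands in for the recurrent one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: fit XOR with a tiny 2-2-1 feedforward net.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, X):
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # sigmoid output

def loss(w):
    return float(np.mean((forward(w, X) - y) ** 2))  # mean squared error

# (1+lambda) evolution strategy with a fixed mutation step
w = rng.normal(size=9)
sigma = 0.5
initial = loss(w)
for _ in range(300):
    offspring = w + sigma * rng.normal(size=(20, 9))
    losses = [loss(c) for c in offspring]
    best = int(np.argmin(losses))
    if losses[best] < loss(w):      # plus-selection: keep the parent unless beaten
        w = offspring[best]
final = loss(w)
```

Real applications would add step-size self-adaptation (as in ES/EP) rather than the fixed sigma used here.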
PACS model based on digital watermarking and its core algorithms
NASA Astrophysics Data System (ADS)
Que, Dashun; Wen, Xianlin; Chen, Bi
2009-10-01
A PACS model based on digital watermarking is proposed by analyzing medical image features and PACS requirements from the point of view of information security, its core being a digital watermarking server and the corresponding processing module. Two kinds of digital watermarking algorithm are studied: a non-region-of-interest (NROI) watermarking algorithm based on the wavelet domain and block means, and a reversible watermarking algorithm based on extended difference and a pseudo-random matrix. The former is a robust lossy watermark: embedding in the NROI via the wavelet transform protects the focus area (ROI) of the image, and the block-mean approach enhances the anti-attack capability. The latter is a fragile lossless watermark: it is simple to implement, localizes tampering effectively, and its pseudo-random matrix strengthens the correlation and security between pixels. Extensive experiments were carried out, including realization of the digital watermarking PACS model, the watermarking processing module and its anti-attack experiments, the digital watermarking server, and network transmission simulations of medical images. Theoretical analysis and experimental results show that the designed PACS model can effectively ensure the confidentiality, authenticity, integrity, and security of medical image information.
Evolutionary algorithm based structure search for hard ruthenium carbides
NASA Astrophysics Data System (ADS)
Harikrishnan, G.; Ajith, K. M.; Chandra, Sharat; Valsakumar, M. C.
2015-12-01
An exhaustive structure search employing an evolutionary algorithm and density functional theory has been carried out for ruthenium carbides, for the three stoichiometries Ru1C1, Ru2C1 and Ru3C1, yielding five lowest-energy structures. These include the structures from the two reported syntheses of ruthenium carbides. Their emergence in the present structure search in stoichiometries unlike the previously reported ones is plausible in the light of the high temperature required for their synthesis. The mechanical stability and ductile character of all these systems are established by their elastic constants, and the dynamical stability of three of them by the phonon data. The rhombohedral structure $(R\bar{3}m)$ is found to be energetically the most stable one in the Ru1C1 stoichiometry, and the hexagonal structure $(P\bar{6}m2)$ the most stable in the Ru3C1 stoichiometry. The RuC zinc-blende system is a semiconductor with a band gap of 0.618 eV, while the other two stable systems are metallic. Employing a semi-empirical model based on bond strength, the hardness of RuC zinc blende is found to be a significantly large value of ~37 GPa, while a fairly large value of ~21 GPa is obtained for the RuC rhombohedral system. The positive formation energies of these systems show that high temperature and possibly high pressure are necessary for their synthesis.
Locally-adaptive and memetic evolutionary pattern search algorithms.
Hart, William E
2003-01-01
Recent convergence analyses of evolutionary pattern search algorithms (EPSAs) have shown that these methods have a weak stationary point convergence theory for a broad class of unconstrained and linearly constrained problems. This paper describes how the convergence theory for EPSAs can be adapted to allow each individual in a population to have its own mutation step length (similar to the design of evolutionary programming and evolution strategies algorithms). These are called locally-adaptive EPSAs (LA-EPSAs) since each individual's mutation step length is independently adapted in different local neighborhoods. The paper also describes a variety of standard formulations of evolutionary algorithms that can be used for LA-EPSAs. Further, it is shown how this convergence theory can be applied to memetic EPSAs, which use local search to refine points within each iteration. PMID:12804096
Using Evolutionary Algorithms to Induce Oblique Decision Trees
Cantu-Paz, E.; Kamath, C.
2000-01-21
This paper illustrates the application of evolutionary algorithms (EAs) to the problem of oblique decision tree induction. The objectives are to demonstrate that EAs can find classifiers whose accuracy is competitive with other oblique tree construction methods, and that this can be accomplished in a shorter time. Experiments were performed with a (1+1) evolution strategy and a simple genetic algorithm on public domain and artificial data sets. The empirical results suggest that the EAs quickly find competitive classifiers, and that EAs scale up better than traditional methods to the dimensionality of the domain and the number of training instances.
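The (1+1) evolution strategy mentioned above can be sketched for a single oblique split: a hyperplane is evolved to separate two synthetic Gaussian classes. The data, step size, and acceptance rule are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: two 2-D Gaussian classes separable by an oblique line.
n = 100
X = np.vstack([rng.normal([0, 0], 0.5, (n, 2)), rng.normal([2, 1], 0.5, (n, 2))])
y = np.array([0] * n + [1] * n)

def accuracy(w):
    # Oblique split: predict class 1 when w[0]*x1 + w[1]*x2 + w[2] > 0
    pred = (X @ w[:2] + w[2] > 0).astype(int)
    return float(np.mean(pred == y))

w = rng.normal(size=3)
sigma = 0.5
acc0 = accuracy(w)
for _ in range(500):
    child = w + sigma * rng.normal(size=3)
    if accuracy(child) >= accuracy(w):   # (1+1) plus-selection
        w = child
acc1 = accuracy(w)
```

A full oblique tree would apply this search recursively at every internal node.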
Biased Randomized Algorithm for Fast Model-Based Diagnosis
NASA Technical Reports Server (NTRS)
Williams, Colin; Vartan, Farrokh
2005-01-01
A biased randomized algorithm has been developed to enable the rapid computational solution of a propositional-satisfiability (SAT) problem equivalent to a diagnosis problem. The closest competing methods of automated diagnosis are described in the preceding articles "Fast Algorithms for Model-Based Diagnosis" and "Two Methods of Efficient Solution of the Hitting-Set Problem" (NPO-30584), which appear elsewhere in this issue. It is necessary to recapitulate some of the information from the cited articles as a prerequisite to a description of the present method. As used here, "diagnosis" signifies, more precisely, a type of model-based diagnosis in which one explores any logical inconsistencies between the observed and expected behaviors of an engineering system. The function of each component and the interconnections among all the components of the engineering system are represented as a logical system. Hence, the expected behavior of the engineering system is represented as a set of logical consequences. Faulty components lead to inconsistency between the observed and expected behaviors of the system, represented by logical inconsistencies. Diagnosis, the task of finding the faulty components, reduces to finding the components whose abnormalities could explain all the logical inconsistencies. One seeks a minimal set of faulty components (denoted a minimal diagnosis), because the trivial solution, in which all components are deemed to be faulty, always explains all inconsistencies. In the methods of the cited articles, the minimal-diagnosis problem is treated as equivalent to a minimal-hitting-set problem, which is translated from a combinatorial to a computational problem by mapping it onto the Boolean-satisfiability and integer-programming problems. The integer-programming approach taken in one of the prior methods is complete (in the sense that it is guaranteed to find a solution if one exists) and slow and yields a lower bound on the size of the
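To make the hitting-set reduction concrete, here is a brute-force minimal-hitting-set sketch. It is illustrative only: the article's methods map the problem onto SAT and integer programming rather than enumerating subsets, and the conflict sets below are hypothetical.

```python
from itertools import combinations

def minimal_hitting_set(conflict_sets):
    """Smallest set of components intersecting every conflict set (brute force)."""
    universe = sorted(set().union(*conflict_sets))
    for size in range(1, len(universe) + 1):
        for candidate in combinations(universe, size):
            cand = set(candidate)
            if all(cand & cs for cs in conflict_sets):
                return cand
    return set()

# Hypothetical conflict sets derived from inconsistent observations:
conflicts = [{"A", "B"}, {"B", "C"}, {"A", "C"}]
diagnosis = minimal_hitting_set(conflicts)
```

Because subset sizes are tried in increasing order, the first hitting set found is guaranteed minimal; the cost is exponential, which is why the article resorts to SAT and integer-programming encodings.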
Evolutionary algorithms, simulated annealing, and Tabu search: a comparative study
NASA Astrophysics Data System (ADS)
Youssef, Habib; Sait, Sadiq M.; Adiche, Hakim
1998-10-01
Evolutionary algorithms, simulated annealing (SA), and Tabu Search (TS) are general iterative algorithms for combinatorial optimization. The term evolutionary algorithm is used to refer to any probabilistic algorithm whose design is inspired by evolutionary mechanisms found in biological species. The most widely known algorithms of this category are Genetic Algorithms (GA). GA, SA, and TS have been found to be very effective and robust in solving numerous problems from a wide range of application domains. Furthermore, they are even suitable for ill-posed problems where some of the parameters are not known beforehand. These properties are lacking in all traditional optimization techniques. In this paper we perform a comparative study among GA, SA, and TS. These algorithms have many similarities, but they also possess distinctive features, mainly in their strategies for searching the solution state space. The three heuristics are applied to the same optimization problem and compared with respect to (1) the quality of the best solution identified by each heuristic, (2) the progress of the search from initial solution(s) until the stopping criteria are met, (3) the progress of the cost of the best solution as a function of time, and (4) the number of solutions found at successive intervals of the cost function. The benchmark problem is the floorplanning of very large scale integrated circuits, a hard multi-criteria optimization problem. Fuzzy logic is used to combine all objective criteria into a single fuzzy evaluation function, which is then used to rate competing solutions.
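Of the three heuristics compared, simulated annealing is the simplest to sketch. The toy below anneals a 2-opt neighborhood on a small circular TSP instance; the instance and cooling schedule are assumptions for illustration, not the paper's VLSI floorplanning benchmark.

```python
import math, random

random.seed(3)

# Cities on a circle; the optimal tour visits them in angular order (length ~6.12).
n = 8
cities = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n)) for k in range(n)]

def tour_length(order):
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % n]]) for i in range(n))

order = list(range(n))
random.shuffle(order)
initial_cost = tour_length(order)
best_cost, cost, T = initial_cost, initial_cost, 2.0

for step in range(5000):
    i, j = sorted(random.sample(range(n), 2))
    cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]   # 2-opt reversal
    delta = tour_length(cand) - cost
    if delta < 0 or random.random() < math.exp(-delta / T):   # Metropolis acceptance
        order, cost = cand, cost + delta
        best_cost = min(best_cost, cost)
    T *= 0.999                                                # geometric cooling
```

The Metropolis rule occasionally accepts uphill moves, which is what distinguishes SA from pure hill climbing; GA and TS explore the same state space with population-based and memory-based strategies, respectively.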
PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.
Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A
2016-06-01
New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with the OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained. PMID:26454270
A Hybrid Evolutionary Algorithm for Wheat Blending Problem
Bonyadi, Mohammad Reza; Michalewicz, Zbigniew; Barone, Luigi
2014-01-01
This paper presents a hybrid evolutionary algorithm to deal with the wheat blending problem. The unique constraints of this problem make many existing algorithms fail: either they do not generate acceptable results or they are not able to complete optimization within the required time. The proposed algorithm starts with a filtering process that follows predefined rules to reduce the search space. Then the linear-relaxed version of the problem is solved using a standard linear programming algorithm. The result is used in conjunction with a solution generated by a heuristic method to generate an initial solution. After that, a hybrid of an evolutionary algorithm, a heuristic method, and a linear programming solver is used to improve the quality of the solution. A local search based posttuning method is also incorporated into the algorithm. The proposed algorithm has been tested on artificial test cases and also real data from past years. Results show that the algorithm is able to find quality results in all cases and outperforms the existing method in terms of both quality and speed. PMID:24707222
A novel fitness evaluation method for evolutionary algorithms
NASA Astrophysics Data System (ADS)
Wang, Ji-feng; Tang, Ke-zong
2013-03-01
Fitness evaluation is a crucial task in evolutionary algorithms because it affects both the convergence speed and the quality of the final solution, yet these algorithms may require huge computational power when solving nonlinear programming problems. This paper proposes a novel fitness evaluation approach that embeds similarity-based learning in a classical differential evolution (SDE) to evaluate all new individuals. Each individual consists of three elements: a parameter vector (v), a fitness value (f), and a reliability value (r). The fitness f is first estimated by the similarity-based approach, and only when the reliability r falls below a threshold is f computed using the true fitness function. Moreover, applying an error compensation scheme further enhances the performance of the algorithm, making the estimated fitness much closer to the true fitness value for each new child. Simulation results over a comprehensive set of benchmark functions show that the convergence rate of the proposed algorithm is much faster than that of the compared algorithms.
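A minimal sketch of the similarity-based idea, assuming a nearest-neighbour estimate whose reliability is judged by distance to an archive of truly evaluated points. The paper's SDE embeds this in differential evolution with an error-compensation scheme, both omitted here; the radius and objective are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def true_fitness(x):
    return float(np.sum(x * x))          # stands in for an expensive function

archive_x, archive_f = [], []            # points with exactly known fitness
true_evals = 0

def evaluate(x, radius=0.3):
    """Estimate fitness from the nearest archived point; fall back to the true
    function when no archived point is close enough (low reliability)."""
    global true_evals
    if archive_x:
        d = [np.linalg.norm(x - a) for a in archive_x]
        i = int(np.argmin(d))
        if d[i] < radius:                # reliable: reuse the neighbour's fitness
            return archive_f[i]
    f = true_fitness(x)                  # unreliable: pay for a true evaluation
    true_evals += 1
    archive_x.append(x.copy()); archive_f.append(f)
    return f

total = 200
for _ in range(total):
    evaluate(rng.uniform(-1, 1, size=3))
```

The saving comes from the gap between `total` and `true_evals`; the trade-off is estimation error, which the paper's error compensation is designed to reduce.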
A survey on evolutionary algorithm based hybrid intelligence in bioinformatics.
Li, Shan; Kang, Liying; Zhao, Xing-Ming
2014-01-01
With the rapid advance in genomics, proteomics, metabolomics, and other types of omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. It is becoming a big challenge for the bioinformatists to analyze and interpret these data with conventional intelligent techniques, for example, support vector machines. Recently, the hybrid intelligent methods, which integrate several standard intelligent approaches, are becoming more and more popular due to their robustness and efficiency. Specifically, the hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields due to the efficiency and robustness of EAs. In this review, we give an introduction about the applications of hybrid intelligent methods, in particular those based on evolutionary algorithm, in bioinformatics. In particular, we focus on their applications to three common problems that arise in bioinformatics, that is, feature selection, parameter estimation, and reconstruction of biological networks. PMID:24729969
Available Transfer Capability Determination Using Hybrid Evolutionary Algorithm
NASA Astrophysics Data System (ADS)
Jirapong, Peeraool; Ongsakul, Weerakorn
2008-10-01
This paper proposes a new hybrid evolutionary algorithm (HEA) based on evolutionary programming (EP), tabu search (TS), and simulated annealing (SA) to determine the available transfer capability (ATC) of power transactions between different control areas in deregulated power systems. The optimal power flow (OPF)-based ATC determination is used to evaluate the feasible maximum ATC value within real and reactive power generation limits, line thermal limits, voltage limits, and voltage and angle stability limits. The HEA approach simultaneously searches for real power generations except slack bus in a source area, real power loads in a sink area, and generation bus voltages to solve the OPF-based ATC problem. Test results on the modified IEEE 24-bus reliability test system (RTS) indicate that ATC determination by the HEA could enhance ATC far more than those from EP, TS, hybrid TS/SA, and improved EP (IEP) algorithms, leading to an efficient utilization of the existing transmission system.
Hart, W.E.
1999-02-10
Evolutionary programs (EPs) and evolutionary pattern search algorithms (EPSAs) are two general classes of evolutionary methods for optimizing on continuous domains. The relative performance of these methods has been evaluated on standard global optimization test functions, and these results suggest that EPSAs converge more robustly to near-optimal solutions than EPs. In this paper we evaluate the relative performance of EPSAs and EPs on a real-world application: flexible ligand binding in the AutoDock docking software. We compare the performance of these methods on a suite of docking test problems. Our results confirm that EPSAs and EPs have comparable performance, and they suggest that EPSAs may be more robust on larger, more complex problems.
Receiver Diversity Combining Using Evolutionary Algorithms in Rayleigh Fading Channel
Akbari, Mohsen; Manesh, Mohsen Riahi
2014-01-01
In diversity combining at the receiver, the output signal-to-noise ratio (SNR) is often maximized by using the maximal ratio combining (MRC) provided that the channel is perfectly estimated at the receiver. However, channel estimation is rarely perfect in practice, which results in deteriorating the system performance. In this paper, an imperialistic competitive algorithm (ICA) is proposed and compared with two other evolutionary based algorithms, namely, particle swarm optimization (PSO) and genetic algorithm (GA), for diversity combining of signals travelling across the imperfect channels. The proposed algorithm adjusts the combiner weights of the received signal components in such a way that maximizes the SNR and minimizes the bit error rate (BER). The results indicate that the proposed method eliminates the need of channel estimation and can outperform the conventional diversity combining methods. PMID:25045725
Supervised and unsupervised discretization methods for evolutionary algorithms
Cantu-Paz, E
2001-01-24
This paper introduces simple model-building evolutionary algorithms (EAs) that operate on continuous domains. The algorithms are based on supervised and unsupervised discretization methods that have been used as preprocessing steps in machine learning. The basic idea is to discretize the continuous variables and use the discretization as a simple model of the solutions under consideration. The model is then used to generate new solutions directly, instead of using the usual operators based on sexual recombination and mutation. The algorithms presented here have fewer parameters than traditional and other model-building EAs. The proposed algorithms that use multivariate models are expected to scale up better with the dimensionality of the problem than existing EAs.
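A minimal sketch of such a discretization-based model-building EA, assuming a univariate histogram model fitted to elite solutions on a sphere objective. The bin count, smoothing, and selection pressure are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)

def f(x):                                # objective: minimize the sphere function
    return np.sum(x * x, axis=1)

dim, pop_size, n_bins, elite = 4, 60, 10, 15
lo, hi = -5.0, 5.0
edges = np.linspace(lo, hi, n_bins + 1)

pop = rng.uniform(lo, hi, size=(pop_size, dim))
history = []
for gen in range(40):
    fit = f(pop)
    history.append(fit.min())
    best = pop[np.argsort(fit)[:elite]]             # select the elite solutions
    new = np.empty_like(pop)
    for d in range(dim):
        # Univariate model: a histogram of elite values in each dimension
        counts, _ = np.histogram(best[:, d], bins=edges)
        probs = (counts + 0.1) / (counts + 0.1).sum()   # smoothed bin probabilities
        bins = rng.choice(n_bins, size=pop_size, p=probs)
        new[:, d] = rng.uniform(edges[bins], edges[bins + 1])  # sample within bins
    pop = new
```

New solutions are sampled directly from the discretized model, replacing recombination and mutation as the paper describes; a supervised variant would choose bin boundaries using the fitness labels instead of fixed equal-width bins.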
Models based on "out-of Kilter" algorithm
NASA Astrophysics Data System (ADS)
Adler, M. J.; Drobot, R.
2012-04-01
In the case of many water users along a river stretch, it is very important during low flows and droughty periods to develop an optimization model for water allocation that covers all needs under predefined constraints, depending on the Contingency Plan for drought management. Such a program was developed during the implementation of the WATMAN Project in Romania (WATMAN Project, 2005-2006, USTDA) for the Arges-Dambovita-Ialomita basin water transfers. This good practice was proposed for the WATER CoRe Project Good Practice Handbook for Drought Management (Interreg IVC, 2011), to be applied in the European regions. Two types of simulation-optimization models based on an improved version of the out-of-kilter algorithm as the optimization technique have been developed and used in Romania: • models supporting the short-term operation of a water management system (WMS), • models generically named SIMOPT that analyse long-term WMS operation and whose main results are the statistical WMS functional parameters. A real WMS is modeled by an arc-node network, so the real WMS operation problem becomes a problem of flows in networks. The nodes and oriented arcs, as well as their characteristics such as lower and upper limits and associated costs, are the direct analog of the physical and operational WMS characteristics. Arcs represent both physical and conventional elements of the WMS, such as river branches, channels or pipes, water user demands or other water management requirements, tranches of water reservoir volumes, and water levels in channels or rivers; nodes are junctions of at least two arcs and stand for locations of lakes or water reservoirs and/or confluences of river branches, water withdrawal or wastewater discharge points, etc. Quantitative features of water resources, water users, and water reservoirs or other water works are expressed as constraints of non-violating the lower and upper limits assigned on arcs. Options of WMS functioning i.e. water retention/discharge in
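The out-of-kilter algorithm itself is intricate; as a hedged stand-in, the sketch below solves the same kind of arc-node min-cost flow problem with the successive-shortest-path method, on a toy four-node network whose capacities and costs are invented for illustration (they are not WATMAN data).

```python
class MinCostFlow:
    """Successive-shortest-path min-cost max-flow on a small network."""
    def __init__(self, n):
        self.n = n
        self.graph = [[] for _ in range(n)]   # edge: [to, cap, cost, reverse index]

    def add_edge(self, u, v, cap, cost):
        self.graph[u].append([v, cap, cost, len(self.graph[v])])
        self.graph[v].append([u, 0, -cost, len(self.graph[u]) - 1])

    def solve(self, s, t):
        total_flow = total_cost = 0
        while True:
            dist = [float("inf")] * self.n
            dist[s], parent = 0, [None] * self.n
            updated = True
            while updated:                    # Bellman-Ford over the residual graph
                updated = False
                for u in range(self.n):
                    if dist[u] == float("inf"):
                        continue
                    for i, (v, cap, c, _) in enumerate(self.graph[u]):
                        if cap > 0 and dist[u] + c < dist[v]:
                            dist[v], parent[v] = dist[u] + c, (u, i)
                            updated = True
            if dist[t] == float("inf"):       # no augmenting path remains
                return total_flow, total_cost
            push, v = float("inf"), t         # bottleneck capacity on the path
            while v != s:
                u, i = parent[v]
                push = min(push, self.graph[u][i][1])
                v = u
            v = t
            while v != s:                     # push flow, update residual capacities
                u, i = parent[v]
                self.graph[u][i][1] -= push
                self.graph[v][self.graph[u][i][3]][1] += push
                v = u
            total_flow += push
            total_cost += push * dist[t]

net = MinCostFlow(4)                  # 0 = source, 3 = sink; arcs: (cap, unit cost)
net.add_edge(0, 1, 2, 1)
net.add_edge(0, 2, 1, 2)
net.add_edge(1, 2, 1, 1)
net.add_edge(1, 3, 1, 3)
net.add_edge(2, 3, 2, 1)
flow, cost = net.solve(0, 3)
```

In a WMS model, arcs would carry lower bounds as well (demands, minimum ecological flows), which is exactly what the out-of-kilter formulation handles natively.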
Multi-objective Job Shop Rescheduling with Evolutionary Algorithm
NASA Astrophysics Data System (ADS)
Hao, Xinchang; Gen, Mitsuo
In current manufacturing systems, production processes and management are subject to many unexpected events and constantly emerging new requirements. This dynamic environment implies that operation rescheduling is usually indispensable. A wide variety of procedures and heuristics has been developed to improve the quality of rescheduling. However, most proposed approaches are derived under simplified assumptions. As a consequence, these approaches may be inconsistent with the actual requirements of a real production environment, i.e., they are often unsuitable and inflexible in responding efficiently to frequent changes. In this paper, a multi-objective job shop rescheduling problem (moJSRP) is formulated to improve the practical application of rescheduling. To solve the moJSRP model, an evolutionary algorithm is designed, in which a random key-based representation and interactive adaptive-weight (i-awEA) fitness assignment are embedded. To verify its effectiveness, the proposed algorithm has been compared with other approaches and benchmarks on the robustness of moJSRP optimization. The comparison results show that iAWGA-A is better than the weighted-fitness method in terms of effectiveness and stability. Similarly, iAWGA-A also outperforms other well-known approaches such as the non-dominated sorting genetic algorithm (NSGA-II) and the strength Pareto evolutionary algorithm 2 (SPEA2).
Optimal classification of standoff bioaerosol measurements using evolutionary algorithms
NASA Astrophysics Data System (ADS)
Nyhavn, Ragnhild; Moen, Hans J. F.; Farsund, Øystein; Rustad, Gunnar
2011-05-01
Early warning systems based on standoff detection of biological aerosols require real-time signal processing of a large quantity of high-dimensional data, challenging the systems' efficiency in terms of both computational complexity and classification accuracy. Hence, optimal feature selection is essential in forming a stable and efficient classification system. This involves finding optimal signal processing parameters, characteristic spectral frequencies, and other data transformations in a large variable space, stating the need for an efficient and smart search algorithm. Evolutionary algorithms are population-based optimization methods inspired by Darwinian evolutionary theory. These methods apply selection, mutation, and recombination to a population of competing solutions and evolve the population from generation to generation. We have employed genetic algorithms in the search for optimal feature selection and signal processing parameters for classification of biological agents. The experimental data were acquired with a spectrally resolved lidar based on ultraviolet laser-induced fluorescence, and included several releases of 5 common simulants. The genetic algorithm outperforms benchmark methods based on analytic, sequential, and random approaches, such as support vector machines, Fisher's linear discriminant, and principal component analysis, with significantly improved classification accuracy compared to the best classical method.
Filter model based dwell time algorithm for ion beam figuring
NASA Astrophysics Data System (ADS)
Li, Yun; Xing, Tingwen; Jia, Xin; Wei, Haoming
2010-10-01
The process of Ion Beam Figuring (IBF) can be described by a two-dimensional convolution equation that includes the dwell time. Solving for the dwell time is a key problem in IBF. Theoretically, the dwell time can be obtained from a two-dimensional deconvolution; however, this deconvolution is often ill-posed, and a suitable solution is hard to obtain. In this article, a dwell time algorithm is proposed that exploits the characteristics of IBF. The Beam Removal Function (BRF) in IBF is usually Gaussian and can be regarded as an inverted Gaussian filter whose stop-band attenuates different frequencies to different degrees; the proposed dwell time algorithm is based on this observation. The Curved Surface Smooth Extension (CSSE) method and the Fast Fourier Transform (FFT) algorithm are also used. The simulation results show that this algorithm is highly precise, effective, and suitable for practical application.
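A minimal sketch of filter-based dwell-time computation in one dimension, assuming circular convolution and a regularised inverse filter; the article's CSSE extension and full 2-D treatment are omitted, and the BRF width, target map, and regularisation constant are invented for the example.

```python
import numpy as np

n = 256
x = np.arange(n)

# Gaussian beam removal function (BRF), normalised and centred for circular FFT use
brf = np.exp(-0.5 * ((x - n // 2) / 5.0) ** 2)
brf /= brf.sum()
H = np.fft.fft(np.fft.ifftshift(brf))               # BRF transfer function

# Synthetic "true" dwell-time map and the material removal it would produce
dwell_true = np.exp(-0.5 * ((x - 100) / 20.0) ** 2)
removal = np.real(np.fft.ifft(np.fft.fft(dwell_true) * H))

# Regularised inverse filter: damp frequencies where the BRF transfers no energy
eps = 1e-6
dwell_est = np.real(np.fft.ifft(np.fft.fft(removal) * np.conj(H)
                                / (np.abs(H) ** 2 + eps)))

# Re-convolve the estimate to check that it reproduces the requested removal
residual = np.max(np.abs(np.real(np.fft.ifft(np.fft.fft(dwell_est) * H)) - removal))
```

The regularisation term `eps` is what tames the ill-posedness noted above: a plain spectral division by `H` would amplify the BRF's near-zero high-frequency response without bound.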
Virus evolutionary genetic algorithm for task collaboration of logistics distribution
NASA Astrophysics Data System (ADS)
Ning, Fanghua; Chen, Zichen; Xiong, Li
2005-12-01
In order to achieve JIT (Just-In-Time) performance and maximum client satisfaction in logistics collaboration, a Virus Evolutionary Genetic Algorithm (VEGA) was put forward under the double constraints of logistics resources and operation sequence. Based on a mathematical description of a multiple-objective function, the algorithm was designed to schedule logistics tasks with different due dates and allocate them to network members. By introducing a penalty item, makespan and customer satisfaction were expressed in the fitness function, and a dynamic adaptive probability of infection was used to improve the performance of local search. Compared to a standard Genetic Algorithm (GA), experimental results illustrate the performance superiority of VEGA. The VEGA can thus provide a powerful decision-making technique for optimizing resource configuration in a logistics network.
Logit Model based Performance Analysis of an Optimization Algorithm
NASA Astrophysics Data System (ADS)
Hernández, J. A.; Ospina, J. D.; Villada, D.
2011-09-01
In this paper, the performance of the Multi Dynamics Algorithm for Global Optimization (MAGO) is studied through simulation using five standard test functions. To guarantee that the algorithm converges to a global optimum, a set of experiments searching for the best combination of the only two MAGO parameters (the number of iterations and the number of potential solutions) is considered. These parameters are sequentially varied while increasing the dimension of several test functions, and performance curves are obtained. The MAGO was originally designed to perform well with small populations; therefore, the self-adaptation task with small populations is more challenging as the problem dimension grows. The results showed that the convergence probability to an optimal solution increases with growing numbers of iterations and potential solutions. However, the success rates slow down when the dimension of the problem escalates. A logit model is used to determine the mutual effects between the parameters of the algorithm.
Hybrid Evolutionary-Heuristic Algorithm for Capacitor Banks Allocation
NASA Astrophysics Data System (ADS)
Barukčić, Marinko; Nikolovski, Srete; Jović, Franjo
2010-11-01
The issue of optimal allocation of capacitor banks for power loss minimization in distribution networks is considered in this paper. This optimization problem has recently been tackled by application of contemporary soft computing methods such as genetic algorithms, neural networks, fuzzy logic, simulated annealing, ant colony methods, and hybrid methods. An evolutionary-heuristic method is proposed for optimal capacitor allocation in radial distribution networks. An evolutionary method based on a genetic algorithm is developed; the proposed method has fewer parameters than the usual genetic algorithm. A heuristic stage is used to improve the optimal solution given by the evolutionary stage, and a new cost-voltage node index is used in the heuristic stage to improve the quality of the solution. The efficiency of the proposed two-stage method has been tested on different test networks, and the quality of the solutions has been verified by comparison with other methods on the same test networks. The proposed method gives significantly better solutions for time-dependent load in the 69-bus network than those reported in the literature.
The algorithmic anatomy of model-based evaluation
Daw, Nathaniel D.; Dayan, Peter
2014-01-01
Despite many debates in the first half of the twentieth century, it is now largely a truism that humans and other animals build models of their environments and use them for prediction and control. However, model-based (MB) reasoning presents severe computational challenges. Alternative, computationally simpler, model-free (MF) schemes have been suggested in the reinforcement learning literature, and have afforded influential accounts of behavioural and neural data. Here, we study the realization of MB calculations, and the ways that this might be woven together with MF values and evaluation methods. There are as yet mostly only hints in the literature as to the resulting tapestry, so we offer more preview than review. PMID:25267820
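The contrast between MB and MF evaluation can be made concrete on a toy Markov decision process: value iteration uses the known model, while tabular Q-learning learns from sampled transitions only. The chain task, discount, and learning parameters below are illustrative assumptions, not taken from the review.

```python
import random

random.seed(6)

# Deterministic 5-state chain; action 1 moves right, action 0 moves left.
# Entering the rightmost state yields reward 1 and ends the episode.
N, GAMMA = 5, 0.9

def step(s, a):
    s2 = min(s + 1, N - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == N - 1 else 0.0), s2 == N - 1

# Model-based evaluation: value iteration using the known transition model
V = [0.0] * N
for _ in range(100):
    newV = [0.0] * N
    for s in range(N - 1):                      # the rightmost state is terminal
        vals = []
        for a in (0, 1):
            s2, r, done = step(s, a)
            vals.append(r + (0.0 if done else GAMMA * V[s2]))
        newV[s] = max(vals)
    V = newV

# Model-free evaluation: tabular Q-learning from sampled transitions
Q = [[0.0, 0.0] for _ in range(N)]
for _ in range(2000):
    s = random.randrange(N - 1)
    for _ in range(50):
        if random.random() < 0.2:               # epsilon-greedy exploration
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * max(Q[s2])
        Q[s][a] += 0.1 * (target - Q[s][a])     # learning rate 0.1
        if done:
            break
        s = s2

gap = max(abs(max(Q[s]) - V[s]) for s in range(N - 1))
```

Both schemes converge to the same values here, but the MB computation sweeps the model directly while the MF one pays in sampled experience, which is the computational trade-off the review examines.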
Evolutionary pattern search algorithms for unconstrained and linearly constrained optimization
HART,WILLIAM E.
2000-06-01
The authors describe a convergence theory for evolutionary pattern search algorithms (EPSAs) on a broad class of unconstrained and linearly constrained problems. EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. The analysis significantly extends the previous convergence theory for EPSAs: it applies to a broader class of EPSAs, and it applies to problems that are nonsmooth, have unbounded objective functions, and are linearly constrained. Further, the authors describe a modest change to the algorithmic framework of EPSAs for which a non-probabilistic convergence theory applies. These analyses are also noteworthy because they are considerably simpler than previous analyses of EPSAs.
Characterization of atmospheric contaminant sources using adaptive evolutionary algorithms
NASA Astrophysics Data System (ADS)
Cervone, Guido; Franzese, Pasquale; Grajdeanu, Adrian
2010-10-01
The characteristics of an unknown source of emissions in the atmosphere are identified using an Adaptive Evolutionary Strategy (AES) methodology based on ground concentration measurements and a Gaussian plume model. The AES methodology selects an initial set of source characteristics including position, size, mass emission rate, and wind direction, from which a forward dispersion simulation is performed. The error between the simulated concentrations from the tentative source and the observed ground measurements is calculated. Then the AES algorithm prescribes the next tentative set of source characteristics. The iteration proceeds towards minimum error, corresponding to convergence towards the real source. The proposed methodology was used to identify the source characteristics of 12 releases from the Prairie Grass field experiment of dispersion, two for each atmospheric stability class, ranging from very unstable to stable atmosphere. The AES algorithm was found to have advantages over a simple canonical ES and a Monte Carlo (MC) method which were used as benchmarks.
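The simulate-compare-mutate loop described above can be sketched as a simple evolution strategy. A toy inverse-square decay stands in for the Gaussian plume model, and the sensor layout, true source, and mutation width are all invented for illustration; this is not the AES implementation.

```python
import random

def forward(src, sensors):
    """Toy stand-in for a Gaussian plume model: concentration decays with
    squared distance from the source (x, y), scaled by emission rate q."""
    x, y, q = src
    return [q / (1.0 + (x - sx) ** 2 + (y - sy) ** 2) for sx, sy in sensors]

def error(src, sensors, observed):
    """Squared error between simulated and observed ground concentrations."""
    return sum((a - b) ** 2 for a, b in zip(forward(src, sensors), observed))

def es_invert(sensors, observed, gens=300, pop=20, sigma=0.5, seed=1):
    """(1+lambda)-style ES: mutate the best candidate source, keep the elite."""
    rng = random.Random(seed)
    best = [0.0, 0.0, 1.0]                    # initial tentative source
    best_err = error(best, sensors, observed)
    for _ in range(gens):
        for _ in range(pop):
            cand = [g + rng.gauss(0, sigma) for g in best]
            e = error(cand, sensors, observed)
            if e < best_err:
                best, best_err = cand, e
    return best, best_err

sensors = [(0, 0), (4, 0), (0, 4), (4, 4), (2, 1)]
obs = forward((1.0, 2.0, 3.0), sensors)       # synthetic "measurements"
est, err = es_invert(sensors, obs)
```

The iteration proceeds toward minimum error, i.e. convergence of the tentative source toward the true one, as in the paper's workflow.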
Discovering new materials and new phenomena with evolutionary algorithms
NASA Astrophysics Data System (ADS)
Oganov, Artem
Thanks to powerful evolutionary algorithms, in particular the USPEX method, it is now possible to predict both the stable compounds and their crystal structures at arbitrary conditions, given just the set of chemical elements. Recent developments include major increases of efficiency and extensions to low-dimensional systems and molecular crystals (which allowed large structures to be handled easily, e.g. Mg(BH4)2 and H2O-H2) and new techniques called evolutionary metadynamics and Mendelevian search. Some of the results that I will discuss include: 1. Theoretical and experimental evidence for a new partially ionic phase of boron, γ-B, and an insulating and optically transparent form of sodium. 2. Predicted stability of "impossible" chemical compounds that become stable under pressure, e.g. Na3Cl, Na2Cl, Na3Cl2, NaCl3, NaCl7, Mg3O2 and MgO2. 3. Novel surface phases (e.g. boron surface reconstructions). 4. Novel dielectric polymers, and novel permanent magnets confirmed by experiment and ready for applications. 5. Prediction of new ultrahard materials and computational proof that diamond is the hardest possible material.
On the application of evolutionary pattern search algorithms
Hart, W.E.
1997-02-01
This paper presents an experimental evaluation of evolutionary pattern search algorithms (EPSAs). Our evaluation indicates that EPSAs can achieve performance similar to that of evolutionary algorithms (EAs) on challenging global optimization problems. Additionally, we describe a stopping rule for EPSAs that reliably terminated them near a stationary point of the objective function. The ability of EPSAs to reliably terminate near stationary points offers a practical advantage over other EAs, which are typically stopped by heuristic stopping rules or simple bounds on the number of iterations. Our experiments also illustrate how the rate of the crossover operator can influence the trade-off between the number of iterations before termination and the quality of the solution found by an EPSA.
Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
Oyama, Akira; Liou, Meng-Sing
2001-01-01
A design optimization method for turbopumps of cryogenic rocket engines has been developed. Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimizations. Performances of design candidates are evaluated by using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, a single stage centrifugal pump design and multistage pump design optimizations are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.
A hierarchical evolutionary algorithm for multiobjective optimization in IMRT
Holdsworth, Clay; Kim, Minsun; Liao, Jay; Phillips, Mark H.
2010-01-01
Purpose: The current inverse planning methods for intensity modulated radiation therapy (IMRT) are limited because they are not designed to explore the trade-offs between the competing objectives of tumor and normal tissues. The goal was to develop an efficient multiobjective optimization algorithm that was flexible enough to handle any form of objective function and that resulted in a set of Pareto optimal plans. Methods: A hierarchical evolutionary multiobjective algorithm designed to quickly generate a small diverse Pareto optimal set of IMRT plans that meet all clinical constraints and reflect the optimal trade-offs in any radiation therapy plan was developed. The top level of the hierarchical algorithm is a multiobjective evolutionary algorithm (MOEA). The genes of the individuals generated in the MOEA are the parameters that define the penalty function minimized during an accelerated deterministic IMRT optimization that represents the bottom level of the hierarchy. The MOEA incorporates clinical criteria to restrict the search space through protocol objectives and then uses Pareto optimality among the fitness objectives to select individuals. The population size is not fixed, but a specialized niche effect, domination advantage, is used to control the population and plan diversity. The number of fitness objectives is kept to a minimum for greater selective pressure, but the number of genes is expanded for flexibility that allows a better approximation of the Pareto front. Results: The MOEA improvements were evaluated for two example prostate cases with one target and two organs at risk (OARs). The population of plans generated by the modified MOEA was closer to the Pareto front than populations of plans generated using a standard genetic algorithm package. Statistical significance of the method was established by compiling the results of 25 multiobjective optimizations using each method. From these sets of 12–15 plans, any random plan selected from a MOEA
Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
2005-01-01
This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project, working on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system coupling a GA with a surrogate model. In the second stage (July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization as a multi-objective optimization problem and performed multidisciplinary, multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.
Multi-objective evolutionary algorithm for operating parallel reservoir system
NASA Astrophysics Data System (ADS)
Chang, Li-Chiu; Chang, Fi-John
2009-10-01
This paper applies a multi-objective evolutionary algorithm, the non-dominated sorting genetic algorithm (NSGA-II), to examine the operations of a multi-reservoir system in Taiwan. The Feitsui and Shihmen reservoirs are the most important water supply reservoirs in Northern Taiwan, supplying the domestic and industrial water needs of over 7 million residents. A daily operational simulation model is developed to guide the releases of the reservoir system and then to calculate the shortage indices (SI) of both reservoirs over a long-term simulation period. The NSGA-II is used to minimize the SI values through identification of optimal joint operating strategies. Based on a 49-year data set, we demonstrate that better operational strategies would reduce shortage indices for both reservoirs. The results indicate that the NSGA-II provides a promising approach. The Pareto-front optimal solutions identified operational compromises for the two reservoirs that would be expected to improve joint operations.
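The Pareto selection at the heart of NSGA-II rests on the dominance relation; a minimal sketch follows, with the two reservoirs' shortage indices standing in as the two objectives (all numbers invented, and the crowding-distance and sorting machinery of full NSGA-II omitted):

```python
def dominates(a, b):
    """a dominates b: no worse in every objective and strictly better in at
    least one (both objectives minimized, e.g. the two shortage indices)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Nondominated subset: the first front in NSGA-II's sorting."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (SI_reservoir1, SI_reservoir2) pairs for candidate policies
policies = [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.6, 0.6), (1.0, 1.0)]
front = pareto_front(policies)
```

Here (0.6, 0.6) is dominated by (0.5, 0.5) and (1.0, 1.0) by every other point, so neither reaches the front; the surviving points are the operational compromises the abstract refers to.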
The hierarchical fair competition (HFC) framework for sustainable evolutionary algorithms.
Hu, Jianjun; Goodman, Erik; Seo, Kisung; Fan, Zhun; Rosenberg, Rondal
2005-01-01
Many current Evolutionary Algorithms (EAs) suffer from a tendency to converge prematurely or stagnate without progress for complex problems. This may be due to the loss of or failure to discover certain valuable genetic material or the loss of the capability to discover new genetic material before convergence has limited the algorithm's ability to search widely. In this paper, the Hierarchical Fair Competition (HFC) model, including several variants, is proposed as a generic framework for sustainable evolutionary search by transforming the convergent nature of the current EA framework into a non-convergent search process. That is, the structure of HFC does not allow the convergence of the population to the vicinity of any set of optimal or locally optimal solutions. The sustainable search capability of HFC is achieved by ensuring a continuous supply and the incorporation of genetic material in a hierarchical manner, and by culturing and maintaining, but continually renewing, populations of individuals of intermediate fitness levels. HFC employs an assembly-line structure in which subpopulations are hierarchically organized into different fitness levels, reducing the selection pressure within each subpopulation while maintaining the global selection pressure to help ensure the exploitation of the good genetic material found. Three EAs based on the HFC principle are tested - two on the even-10-parity genetic programming benchmark problem and a real-world analog circuit synthesis problem, and another on the HIFF genetic algorithm (GA) benchmark problem. The significant gain in robustness, scalability and efficiency by HFC, with little additional computing effort, and its tolerance of small population sizes, demonstrates its effectiveness on these problems and shows promise of its potential for improving other existing EAs for difficult problems. A paradigm shift from that of most EAs is proposed: rather than trying to escape from local optima or delay convergence at a
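The hierarchical stratification by fitness level that HFC describes can be sketched as follows. This is only an illustration of the level-assignment idea, with invented individuals and admission thresholds; the migration, breeding, and renewal machinery of the full framework is not shown.

```python
def hfc_assign(population, thresholds):
    """Place each (genome, fitness) pair in the highest fitness level whose
    admission threshold it meets (fitness maximized, thresholds ascending)."""
    levels = [[] for _ in thresholds]
    for genome, fit in population:
        lvl = 0
        for i, t in enumerate(thresholds):
            if fit >= t:
                lvl = i
        levels[lvl].append((genome, fit))
    return levels

# Hypothetical individuals; level 0 admits everyone, higher levels are stricter
pop = [("a", 0.1), ("b", 0.45), ("c", 0.8), ("d", 0.95), ("e", 0.3)]
levels = hfc_assign(pop, thresholds=[0.0, 0.4, 0.75])
```

New random genomes would enter at level 0 and migrate upward as their fitness crosses each admission threshold, which is how HFC maintains a continuous supply of intermediate-fitness individuals.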
An Allele Real-Coded Quantum Evolutionary Algorithm Based on Hybrid Updating Strategy.
Zhang, Yu-Xian; Qian, Xiao-Yi; Peng, Hui-Deng; Wang, Jian-Hui
2016-01-01
To improve the convergence rate and prevent premature convergence in quantum evolutionary algorithms, an allele real-coded quantum evolutionary algorithm based on a hybrid updating strategy is presented. The real variables are coded with a probability superposition of alleles. A hybrid updating strategy balancing global search and local search is presented, in which the superior allele is defined. On the basis of the superior and inferior alleles, a guided evolutionary process as well as allele updating with variable-scale contraction is adopted, and the Hε gate is introduced to prevent prematurity. Furthermore, the global convergence of the proposed algorithm is proved by Markov chain analysis. Finally, the proposed algorithm is compared with a genetic algorithm, a quantum evolutionary algorithm, and a double-chains quantum genetic algorithm in solving continuous optimization problems, and the experimental results verify its advantages in convergence rate and search accuracy. PMID:27057159
GX-Means: A model-based divide and merge algorithm for geospatial image clustering
Vatsavai, Raju; Symons, Christopher T; Chandola, Varun; Jun, Goo
2011-01-01
One of the practical issues in clustering is the specification of the appropriate number of clusters, which is not obvious when analyzing geospatial datasets, partly because they are huge (both in size and spatial extent) and high dimensional. In this paper we present a computationally efficient model-based split and merge clustering algorithm that incrementally finds model parameters and the number of clusters. Additionally, we attempt to provide insights into this problem and other data mining challenges that are encountered when clustering geospatial data. The basic algorithm we present is similar to the G-means and X-means algorithms; however, our proposed approach avoids certain limitations of these well-known clustering algorithms that are pertinent when dealing with geospatial data. We compare the performance of our approach with the G-means and X-means algorithms. Experimental evaluation on simulated data and on multispectral and hyperspectral remotely sensed image data demonstrates the effectiveness of our algorithm.
A multiagent evolutionary algorithm for constraint satisfaction problems.
Liu, Jing; Zhong, Weicai; Jiao, Licheng
2006-02-01
With the intrinsic properties of constraint satisfaction problems (CSPs) in mind, we divide CSPs into two types, namely, permutation CSPs and nonpermutation CSPs. According to their characteristics, several behaviors are designed for agents by making use of the ability of agents to sense and act on the environment. These behaviors are controlled by means of evolution, so that the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs) results. To overcome the disadvantages of the general encoding methods, the minimum conflict encoding is also proposed. Theoretical analyses show that MAEA-CSPs has a linear space complexity and converges to the global optimum. The first part of the experiments uses 250 benchmark binary CSPs and 79 graph coloring problems from the DIMACS challenge to test the performance of MAEA-CSPs for nonpermutation CSPs. MAEA-CSPs is compared with six well-defined algorithms and the effect of the parameters is analyzed systematically. The second part of the experiments uses a classical CSP, the n-queens problem, and a more practical case, job-shop scheduling problems (JSPs), to test the performance of MAEA-CSPs for permutation CSPs. The scalability of MAEA-CSPs with n for n-queen problems is studied with great care. The results show that MAEA-CSPs achieves good performance as n increases from 10^4 to 10^7, and has a linear time complexity. Even for 10^7-queen problems, MAEA-CSPs finds solutions in only 150 seconds. For JSPs, 59 benchmark problems are used, and good performance is also obtained. PMID:16468566
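For readers unfamiliar with the n-queens benchmark mentioned above, the classic min-conflicts repair heuristic gives a feel for conflict-driven search on this problem. To be clear, this is the standard textbook heuristic, not MAEA-CSPs or its minimum conflict encoding; the parameters are assumed.

```python
import random

def conflicts(board, col, row):
    """Queens attacking square (col, row), ignoring column col's own queen."""
    return sum(1 for c, r in enumerate(board)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n, max_steps=10000, seed=0):
    """Classic min-conflicts repair search for n-queens; board[c] is the
    row of the queen in column c, so one queen per column by construction."""
    rng = random.Random(seed)
    board = [rng.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(board, c, board[c])]
        if not conflicted:
            return board                       # all constraints satisfied
        col = rng.choice(conflicted)
        rows = list(range(n))
        rng.shuffle(rows)                      # random tie-breaking
        board[col] = min(rows, key=lambda r: conflicts(board, col, r))
    return None

solution = min_conflicts(50)
```

Each step repairs one conflicted column greedily, which is why this family of methods scales so well on n-queens; MAEA-CSPs builds comparable conflict-minimizing behavior into its agents.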
Optimizing quantum gas production by an evolutionary algorithm
NASA Astrophysics Data System (ADS)
Lausch, T.; Hohmann, M.; Kindermann, F.; Mayer, D.; Schmidt, F.; Widera, A.
2016-05-01
We report on the application of an evolutionary algorithm (EA) to enhance performance of an ultra-cold quantum gas experiment. The production of a ^{87}Rb Bose-Einstein condensate (BEC) can be divided into fundamental cooling steps, specifically magneto-optical trapping of cold atoms, loading of atoms to a far-detuned crossed dipole trap, and finally the process of evaporative cooling. The EA is applied separately for each of these steps with a particular definition for the feedback, the so-called fitness. We discuss the principles of an EA and implement an enhancement called differential evolution. Analyzing the reasons for the EA to improve, e.g., the atomic loading rates and increase the BEC phase-space density, yields an optimal parameter set for the BEC production and enables us to reduce the BEC production time significantly. Furthermore, we focus on how additional information about the experiment and optimization possibilities can be extracted and how the correlations revealed allow for further improvement. Our results illustrate that EAs are powerful optimization tools for complex experiments and exemplify that the application yields useful information on the dependence of these experiments on the optimized parameters.
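The differential evolution enhancement named above follows a standard scheme (DE/rand/1/bin). The sketch below shows that scheme on a smooth surrogate objective; the population size, F, CR, and the surrogate itself are assumptions, and in the experiment the fitness would instead be a measured quantity such as atom number or phase-space density.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.6, CR=0.9,
                           gens=200, seed=3):
    """Classic DE/rand/1/bin: mutate with a scaled difference vector,
    apply binomial crossover, then greedy one-to-one replacement."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)         # guarantee one mutated gene
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:                   # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Surrogate objective standing in for a measured fitness signal
x, fx = differential_evolution(lambda v: sum(t * t for t in v),
                               bounds=[(-5, 5)] * 4)
```

The greedy one-to-one replacement makes DE robust to noisy fitness signals, one reason it suits closed-loop optimization of experimental parameters.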
EVO—Evolutionary algorithm for crystal structure prediction
NASA Astrophysics Data System (ADS)
Bahmann, Silvia; Kortus, Jens
2013-06-01
We present EVO—an evolution strategy designed for crystal structure search and prediction. The concept and main features of biological evolution such as creation of diversity and survival of the fittest have been transferred to crystal structure prediction. EVO successfully demonstrates its applicability to find crystal structures of the elements of the 3rd main group with their different spacegroups. For this we used the number of atoms in the conventional cell and multiples of it. Running EVO with different numbers of carbon atoms per unit cell yields graphite as the lowest energy structure as well as a diamond-like structure, both in one run. Our implementation also supports the search for 2D structures and was able to find a boron sheet with structural features so far not considered in literature. Program summary: Program title: EVO Catalogue identifier: AEOZ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOZ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 23488 No. of bytes in distributed program, including test data, etc.: 1830122 Distribution format: tar.gz Programming language: Python. Computer: No limitations known. Operating system: Linux. RAM: Negligible compared to the requirements of the electronic structure programs used Classification: 7.8. External routines: Quantum ESPRESSO (http://www.quantum-espresso.org/), GULP (https://projects.ivec.org/gulp/) Nature of problem: Crystal structure search is a global optimisation problem in 3N+3 dimensions where N is the number of atoms in the unit cell. The high dimensional search space is accompanied by an unknown energy landscape. Solution method: Evolutionary algorithms transfer the main features of biological evolution to use them in global searches. The combination of the "survival of the fittest" (deterministic) and the
NASA Astrophysics Data System (ADS)
Marwati, Rini; Yulianti, Kartika; Pangestu, Herny Wulandari
2016-02-01
A fuzzy evolutionary algorithm is an integration of an evolutionary algorithm and a fuzzy system. In this paper, we present an application of a genetic algorithm to a fuzzy evolutionary algorithm to detect and resolve chromosome conflicts. A chromosome conflict is identified by the existence of two genes in one chromosome that have the same values as two genes in another chromosome. Based on this approach, we construct an algorithm to solve a lecture scheduling problem. Time codes, lecture codes, lecturer codes, and room codes are defined as genes, which are collected to become chromosomes. As a result, a conflicted schedule appears as a chromosome conflict. Implemented in the Delphi program, the results show that the conflicted lecture scheduling problem is solvable by this algorithm.
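The conflict definition above can be sketched directly: two chromosomes conflict when at least two of their genes carry equal values, e.g. the same time code together with the same room code. The gene values below are invented for illustration, and the fuzzy and genetic machinery of the paper is not reproduced.

```python
def has_conflict(chrom_a, chrom_b):
    """Two chromosomes conflict when they share at least two equal gene
    values, e.g. the same time slot and the same room booked twice."""
    shared = [g for g in chrom_a if g in chrom_b]
    return len(shared) >= 2

# Hypothetical genes: (time code, lecture code, lecturer code, room code)
a = ("T1", "L10", "P3", "R2")
b = ("T1", "L11", "P4", "R2")   # shares time T1 and room R2 with a
c = ("T2", "L12", "P3", "R5")   # shares only lecturer P3 with a
```

A scheduler would penalize or repair chromosome pairs flagged by such a check during evolution, which is how the conflicted lecture schedule is resolved.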
NASA Astrophysics Data System (ADS)
Allhoff, K. T.; Ritterskamp, D.; Rall, B. C.; Drossel, B.; Guill, C.
2015-06-01
The networks of predator-prey interactions in ecological systems are remarkably complex, but nevertheless surprisingly stable in terms of long term persistence of the system as a whole. In order to understand the mechanism driving the complexity and stability of such food webs, we developed an eco-evolutionary model in which new species emerge as modifications of existing ones and dynamic ecological interactions determine which species are viable. The food-web structure thereby emerges from the dynamical interplay between speciation and trophic interactions. The proposed model is less abstract than earlier evolutionary food web models in the sense that all three evolving traits have a clear biological meaning, namely the average body mass of the individuals, the preferred prey body mass, and the width of their potential prey body mass spectrum. We observed networks with a wide range of sizes and structures and high similarity to natural food webs. The model networks exhibit a continuous species turnover, but massive extinction waves that affect more than 50% of the network are not observed. PMID:26042870
NASA Astrophysics Data System (ADS)
Khair, Joseph Daniel
Communication requirements and demands on deployed systems are increasing daily. This increase is due to the desire for more capability, but also to the changing landscape of threats to remote vehicles. As such, it is important that we continue to find new and innovative ways to transmit data to and from these remote systems, consistent with this changing landscape. Specifically, this research shows that data can be transmitted to a remote system effectively and efficiently with a model-based approach using real-time updates, called the Algorithmically Corrected Model-based Technique (ACMBT), resulting in substantial savings in communications overhead. To demonstrate this model-based data transmission technique, a hardware-based test fixture was designed and built. Execution and analysis software was created to perform a series of characterizations demonstrating the effectiveness of the new transmission method. The new approach was compared to a traditional transmission approach in the same environment, and the results were analyzed and presented. A Figure of Merit (FOM) was devised and presented to allow standardized comparison of traditional and proposed data transmission methodologies alongside bandwidth utilization metrics. The results of this research have successfully shown the model-based technique to be feasible. Additionally, this research has opened the trade space for future discussion and implementation of this technique.
NASA Astrophysics Data System (ADS)
Tang, Zhili
2016-06-01
This paper addresses aerodynamic drag reduction of a transport wing-fuselage configuration in the transonic regime using a parallel Nash evolutionary/deterministic hybrid optimization algorithm. Two sets of parameters are used, one global and one local. It is shown that optimizing the local and global parameters separately with Nash algorithms is far more efficient than treating these variables as a whole.
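The global/local split described above can be sketched as an alternating best-response game: each player optimizes only its own variables while the other player's variables are held fixed, and a Nash equilibrium is a point where neither player can improve unilaterally. This is a toy illustration with an invented objective and a crude coordinate-descent inner optimizer; it does not reproduce the paper's CFD-based setup.

```python
def nash_optimize(f, x_global, x_local, rounds=50, step=0.1):
    """Alternating best response: player 1 refines the global variables,
    player 2 the local ones, each holding the other player's fixed."""
    def improve(owned, other, order):
        # crude fixed-step coordinate descent as each player's optimizer
        for _ in range(20):
            for i in range(len(owned)):
                for d in (step, -step):
                    cand = owned[:]
                    cand[i] += d
                    if order(cand, other) < order(owned, other):
                        owned = cand
        return owned
    for _ in range(rounds):
        x_global = improve(x_global, x_local, lambda g, l: f(g, l))
        x_local = improve(x_local, x_global, lambda l, g: f(g, l))
    return x_global, x_local

# Toy objective weakly coupling "global" and "local" design variables
f = lambda g, l: (g[0] - 1) ** 2 + (l[0] + 2) ** 2 + 0.1 * g[0] * l[0]
g, l = nash_optimize(f, [0.0], [0.0])
```

Because each player searches a much smaller subspace, the per-player optimizations are cheap and naturally parallel, which is the efficiency argument made in the abstract.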
Path Planning Using a Hybrid Evolutionary Algorithm Based on Tree Structure Encoding
Ju, Ming-Yi; Wang, Siao-En; Guo, Jian-Horn
2014-01-01
A hybrid evolutionary algorithm using a scalable encoding method for path planning is proposed in this paper. The scalable representation is based on binary tree structure encoding. To hybridize the genetic algorithm with particle swarm optimization, a "dummy node" is added to the binary trees to handle representations of different lengths. The experimental results show that the proposed hybrid method uses fewer turning points than traditional evolutionary algorithms and generates shorter collision-free paths for mobile robot navigation. PMID:24971389
Development of model-based fault diagnosis algorithms for MASCOTTE cryogenic test bench
NASA Astrophysics Data System (ADS)
Iannetti, A.; Marzat, J.; Piet-Lahanier, H.; Ordonneau, G.; Vingert, L.
2014-12-01
This article describes the ongoing results of a fault diagnosis benchmark for a cryogenic rocket engine demonstrator. The benchmark consists of applying classical model-based fault diagnosis methods to monitor the status of the cooling circuit of the MASCOTTE cryogenic bench. The algorithms developed are validated on real data from the 2014 firing campaign (ATAC campaign). The objective of this demonstration is to find practical diagnosis alternatives to classical redlines, providing more flexible means of data exploitation in real time and in post-processing.
Optimization of aeroelastic composite structures using evolutionary algorithms
NASA Astrophysics Data System (ADS)
Manan, A.; Vio, G. A.; Harmin, M. Y.; Cooper, J. E.
2010-02-01
The flutter/divergence speed of a simple rectangular composite wing is maximized through the use of different ply orientations. Four different biologically inspired optimization algorithms (binary genetic algorithm, continuous genetic algorithm, particle swarm optimization, and ant colony optimization) and a simple meta-modeling approach are employed statistically on the same problem set. In terms of the best flutter speed, it was found that similar results were obtained using all of the methods, although the continuous methods gave better answers than the discrete methods. When the results were considered in terms of the statistical variation between different solutions, ant colony optimization gave estimates with much less scatter.
Evolutionary Algorithms Approach to the Solution of Damage Detection Problems
NASA Astrophysics Data System (ADS)
Salazar Pinto, Pedro Yoajim; Begambre, Oscar
2010-09-01
In this work, a new Self-Configured Hybrid Algorithm is proposed, combining Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA). The aim of the proposed strategy is to increase the stability and accuracy of the search. The central idea is the concept of the Guide Particle: this particle (the best PSO global in each generation) transmits its information to a particle of the following PSO generation, which is controlled by the GA. Thus, the proposed hybrid has an elitism feature that improves its performance and guarantees the convergence of the procedure. In different tests carried out on benchmark functions reported in the international literature, better stability and accuracy were observed; therefore the new algorithm was used to identify damage in a simply supported beam using modal data. Finally, it is worth noting that the algorithm is independent of the initial definition of the heuristic parameters.
A multiobjective evolutionary algorithm to find community structures based on affinity propagation
NASA Astrophysics Data System (ADS)
Shang, Ronghua; Luo, Shuang; Zhang, Weitong; Stolkin, Rustam; Jiao, Licheng
2016-07-01
Community detection plays an important role in reflecting and understanding the topological structure of complex networks, and can be used to help mine the potential information in networks. This paper presents a Multiobjective Evolutionary Algorithm based on Affinity Propagation (APMOEA) which improves the accuracy of community detection. Firstly, APMOEA takes the method of affinity propagation (AP) to initially divide the network. To accelerate its convergence, the multiobjective evolutionary algorithm selects nondominated solutions from the preliminary partitioning results as its initial population. Secondly, the multiobjective evolutionary algorithm finds solutions approximating the true Pareto optimal front through constantly selecting nondominated solutions from the population after crossover and mutation in iterations, which overcomes the tendency of data clustering methods to fall into local optima. Finally, APMOEA uses an elitist strategy, called "external archive", to prevent degeneration during the process of searching using the multiobjective evolutionary algorithm. According to this strategy, the preliminary partitioning results obtained by AP will be archived and participate in the final selection of Pareto-optimal solutions. Experiments on benchmark test data, including both computer-generated networks and eight real-world networks, show that the proposed algorithm achieves more accurate results and has faster convergence speed compared with seven other state-of-the-art algorithms.
Predicting land cover using GIS, Bayesian and evolutionary algorithm methods.
Aitkenhead, M J; Aalders, I H
2009-01-01
Modelling land cover change from existing land cover maps is a vital requirement for anyone wishing to understand how the landscape may change in the future. In order to test any land cover change model, existing data must be used. However, often it is not known which data should be applied to the problem, or whether relationships exist within and between complex datasets. Here we have developed and tested a model that applied evolutionary processes to Bayesian networks. The model was developed and tested on a dataset containing land cover information and environmental data, in order to show that decisions about which datasets should be used could be made automatically. Bayesian networks are amenable to evolutionary methods as they can be easily described using a binary string to which crossover and mutation operations can be applied. The method, developed to allow comparison with standard Bayesian network development software, proved capable of carrying out a rapid and effective search of the space of possible networks in order to find an optimal or near-optimal solution for the selection of datasets that have causal links with one another. Comparison of land cover mapping in the North-East of Scotland was made with a commercial Bayesian software package, with the evolutionary method shown to provide greater flexibility in adapting to incorporate and utilise available evidence and knowledge and in developing effective and accurate network structures, at the cost of requiring additional computer programming skills. The dataset used to develop the models included GIS-based data taken from the Land Cover for Scotland 1988 (LCS88), Land Capability for Forestry (LCF), Land Capability for Agriculture (LCA), the soil map of Scotland and additional climatic variables. PMID:18079039
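The binary-string encoding of network structure mentioned above lends itself directly to crossover and mutation. A minimal sketch follows, encoding the upper-triangular adjacency bits of a network over a fixed variable ordering (which keeps every decoded graph acyclic); the network size, mutation rate, and fitness evaluation are all assumptions, not the paper's configuration.

```python
import random

def random_structure(n, rng):
    """Upper-triangular adjacency bits over n ordered variables, flattened
    to a binary string; the fixed ordering guarantees acyclicity."""
    return [rng.randint(0, 1) for _ in range(n * (n - 1) // 2)]

def crossover(a, b, rng):
    """One-point crossover of two structure strings."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rng, rate=0.05):
    """Flip each edge bit independently with the given probability."""
    return [bit ^ (rng.random() < rate) for bit in bits]

rng = random.Random(7)
parent1 = random_structure(5, rng)   # 10 possible directed edges
parent2 = random_structure(5, rng)
child = mutate(crossover(parent1, parent2, rng), rng)
```

In a full run, each string would be scored by fitting the decoded Bayesian network to the land cover data and selecting on that score; only the representation and operators are shown here.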
Scheduling Earth Observing Fleets Using Evolutionary Algorithms: Problem Description and Approach
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Morris, Robert; Clancy, Daniel (Technical Monitor)
2002-01-01
We describe work in progress concerning multi-instrument, multi-satellite scheduling. Most, although not all, Earth observing instruments currently in orbit are unique. In the relatively near future, however, we expect to see fleets of Earth observing spacecraft, many carrying nearly identical instruments. This presents a substantially new scheduling challenge. Inspired by successful commercial applications of evolutionary algorithms in scheduling domains, this paper presents work in progress regarding the use of evolutionary algorithms to solve a set of Earth observing related model problems. Both the model problems and the software are described. Since the larger problems will require substantial computation and evolutionary algorithms are embarrassingly parallel, we discuss our parallelization techniques using dedicated and cycle-scavenged workstations.
Bioinspired Evolutionary Algorithm Based for Improving Network Coverage in Wireless Sensor Networks
Abbasi, Mohammadjavad; Bin Abd Latiff, Muhammad Shafie
2014-01-01
Wireless sensor networks (WSNs) consist of sensor nodes, each able to monitor a physical area and send collected information to the base station for further analysis. A central concern in WSNs is detection and coverage of the target area, which is initially provided by random deployment. This paper reviews and addresses various area detection and coverage problems in sensor networks. It organizes several scenarios for applying sensor node movement to improve network coverage based on bioinspired evolutionary algorithms, and explains the concerns and objectives of controlling sensor node coverage. We discuss area coverage and target detection models based on evolutionary algorithms. PMID:24693247
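As a minimal illustration of the coverage objective that such node-movement strategies optimise, the fraction of a discretised target area within sensing range of at least one node can be computed as follows; grid size, node count, and sensing radius are illustrative assumptions:

```python
import math
import random

# Sketch of the binary-disk coverage measure: a grid point counts as
# covered if at least one node lies within the sensing radius.

def coverage(nodes, radius, size=10.0, step=0.5):
    """Fraction of grid points within `radius` of at least one node."""
    pts = [(x * step, y * step)
           for x in range(int(size / step) + 1)
           for y in range(int(size / step) + 1)]
    covered = sum(1 for p in pts
                  if any(math.dist(p, n) <= radius for n in nodes))
    return covered / len(pts)

random.seed(0)
nodes = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(15)]
cov = coverage(nodes, radius=2.0)
print(cov)  # an EA-driven movement step would try to increase this value
```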
Design of synthetic biological logic circuits based on evolutionary algorithm.
Chuang, Chia-Hua; Lin, Chun-Liang; Chang, Yen-Chang; Jennawasin, Tanagorn; Chen, Po-Kuei
2013-08-01
The construction of artificial biological logic circuits using a systematic strategy is recognised as one of the most important topics in the development of synthetic biology. In this study, a real-structured genetic algorithm (RSGA), which combines the general advantages of the traditional real-coded genetic algorithm with those of the structured genetic algorithm, is proposed to deal with the biological logic circuit design problem. A general model with the cis-regulatory input function and appropriate promoter activity functions is proposed to synthesise a wide variety of fundamental logic gates such as NOT, Buffer, AND, OR, NAND, NOR and XOR. The results obtained can be extended to synthesise advanced combinational and sequential logic circuits by topologically distinct connections. The resulting optimal designs of these logic gates and circuits are established via the RSGA. The in silico, computer-based modelling technology has been verified, demonstrating its advantages for this purpose. PMID:23919952
A Comparative Study between Migration and Pair-Swap on Quantum-Inspired Evolutionary Algorithm
NASA Astrophysics Data System (ADS)
Imabeppu, Takahiro; Ono, Satoshi; Morishige, Ryota; Kurose, Motoyoshi; Nakayama, Shigeru
Quantum-inspired Evolutionary Algorithm (QEA) has been proposed as a stochastic evolutionary computation algorithm, not a quantum algorithm. The authors have proposed the Quantum-inspired Evolutionary Algorithm based on Pair Swap (QEAPS), which uses a pair swap operator and does not group individuals, in order to simplify QEA and reduce its parameters. QEA and QEAPS use imitations of quantum bits as genes and of superposition states from quantum computation. QEAPS has shown better search performance than QEA on the knapsack problem, while eliminating the parameters for migration intervals and the number of groups. However, QEAPS still has one parameter in common with QEA, the rotation angle unit, which is uncommon among other evolutionary computation algorithms. The rotation angle unit deeply affects the control of exploitation and exploration in QEA, but it has been unclear how this parameter influences the behaviour of QEAPS. This paper aims to show that QEAPS involves few parameters and that even those parameters can be adjusted easily. Experimental results on the knapsack problem and the number partitioning problem, which have different characteristics, show that QEAPS is competitive with other metaheuristics in search performance, and that QEAPS is robust against parameter configuration and problem characteristics.
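The Q-bit gene representation and rotation-angle update central to QEA-style algorithms can be sketched as follows. The fixed target string, the simple "rotate toward the best" rule, and the rotation angle unit of 0.01π are illustrative simplifications, not the papers' exact operators:

```python
import math
import random

# Sketch of Q-bit genes: each gene is an angle theta with P(bit=1) =
# sin^2(theta); a rotation gate nudges theta toward the best solution.

def observe(qubits):
    """Collapse each Q-bit (angle theta) to a classical bit."""
    return [1 if random.random() < math.sin(t) ** 2 else 0 for t in qubits]

def rotate(qubits, bits, best, delta=0.01 * math.pi):
    """Rotate each Q-bit toward the corresponding bit of the best solution."""
    out = []
    for t, b, target in zip(qubits, bits, best):
        if b != target:                    # only rotate when the sample disagrees
            t += delta if target == 1 else -delta
        out.append(min(max(t, 0.0), math.pi / 2))   # keep theta in [0, pi/2]
    return out

random.seed(0)
q = [math.pi / 4] * 8                      # equal superposition: P(1) = 0.5
best = [1, 1, 1, 1, 0, 0, 0, 0]            # stand-in for the best-so-far solution
for _ in range(200):
    q = rotate(q, observe(q), best)
print(observe(q))                          # by now almost surely equals `best`
```

The rotation angle unit `delta` is exactly the parameter whose influence the abstract discusses: larger values exploit faster but explore less.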
Evolutionary Processes in the Development of Errors in Subtraction Algorithms
ERIC Educational Resources Information Center
Fernandez, Ricardo Lopez; Garcia, Ana B. Sanchez
2008-01-01
The study of errors made in subtraction is a research subject approached from different theoretical premises that implicate different components of the algorithmic process as triggers of error generation. The following research investigates the typology and nature of errors that occur in subtraction and their evolution…
First principles prediction of amorphous phases using evolutionary algorithms.
Nahas, Suhas; Gaur, Anshu; Bhowmick, Somnath
2016-07-01
We discuss the efficacy of the evolutionary method for the structural analysis of amorphous solids. At present, the ab initio molecular dynamics (MD) based melt-quench technique is used, and this deterministic approach has proven successful for studying amorphous materials. We show that a stochastic approach motivated by Darwinian evolution can also be used to simulate amorphous structures. Applying this method, in conjunction with density functional theory based electronic, ionic and cell relaxation, we re-investigate two well-known amorphous semiconductors, namely silicon and indium gallium zinc oxide. We find that characteristic structural parameters like average bond length and bond angle are within ∼2% of those reported by ab initio MD calculations and experimental studies. PMID:27394098
NASA Astrophysics Data System (ADS)
Xiao, Zhongliang
2012-04-01
In this paper, we set up a mathematical model of the airport ground service problem. The model's objective function combines cost and time, both of which are to be minimized. Based on an analysis of the scheduling characteristics, we apply a multi-population co-evolutionary Memetic algorithm (MAMC) with an elitist strategy to solve the model. The results show that our algorithm outperforms the genetic algorithm on this problem and that it converges. We conclude that it offers a better optimization approach for the airport ground service problem.
Experiments with a Parallel Multi-Objective Evolutionary Algorithm for Scheduling
NASA Technical Reports Server (NTRS)
Brown, Matthew; Johnston, Mark D.
2013-01-01
Evolutionary multi-objective algorithms have great potential for scheduling in those situations where tradeoffs among competing objectives represent a key requirement. One challenge, however, is runtime performance, a consequence of evolving not just a single schedule but an entire population, while attempting to sample the Pareto frontier as accurately and uniformly as possible. The growing availability of multi-core processors in end-user workstations, and even laptops, has raised the question of the extent to which such hardware can be used to speed up evolutionary algorithms. In this paper we report on early experiments in parallelizing a Generalized Differential Evolution (GDE) algorithm for scheduling long-range activities on NASA's Deep Space Network. Initial results show that significant speedups can be achieved, but that performance does not necessarily improve as more cores are utilized. We describe our preliminary results and initial lessons learned from parallelizing the GDE algorithm. Directions for future work are outlined.
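A compact sketch of classic Differential Evolution, the family that GDE generalises, may clarify where the parallelism lies: each generation's fitness evaluations are mutually independent, which is what makes the population "embarrassingly parallel". The population size, F, CR, and the sphere fitness below are illustrative choices, not those of the GDE scheduler:

```python
import random

# Single-objective DE/rand/1/bin sketch; the fitness calls inside de_step
# are the independent work units a parallel implementation would farm out.

def sphere(x):
    return sum(v * v for v in x)

def de_step(pop, f=0.5, cr=0.9, fitness=sphere):
    new_pop = []
    for i, target in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [av + f * (bv - cv) for av, bv, cv in zip(a, b, c)]
        jrand = random.randrange(len(target))     # force at least one mutant gene
        trial = [mv if (random.random() < cr or j == jrand) else tv
                 for j, (tv, mv) in enumerate(zip(target, mutant))]
        # These two fitness evaluations are the embarrassingly parallel part.
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop

random.seed(1)
pop = [[random.uniform(-5, 5) for _ in range(5)] for _ in range(20)]
for _ in range(100):
    pop = de_step(pop)
best = min(pop, key=sphere)
print(sphere(best))  # typically near 0 after 100 generations
```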
Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.
Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard
2012-06-01
We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems. PMID:22697525
2014-01-01
Background: To improve on the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and speed up the computation, cloud computing is a promising solution; the most popular approach is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. Results: This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and that the computation time can be largely reduced. Conclusions: Parallel population-based algorithms can effectively determine network parameters and perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel
Simulation of Biochemical Pathway Adaptability Using Evolutionary Algorithms
Bosl, W J
2005-01-26
The systems approach to genomics seeks quantitative and predictive descriptions of cells and organisms. However, both the theoretical and experimental methods necessary for such studies still need to be developed. We are far from understanding even the simplest collective behavior of biomolecules, cells or organisms. A key aspect to all biological problems, including environmental microbiology, evolution of infectious diseases, and the adaptation of cancer cells is the evolvability of genomes. This is particularly important for Genomes to Life missions, which tend to focus on the prospect of engineering microorganisms to achieve desired goals in environmental remediation and climate change mitigation, and energy production. All of these will require quantitative tools for understanding the evolvability of organisms. Laboratory biodefense goals will need quantitative tools for predicting complicated host-pathogen interactions and finding counter-measures. In this project, we seek to develop methods to simulate how external and internal signals cause the genetic apparatus to adapt and organize to produce complex biochemical systems to achieve survival. This project is specifically directed toward building a computational methodology for simulating the adaptability of genomes. This project investigated the feasibility of using a novel quantitative approach to studying the adaptability of genomes and biochemical pathways. This effort was intended to be the preliminary part of a larger, long-term effort between key leaders in computational and systems biology at Harvard University and LLNL, with Dr. Bosl as the lead PI. Scientific goals for the long-term project include the development and testing of new hypotheses to explain the observed adaptability of yeast biochemical pathways when the myosin-II gene is deleted and the development of a novel data-driven evolutionary computation as a way to connect exploratory computational simulation with hypothesis
On Polymorphic Circuits and Their Design Using Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Zebulum, Ricardo; Keymeulen, Didier; Lohn, Jason; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper introduces the concept of polymorphic electronics (polytronics), referring to electronics with superimposed built-in functionality. A function change does not require switches/reconfiguration as in traditional approaches. Instead, the change comes from modifications in the characteristics of devices involved in the circuit, in response to controls such as temperature, power supply voltage (VDD), control signals, light, etc. The paper illustrates polytronic circuits in which the control is done by temperature, morphing signals, and VDD respectively. Polytronic circuits are obtained by evolutionary design/evolvable hardware techniques. These techniques are ideal for polytronics design, a new area that lacks design guidelines and know-how, yet whose requirements/objectives are easy to specify and test. The circuits are evolved/synthesized in two different modes. The first mode explores an unstructured space, in which transistors can be interconnected freely in any arrangement (in simulations only). The second mode uses a Field Programmable Transistor Array (FPTA) model, and the circuit topology is sought as a mapping onto a programmable architecture (these experiments are performed both in simulations and on FPTA chips). The experiments demonstrated the synthesis of polytronic circuits by evolution. The capacity of storing/hiding "extra" functions provides for watermark/invisible functionality, thus polytronics may find uses in intelligence/security applications.
Wayne F. Boyer; Gurdeep S. Hura
2005-09-01
The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be NP-hard. In a DHC system, task execution time depends on the machine to which a task is assigned, and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of interdependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. Randomized task ordering is effectively a topological sort where the outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, and requires less memory and fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
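The two ingredients of RS, a randomised topological sort and a heuristic order-to-schedule mapping, can be sketched as follows. The DAG, task costs, and machine count are illustrative placeholders, and the greedy earliest-free-machine mapping is an assumption for illustration, not the paper's exact heuristic:

```python
import random

# Randomised topological sort: any precedence-respecting order can occur.
def random_topo_order(succ, n):
    indeg = [0] * n
    for u in succ:
        for v in succ[u]:
            indeg[v] += 1
    ready = [u for u in range(n) if indeg[u] == 0]
    order = []
    while ready:
        u = random.choice(ready)            # the randomisation step
        ready.remove(u)
        order.append(u)
        for v in succ.get(u, ()):
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return order

# Heuristic mapping from task order to schedule: each task goes to the
# earliest-free machine and starts after all its predecessors finish.
def schedule(order, cost, succ, machines):
    free = [0.0] * machines                 # next free time per machine
    finish = {}
    pred_finish = {u: 0.0 for u in order}
    for u in order:
        m = min(range(machines), key=lambda i: free[i])
        start = max(free[m], pred_finish[u])
        finish[u] = start + cost[u]
        free[m] = finish[u]
        for v in succ.get(u, ()):
            pred_finish[v] = max(pred_finish[v], finish[u])
    return max(finish.values())             # makespan (the Cmax objective)

random.seed(0)
succ = {0: [2], 1: [2], 2: [3]}             # tiny precedence DAG
cost = [2.0, 3.0, 1.0, 2.0]
order = random_topo_order(succ, 4)
print(order, schedule(order, cost, succ, 2))
```

RS repeats this pair of steps many times and keeps the best makespan seen.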
Kwan, Mei-Po; Xiao, Ningchuan; Ding, Guoxiang
2015-01-01
Due to the complexity and multidimensional characteristics of human activities, assessing the similarity of human activity patterns and classifying individuals with similar patterns remains highly challenging. This paper presents a new and unique methodology for evaluating the similarity among individual activity patterns. It conceptualizes multidimensional sequence alignment (MDSA) as a multiobjective optimization problem, and solves this problem with an evolutionary algorithm. The study utilizes sequence alignment to code multiple facets of human activities into multidimensional sequences, and to treat similarity assessment as a multiobjective optimization problem that aims to minimize the alignment cost for all dimensions simultaneously. A multiobjective optimization evolutionary algorithm (MOEA) is used to generate a diverse set of optimal or near-optimal alignment solutions. Evolutionary operators are specifically designed for this problem, and a local search method also is incorporated to improve the search ability of the algorithm. We demonstrate the effectiveness of our method by comparing it with a popular existing method called ClustalG using a set of 50 sequences. The results indicate that our method outperforms the existing method for most of our selected cases. The multiobjective evolutionary algorithm presented in this paper provides an effective approach for assessing activity pattern similarity, and a foundation for identifying distinctive groups of individuals with similar activity patterns. PMID:26190858
NASA Astrophysics Data System (ADS)
Tang, Y.; Reed, P.; Wagner, T.
2005-12-01
This study provides the first comprehensive assessment of the relative effectiveness of state-of-the-art evolutionary multiobjective optimization (EMO) tools in calibrating integrated hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: the Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ε-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study assesses the performances of these three evolutionary multiobjective algorithms using a formal metrics-based methodology, in two phases of testing. In the first phase, a suite of standard computer science test problems is used to validate the algorithms' abilities to perform global search effectively, efficiently, and reliably. The second phase compares the algorithms' performances on a computationally intensive multiobjective integrated hydrologic model calibration application for the Shale Hills watershed, located within the Valley and Ridge province of the Susquehanna River Basin in north central Pennsylvania. The Shale Hills test case demonstrates the computational challenges posed by the paradigmatic shift in environmental and water resources simulation tools towards highly nonlinear physical models that seek to holistically simulate the water cycle. Specifically, the Shale Hills test case is an excellent test for the three EMO algorithms due to the large number of continuous decision variables, the increased computational demands posed by simulating fully coupled hydrologic processes, and the highly multimodal nature of the search space. A challenge and contribution of this work is the development of a comprehensive methodology for comparing EMO algorithms that have different search operators and randomization techniques.
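The epsilon-dominance mechanism that distinguishes ε-NSGAII can be illustrated in a few lines: objective space is discretised into boxes of size ε, and keeping at most one solution per box bounds the archive size while spreading solutions along the Pareto front. The objective vectors and ε values below are hypothetical:

```python
import math

# Sketch of the epsilon-dominance test (minimisation of all objectives).

def box(obj, eps):
    """Box index of an objective vector under epsilon discretisation."""
    return tuple(math.floor(o / e) for o, e in zip(obj, eps))

def eps_dominates(a, b, eps):
    """a epsilon-dominates b if a's box is no worse than b's in every objective."""
    ba, bb = box(a, eps), box(b, eps)
    return all(x <= y for x, y in zip(ba, bb))

eps = (0.5, 0.5)
r1 = eps_dominates((1.0, 1.0), (1.2, 1.4), eps)  # same box: True
r2 = eps_dominates((1.0, 1.0), (0.4, 2.0), eps)  # b's box better in obj 1: False
print(r1, r2)
```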
NASA Astrophysics Data System (ADS)
Della Mora, S.; Boschi, L.; Becker, T. W.; Giardini, D.
2010-12-01
The wavelength spectrum of three-dimensional (3D) heterogeneity naturally reflects the nature of Earth dynamics, and is in its own right an important constraint for geodynamical modeling. The Earth's spectrum has usually been evaluated indirectly, on the basis of previously derived tomographic models. If the geographic distribution of seismic heterogeneities is neglected, however, one can invert global seismic data directly for the spectrum of the Earth. Inverting for the spectrum is in principle cheaper (fewer unknowns) and more robust than inverting for the 3D structure of a planet: this should allow us to constrain planetary structure at smaller scales than current 3D models do. Based on the work of Gudmundsson and coworkers in the early 1990s, we have developed a linear algorithm for surface waves. The spectra we obtain are in qualitative agreement with results from 3D tomography, but the resolving power is generally lower, due to the simplifications required to linearise the "spectral" inversion. To overcome this problem, we performed full nonlinear inversions of synthetically generated and real datasets, and compared the obtained spectra with the input and tomographic models, respectively. The inversions are calculated on a distributed-memory parallel cluster, employing the MPI package. An evolutionary strategy approach is used to explore the parameter space, using the PIKAIA software. The first preliminary results show a resolving power higher than that of linearised inversion. This confirms that the approximations required in the linear formulation affect the solution quality, and suggests that the nonlinear approach might effectively help to constrain the heterogeneity spectrum more robustly than currently possible.
Hybrid evolutionary algorithms for network-centric command and control
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Nichols, Tom
2006-05-01
Network-centric force optimization is the problem of threat engagement and dynamic Weapon-Target Allocation (WTA) across the force. The goal is to allocate and schedule defensive weapon resources over a given period of time so as to achieve certain battle management objectives subject to resource and temporal constraints. The problem addressed in this paper is one of dynamic WTA and involves optimization across both resources (weapons) and time. We henceforth refer to this problem as the Weapon Allocation and Scheduling (WAS) problem. This paper addresses and solves the WAS problem for two separate battle management objectives: (1) Threat Kill Maximization (TKM), and (2) Asset Survival Maximization (ASM). Henceforth, the WAS problems for these objectives are referred to as WAS-TKM and WAS-ASM, respectively. Both WAS problems are NP-complete and belong to a class of multiple-resource-constrained optimal scheduling problems. While the objectives appear intuitively similar from a battle management perspective, the two optimal scheduling problems differ considerably in their complexity. We present a hybrid genetic algorithm (GA) that combines a traditional genetic algorithm with a simulated annealing-type algorithm for solving these problems. The hybrid GA approach proposed here uses simulated annealing-type heuristics to compute the fitness of a GA-selected population. This step also optimizes the temporal dimension (scheduling) under resource and temporal constraints and differs significantly between the WAS-TKM and WAS-ASM problems. The proposed method provides schedules that are near optimal in short cycle times and have minimal perturbation from one cycle to the next.
NASA Astrophysics Data System (ADS)
Patz, Mark David
A non-intrusive buried object classifier for a ground penetrating radar (GPR) system is developed. Various GPR data sets and the implemented processing are described. A model-based inversion algorithm that utilizes correlation methodology for target classification is introduced. Experimental data was collected with a continuous wave GPR. Synthetic data was generated with a newly developed software package that implements mathematical models to predict the electromagnetic returns from an underground object. Sample targets and geometries were chosen to produce nine configurations/scenarios for analysis. The real measurement sets for each configuration and the synthetic sets for a family of similar configurations were imaged with the same state-of-the-art signal processing algorithms. The imaged results for the real data measurements were correlated with the imaged results for the synthetic data sets to produce performance measurements, yielding a procedure that non-invasively assesses the object and medium as those of the synthetic data set that correlates maximally with the real data return. Synthetic and experimental results showed good correlation. For the synthetic data, a mathematical model was developed for electromagnetic returns from an object shape (i.e., cylinder, parallelepiped, sphere) composed of a uniform construction (i.e., metal, wood, plastic, clay) within a uniform dielectric material (i.e., air, sand, loam, clay, water). This model was then implemented within a software package, thus providing the ability to generate simulated measurements from any combination of object, construction, and dielectric.
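The correlation-based classification idea described above can be sketched with normalised cross-correlation: the measured image is compared against each synthetic template, and the best-matching template labels the target. The one-dimensional templates and measurement below are illustrative stand-ins for imaged GPR returns:

```python
import math

# Sketch of maximum-correlation template classification.

def ncc(a, b):
    """Normalised cross-correlation of two equal-length signals."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def classify(measurement, templates):
    """Return the label of the maximally correlated synthetic template."""
    return max(templates, key=lambda label: ncc(measurement, templates[label]))

templates = {
    "metal cylinder": [0.0, 0.8, 1.0, 0.8, 0.0],   # peaked synthetic return
    "air void":       [1.0, 0.2, 0.0, 0.2, 1.0],   # inverted synthetic return
}
measurement = [0.1, 0.7, 1.1, 0.9, 0.0]             # peaked "real" return
label = classify(measurement, templates)
print(label)  # → metal cylinder
```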
Towards unbiased benchmarking of evolutionary and hybrid algorithms for real-valued optimisation
NASA Astrophysics Data System (ADS)
MacNish, Cara
2007-12-01
Randomised population-based algorithms, such as evolutionary, genetic and swarm-based algorithms, and their hybrids with traditional search techniques, have proven successful and robust on many difficult real-valued optimisation problems. This success, along with the readily applicable nature of these techniques, has led to an explosion in the number of algorithms and variants proposed. In order for the field to advance it is necessary to carry out effective comparative evaluations of these algorithms, and thereby better identify and understand those properties that lead to better performance. This paper discusses the difficulties of providing benchmarking of evolutionary and allied algorithms that is both meaningful and logistically viable. To be meaningful the benchmarking test must give a fair comparison that is free, as far as possible, from biases that favour one style of algorithm over another. To be logistically viable it must overcome the need for pairwise comparison between all the proposed algorithms. To address the first problem, we begin by attempting to identify the biases that are inherent in commonly used benchmarking functions. We then describe a suite of test problems, generated recursively as self-similar or fractal landscapes, designed to overcome these biases. For the second, we describe a server that uses web services to allow researchers to 'plug in' their algorithms, running on their local machines, to a central benchmarking repository.
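The idea of recursively generated, self-similar benchmark landscapes can be illustrated with one-dimensional midpoint displacement, where detail is added at ever finer scales with geometrically shrinking amplitude. This is a generic sketch of the concept, not the specific fractal suite described above:

```python
import random

# Midpoint-displacement sketch: the same displacement rule is applied
# recursively at smaller scales, yielding a self-similar landscape.

def fractal_landscape(depth, roughness=0.5, seed=42):
    rng = random.Random(seed)
    ys = [0.0, 0.0]                       # endpoints of the landscape
    amp = 1.0
    for _ in range(depth):
        mid = []
        for a, b in zip(ys, ys[1:]):
            mid.extend([a, (a + b) / 2 + rng.uniform(-amp, amp)])
        mid.append(ys[-1])
        ys = mid
        amp *= roughness                  # self-similar: same rule, smaller scale
    return ys

ys = fractal_landscape(depth=8)
print(len(ys))  # → 257 sample points (2^8 + 1)
```

Because such landscapes have structure at every scale, they resist the "smooth near the optimum" bias that many classical benchmark functions share.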
Evolutionary Design of Rule Changing Artificial Society Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Wu, Yun; Kanoh, Hitoshi
Socioeconomic phenomena, cultural progress and political organization have recently been studied by creating artificial societies consisting of simulated agents. In this paper we propose an efficient method for designing the action rules of agents that will constitute an artificial society meeting a specified demand, using genetic algorithms (GAs). In the proposed method, each chromosome in the GA population represents a candidate set of action rules and the number of rule iterations. While a conventional method applies distinct rules in order of precedence, the present method applies a set of rules repeatedly for a certain period, aiming at both stable evolution of the agent population and continuous action by it. Experimental results using the artificial society show that the present method can generate an artificial society that meets a given demand with high probability.
Runtime Analysis of (1+1) Evolutionary Algorithm for a TSP Instance
NASA Astrophysics Data System (ADS)
Zhang, Yu Shan; Hao, Zhi Feng
Evolutionary Algorithms (EAs) have been used widely and successfully in solving a famous classical combinatorial optimization problem, the traveling salesman problem (TSP). There are many experimental results concerning the TSP, but relatively few theoretical results on the runtime analysis of EAs on the TSP are available. This paper conducts a runtime analysis of a simple Evolutionary Algorithm, called (1+1) EA, on a TSP instance. We represent a tour as a string of integers, and randomly choose a 2-opt or 3-opt operator as the mutation operator at each iteration. The expected runtime of the (1+1) EA on this TSP instance is proved to be O(n^4), which is tighter than the O(n^6 + (1/ρ) n ln n) bound of (1+1) MMAA (Max-Min ant algorithms). It is also shown that the selection of the mutation operator is very important in the (1+1) EA.
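In the spirit of the analysis above, a (1+1) EA with 2-opt mutation can be sketched in a few lines (the paper also mixes in 3-opt moves). The ring instance below is an illustrative TSP whose optimum is the circular order, not the instance analysed in the paper:

```python
import math
import random

# (1+1) EA on TSP: a single parent tour, one 2-opt mutation per iteration,
# and the child replaces the parent if it is not worse.

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour):
    """Reverse a random segment of the tour (a 2-opt move)."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def one_plus_one_ea(dist, iterations=2000):
    tour = list(range(len(dist)))
    random.shuffle(tour)
    for _ in range(iterations):
        child = two_opt(tour)                 # mutate the single parent
        if tour_length(child, dist) <= tour_length(tour, dist):
            tour = child                      # accept if not worse
    return tour

random.seed(3)
# Cities on a ring: the optimal tour visits them in circular order.
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
       for k in range(8)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best = one_plus_one_ea(dist)
print(tour_length(best, dist))  # approaches 16*sin(pi/8), about 6.12
```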
On source models for (192)Ir HDR brachytherapy dosimetry using model based algorithms.
Pantelis, Evaggelos; Zourari, Kyveli; Zoros, Emmanouil; Lahanas, Vasileios; Karaiskos, Pantelis; Papagiannis, Panagiotis
2016-06-01
A source model is a prerequisite of all model based dose calculation algorithms. Besides direct simulation, the use of pre-calculated phase space files (phsp source models) and parameterized phsp source models has been proposed for Monte Carlo (MC) to promote efficiency and ease of implementation in obtaining photon energy, position and direction. In this work, a phsp file for a generic (192)Ir source design (Ballester et al 2015) is obtained from MC simulation. This is used to configure a parameterized phsp source model comprising appropriate probability density functions (PDFs) and a sampling procedure. According to phsp data analysis 15.6% of the generated photons are absorbed within the source, and 90.4% of the emergent photons are primary. The PDFs for sampling photon energy and direction relative to the source long axis depend on the position of photon emergence. Photons emerge mainly from the cylindrical source surface with a constant probability over ±0.1 cm from the center of the 0.35 cm long source core, and only 1.7% and 0.2% emerge from the source tip and drive wire, respectively. Based on these findings, an analytical parameterized source model is prepared for the calculation of the PDFs from data of source geometry and materials, without the need for a phsp file. The PDFs from the analytical parameterized source model are in close agreement with those employed in the parameterized phsp source model. This agreement prompted the proposal of a purely analytical source model based on isotropic emission of photons generated homogeneously within the source core with energy sampled from the (192)Ir spectrum, and the assignment of a weight according to attenuation within the source. Comparison of single source dosimetry data obtained from detailed MC simulation and the proposed analytical source model show agreement better than 2%, except for points lying close to the source longitudinal axis. PMID:27191179
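The purely analytical model's sampling chain can be sketched as follows: a photon position is drawn homogeneously inside the cylindrical core, a direction is drawn isotropically, and a weight accounts for attenuation inside the source. The 0.35 cm core length comes from the text, while the core radius, the attenuation coefficient, and the lateral-surface-only path length (end caps handled by a crude cap) are illustrative simplifications of the actual model:

```python
import math
import random

CORE_HALF_LEN, CORE_RADIUS, MU = 0.175, 0.03, 1.0  # cm, cm, 1/cm (last two illustrative)

def sample_photon(rng):
    # Homogeneous position inside the cylinder (sqrt gives uniform area density).
    r = CORE_RADIUS * math.sqrt(rng.random())
    phi = 2 * math.pi * rng.random()
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = rng.uniform(-CORE_HALF_LEN, CORE_HALF_LEN)
    # Isotropic direction: cos(theta) uniform in [-1, 1], azimuth uniform.
    ct = rng.uniform(-1.0, 1.0)
    st = math.sqrt(1.0 - ct * ct)
    psi = 2 * math.pi * rng.random()
    dx, dy = st * math.cos(psi), st * math.sin(psi)
    # Path length L to the lateral surface: solve |(x,y) + L*(dx,dy)|^2 = R^2.
    a = dx * dx + dy * dy
    b = x * dx + y * dy
    c = x * x + y * y - CORE_RADIUS ** 2
    L = (-b + math.sqrt(b * b - a * c)) / a if a > 1e-12 else float("inf")
    weight = math.exp(-MU * min(L, 2 * CORE_HALF_LEN))  # crude cap for axial rays
    return (x, y, z), (dx, dy, ct), weight

rng = random.Random(0)
photons = [sample_photon(rng) for _ in range(10000)]
mean_w = sum(w for _, _, w in photons) / len(photons)
print(mean_w)  # close to 1: the core is thin compared with 1/mu
```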
NASA Astrophysics Data System (ADS)
Tang, Y.; Reed, P.; Wagener, T.
2005-11-01
This study provides a comprehensive assessment of state-of-the-art evolutionary multiobjective optimization (EMO) tools' relative effectiveness in calibrating hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ɛ-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study uses three test cases to compare the algorithms' performances: (1) a standardized test function suite from the computer science literature, (2) a benchmark hydrologic calibration test case for the Leaf River near Collins, Mississippi, and (3) a computationally intensive integrated model application in the Shale Hills watershed in Pennsylvania. A challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques. Overall, SPEA2 is an excellent benchmark algorithm for multiobjective hydrologic model calibration. SPEA2 attained competitive to superior results for most of the problems tested in this study. ɛ-NSGAII appears to be superior to MOSCEM-UA and competitive with SPEA2 for hydrologic model calibration.
A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems
Cao, Leilei; Xu, Lihong; Goodman, Erik D.
2016-01-01
A Guiding Evolutionary Algorithm (GEA) with a greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA is designed to retain some advantages of each method while avoiding their disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best individual rather than a randomly selected one. The current best individual serves as a guide, attracting offspring to its region of genotype space. Mutation is applied to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism is applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the three typical global optimization algorithms with which it was compared. PMID:27293421
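The guided crossover and dynamic mutation probability described in this abstract can be sketched in a few lines; this is a hedged illustration in which the names, rates, and test function are ours, not the authors':

```python
import random

def guided_crossover(individual, best, rate=0.5):
    """Cross an individual with the current global best (GEA-style guidance)."""
    return [b if random.random() < rate else x
            for x, b in zip(individual, best)]

def evolve(fitness, dim=5, pop_size=20, generations=50, seed=0):
    """Toy GEA loop: guided crossover, dynamic mutation probability, elitism."""
    random.seed(seed)
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for g in range(generations):
        p_mut = 0.3 * (1 - g / generations)   # mutation probability decays over time
        children = []
        for ind in pop:
            child = guided_crossover(ind, best)
            child = [x + random.gauss(0, 0.1) if random.random() < p_mut else x
                     for x in child]
            children.append(child)
        pop = children
        best = min(pop + [best], key=fitness)  # keep the global best as the guide
    return best
```

Because the incumbent best is retained each generation, the best fitness found is non-increasing over the run.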
NASA Astrophysics Data System (ADS)
Malusek, Alexandr; Magnusson, Maria; Sandborg, Michael; Westin, Robin; Alm Carlsson, Gudrun
2014-03-01
Better knowledge of elemental composition of patient tissues may improve the accuracy of absorbed dose delivery in brachytherapy. Deficiencies of water-based protocols have been recognized and work is ongoing to implement patient-specific radiation treatment protocols. A model based iterative image reconstruction algorithm DIRA has been developed by the authors to automatically decompose patient tissues to two or three base components via dual-energy computed tomography. Performance of an updated version of DIRA was evaluated for the determination of prostate calcification. A computer simulation using an anthropomorphic phantom showed that the mass fraction of calcium in the prostate tissue was determined with accuracy better than 9%. The calculated mass fraction was little affected by the choice of the material triplet for the surrounding soft tissue. Relative differences between true and approximated values of linear attenuation coefficient and mass energy absorption coefficient for the prostate tissue were less than 6% for photon energies from 1 keV to 2 MeV. The results indicate that DIRA has the potential to improve the accuracy of dose delivery in brachytherapy despite the fact that base material triplets only approximate surrounding soft tissues.
Autoregressive model based algorithm for correcting motion and serially correlated errors in fNIRS
Barker, Jeffrey W.; Aarabi, Ardalan; Huppert, Theodore J.
2013-01-01
Systemic physiology and motion-induced artifacts represent two major sources of confounding noise in functional near infrared spectroscopy (fNIRS) imaging that can reduce the performance of analyses and inflate false positive rates (i.e., type I errors) of detecting evoked hemodynamic responses. In this work, we demonstrated a general algorithm for solving the general linear model (GLM) for both deconvolution (finite impulse response) and canonical regression models based on designing optimal pre-whitening filters using autoregressive models and employing iteratively reweighted least squares. We evaluated the performance of the new method by performing receiver operating characteristic (ROC) analyses using synthetic data, in which serial correlations, motion artifacts, and evoked responses were controlled via simulations, as well as using experimental data from children (3–5 years old) as a source of baseline physiological noise and motion artifacts. The new method outperformed ordinary least squares (OLS) with no motion correction, wavelet based motion correction, or spline interpolation based motion correction in the presence of physiological and motion related noise. In the experimental data, false positive rates were as high as 37% when the estimated p-value was 0.05 for the OLS methods. The false positive rate was reduced to 5–9% with the proposed method. Overall, the method improves control of type I errors and increases performance when motion artifacts are present. PMID:24009999
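The core idea of this abstract can be illustrated with first-order AR pre-whitening of a GLM before least squares; the paper itself uses higher-order AR models combined with iteratively reweighted least squares, and the function names here are illustrative:

```python
import numpy as np

def estimate_rho(resid):
    """Lag-1 autocorrelation of residuals, used to fit the AR(1) noise model."""
    return float(np.dot(resid[1:], resid[:-1]) / np.dot(resid, resid))

def ar1_prewhiten(y, X, rho):
    """Apply the AR(1) whitening transform (generalized differencing) to a GLM
    y = X b + e with serially correlated errors, then solve by OLS."""
    yw = y[1:] - rho * y[:-1]
    Xw = X[1:] - rho * X[:-1]
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta
```

In practice one would iterate: fit by OLS, estimate the AR coefficients from the residuals, whiten, and refit until the estimates stabilize.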
Convergence of a discretized self-adaptive evolutionary algorithm on multi-dimensional problems.
Hart, William Eugene; DeLaurentis, John Morse
2003-08-01
We consider the convergence properties of a non-elitist self-adaptive evolutionary strategy (ES) on multi-dimensional problems. In particular, we apply our recent convergence theory for a discretized (1,λ)-ES to design a related (1,λ)-ES that converges on a class of separable, unimodal multi-dimensional problems. The distinguishing feature of self-adaptive evolutionary algorithms (EAs) is that the control parameters (like mutation step lengths) are evolved by the evolutionary algorithm. Thus the control parameters are adapted in an implicit manner that relies on the evolutionary dynamics to ensure that more effective control parameters are propagated during the search. Self-adaptation is a central feature of EAs like evolutionary strategies (ES) and evolutionary programming (EP), which are applied to continuous design spaces. Rudolph summarizes theoretical results concerning self-adaptive EAs and notes that the theoretical underpinnings for these methods are essentially unexplored. In particular, convergence theories that ensure convergence to a limit point on continuous spaces have only been developed by Rudolph, Hart, DeLaurentis and Ferguson, and Auger et al. In this paper, we illustrate how our analysis of a (1,λ)-ES for one-dimensional unimodal functions can be used to ensure convergence of a related ES on multi-dimensional functions. This (1,λ)-ES randomly selects a search dimension in each iteration, along which new points are generated. For a general class of separable functions, our analysis shows that the ES searches along each dimension independently, and thus this ES converges to the (global) minimum.
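The scheme described here, a (1,λ)-ES that mutates one randomly chosen dimension per iteration with self-adapted step lengths, might be sketched as follows; this is a simplified continuous illustration, not the authors' discretized algorithm:

```python
import math
import random

def one_comma_lambda_es(f, x0, sigma0=1.0, lam=10, iters=200, seed=0):
    """(1,lambda)-ES minimizing f: each iteration picks one search dimension,
    generates lam offspring along it with self-adapted (lognormal) step
    lengths, and replaces the parent by the best offspring (comma selection)."""
    random.seed(seed)
    x, sigma = list(x0), sigma0
    tau = 1.0 / math.sqrt(len(x0))            # learning rate for step lengths
    for _ in range(iters):
        d = random.randrange(len(x))          # search dimension this iteration
        offspring = []
        for _ in range(lam):
            s = sigma * math.exp(tau * random.gauss(0, 1))  # self-adapted step
            y = list(x)
            y[d] += s * random.gauss(0, 1)
            offspring.append((y, s))
        # comma selection: the parent is discarded, best offspring survives
        x, sigma = min(offspring, key=lambda p: f(p[0]))
    return x
```

On a separable unimodal function this searches each coordinate independently, matching the analysis summarized in the abstract.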
Learning deterministic finite automata with a smart state labeling evolutionary algorithm.
Lucas, Simon M; Reynolds, T Jeff
2005-07-01
Learning a Deterministic Finite Automaton (DFA) from a training set of labeled strings is a hard task that has been much studied within the machine learning community. It is equivalent to learning a regular language by example and has applications in language modeling. In this paper, we describe a novel evolutionary method for learning DFA that evolves only the transition matrix and uses a simple deterministic procedure to optimally assign state labels. We compare its performance with the Evidence Driven State Merging (EDSM) algorithm, one of the most powerful known DFA learning algorithms. We present results on random DFA induction problems of varying target size and training set density. We also study the effects of noisy training data on the evolutionary approach and on EDSM. On noise-free data, we find that our evolutionary method outperforms EDSM on small sparse data sets. In the case of noisy training data, we find that our evolutionary method consistently outperforms EDSM, as well as other significant methods submitted to two recent competitions. PMID:16013754
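The "smart state labeling" step, deterministically giving each state the majority label of the training strings that end there, can be sketched as the fitness function an evolutionary search over transition matrices would call; this is a hedged sketch with illustrative names, not the paper's code:

```python
def run(dfa, s):
    """Follow the transition matrix dfa[state][symbol] from state 0."""
    state = 0
    for sym in s:
        state = dfa[state][sym]
    return state

def optimal_labels_and_fitness(dfa, n_states, training):
    """Smart state labeling: each state gets the majority label of the
    training strings that end there; return labels and resulting accuracy."""
    counts = [[0, 0] for _ in range(n_states)]
    ends = []
    for s, label in training:
        e = run(dfa, s)
        counts[e][label] += 1
        ends.append(e)
    labels = [int(c[1] >= c[0]) for c in counts]
    correct = sum(labels[e] == lab for e, (_, lab) in zip(ends, training))
    return labels, correct / len(training)
```

An EA then only has to evolve the transition matrix; the labeling is always optimal for the given transitions, which shrinks the search space.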
THE APPLICATION OF AN EVOLUTIONARY ALGORITHM TO THE OPTIMIZATION OF A MESOSCALE METEOROLOGICAL MODEL
Werth, D.; O'Steen, L.
2008-02-11
We show that a simple evolutionary algorithm can optimize a set of mesoscale atmospheric model parameters with respect to agreement between the mesoscale simulation and a limited set of synthetic observations. This is illustrated using the Regional Atmospheric Modeling System (RAMS). A set of 23 RAMS parameters is optimized by minimizing a cost function based on the root mean square (rms) error between the RAMS simulation and synthetic data (observations derived from a separate RAMS simulation). We find that the optimization can be efficient with relatively modest computer resources, thus operational implementation is possible. The optimization efficiency, however, is found to depend strongly on the procedure used to perturb the 'child' parameters relative to their 'parents' within the evolutionary algorithm. In addition, the meteorological variables included in the rms error and their weighting are found to be an important factor with respect to finding the global optimum.
An Adaptive Evolutionary Algorithm for Traveling Salesman Problem with Precedence Constraints
Sung, Jinmo; Jeong, Bongju
2014-01-01
The traveling salesman problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approach, even though it has a very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of the population, which significantly improves the computational efficiency even when combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated by computational experiments. PMID:24701158
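A feasibility-preserving operator of the kind this abstract describes might look like the following insertion mutation, which keeps every precedence pair satisfied by construction; this is an illustrative sketch, not necessarily the paper's operators:

```python
import random

def feasible_insertion_mutation(tour, prec, rng=random):
    """Move one city to a new position while keeping every precedence pair
    (a, b), meaning 'a must precede b', satisfied. Assumes the input tour
    is already feasible, so a valid insertion window always exists."""
    n = len(tour)
    i = rng.randrange(n)
    city = tour[i]
    rest = tour[:i] + tour[i + 1:]
    pos = {c: k for k, c in enumerate(rest)}
    lo, hi = 0, n - 1                      # candidate insertion indices in rest
    for a, b in prec:
        if b == city and a in pos:
            lo = max(lo, pos[a] + 1)       # city must come after a
        if a == city and b in pos:
            hi = min(hi, pos[b])           # city must come before b
    j = rng.randrange(lo, hi + 1)
    return rest[:j] + [city] + rest[j:]
```

Because offspring are feasible by construction, no repair step or penalty term is needed, which is the source of the efficiency gain the abstract claims.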
NASA Astrophysics Data System (ADS)
Woodley, Robert; Lindahl, Eric; Barker, Joseph
2007-04-01
A culturally diverse group of people are now participating in military multinational coalition operations (e.g., combined air operations center, training exercises such as Red Flag at Nellis AFB, NATO AWACS), as well as in extreme environments. Human biases and routines, capabilities, and limitations strongly influence overall system performance; whether during operations or simulations using models of humans. Many missions and environments challenge human capabilities (e.g., combat stress, waiting, fatigue from long duty hours or tour of duty). This paper presents a team selection algorithm based on an evolutionary algorithm. The main difference between this and the standard EA is that a new form of objective function is used that incorporates the beliefs and uncertainties of the data. Preliminary results show that this selection algorithm will be very beneficial for very large data sets with multiple constraints and uncertainties. This algorithm will be utilized in a military unit selection tool.
Sum-of-squares-based fuzzy controller design using quantum-inspired evolutionary algorithm
NASA Astrophysics Data System (ADS)
Yu, Gwo-Ruey; Huang, Yu-Chia; Cheng, Chih-Yung
2016-07-01
In the field of fuzzy control, control gains are obtained by solving stabilisation conditions in linear-matrix-inequality-based Takagi-Sugeno fuzzy control method and sum-of-squares-based polynomial fuzzy control method. However, the optimal performance requirements are not considered under those stabilisation conditions. In order to handle specific performance problems, this paper proposes a novel design procedure with regard to polynomial fuzzy controllers using quantum-inspired evolutionary algorithms. The first contribution of this paper is a combination of polynomial fuzzy control and quantum-inspired evolutionary algorithms to undertake an optimal performance controller design. The second contribution is the proposed stability condition derived from the polynomial Lyapunov function. The proposed design approach is dissimilar to the traditional approach, in which control gains are obtained by solving the stabilisation conditions. The first step of the controller design uses the quantum-inspired evolutionary algorithms to determine the control gains with the best performance. Then, the stability of the closed-loop system is analysed under the proposed stability conditions. To illustrate effectiveness and validity, the problem of balancing and the up-swing of an inverted pendulum on a cart is used.
NASA Astrophysics Data System (ADS)
Lee, Kyoung Jin
Understanding and modeling seismic wave propagation is important in regional and exploration seismology. Ray tracing is a powerful and popular method for this purpose. The wavefront construction (WFC) method handles wavefronts instead of individual rays, thereby maintaining proper ray density on the wavefront. By adaptively controlling rays over a wavefront, it efficiently models wave propagation. Algorithms for a quasi-P wave wavefront construction method and a new coordinate system used to generate the wavefront construction mesh are proposed and tested for numerical properties and modeling capabilities. Traveltimes, amplitudes, and other parameters, which can be used for seismic imaging such as migrations and synthetic seismograms, are computed from the wavefront construction method. Modeling with the wavefront construction code is applied to anisotropic as well as isotropic media. Synthetic seismograms are computed using the wavefront construction method as a new way of generating synthetics. To incorporate layered velocity models, the model based interpolation (MBI) ray tracing method, which is designed to take advantage of the wavefront construction method as well as conventional ray tracing methods, is proposed and experimental codes are developed for it. Many wavefront construction codes are limited to smoothed velocity models when handling complicated problems in layered velocity models, and the conventional ray tracing methods suffer from an inability to control ray density during wave propagation. By interpolating the wavefront near model boundaries, it is possible to handle layered velocity models as well as overcome the ray density control problems of conventional methods. The test results revealed that this new method can be an effective tool for accurate and efficient modeling.
NASA Astrophysics Data System (ADS)
Tang, Y.; Reed, P.; Wagener, T.
2006-05-01
This study provides a comprehensive assessment of state-of-the-art evolutionary multiobjective optimization (EMO) tools' relative effectiveness in calibrating hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ɛ-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study uses three test cases to compare the algorithms' performances: (1) a standardized test function suite from the computer science literature, (2) a benchmark hydrologic calibration test case for the Leaf River near Collins, Mississippi, and (3) a computationally intensive integrated surface-subsurface model application in the Shale Hills watershed in Pennsylvania. One challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques. Overall, SPEA2 attained competitive to superior results for most of the problems tested in this study. The primary strengths of the SPEA2 algorithm lie in its search reliability and its diversity preservation operator. The biggest challenge in maximizing the performance of SPEA2 lies in specifying an effective archive size without a priori knowledge of the Pareto set. In practice, this would require significant trial-and-error analysis, which is problematic for more complex, computationally intensive calibration applications. ɛ-NSGAII appears to be superior to MOSCEM-UA and competitive with SPEA2 for hydrologic model calibration. ɛ-NSGAII's primary strength lies in its ease-of-use due to its dynamic population sizing and archiving which lead to rapid convergence to very high quality solutions with minimal user input. MOSCEM-UA is best suited for hydrologic model calibration applications that have small parameter sets
Technology Transfer Automated Retrieval System (TEKTRAN)
Hyperspectral scattering is a promising technique for rapid and noninvasive measurement of multiple quality attributes of apple fruit. A hierarchical evolutionary algorithm (HEA) approach, in combination with subspace decomposition and partial least squares (PLS) regression, was proposed to select o...
NASA Astrophysics Data System (ADS)
Zatarain Salazar, Jazmin; Reed, Patrick M.; Herman, Jonathan D.; Giuliani, Matteo; Castelletti, Andrea
2016-06-01
Globally, the pressures of expanding populations, climate change, and increased energy demands are motivating significant investments in re-operationalizing existing reservoirs or designing operating policies for new ones. These challenges require an understanding of the tradeoffs that emerge across the complex suite of multi-sector demands in river basin systems. This study benchmarks our current capabilities to use Evolutionary Multi-Objective Direct Policy Search (EMODPS), a decision analytic framework in which reservoirs' candidate operating policies are represented using parameterized global approximators (e.g., radial basis functions), and those parameterized functions are then optimized using multi-objective evolutionary algorithms to discover the Pareto-approximate operating policies. We contribute a comprehensive diagnostic assessment of modern MOEAs' abilities to support EMODPS using the Conowingo reservoir in the Lower Susquehanna River Basin, Pennsylvania, USA. Our diagnostic results highlight that EMODPS can be very challenging for some modern MOEAs and that epsilon dominance, time-continuation, and auto-adaptive search are helpful for attaining high levels of performance. The ɛ-MOEA, the auto-adaptive Borg MOEA, and ɛ-NSGAII all yielded superior results for the six-objective Lower Susquehanna benchmarking test case. The top algorithms show low sensitivity to different MOEA parameterization choices and high algorithmic reliability in attaining consistent results for different random MOEA trials. Overall, EMODPS poses a promising method for discovering key reservoir management tradeoffs; however, algorithmic choice remains a key concern for problems of increasing complexity.
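The policy parameterization that EMODPS optimizes can be illustrated with a one-state radial-basis-function release rule; this is a sketch whose naming and normalization are ours, not the study's:

```python
import math

def rbf_policy(state, params):
    """Release decision from a radial-basis-function operating policy, the
    kind of parameterized global approximator EMODPS tunes with MOEAs.
    params is a list of (weight, center, radius) triples; the output is
    clipped to a feasible release fraction in [0, 1]."""
    out = 0.0
    for w, c, r in params:
        out += w * math.exp(-((state - c) / r) ** 2)
    return min(max(out, 0.0), 1.0)
```

A multi-objective EA then searches over the (weight, center, radius) triples, scoring each candidate policy by simulating the reservoir against the competing objectives.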
Metabolic flux estimation--a self-adaptive evolutionary algorithm with singular value decomposition.
Yang, Jing; Wongsa, Sarawan; Kadirkamanathan, Visakan; Billings, Stephen A; Wright, Phillip C
2007-01-01
Metabolic flux analysis is important for metabolic system regulation and intracellular pathway identification. A popular approach for intracellular flux estimation involves using 13C tracer experiments to label states that can be measured by nuclear magnetic resonance spectrometry or gas chromatography mass spectrometry. However, the bilinear balance equations derived from 13C tracer experiments and the noisy measurements require a nonlinear optimization approach to obtain the optimal solution. In this paper, the flux quantification problem is formulated as an error-minimization problem with equality and inequality constraints through the 13C balance and stoichiometric equations. The stoichiometric constraints are transformed to a null space by singular value decomposition. Self-adaptive evolutionary algorithms are then introduced for flux quantification. The performance of the evolutionary algorithm is compared with ordinary least squares estimation by the simulation of the central pentose phosphate pathway. The proposed algorithm is also applied to the central metabolism of Corynebacterium glutamicum under lysine-producing conditions. A comparison between the results from the proposed algorithm and data from the literature is given. The complexity of a metabolic system with bidirectional reactions is also investigated by analyzing the fluctuations in the flux estimates when available measurements are varied. PMID:17277420
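The SVD-based transformation of the stoichiometric constraints to a null space can be sketched as follows, using a toy linear-pathway network rather than the paper's metabolic model:

```python
import numpy as np

def null_space(S, tol=1e-10):
    """Orthonormal basis N of the null space of stoichiometric matrix S via
    SVD, so any flux v = N @ z automatically satisfies S v = 0 (steady state).
    An evolutionary algorithm can then search freely over z."""
    _, s, vt = np.linalg.svd(S)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

# Toy pathway A -> B -> C: steady state forces all three fluxes to be equal.
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
N = null_space(S)
v = N @ np.array([2.0])  # any candidate z maps to a feasible flux vector
```

This turns the equality-constrained flux estimation into an unconstrained search in the null-space coordinates, leaving only the inequality constraints and the ¹³C balance residual for the evolutionary algorithm to handle.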
Design and Optimization of Low-thrust Orbit Transfers Using Q-law and Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Lee, Seungwon; vonAllmen, Paul; Fink, Wolfgang; Petropoulos, Anastassios; Terrile, Richard
2005-01-01
Future space missions will depend more on low-thrust propulsion (such as ion engines) thanks to its high specific impulse. Yet, the design of low-thrust trajectories is complex and challenging. Third-body perturbations often dominate the thrust, and a significant change to the orbit requires a long duration of thrust. In order to guide the early design phases, we have developed an efficient and efficacious method to obtain approximate propellant and flight-time requirements (i.e., the Pareto front) for orbit transfers. A search for the Pareto-optimal trajectories is done in two levels: optimal thrust angles and locations are determined by Q-law, while the Q-law is optimized with two evolutionary algorithms: a genetic algorithm and a simulated-annealing-related algorithm. The examples considered are several types of orbit transfers around the Earth and the asteroid Vesta.
Lin, Kuan-Cheng; Hsieh, Yi-Hsiu
2015-10-01
The classification and analysis of data is an important issue in today's research. Selecting a suitable set of features makes it possible to classify an enormous quantity of data quickly and efficiently. Feature selection is generally viewed as a feature subset selection problem, a kind of combinatorial optimization problem. Evolutionary algorithms using random search methods have proven highly effective in obtaining solutions to optimization problems in a diversity of applications. In this study, we developed a hybrid evolutionary algorithm based on endocrine-based particle swarm optimization (EPSO) and artificial bee colony (ABC) algorithms in conjunction with a support vector machine (SVM) for the selection of optimal feature subsets for the classification of datasets. The results of experiments using specific UCI medical datasets demonstrate that the accuracy of the proposed hybrid evolutionary algorithm is superior to that of basic PSO, EPSO and ABC algorithms, with regard to classification accuracy using subsets with a reduced number of features. PMID:26289628
Tuning of MEMS Gyroscope using Evolutionary Algorithm and "Switched Drive-Angle" Method
NASA Technical Reports Server (NTRS)
Keymeulen, Didier; Ferguson, Michael I.; Breuer, Luke; Peay, Chris; Oks, Boris; Cheng, Yen; Kim, Dennis; MacDonald, Eric; Foor, David; Terrile, Rich; Yee, Karl
2006-01-01
We propose a tuning method for Micro-Electro-Mechanical Systems (MEMS) gyroscopes based on evolutionary computation that has the capacity to efficiently increase the sensitivity of MEMS gyroscopes through tuning and, furthermore, to find the optimally tuned configuration for this state of increased sensitivity. We present the results of an experiment to determine the speed and efficiency of an evolutionary algorithm applied to electrostatic tuning of MEMS micro gyros. The MEMS gyro used in this experiment is a pyrex post resonator gyro (PRG) in a closed-loop control system. A measure of the quality of tuning is given by the difference in resonant frequencies, or frequency split, for the two orthogonal rocking axes. The current implementation of the closed-loop platform is able to measure and attain a relative stability in the sub-millihertz range, leading to a reduction of the frequency split to less than 100 mHz.
O'Hagan, Steve; Knowles, Joshua; Kell, Douglas B.
2012-01-01
Comparatively few studies have addressed directly the question of quantifying the benefits to be had from using molecular genetic markers in experimental breeding programmes (e.g. for improved crops and livestock), nor the question of which organisms should be mated with each other to best effect. We argue that this requires in silico modelling, an approach for which there is a large literature in the field of evolutionary computation (EC), but which has not really been applied in this way to experimental breeding programmes. EC seeks to optimise measurable outcomes (phenotypic fitnesses) by optimising in silico the mutation, recombination and selection regimes that are used. We review some of the approaches from EC, and compare experimentally, using a biologically relevant in silico landscape, some algorithms that have knowledge of where they are in the (genotypic) search space (G-algorithms) with some (albeit well-tuned ones) that do not (F-algorithms). For the present kinds of landscapes, F- and G-algorithms were broadly comparable in quality and effectiveness, although we recognise that the G-algorithms were not equipped with any ‘prior knowledge’ of epistatic pathway interactions. This use of algorithms based on machine learning has important implications for the optimisation of experimental breeding programmes in the post-genomic era when we shall potentially have access to the full genome sequence of every organism in a breeding population. The non-proprietary code that we have used is made freely available (via Supplementary information). PMID:23185279
A Self-adaptive Evolutionary Algorithm for Multi-objective Optimization
NASA Astrophysics Data System (ADS)
Cao, Ruifen; Li, Guoli; Wu, Yican
Evolutionary algorithms have gained worldwide popularity in multi-objective optimization. The paper proposes a self-adaptive evolutionary algorithm (called SEA) for multi-objective optimization. In the SEA, the probabilities of crossover and mutation, Pc and Pm, are varied depending on the fitness values of the solutions. Fitness assignment in SEA realizes the twin goals of maintaining diversity in the population and guiding the population to the true Pareto front; the fitness value of an individual depends not only on an improved density estimation but also on non-dominated rank. The density estimation can maintain diversity in all instances, including when the scales of the objectives differ greatly from each other. SEA is compared against the Non-dominated Sorting Genetic Algorithm II (NSGA-II) on a set of test problems introduced by the MOEA community. Simulation results show that SEA is as effective as NSGA-II on most test functions, but when the scales of the objectives differ greatly, SEA achieves a better distribution of non-dominated solutions.
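The fitness-dependent variation of Pc and Pm can be illustrated with a classic adaptive-rate rule in the spirit of SEA; this is a generic sketch (for maximization), and the exact formulas in the paper may differ:

```python
def adaptive_rates(f_parent, f_avg, f_best, pc_max=0.9, pm_max=0.1):
    """Vary crossover/mutation probabilities with fitness: solutions at or
    above the population average get Pc/Pm scaled down toward zero as they
    approach the best fitness (protecting good solutions), while
    below-average solutions keep the full rates (promoting exploration)."""
    if f_best == f_avg:                 # degenerate population, no signal
        return pc_max, pm_max
    if f_parent >= f_avg:
        scale = (f_best - f_parent) / (f_best - f_avg)
        return pc_max * scale, pm_max * scale
    return pc_max, pm_max
```

The best individual thus receives Pc = Pm = 0 and is preserved intact, while poor individuals are disrupted at the maximum rates.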
Creating ensembles of oblique decision trees with evolutionary algorithms and sampling
Cantu-Paz, Erick; Kamath, Chandrika
2006-06-13
A decision tree system that is part of a parallel object-oriented pattern recognition system, which in turn is part of an object oriented data mining system. A decision tree process includes the step of reading the data. If necessary, the data is sorted. A potential split of the data is evaluated according to some criterion. An initial split of the data is determined. The final split of the data is determined using evolutionary algorithms and statistical sampling techniques. The data is split. Multiple decision trees are combined in ensembles.
Searching for the Optimal Working Point of the MEIC at JLab Using an Evolutionary Algorithm
Terzic, Balsa; Kramer, Matthew; Jarvis, Colin
2011-03-01
The Medium-energy Electron Ion Collider (MEIC) is a proposed medium-energy ring-ring electron-ion collider based on CEBAF at Jefferson Lab. The collider luminosity and stability are sensitive to the choice of a working point, the betatron and synchrotron tunes of the two colliding beams. A careful selection of the working point is therefore essential for stable operation of the collider, as well as for achieving high luminosity. Here we describe a novel approach for locating an optimal working point based on evolutionary algorithm techniques.
A new evolutionary algorithm with structure mutation for the maximum balanced biclique problem.
Yuan, Bo; Li, Bin; Chen, Huanhuan; Yao, Xin
2015-05-01
The maximum balanced biclique problem (MBBP), an NP-hard combinatorial optimization problem, has been attracting increasing attention in recent years. Existing node-deletion-based algorithms usually fail to find high-quality solutions because they stagnate easily in local optima, especially when the scale of the problem grows large. In this paper, a new algorithm for the MBBP, an evolutionary algorithm with structure mutation (EA/SM), is proposed. In the EA/SM framework, local search complemented with a repair-assisted restart process is adopted. A new mutation operator, SM, is proposed to enhance exploration during the local search process. The SM can change the structure of solutions dynamically while keeping their size (fitness) and feasibility unchanged. It implements a kind of large mutation in the structure space of the MBBP to help the algorithm escape from local optima. An MBBP-specific local search operator is designed to improve the quality of solutions efficiently. In addition, a new repair-assisted restart process is introduced, in which Marchiori's heuristic repair is modified to repair every new solution reinitialized by an estimation of distribution algorithm (EDA)-like process. The proposed algorithm is evaluated on a large set of benchmark graphs with various scales and densities. Experimental results show that: 1) EA/SM produces significantly better results than the state-of-the-art heuristic algorithms; 2) it also outperforms a repair-based EDA and a repair-based genetic algorithm on all benchmark graphs; and 3) the advantages of EA/SM are mainly due to the introduction of the new SM operator and the new repair-assisted restart process. PMID:25137737
Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng
2014-01-01
Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms. PMID:24723806
Implementation of a Fractional Model-Based Fault Detection Algorithm into a PLC Controller
NASA Astrophysics Data System (ADS)
Kopka, Ryszard
2014-12-01
This paper presents results related to the implementation of model-based fault detection and diagnosis procedures in a typical PLC controller. To construct the mathematical model and to implement the PID regulator, non-integer order differential/integral calculus was used. Such an approach allows for more exact control of the process and more precise modelling, which is crucial in model-based diagnostic methods. The theoretical results were verified on a real object in the form of a supercapacitor connected to a PLC controller by a dedicated electronic circuit controlled directly from the PLC outputs.
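For context, non-integer order differentiation of sampled data is commonly discretised with the Grünwald-Letnikov formula; the sketch below is a generic textbook implementation, not the paper's PLC code:

```python
def gl_fractional_derivative(samples, alpha, h):
    """Grünwald-Letnikov approximation of the order-`alpha` derivative
    at the last point of a uniformly sampled signal with step `h`.

    Coefficients follow the recurrence c_0 = 1, c_k = c_{k-1} * (1 - (alpha+1)/k),
    so alpha = 1 recovers the ordinary first-order backward difference and
    alpha = 0 recovers the identity.
    """
    c = [1.0]
    for k in range(1, len(samples)):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / k))
    return sum(ck * x for ck, x in zip(c, reversed(samples))) / h ** alpha
```

With a fractional alpha (e.g. 0.5) the result depends, with decaying weights, on the entire sampled history rather than on the last two points only.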
Frutos, M; Méndez, M; Tohmé, F; Broz, D
2013-01-01
Many of the problems that arise in production systems can be handled with multiobjective techniques. One of those problems is scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world job-shop contexts using three algorithms: NSGA-II, SPEA2, and IBEA. Using two performance indexes, hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be the better choice of tool, since it yields more solutions on the approximate Pareto frontier. PMID:24489502
A Cooperative Co-Evolutionary Genetic Algorithm for Tree Scoring and Ancestral Genome Inference.
Gao, Nan; Zhang, Yan; Feng, Bing; Tang, Jijun
2015-01-01
Recent advances in technology have made it easy to obtain and compare whole genomes. Rearrangements of genomes through operations such as reversals and transpositions are rare events that enable researchers to reconstruct deep evolutionary history among species. Some of the popular methods need to search a large tree space for the best-scoring tree, so it is desirable to have a fast and accurate method that can score a given tree efficiently. During the tree-scoring procedure, the genomic structures of internal tree nodes are also produced, which provides important information for inferring ancestral genomes and for modeling the evolutionary processes. However, computing tree scores and ancestral genomes is very difficult, and many researchers have to rely on heuristic methods that have various disadvantages. In this paper, we describe the first genetic algorithm for tree scoring and ancestor inference, which uses a fitness function considering co-evolution, adopts different initial seeding methods to initialize the first population pool, and utilizes a sorting-based approach to realize evolution. Our extensive experiments show that, compared with other existing algorithms, this new method is more accurate and can infer ancestral genomes that are much closer to the true ancestors. PMID:26671797
An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.
Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V
2013-01-01
The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test. PMID:23469172
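A hybrid of this kind can be sketched as follows; the attraction constants, the DE/rand/1 refinement step, and the way they are combined are illustrative assumptions, not the exact scheme of the paper:

```python
import math
import random

def firefly_de_step(pop, loss, beta0=1.0, gamma=0.1, F=0.5, rng=random):
    """One iteration of a firefly/differential-evolution hybrid (sketch).

    Each candidate parameter vector is attracted toward every brighter
    (lower-loss) one, then a DE/rand/1 trial vector replaces the result
    if it fits the data better. Constants and the update form are
    illustrative assumptions for a population of at least 4 vectors.
    """
    n, d = len(pop), len(pop[0])
    fit = [loss(x) for x in pop]
    new_pop = [list(x) for x in pop]
    for i in range(n):
        for j in range(n):
            if fit[j] < fit[i]:                       # j is brighter: move i toward j
                r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                beta = beta0 * math.exp(-gamma * r2)  # attraction decays with distance
                for k in range(d):
                    new_pop[i][k] += beta * (pop[j][k] - pop[i][k])
        others = [x for idx, x in enumerate(pop) if idx != i]
        a, b, c = rng.sample(others, 3)               # DE/rand/1 refinement
        trial = [a[k] + F * (b[k] - c[k]) for k in range(d)]
        if loss(trial) < loss(new_pop[i]):
            new_pop[i] = trial
    return new_pop
```

Because the brightest vector is never moved and trials are accepted only on improvement, the best loss in the population is non-increasing across steps.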
David J. Muth Jr.
2006-09-01
This paper examines the use of graph-based evolutionary algorithms (GBEAs) to find multiple acceptable solutions for heat transfer in engineering systems during the optimization process. GBEAs are a type of evolutionary algorithm (EA) in which a topology, or geography, is imposed on an evolving population of solutions. The rates at which solutions can spread within the population are controlled by the choice of topology. As in nature, geography can be used to develop and sustain diversity within the solution population. Altering the choice of graph can create a more or less diverse population of potential solutions. The choice of graph can also affect the convergence rate of the EA and the number of mating events required for convergence. The engineering system examined in this paper is a biomass-fueled cookstove used in developing nations for household cooking. In this cookstove, wood is combusted in a small combustion chamber and the resulting hot gases are utilized to heat the stove's cooking surface. The spatial temperature profile of the cooking surface is determined by a series of baffles that direct the flow of hot gases. The optimization goal is to find baffle configurations that provide an even temperature distribution on the cooking surface. Often in engineering, the goal of optimization is not to find the single optimum solution but rather to identify a number of good solutions that can be used as a starting point for detailed engineering design. Because of this, a key aspect of evolutionary optimization is the diversity of the solutions found. The key conclusion of this paper is that GBEAs can be used to create the multiple good solutions needed to support engineering design.
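The core GBEA idea, mating restricted to graph neighbours, can be sketched as follows; the crossover, mutation, and replacement details are illustrative assumptions:

```python
import random

def ring(n):
    """Ring topology: each individual may only mate with its two neighbours."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def gbea_mating_event(pop, fitness, graph, rng=random):
    """One mating event of a graph-based EA on bitstrings (sketch, in place).

    A random individual crosses with one of its graph neighbours; the
    mutated child replaces the less fit of the pair if it is at least as
    good. The graph throttles how quickly good genes spread, sustaining
    diversity as described above.
    """
    i = rng.randrange(len(pop))
    j = rng.choice(graph[i])
    cut = rng.randrange(1, len(pop[i]))               # one-point crossover
    child = pop[i][:cut] + pop[j][cut:]
    child[rng.randrange(len(child))] ^= 1             # one-bit mutation
    loser = i if fitness(pop[i]) <= fitness(pop[j]) else j
    if fitness(child) >= fitness(pop[loser]):
        pop[loser] = child
```

Swapping `ring` for a denser graph (more neighbours per node) speeds convergence but reduces the diversity of the final solution set.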
Fernandez-Lozano, C.; Canto, C.; Gestal, M.; Andrade-Garda, J. M.; Rabuñal, J. R.; Dorado, J.; Pazos, A.
2013-01-01
Given the background on the use of neural networks in apple juice classification problems, this paper aims at implementing a newly developed method in the field of machine learning: Support Vector Machines (SVM). To this end, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected. PMID:24453933
Log-linear model based behavior selection method for artificial fish swarm algorithm.
Huang, Zhehuang; Chen, Yidong
2015-01-01
Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fish. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fish has a crucial impact on the performance of AFSA, in particular on its global exploration ability and convergence speed, so how to construct and select behaviors is an important task. To address these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fish. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm. PMID:25691895
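The log-linear selection step can be sketched as a softmax over behavior scores; the behavior names and feature design below are illustrative assumptions, not the paper's exact model:

```python
import math
import random

def select_behavior(features, weights, rng=random):
    """Log-linear (softmax) behavior selection for an AFSA-style agent.

    `features[b]` is a feature vector describing how promising behavior b
    (e.g. 'prey', 'swarm', 'follow') looks in the current state; the
    probability of choosing b is proportional to exp(w . f_b), the
    defining property of a log-linear model.
    """
    names = list(features)
    scores = [math.exp(sum(w * x for w, x in zip(weights, features[b])))
              for b in names]
    r = rng.random() * sum(scores)
    acc = 0.0
    for name, s in zip(names, scores):
        acc += s
        if r <= acc:
            return name
    return names[-1]   # guard against floating-point round-off
```

Unlike a hard argmax, the stochastic choice still occasionally picks lower-scoring behaviors, which preserves exploration.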
Ishibuchi, Hisao; Sudo, Takahiko; Nojima, Yusuke
2016-01-01
In interactive evolutionary computation (IEC), each solution is evaluated by a human user. Usually the total number of examined solutions is very small. In some applications such as hearing aid design and music composition, only a single solution can be evaluated at a time by a human user. Moreover, accurate and precise numerical evaluation is difficult. Based on these considerations, we formulated an IEC model with the minimum requirement for fitness evaluation ability of human users under the following assumptions: They can evaluate only a single solution at a time, they can memorize only a single previous solution they have just evaluated, their evaluation result on the current solution is whether it is better than the previous one or not, and the best solution among the evaluated ones should be identified after a pre-specified number of evaluations. In this paper, we first explain our IEC model in detail. Next we propose a ([Formula: see text])ES-style algorithm for our IEC model. Then we propose an offline meta-level approach to automated algorithm design for our IEC model. The main feature of our approach is the use of a different mechanism (e.g., mutation, crossover, random initialization) to generate each solution to be evaluated. Through computational experiments on test problems, our approach is compared with the ([Formula: see text])ES-style algorithm where a solution generation mechanism is pre-specified and fixed throughout the execution of the algorithm. PMID:27026888
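Under those assumptions, the user's role reduces to a better-than-previous judgement, so the search loop can be sketched as follows (the `mutate` and `better` callbacks are stand-ins for the solution generator and the human user; this is a generic sketch, not the paper's algorithm):

```python
def comparison_only_search(x0, mutate, better, budget):
    """Search under the IEC model above: one solution at a time, memory of
    only the previous solution, and only a better/worse judgement from the
    user (`better`). Returns the best solution among those evaluated.
    """
    current = x0
    for _ in range(budget - 1):       # x0 consumed the first evaluation
        candidate = mutate(current)
        if better(candidate, current):
            current = candidate       # the user prefers the new solution
    return current
```

Because a candidate is kept only when the simulated user prefers it, the retained solution never gets worse under a consistent preference, matching the model's requirement of identifying the best evaluated solution.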
Grand-canonical evolutionary algorithm for the prediction of two-dimensional materials
NASA Astrophysics Data System (ADS)
Revard, Benjamin C.; Tipton, William W.; Yesypenko, Anna; Hennig, Richard G.
2016-02-01
Single-layer materials represent a new materials class with properties that are potentially transformative for applications in nanoelectronics and solar-energy harvesting. With the goal of discovering novel two-dimensional (2D) materials with unusual compositions and structures, we have developed a grand-canonical evolutionary algorithm that searches the structure and composition space while constraining the thickness of the structures. Coupling the algorithm to first-principles total-energy methods, we show that this approach can successfully identify known 2D materials and find low-energy ones. We present the details of the algorithm, including suitable objective functions, and illustrate its potential with a study of the Sn-S and C-Si binary materials systems. The algorithm identifies several 2D structures of InP, recovers known 2D structures in the binary Sn-S and C-Si systems, and finds two 1D Si defects in graphene with formation energies below that of isolated substitutional Si atoms.
NASA Astrophysics Data System (ADS)
Wu, J.; Yang, Y.; Wu, J.
2011-12-01
In this study, a new hybrid multi-objective evolutionary algorithm (MOEA), the niched Pareto tabu search combined with a genetic algorithm (NPTSGA), is proposed for the management of groundwater resources under variable-density conditions. Relatively few MOEAs combine global search ability with intensified search in local areas. Moreover, the overall searching ability of tabu search (TS) based MOEAs is very sensitive to the neighborhood step size. The NPTSGA is developed on the idea of integrating a genetic algorithm (GA) with a TS-based MOEA, the niched Pareto tabu search (NPTS), which helps to alleviate both of the above difficulties. Here, the global search ability of the NPTS is improved by the diversification of candidate solutions arising from the evolving genetic algorithm population. Furthermore, the proposed methodology, coupled with a density-dependent groundwater flow and solute transport simulator, SEAWAT, is developed and its performance is evaluated through a synthetic seawater intrusion management problem. Optimization results indicate that the NPTSGA offers a tradeoff between the two conflicting objectives. A key conclusion of this study is that the NPTSGA can balance the tradeoff between the intensification of nondomination and the diversification of near Pareto-optimal solutions and is a stable and robust method for implementing the multi-objective design of variable-density groundwater resources.
An Evolutionary Algorithm with Double-Level Archives for Multiobjective Optimization.
Chen, Ni; Chen, Wei-Neng; Gong, Yue-Jiao; Zhan, Zhi-Hui; Zhang, Jun; Li, Yun; Tan, Yu-Song
2015-09-01
Existing multiobjective evolutionary algorithms (MOEAs) tackle a multiobjective problem either as a whole or as several decomposed single-objective sub-problems. Though the problem decomposition approach generally converges faster by optimizing all the sub-problems simultaneously, two issues are not fully addressed: the distribution of solutions often depends on the a priori problem decomposition, and population diversity among the sub-problems is lacking. In this paper, a MOEA with double-level archives is developed. The algorithm takes advantage of both the multiobjective-problem-level and the sub-problem-level approaches by introducing two types of archives, i.e., the global archive and the sub-archive. In each generation, self-reproduction with the global archive and cross-reproduction between the global archive and sub-archives both breed new individuals. The global archive and sub-archives communicate through cross-reproduction, and are updated using the reproduced individuals. Such a framework thus retains fast convergence, and at the same time handles solution distribution along the Pareto front (PF) with scalability. To test the performance of the proposed algorithm, experiments are conducted on both the widely used benchmarks and a set of truly disconnected problems. The results verify that, compared with state-of-the-art MOEAs, the proposed algorithm offers competitive advantages in distance to the PF, solution coverage, and search speed. PMID:25343775
XTALOPT: An open-source evolutionary algorithm for crystal structure prediction
NASA Astrophysics Data System (ADS)
Lonie, David C.; Zurek, Eva
2011-02-01
The implementation and testing of XTALOPT, an evolutionary algorithm for crystal structure prediction, is outlined. We present our new periodic displacement (ripple) operator which is ideally suited to extended systems. It is demonstrated that hybrid operators, which combine two pure operators, reduce the number of duplicate structures in the search. This allows for better exploration of the potential energy surface of the system in question, while simultaneously zooming in on the most promising regions. A continuous workflow, which makes better use of computational resources as compared to traditional generation based algorithms, is employed. Various parameters in XTALOPT are optimized using a novel benchmarking scheme. XTALOPT is available under the GNU Public License, has been interfaced with various codes commonly used to study extended systems, and has an easy to use, intuitive graphical interface. Program summary: Program title: XTALOPT. Catalogue identifier: AEGX_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGX_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GPL v2.1 or later [1]. No. of lines in distributed program, including test data, etc.: 36 849. No. of bytes in distributed program, including test data, etc.: 1 149 399. Distribution format: tar.gz. Programming language: C++. Computer: PCs, workstations, or clusters. Operating system: Linux. Classification: 7.7. External routines: QT [2], OpenBabel [3], AVOGADRO [4], SPGLIB [8] and one of: VASP [5], PWSCF [6], GULP [7]. Nature of problem: Predicting the crystal structure of a system from its stoichiometry alone remains a grand challenge in computational materials science, chemistry, and physics. Solution method: Evolutionary algorithms are stochastic search techniques which use concepts from biological evolution in order to locate the global minimum on their potential energy surface. Our evolutionary algorithm, XTALOPT, is freely
Analysis of (1+1) evolutionary algorithm and randomized local search with memory.
Sung, Chi Wan; Yuen, Shiu Yin
2011-01-01
This paper considers the scenario of the (1+1) evolutionary algorithm (EA) and randomized local search (RLS) with memory. Previously explored solutions are stored in memory until an improvement in fitness is obtained; then the stored information is discarded. This results in two new algorithms: (1+1) EA-m (with a raw list and hash table option) and RLS-m+ (and RLS-m if the function is a priori known to be unimodal). These two algorithms can be regarded as very simple forms of tabu search. Rigorous theoretical analysis of the expected time to find the globally optimal solutions for these algorithms is conducted for both unimodal and multimodal functions. A unified mathematical framework, involving the new concept of spatially invariant neighborhood, is proposed. Under this framework, both (1+1) EA with standard uniform mutation and RLS can be considered as particular instances and in the most general cases, all functions can be considered to be unimodal. Under this framework, it is found that for unimodal functions, the improvement by memory assistance is always positive but at most by one half. For multimodal functions, the improvement is significant; for functions with gaps and another hard function, the order of growth is reduced; for at least one example function, the order can change from exponential to polynomial. Empirical results, with a reasonable fitness evaluation time assumption, verify that (1+1) EA-m and RLS-m+ are superior to their conventional counterparts. Both new algorithms are promising for use in a memetic algorithm. In particular, RLS-m+ makes the previously impractical RLS practical, and surprisingly, does not require any extra memory in actual implementation. PMID:20868262
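The memory scheme can be sketched on OneMax as follows; the hash-table option is modeled with a Python set, and the acceptance details are illustrative assumptions rather than the paper's exact algorithm:

```python
import random

def one_plus_one_ea_m(n, max_evals, rng=random):
    """(1+1) EA with memory (EA-m, hash-table option) on OneMax (sketch).

    Previously explored offspring are remembered in a set and never
    re-evaluated; the memory is discarded on every fitness improvement,
    as described in the abstract.
    """
    x = [rng.randint(0, 1) for _ in range(n)]
    fx, evals = sum(x), 1
    memory = {tuple(x)}
    while fx < n and evals < max_evals:
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]   # standard bit mutation
        ty = tuple(y)
        if ty in memory:
            continue                  # memory saves a fitness evaluation
        memory.add(ty)
        evals += 1
        fy = sum(y)
        if fy > fx:                   # improvement: accept, discard memory
            x, fx = y, fy
            memory = {tuple(x)}
    return x, evals
```

The `continue` branch is exactly where the memory pays off: duplicate offspring cost no fitness evaluation, which is the source of the speedup analyzed in the paper.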
NASA Astrophysics Data System (ADS)
Sofiev, M.; Vira, J.; Kouznetsov, R.; Prank, M.; Soares, J.; Genikhovich, E.
2015-11-01
The paper presents the transport module of the System for Integrated modeLling of Atmospheric coMposition SILAM v.5 based on the advection algorithm of Michael Galperin. This advection routine, so far weakly presented in the international literature, is positively defined, stable at any Courant number, and efficient computationally. We present the rigorous description of its original version, along with several updates that improve its monotonicity and shape preservation, allowing for applications to long-living species in conditions of complex atmospheric flows. The scheme is connected with other parts of the model in a way that preserves the sub-grid mass distribution information that is a cornerstone of the advection algorithm. The other parts include the previously developed vertical diffusion algorithm combined with dry deposition, a meteorological pre-processor, and chemical transformation modules. The quality of the advection routine is evaluated using a large set of tests. The original approach has been previously compared with several classic algorithms widely used in operational dispersion models. The basic tests were repeated for the updated scheme and extended with real-wind simulations and demanding global 2-D tests recently suggested in the literature, which allowed one to position the scheme with regard to sophisticated state-of-the-art approaches. The advection scheme performance was fully comparable with other algorithms, with a modest computational cost. This work was the last project of Dr. Sci. Michael Galperin, who passed away on 18 March 2008.
Combining evolutionary algorithms with oblique decision trees to detect bent-double galaxies
NASA Astrophysics Data System (ADS)
Cantu-Paz, Erick; Kamath, Chandrika
2000-10-01
Decision trees have long been popular in classification as they use simple and easy-to-understand tests at each node. Most variants of decision trees test a single attribute at a node, leading to axis-parallel trees, where each test results in a hyperplane parallel to one of the dimensions in the attribute space. These trees can be rather large and inaccurate in cases where the concept to be learned is best approximated by oblique hyperplanes. In such cases, it may be more appropriate to use an oblique decision tree, where the decision at each node is based on a linear combination of the attributes. Oblique decision trees have not gained wide popularity, in part due to the complexity of constructing good oblique splits and the tendency of existing splitting algorithms to get stuck in local minima. Several alternatives have been proposed to handle these problems, including randomization in conjunction with deterministic hill-climbing and the use of simulated annealing. In this paper, we use evolutionary algorithms (EAs) to determine the split. EAs are well suited for this problem because of their global search properties, their tolerance to noisy fitness evaluations, and their scalability to large-dimensional search spaces. We demonstrate our technique on a synthetic data set, and then we apply it to a practical problem from astronomy, namely, the classification of galaxies with a bent-double morphology. In addition, we describe our experiences with several split evaluation criteria. Our results suggest that, in some cases, the evolutionary approach is faster and more accurate than existing oblique decision tree algorithms. However, for our astronomical data, the accuracy is not significantly different from that of the axis-parallel trees.
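Determining an oblique split with an EA can be sketched as follows; the (1+1) ES and the accuracy-based fitness are simplifications for illustration, not the paper's exact operators or split evaluation criteria:

```python
import random

def evolve_oblique_split(X, y, gens=300, sigma=0.3, rng=random):
    """Evolve an oblique split w.x + b > 0 with a simple (1+1) ES (sketch).

    Fitness is the accuracy of the induced two-way split on binary labels,
    standing in for the impurity-based criteria discussed in the text.
    """
    d = len(X[0])
    w = [rng.gauss(0.0, 1.0) for _ in range(d + 1)]        # weights + bias

    def accuracy(wb):
        side = [sum(a * b for a, b in zip(row, wb[:d])) + wb[-1] > 0 for row in X]
        hits = sum(s == bool(label) for s, label in zip(side, y))
        return max(hits, len(y) - hits) / len(y)           # orientation-free

    best = accuracy(w)
    for _ in range(gens):
        cand = [wi + rng.gauss(0.0, sigma) for wi in w]    # Gaussian mutation
        score = accuracy(cand)
        if score >= best:
            w, best = cand, score
    return w, best
```

In a full tree builder this search would run once per internal node; accepting equal-fitness candidates lets the search drift across the piecewise-constant fitness plateaus instead of stalling on them.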
NASA Technical Reports Server (NTRS)
Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining-useful-life prediction as a stochastic process, and how this relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining-useful-life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when using prognostics to make critical decisions.
Wang, Yuping; Feng, Junhong
2013-01-01
In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare the whole database with the antecedent of a rule, its consequent, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It needs to scan the database only once, to create the attribute index of each attribute. All metric values used to evaluate an association rule then no longer require scanning the database, but acquire their data solely by means of the attribute indices. The paper treats association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents the algorithm of attribute-index and uniform-design based multiobjective association rule mining with an evolutionary algorithm, abbreviated as IUARMMEA. It no longer requires a user-specified minimum support and minimum confidence, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance and can significantly reduce the number of comparisons and the time consumption. PMID:23766683
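The attribute-index strategy can be sketched as follows; the data representation and function names are illustrative assumptions:

```python
def build_index(db):
    """Scan the database once, mapping each attribute (item) to the set of
    transaction ids that contain it: the attribute index described above."""
    index = {}
    for tid, transaction in enumerate(db):
        for item in transaction:
            index.setdefault(item, set()).add(tid)
    return index

def rule_metrics(index, antecedent, consequent, n):
    """Support and confidence of rule A -> C computed from the index alone,
    with no further database scans (`n` is the number of transactions)."""
    rows_a = set.intersection(*(index.get(i, set()) for i in antecedent))
    rows_c = set.intersection(*(index.get(i, set()) for i in consequent))
    rows_ac = rows_a & rows_c
    support = len(rows_ac) / n
    confidence = len(rows_ac) / len(rows_a) if rows_a else 0.0
    return support, confidence
```

Every rule evaluated by the evolutionary search reuses the same index, so the cost per rule is a few set intersections instead of a full database pass.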
An evolutionary algorithm for the segmentation of muscles and bones of the lower limb.
NASA Astrophysics Data System (ADS)
López, Marco A.; Braidot, A.; Sattler, Aníbal; Schira, Claudia; Uriburu, E.
2016-04-01
In the field of medical image segmentation, muscle segmentation is a problem that has not been fully resolved yet. This is because the basic assumption of image segmentation, which asserts that a visual distinction should exist between the different structures to be identified, is violated: as the tissue composition of two different muscles is the same, it becomes extremely difficult to distinguish one from the other when they are adjacent. We have developed an evolutionary algorithm which selects the set and the sequence of morphological operators that best segment muscles and bones from an MRI image. The achieved results show that the developed algorithm presents average sensitivity values close to 75% in the segmentation of the different processed muscles and bones. It also presents average specificity values close to 93% for the same structures. Furthermore, the algorithm can identify muscles that are closely located along the path from their origin point to their insertions, with very low error values (below 7%).
Meta-Model Based Optimisation Algorithms for Robust Optimization of 3D Forging Sequences
Fourment, Lionel
2007-04-07
In order to handle costly and complex 3D metal forming optimization problems, we develop a new optimization algorithm that finds satisfactory solutions within fewer than 50 iterations (function evaluations) in the presence of local extrema. It is based on sequential approximation of the problem's objective function by the Meshless Finite Difference Method (MFDM). This adaptive meta-model takes gradient information into account when it is available, and can easily be extended to account for uncertainties in the optimization parameters. The new algorithm is first evaluated on analytic functions before being applied to a 3D forging benchmark: the preform tool shape optimization that minimizes the potential for fold formation during a two-step forging sequence.
Development of a real-time model based safety monitoring algorithm for the SSME
NASA Astrophysics Data System (ADS)
Norman, A. M.; Maram, J.; Coleman, P.; D'Valentine, M.; Steffens, A.
1992-07-01
A safety monitoring system for the SSME incorporating a real-time model of the engine has been developed for LeRC as a task of the LeRC Life Prediction for Rocket Engines contract, NAS3-25884. This paper describes the development of the algorithm and model to date, their capabilities and limitations, the results of simulation tests, lessons learned, and the plans for implementation and testing of the system.
A Model-Based Spike Sorting Algorithm for Removing Correlation Artifacts in Multi-Neuron Recordings
Chichilnisky, E. J.; Simoncelli, Eero P.
2013-01-01
We examine the problem of estimating the spike trains of multiple neurons from voltage traces recorded on one or more extracellular electrodes. Traditional spike-sorting methods rely on thresholding or clustering of recorded signals to identify spikes. While these methods can detect a large fraction of the spikes from a recording, they generally fail to identify synchronous or near-synchronous spikes: cases in which multiple spikes overlap. Here we investigate the geometry of failures in traditional sorting algorithms, and document the prevalence of such errors in multi-electrode recordings from primate retina. We then develop a method for multi-neuron spike sorting using a model that explicitly accounts for the superposition of spike waveforms. We model the recorded voltage traces as a linear combination of spike waveforms plus a stochastic background component of correlated Gaussian noise. Combining this measurement model with a Bernoulli prior over binary spike trains yields a posterior distribution for spikes given the recorded data. We introduce a greedy algorithm to maximize this posterior that we call “binary pursuit”. The algorithm allows modest variability in spike waveforms and recovers spike times with higher precision than the voltage sampling rate. This method substantially corrects cross-correlation artifacts that arise with conventional methods, and substantially outperforms clustering methods on both real and simulated data. Finally, we develop diagnostic tools that can be used to assess errors in spike sorting in the absence of ground truth. PMID:23671583
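The measurement model described here (a linear superposition of spike waveforms plus noise) admits a simple greedy residual-subtraction sketch. The toy version below assumes a single known waveform and an illustrative acceptance threshold; it is in the spirit of binary pursuit, not the authors' implementation:

```python
import numpy as np

# Greedy residual-subtraction sketch: repeatedly place a binary spike of a
# known template where it best explains the remaining residual, stopping
# when no placement scores above the threshold. Handles overlapping spikes
# that simple thresholding would merge.

def greedy_pursuit(trace, waveform, max_spikes=10, threshold=0.5):
    w = np.asarray(waveform, dtype=float)
    residual = np.asarray(trace, dtype=float).copy()
    spikes = []
    for _ in range(max_spikes):
        # template correlation with the residual, normalized by template energy
        scores = np.correlate(residual, w, mode="valid") / (w @ w)
        t = int(np.argmax(scores))
        if scores[t] < threshold:
            break
        spikes.append(t)
        residual[t:t + len(w)] -= w  # subtract the explained spike
    return sorted(spikes), residual
```

Each accepted spike is subtracted from the residual before the next search, which is what lets superimposed waveforms be resolved one at a time.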
XTALOPT version r7: An open-source evolutionary algorithm for crystal structure prediction
NASA Astrophysics Data System (ADS)
Lonie, David C.; Zurek, Eva
2011-10-01
A new version of XTALOPT, a user-friendly GPL-licensed evolutionary algorithm for crystal structure prediction, is available for download from the CPC library or the XTALOPT website, http://xtalopt.openmolecules.net. The new version supports four external geometry optimization codes (VASP, GULP, PWSCF, and CASTEP), as well as four queuing systems: PBS, SGE, SLURM, and “Local”. The local queuing system allows the geometry optimizations to be performed on the user's workstation if an external computational cluster is unavailable. Support for the Windows operating system has been added, and a Windows installer is provided. Numerous bug fixes and feature enhancements have been made in the new release as well.
Optimized sound diffusers based on sonic crystals using a multiobjective evolutionary algorithm.
Redondo, J; Sánchez-Pérez, J V; Blasco, X; Herrero, J M; Vorländer, M
2016-05-01
Sonic crystals have been demonstrated to be good candidates to substitute for conventional diffusers, overcoming the need for extremely thick structures when low frequencies have to be scattered; however, their performance is limited to a narrow band. In this work, multiobjective evolutionary algorithms are used to extend the bandwidth to the whole low-frequency range. The results show that diffusion can be significantly increased. Several cost functions are considered in the paper, on the one hand to illustrate the flexibility of the optimization, and on the other hand to demonstrate the problems associated with the use of certain cost functions. A study of the robustness of the optimized diffusers is also presented, introducing a parameter that can help to choose among the best candidates. Finally, the advantages of multiobjective optimization over conventional optimization are discussed. PMID:27250173
Identifying irregularly shaped crime hot-spots using a multiobjective evolutionary algorithm
NASA Astrophysics Data System (ADS)
Wu, Xiaolan; Grubesic, Tony H.
2010-12-01
Spatial cluster detection techniques are widely used in criminology, geography, epidemiology, and other fields. In particular, spatial scan statistics are popular and efficient techniques for detecting areas of elevated crime or disease events. The majority of spatial scan approaches attempt to delineate geographic zones by evaluating the significance of clusters using likelihood ratio statistics tested with the Poisson distribution. While this can be effective, many scan statistics give preference to circular clusters, diminishing their ability to identify elongated and/or irregular shaped clusters. Although adjusting the shape of the scan window can mitigate some of these problems, both the significance of irregular clusters and their spatial structure must be accounted for in a meaningful way. This paper utilizes a multiobjective evolutionary algorithm to find clusters with maximum significance while quantitatively tracking their geographic structure. Crime data for the city of Cincinnati are utilized to demonstrate the advantages of the new approach and highlight its benefits versus more traditional scan statistics.
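The likelihood-ratio evaluation mentioned above is, for the Poisson case, the standard Kulldorff scan statistic; a minimal sketch for one candidate zone, where `c` is the observed case count inside the zone, `C` the total case count, and `e` the expected count inside the zone under the null:

```python
import math

# Kulldorff-style Poisson likelihood-ratio statistic for one candidate zone
# (standard form, shown for context; the paper's MOEA searches over zones
# while also tracking their geographic structure).

def poisson_lr(c, C, e):
    if c <= e:  # only zones with elevated rates are of interest
        return 0.0
    return c * math.log(c / e) + (C - c) * math.log((C - c) / (C - e))
```

A scan method maximizes this statistic over candidate zones; the irregular-cluster problem arises because the shape of the maximizing zone is unconstrained.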
Duan, Hai-Bin; Xu, Chun-Fang; Xing, Zhi-Hui
2010-02-01
In this paper, a novel hybrid of the Artificial Bee Colony (ABC) algorithm and the Quantum Evolutionary Algorithm (QEA) is proposed for solving continuous optimization problems. ABC is adopted to increase the local search capacity as well as the randomness of the populations. In this way, the improved QEA can escape premature convergence and find the optimal value. To show the performance of the proposed hybrid QEA with ABC, a number of experiments are carried out on a set of well-known benchmark continuous optimization problems, and the results are compared with those of two other QEAs: the QEA with the classical crossover operation, and the QEA with a 2-crossover strategy. The experimental comparison demonstrates that the proposed hybrid ABC and QEA approach is feasible and effective for solving complex continuous optimization problems. PMID:20180252
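The QEA half of such a hybrid rests on two operations: observing qubit individuals into classical bit strings, and rotating each qubit toward the best solution found so far. A minimal sketch with an illustrative rotation angle (the ABC component is omitted):

```python
import math
import random

# Each "qubit" is an amplitude pair (alpha, beta) with alpha^2 + beta^2 = 1.
# Observation collapses it to a classical bit with P(1) = beta^2; the rotation
# gate nudges amplitudes toward the best bit string found so far.

def observe(qubits):
    return [1 if random.random() < b * b else 0 for _, b in qubits]

def rotate(qubits, solution, best, angle=0.05 * math.pi):
    out = []
    for (a, b), x, bx in zip(qubits, solution, best):
        # rotate toward the best bit only where the observed bit disagrees
        d = angle if bx > x else (-angle if bx < x else 0.0)
        out.append((a * math.cos(d) - b * math.sin(d),
                    a * math.sin(d) + b * math.cos(d)))
    return out
```

The rotation preserves the amplitude norm, so repeated updates bias sampling toward good solutions without ever fixing a bit outright, which is the mechanism ABC's extra randomness complements.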
NASA Astrophysics Data System (ADS)
Rocha, M. C.; Saraiva, J. T.
2012-10-01
The basic objective of Transmission Expansion Planning (TEP) is to schedule a number of transmission projects along an extended planning horizon, minimizing the network construction and operational costs while satisfying the requirement of delivering power safely and reliably to load centres along the horizon. The principle is quite simple, but the complexity of the problem and its impact on society turn TEP into a challenging issue. This paper describes a new approach to solving the dynamic TEP problem, based on an improved discrete integer version of the Evolutionary Particle Swarm Optimization (EPSO) meta-heuristic algorithm. The paper includes sections describing in detail the enhanced EPSO approach and the mathematical formulation of the TEP problem, including the objective function and the constraints, as well as a section devoted to the application of the developed approach to this problem. Finally, the use of the developed approach is illustrated using a case study based on the IEEE 24-bus, 38-branch test system.
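EPSO differs from classical particle swarm optimization by mutating its strategic weights (inertia, memory, cooperation) and by perturbing the global best each move. A sketch of one particle move under those two ideas, with illustrative mutation parameters (the paper's discrete integer version adds further machinery):

```python
import random

# One EPSO-style particle move, sketched. The three weights are mutated with
# Gaussian noise, and the cooperation term attracts the particle toward a
# randomly perturbed copy of the global best.

def epso_move(x, v, pbest, gbest, w, tau=0.2, sigma=0.1):
    wi, wm, wc = (max(0.0, wk + tau * random.gauss(0, 1)) for wk in w)
    new_v = [wi * vj + wm * (pb - xj) + wc * (random.gauss(gb, sigma) - xj)
             for xj, vj, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xj + vj for xj, vj in zip(x, new_v)]
    return new_x, new_v
```

Selection among the replicated, mutated particles (not shown) is what makes the scheme "evolutionary" rather than plain PSO.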
Alternating evolutionary pressure in a genetic algorithm facilitates protein model selection
Offman, Marc N; Tournier, Alexander L; Bates, Paul A
2008-01-01
Background: Automatic protein modelling pipelines are becoming ever more accurate; this has come hand in hand with an increasingly complicated interplay between all the components involved. Nevertheless, there are still potential improvements to be made in template selection, refinement, and protein model selection. Results: In the context of an automatic modelling pipeline, we analysed each step separately, revealing several non-intuitive trends, and explored a new strategy for protein conformation sampling using Genetic Algorithms (GA). We apply the concept of alternating evolutionary pressure (AEP), i.e., intermediate rounds within the GA runs where unrestrained, linear growth of the model populations is allowed. Conclusion: This approach improves the overall performance of the GA by allowing models to overcome local energy barriers. AEP enabled the selection of the best models in 40% of all targets, compared to 25% for a normal GA. PMID:18673557
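The AEP idea can be sketched in a few lines: ordinary generations apply selection at a fixed population size, while intermediate relaxed rounds allow unrestrained growth. The fitness and parameters below are toy stand-ins, not the pipeline's energy model:

```python
import random

# One AEP-style generation (toy sketch): relaxed rounds skip selection and
# let the population grow; normal rounds restore selection pressure, keeping
# only the `size` best individuals (lower fitness value = better).

def aep_step(pop, fitness, size, mutate, relaxed):
    children = [mutate(random.choice(pop)) for _ in range(len(pop))]
    pop = pop + children
    if not relaxed:
        pop = sorted(pop, key=fitness)[:size]
    return pop
```

Alternating a few relaxed rounds into an otherwise standard GA run lets temporarily worse variants survive long enough to cross a local energy barrier.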
NASA Technical Reports Server (NTRS)
Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)
2002-01-01
A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which, together with a common gridding approach, restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy, and design consistency.
Blind watermark algorithm on 3D motion model based on wavelet transform
NASA Astrophysics Data System (ADS)
Qi, Hu; Zhai, Lang
2013-12-01
With the continuous development of 3D vision technology, digital watermarking, as the best choice for copyright protection, has gradually fused with it. This paper proposes a blind watermarking scheme for 3D motion models based on the wavelet transform, loaded into the Vega real-time visual simulation system. First, the 3D model is put through an affine transform, and the distances from the centre of gravity to the vertices of the 3D object are taken in order to generate a one-dimensional discrete signal; then this signal undergoes a wavelet transform to change its frequency coefficients and embed the watermark, finally generating the watermarked 3D motion model. In the fixed affine space, robustness to translation, rotation, and scaling transforms is achieved. The results show that this approach performs well not only in robustness but also in watermark invisibility.
MOEA/D-ACO: a multiobjective evolutionary algorithm using decomposition and ant colony.
Ke, Liangjun; Zhang, Qingfu; Battiti, Roberto
2013-12-01
Combining ant colony optimization (ACO) and the multiobjective evolutionary algorithm (EA) based on decomposition (MOEA/D), this paper proposes a multiobjective EA, MOEA/D-ACO. Following other MOEA/D-like algorithms, MOEA/D-ACO decomposes a multiobjective optimization problem into a number of single-objective optimization problems. Each ant (i.e., agent) is responsible for solving one subproblem. All the ants are divided into a few groups, and each ant has several neighboring ants. An ant group maintains a pheromone matrix, and an individual ant has a heuristic information matrix. During the search, each ant also records the best solution found so far for its subproblem. To construct a new solution, an ant combines information from its group's pheromone matrix, its own heuristic information matrix, and its current solution. An ant checks the new solutions constructed by itself and its neighbors, and updates its current solution if it has found a better one in terms of its own objective. Extensive experiments have been conducted in this paper to study and compare MOEA/D-ACO with other algorithms on two sets of test problems. On the multiobjective 0-1 knapsack problem, MOEA/D-ACO outperforms the MOEA/D with conventional genetic operators and local search on all nine test instances. We also demonstrate that the heuristic information matrices in MOEA/D-ACO are crucial to its good performance on the knapsack problem. On the biobjective traveling salesman problem, MOEA/D-ACO performs much better than BicriterionAnt on all 12 test instances. We also evaluate the effects of grouping, neighborhood, and the location information of current solutions on the performance of MOEA/D-ACO. The work in this paper shows that the reactive search optimization scheme, i.e., the “learning while optimizing” principle, is effective in improving multiobjective optimization algorithms. PMID:23757576
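The decomposition step that MOEA/D-ACO inherits from MOEA/D turns each weight vector into a scalar subproblem; a minimal sketch using the standard Tchebycheff approach for two objectives:

```python
# Standard MOEA/D decomposition machinery, sketched: each subproblem i
# minimizes a Tchebycheff scalarization of the objective vector under its
# own weight vector, relative to the ideal point z*.

def tchebycheff(fvals, weights, ideal):
    # scalar value of one subproblem; minimization assumed
    return max(w * abs(f - z) for f, w, z in zip(fvals, weights, ideal))

def uniform_weights(n):
    # n evenly spread weight vectors for a biobjective problem
    return [(i / (n - 1), 1 - i / (n - 1)) for i in range(n)]
```

In MOEA/D-ACO, one ant owns each such subproblem and judges candidate solutions by exactly this scalar value.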
NASA Astrophysics Data System (ADS)
Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.
2015-12-01
This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and a problem with constraints that have less severe values on the constraint thresholds. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine if the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or
NASA Astrophysics Data System (ADS)
Franke, Timothy; Weadon, Tim; Ford, John; Garcia-Sanz, Mario
2015-10-01
Various forms of measurement errors limit telescope tracking performance in practice. A new method for identifying the correction coefficients for encoder interpolation error is developed. The algorithm corrects the encoder measurement by identifying a harmonic model of the system and using that model to compute the necessary correction parameters. The approach improves upon others by explicitly modeling the unknown dynamics of the structure and controller and by not requiring a separate system identification to be performed. Experience gained from pin-pointing the source of encoder error on the Green Bank Radio Telescope (GBT) is presented. Several tell-tale indicators of encoder error are discussed. Experimental data from the telescope, tested with two different encoders, are presented. Demonstration of the identification methodology on the GBT, as well as details of its implementation, is discussed. A root mean square tracking error reduction from 0.68 arc seconds to 0.21 arc seconds was achieved by changing encoders, and the error was further reduced to 0.10 arc seconds with the calibration algorithm. In particular, the ubiquity of this error source is shown, as is how, by careful correction, it is possible to go beyond the advertised accuracy of an encoder.
NASA Astrophysics Data System (ADS)
Liu, Jintao; Chen, Xi; Zhang, Xingnan; Hoagland, Kyle D.
2012-04-01
Recently developed hillslope storage dynamics theory can represent the essential physical behavior of a natural system by accounting explicitly for the plan shape of a hillslope in an elegant and simple way. As a result, this theory is promising for improving catchment-scale hydrologic modeling. In this study, grid digital elevation model (DEM) based algorithms for determining hillslope geometric characteristics (e.g., hillslope units and width functions in hillslope storage dynamics models) are presented. This study further develops the hillslope-partitioning method established by Fan and Bras (1998) by applying it on a grid network. On the basis of the hillslope unit derivation, a flow distance transform method (TD∞) is suggested in order to decrease the systematic error of grid DEM-based flow distance calculation caused by approximating streamlines with discrete flow directions. Hillslope width transfer functions are then derived to convert the probability density functions of flow distance into hillslope width functions. These algorithms are applied and evaluated on five abstract hillslopes, and detailed tests and analyses are carried out by comparing the derivation results with theoretical width functions. The results demonstrate that the TD∞ improves estimates of the flow distance and thus of the hillslope width function. When the proposed procedures are further applied in a natural catchment, we find that the natural hillslope width function can be fitted well by a Gaussian function. This finding is very important for applying the newly developed hillslope storage dynamics models in a real catchment.
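The width transfer function rests on a simple identity: if p(x) is the probability density of flow distance over a hillslope of area A, then the width function is w(x) = A·p(x), which integrates back to the area. A minimal numerical sketch (function and variable names are illustrative):

```python
import numpy as np

# Convert sampled per-cell flow distances into an empirical width function by
# scaling the flow-distance density with the hillslope area.

def width_function(flow_distances, cell_area, bins=20):
    dist = np.asarray(flow_distances, dtype=float)
    density, edges = np.histogram(dist, bins=bins, density=True)
    area = cell_area * dist.size          # hillslope area from its cells
    return area * density, edges          # w(x) integrates to the area
```

Here each DEM cell of the hillslope contributes one flow-distance sample, so the histogram density plays the role of the derived probability density function.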
NASA Astrophysics Data System (ADS)
Reed, P. M.; Kollat, J. B.
2005-12-01
This study demonstrates the effectiveness of a modified version of Deb's Non-Dominated Sorted Genetic Algorithm II (NSGAII), which the authors have named the Epsilon-Dominance Non-Dominated Sorted Genetic Algorithm II (Epsilon-NSGAII), at solving a four-objective long-term groundwater monitoring (LTM) design test case. The Epsilon-NSGAII incorporates prior theoretical competent evolutionary algorithm (EA) design concepts and epsilon-dominance archiving to improve the original NSGAII's efficiency, reliability, and ease of use. This algorithm eliminates much of the traditional trial-and-error parameterization associated with evolutionary multi-objective optimization (EMO) through epsilon-dominance archiving, dynamic population sizing, and automatic termination. The effectiveness and reliability of the new algorithm are compared to the original NSGAII as well as two other benchmark multi-objective evolutionary algorithms (MOEAs), the Epsilon-Dominance Multi-Objective Evolutionary Algorithm (Epsilon-MOEA) and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). These MOEAs were selected because they have been demonstrated to be highly effective at solving numerous multi-objective problems. The results presented in this study indicate superior performance of the Epsilon-NSGAII in terms of the hypervolume indicator, unary epsilon-indicator, and first-order empirical attainment function metrics. In addition, the runtime metric results indicate that the diversity and convergence dynamics of the Epsilon-NSGAII are competitive with, or superior to, those of the SPEA2, with both algorithms greatly outperforming the NSGAII and Epsilon-MOEA in terms of these metrics. The improvements in performance of the Epsilon-NSGAII over its parent algorithm, the NSGAII, demonstrate that epsilon-dominance archiving, dynamic population sizing with archive injection, and automatic termination greatly improve algorithm efficiency and reliability. In addition, the usability of
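Epsilon-dominance archiving, the key mechanism named above, grids the objective space into epsilon-sized boxes and keeps at most one solution per box, which bounds the archive size and guarantees a minimum spread among solutions. A minimal sketch, assuming minimization:

```python
import math

# Epsilon-dominance archive, sketched: a candidate is rejected if its box is
# dominated (or it loses within its own box); otherwise it displaces any
# archived points whose boxes it dominates or shares.

def box(f, eps):
    return tuple(math.floor(fi / ei) for fi, ei in zip(f, eps))

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def eps_archive_insert(archive, f, eps):
    b = box(f, eps)
    for kept in list(archive):
        kb = box(kept, eps)
        if dominates(kb, b) or (kb == b and not dominates(f, kept)):
            return archive        # rejected: box-dominated, or loses its box
        if dominates(b, kb) or kb == b:
            archive.remove(kept)  # f displaces an archived point
    archive.append(f)
    return archive
```

The epsilon vector is the user's statement of meaningful objective resolution, which is why this device replaces much of the usual trial-and-error parameterization.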
Bowen, J.; Dozier, G.
1996-12-31
This paper introduces a hybrid evolutionary hill-climbing algorithm that quickly solves Constraint Satisfaction Problems (CSPs). This hybrid uses opportunistic arc and path revision in an interleaved fashion to reduce the size of the search space and to recognize when to quit if a CSP is based on an inconsistent constraint network. This hybrid outperforms a well-known hill-climbing algorithm, the Iterative Descent Method, on a test suite of 750 randomly generated CSPs.
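Arc revision of the kind this hybrid interleaves with hill climbing is, in its generic form, AC-3: repeatedly prune domain values that have no support in a neighboring domain, and report inconsistency as soon as a domain empties. A sketch for binary constraints (the hybrid's opportunistic scheduling of revisions is not modeled here):

```python
from collections import deque

# AC-3-style arc revision: prune unsupported values; an emptied domain
# proves the constraint network inconsistent, so search can quit early.

def revise(domains, constraint, x, y):
    removed = False
    for vx in list(domains[x]):
        if not any(constraint(vx, vy) for vy in domains[y]):
            domains[x].remove(vx)
            removed = True
    return removed

def ac3(domains, arcs, constraint):
    queue = deque(arcs)
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraint, x, y):
            if not domains[x]:
                return False  # inconsistent network detected
            queue.extend((z, w) for z, w in arcs if w == x and z != y)
    return True
```

Shrinking domains this way both narrows the hill climber's search space and supplies the "know when to quit" signal the abstract describes.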
Geometric model-based fitting algorithm for orientation-selective PELDOR data
NASA Astrophysics Data System (ADS)
Abdullin, Dinar; Hagelueken, Gregor; Hunter, Robert I.; Smith, Graham M.; Schiemann, Olav
2015-03-01
Pulsed electron-electron double resonance (PELDOR or DEER) spectroscopy is frequently used to determine distances between spin centres in biomacromolecular systems. Experiments where mutual orientations of the spin pair are selectively excited provide so-called orientation-selective PELDOR data. Such data are characterised by the orientation dependence of the modulation depth parameter and of the dipolar frequencies. This dependence has to be taken into account in the data analysis in order to extract distance distributions accurately from the experimental time traces. In this work, a fitting algorithm for such data analysis is discussed. The approach is tested on PELDOR data sets from the literature and compared with the previous results.
An estimation of generalized Bradley-Terry models based on the em algorithm.
Fujimoto, Yu; Hino, Hideitsu; Murata, Noboru
2011-06-01
The Bradley-Terry model is a statistical representation of preference or ranking data based on pairwise comparison results of items. For estimation of the model, several methods based on the sum of weighted Kullback-Leibler divergences have been proposed in various contexts. The purpose of this letter is to interpret an estimation mechanism of the Bradley-Terry model from the viewpoint of flatness, a fundamental notion used in information geometry. Based on this point of view, a new estimation method is proposed within a framework of the em algorithm. The proposed method differs in its objective function from that of conventional methods, especially in treating unobserved comparisons, and it is consistently interpreted in a probability simplex. An estimation method with weight adaptation is also proposed from the viewpoint of sensitivity. Experimental results show that the proposed method works appropriately and that weight adaptation improves the accuracy of the estimates. PMID:21395441
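For context, the classic minorization-maximization update for standard Bradley-Terry strengths is sketched below; the letter's information-geometric em-algorithm estimator differs from this conventional method, especially in its treatment of unobserved comparisons:

```python
# Classic MM iteration for Bradley-Terry strengths p_i, where the model says
# P(i beats j) = p_i / (p_i + p_j). wins[i][j] = times item i beat item j.
# (Conventional estimator shown for context; not the letter's em method.)

def bradley_terry(wins, n_items, n_iter=200):
    p = [1.0] * n_items
    for _ in range(n_iter):
        new = []
        for i in range(n_items):
            w_i = sum(wins[i])  # total wins of item i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n_items) if j != i)
            new.append(w_i / denom if denom else p[i])
        total = sum(new)
        p = [x * n_items / total for x in new]  # fix scale (identifiability)
    return p
```

The strengths are only identified up to a common scale, hence the normalization at the end of every sweep.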
Double-layer evolutionary algorithm for distributed optimization of particle detection on the Grid
NASA Astrophysics Data System (ADS)
Padée, Adam; Kurek, Krzysztof; Zaremba, Krzysztof
2013-08-01
Reconstruction of particle tracks from information collected by position-sensitive detectors is an important procedure in HEP experiments. It is usually controlled by a set of numerical parameters which have to be optimized manually. This paper proposes an automatic approach to this task, utilizing an evolutionary algorithm (EA) operating on both real-valued and binary representations. Because of the computational complexity of the task, a special distributed architecture of the algorithm is proposed, designed to run in a grid environment. It is a two-level hierarchical hybrid, utilizing an asynchronous master-slave EA at the level of clusters and an island-model EA at the level of the grid. The technical aspects of using production grid infrastructure are covered, including communication protocols on both levels. The paper also deals with the problem of heterogeneity of the resources, presenting efficiency tests on a benchmark function. These tests confirm that even relatively small islands (clusters) can be beneficial to the optimization process when connected to larger ones. Finally, a real-life usage example is presented: optimization of track reconstruction in the Large Angle Spectrometer of the NA-58 COMPASS experiment at CERN, using a sample of Monte Carlo simulated data. The overall reconstruction efficiency gain achieved by the proposed method is more than 4% compared to the manually optimized parameters.
NASA Astrophysics Data System (ADS)
Sarkar, K.; Topsakal, M.; Wentzcovitch, R. M.
2015-12-01
We attempt to achieve the accuracy of the full-potential linearized augmented-plane-wave (FLAPW) method, as implemented in the WIEN2k code, at the favorable computational efficiency of the projector augmented wave (PAW) method for ab initio calculations of solids. For decades, PAW datasets have been generated by manually choosing their parameters and by visually inspecting the logarithmic derivatives, partial waves, and projector basis set. In addition to being tedious and error-prone, this procedure is inadequate because it is impractical to explore the full parameter space manually: an infinite number of PAW parameter sets for a given augmentation radius can be generated while maintaining all the constraints on logarithmic derivatives and basis sets, and performance verification of all plausible solutions against FLAPW is likewise impractical. Here we report the development of a hybrid algorithm to construct optimized PAW basis sets that can closely reproduce FLAPW results from zero to ultra-high pressures. The approach applies evolutionary computing (EC) to generate optimum PAW parameter sets using the ATOMPAW code. We have used the Quantum ESPRESSO distribution to generate equations of state (EOSs), which are compared with WIEN2k EOSs set as targets. Softer PAW potentials that reproduce FLAPW EOSs yet more closely can be found with this method. We demonstrate its working principles and workability by optimizing PAW basis functions for carbon, magnesium, aluminum, silicon, calcium, and iron atoms. The algorithm requires minimal user intervention, in the sense that no visual inspection of logarithmic derivatives or projector functions is required.
NASA Astrophysics Data System (ADS)
Bonissone, Stefano R.
2001-11-01
There are many approaches to solving multi-objective optimization problems using evolutionary algorithms. We need to select methods for representing and aggregating preferences, as well as strategies for searching multi-dimensional objective spaces. First we suggest the use of linguistic variables to represent preferences and the use of fuzzy rule systems to implement tradeoff aggregations. After a review of alternative EA methods for multi-objective optimization, we explore the use of multi-sexual genetic algorithms (MSGA). In using an MSGA, we need to modify certain parts of the GA, namely the selection and crossover operations. The selection operator groups solutions according to their gender tag to prepare them for crossover. The crossover is modified by appending a gender tag at the end of the chromosome. We use single- and double-point crossovers. We determine the gender of the offspring by the amount of genetic material provided by each parent: the parent that contributed the most to the creation of a specific offspring determines the gender that the offspring will inherit. This is still a work in progress, and in the conclusion we examine many future extensions and experiments.
Application of a multi-objective evolutionary algorithm to the spacecraft stationkeeping problem
NASA Astrophysics Data System (ADS)
Myers, Philip L.; Spencer, David B.
2016-10-01
Satellite operations are becoming an increasingly private industry, requiring increased profitability. Efficient and safe operation of satellites in orbit will ensure longer-lasting and more profitable satellite services. This paper focuses on the use of a multi-objective evolutionary algorithm to schedule the maneuvers of a hypothetical satellite operating at geosynchronous altitude, seeking to minimize both the propellant consumed by stationkeeping maneuvers and the time the satellite is displaced from its desired orbital plane. Minimizing the time out of place increases operational availability, and minimizing propellant usage allows the spacecraft to operate longer. North-South stationkeeping was studied in this paper through a set of orbit inclination change maneuvers each year. Two cases for the maximum number of maneuvers to be executed were considered, with four and five maneuvers per year. The results delivered by the algorithm provide maneuver schedules which require 40-100 m/s of total Δv for two years of operation, with the satellite maintaining its orbital plane to within 0.1° between 84% and 96% of the two years being modeled.
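The propellant objective here is driven by the impulsive plane-change cost; a sketch of the standard formula, assuming a circular geosynchronous orbit (constants rounded, shown for context rather than taken from the paper):

```python
import math

# Impulsive plane-change cost: dv = 2 * v * sin(delta_i / 2), with v the
# circular orbital speed. Standard astrodynamics, assuming a circular GEO.

MU = 398600.4418   # km^3/s^2, Earth's gravitational parameter
R_GEO = 42164.0    # km, geosynchronous orbit radius

def inclination_dv(delta_i_deg):
    v = math.sqrt(MU / R_GEO)  # circular orbital speed, ~3.07 km/s
    return 2.0 * v * math.sin(math.radians(delta_i_deg) / 2.0)  # km/s
```

For an annual inclination correction on the order of 0.9°, typical of GEO North-South drift, this gives roughly 48 m/s per year, consistent in magnitude with the 40-100 m/s over two years reported above.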
Combining evolutionary algorithms with oblique decision trees to detect bent double galaxies
Cantu-Paz, E; Kamath, C
2000-06-22
Decision trees have long been popular in classification as they use simple and easy-to-understand tests at each node. Most variants of decision trees test a single attribute at a node, leading to axis-parallel trees, where the test results in a hyperplane which is parallel to one of the dimensions in the attribute space. These trees can be rather large and inaccurate in cases where the concept to be learnt is best approximated by oblique hyperplanes. In such cases, it may be more appropriate to use an oblique decision tree, where the decision at each node is a linear combination of the attributes. Oblique decision trees have not gained wide popularity in part due to the complexity of constructing good oblique splits and the tendency of existing splitting algorithms to get stuck in local minima. Several alternatives have been proposed to handle these problems including randomization in conjunction with deterministic hill climbing and the use of simulated annealing. In this paper, they use evolutionary algorithms (EAs) to determine the split. EAs are well suited for this problem because of their global search properties, their tolerance to noisy fitness evaluations, and their scalability to large dimensional search spaces. They demonstrate the technique on a practical problem from astronomy, namely, the classification of galaxies with a bent-double morphology, and describe their experiences with several split evaluation criteria.
Deconvolution of γ-spectra variably affected by space radiation using an evolutionary algorithm
NASA Astrophysics Data System (ADS)
McClanahan, Timothy P.; Loew, Murray H.; Trombka, Jacob I.; Evans, Larry G.
2007-09-01
An evolution strategy (ES) for automated deconvolution of γ-ray spectra is described that fits the peak shape morphologies typical of spectra acquired from variably radiation damaged γ-ray detectors. Space radiation effects significantly impair semiconductor γ-ray detector efficiency and induce variable degrees of nuclide peak broadening and distortion in spectra. Mars Odyssey Gamma-Ray Spectrometer data are used to demonstrate the applicability of the described algorithms for three degrees of radiation damage. The ES methods accurately identify and quantify the discrete set of nuclide peaks in an arbitrary spectrum using a nuclide library. A novel method of constraining peak low energy tails, broadened by detector radiation damage, reduces the peak shape model from six parameters to four, yielding a significant reduction in model complexity. Benefits of this approach include the simple implementation of highly specific parameter constraints that appropriately define feasible solution spaces. The method describes peak low energy tailing as a continuum of tailing curves representing increasing degrees of radiation damage, addressable by a single real-valued parameter. Results illustrate the use of this parameter as a simple measure of relative radiation dosimetry. Analysis of degraded spectra indicates that the method is sensitive to both low and high levels of space radiation damage before and after MO-GRS detector annealings.
NASA Astrophysics Data System (ADS)
Bonissone, Stefano R.; Subbu, Raj
2002-12-01
In multi-objective optimization (MOO) problems we need to optimize many possibly conflicting objectives. For instance, in manufacturing planning we might want to minimize the cost and production time while maximizing the product's quality. We propose the use of evolutionary algorithms (EAs) to solve these problems. Solutions are represented as individuals in a population and are assigned scores according to a fitness function that determines their relative quality. Strong solutions are selected for reproduction, and pass their genetic material to the next generation. Weak solutions are removed from the population. The fitness function evaluates each solution and returns a related score. In MOO problems, this fitness function is vector-valued, i.e. it returns a value for each objective. Therefore, instead of a global optimum, we try to find the Pareto-optimal or non-dominated frontier. We use multi-sexual EAs with as many genders as optimization criteria. We have created new crossover and gender assignment functions, and experimented with various parameters to determine the best setting (yielding the highest number of non-dominated solutions). These experiments are conducted using a variety of fitness functions, and the algorithms are later evaluated on a flexible manufacturing problem with total cost and time minimization objectives.
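The non-dominated frontier described above reduces to a simple pairwise dominance test. A minimal sketch with both objectives minimized; the (cost, production time) values are hypothetical:

```python
def dominates(a, b):
    # a dominates b if a is no worse in every objective and strictly
    # better in at least one (both objectives minimized here).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    # Keep exactly those solutions not dominated by any other solution.
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical (cost, production_time) pairs for candidate manufacturing plans.
plans = [(10, 5), (8, 7), (12, 4), (9, 6), (11, 6), (8, 8)]
front = pareto_front(plans)
```

Here (11, 6) and (8, 8) drop out because (9, 6) and (8, 7) are at least as good in both objectives and strictly better in one; the remaining four plans are the incomparable trade-offs an EA would present to the decision maker.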
NASA Astrophysics Data System (ADS)
Wang, J.; Cai, X.
2007-12-01
A water resources system can be defined as a large-scale spatial system, within which a distributed ecological system interacts with the stream network and the groundwater system. In water resources management, the causative factors, and hence the solutions to be developed, have a significant spatial dimension. This motivates a modeling analysis of water resources management within a spatial analytical framework, where data are usually geo-referenced and in the form of a map. One of the important functions of geographic information systems (GIS) is to identify spatial patterns of environmental variables. The role of spatial patterns in water resources management has been well established in the literature, particularly regarding how to design better spatial patterns for satisfying the designated objectives of water resources management. Evolutionary algorithms (EA) have been demonstrated to be successful in solving complex optimization models for water resources management due to their flexibility to incorporate complex simulation models in the optimal search procedure. The idea of combining GIS and EA motivates the development and application of spatial evolutionary algorithms (SEA). An SEA assimilates spatial information into the EA, and even changes the representation and operators of the EA. In an EA used for water resources management, the mathematical optimization model should be modified to account for spatial patterns; however, spatial patterns are usually implicit, and it is difficult to impose appropriate patterns on spatial data. It is also difficult to express complex spatial patterns through explicit constraints included in the EA. GIS can help identify the spatial linkages and correlations based on the spatial knowledge of the problem. These linkages are incorporated in the fitness function for the preference of the compatible vegetation distribution. Unlike a regular GA for spatial models, the SEA employs a special hierarchical hyper-population and spatial genetic operators
NASA Astrophysics Data System (ADS)
Manzano-Agugliaro, F.; San-Antonio-Gómez, C.; López, S.; Montoya, F. G.; Gil, C.
2013-08-01
When historical map data are compared with modern cartography, the old map coordinates must be transformed to the current system. However, historical data often exhibit heterogeneous quality. In calculating the transformation parameters between the historical and modern maps, it is often necessary to discard highly uncertain data. An optimal balance between the objectives of minimising the transformation error and eliminating as few points as possible can be achieved by generating a Pareto front of solutions using evolutionary genetic algorithms. The aim of this paper is to assess the performance of evolutionary algorithms in determining the accuracy of historical maps in regard to modern cartography. When applied to the 1787 Tomas Lopez map, the use of evolutionary algorithms reduces the linear error by 40% while eliminating only 2% of the data points. The main conclusion of this paper is that evolutionary algorithms provide a promising alternative for the transformation of historical map coordinates and determining the accuracy of historical maps in regard to modern cartography, particularly when the positional quality of the data points used cannot be assured.
A possibilistic approach to rotorcraft design through a multi-objective evolutionary algorithm
NASA Astrophysics Data System (ADS)
Chae, Han Gil
Most of the engineering design processes in use today in the field may be considered as a series of successive decision making steps. The decision maker uses information at hand, determines the direction of the procedure, and generates information for the next step and/or other decision makers. However, the information is often incomplete, especially in the early stages of the design process of a complex system. As the complexity of the system increases, uncertainties eventually become unmanageable using traditional tools. In such a case, the tools and analysis values need to be "softened" to account for the designer's intuition. One of the methods that deals with issues of intuition and incompleteness is possibility theory. Through the use of possibility theory coupled with fuzzy inference, the uncertainties estimated by the intuition of the designer are quantified for design problems. By involving quantified uncertainties in the tools, the solutions can represent a possible set, instead of a crisp spot, for predefined levels of certainty. From a different point of view, it is a well known fact that engineering design is a multi-objective problem or a set of such problems. The decision maker aims to find satisfactory solutions, sometimes compromising the objectives that conflict with each other. Once the candidates of possible solutions are generated, a satisfactory solution can be found by various decision-making techniques. A number of multi-objective evolutionary algorithms (MOEAs) have been developed, and can be found in the literature, which are capable of generating alternative solutions and evaluating multiple sets of solutions in one single execution of an algorithm. One of the MOEA techniques that has been proven to be very successful for this class of problems is the strength Pareto evolutionary algorithm (SPEA) which falls under the dominance-based category of methods. The Pareto dominance that is used in SPEA, however, is not enough to account for the
Chen, Wei-Chen; Maitra, Ranjan
2011-01-01
We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups, under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat faster tune to the EM folk song provided by the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results from our simulation experiments show improved performance in terms of both number of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas
2016-02-01
In the current energy scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmentally friendly power production, especially for stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gas emissions. However, the main limitations to their diffusion into the mass market are high maintenance and production costs and short lifetime. To improve these aspects, current research activity focuses on the development of robust and generalizable diagnostic techniques aimed at detecting and isolating faults within the entire system (i.e. SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with a consequent increase in lifetime and reduction in maintenance costs. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and validated by means of experimental induction of faulty states in controlled conditions.
A simple model based magnet sorting algorithm for planar hybrid undulators
Rakowsky, G.
2010-05-23
Various magnet sorting strategies have been used to optimize undulator performance, ranging from intuitive pairing of high- and low-strength magnets, to full 3D FEM simulation with 3-axis Helmholtz coil magnet data. In the extreme, swapping magnets in a full field model to minimize trajectory wander and rms phase error can be time consuming. This paper presents a simpler approach, extending the field error signature concept to obtain trajectory displacement, kick angle and phase error signatures for each component of magnetization error from a Radia model of a short hybrid-PM undulator. We demonstrate that steering errors and phase errors are essentially decoupled and scalable from measured X, Y and Z components of magnetization. Then, for any given sequence of magnets, rms trajectory and phase errors are obtained from simple cumulative sums of the scaled displacements and phase errors. The cost function (a weighted sum of these errors) is then minimized by swapping magnets, using one's favorite optimization algorithm. This approach was applied recently at NSLS to a short in-vacuum undulator, which required no subsequent trajectory or phase shimming. Trajectory and phase signatures are also obtained for some mechanical errors, to guide 'virtual shimming' and specifying mechanical tolerances. Some simple inhomogeneities are modeled to assess their error contributions.
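The sorting loop described above (per-magnet error signatures accumulated by cumulative sums, with a weighted cost minimized by swapping magnets) can be sketched as follows. This is an illustrative hill-climbing variant on synthetic signature data, not the Radia-based procedure; the weights, sequence length, and iteration budget are hypothetical:

```python
import random

def rms(xs):
    return (sum(x * x for x in xs) / len(xs)) ** 0.5

def cost(order, kick, phase, w_traj=1.0, w_phase=1.0):
    # Trajectory and phase errors as simple cumulative sums of the
    # per-magnet kick-angle and phase signatures along the sequence.
    traj, ph, t, p = [], [], 0.0, 0.0
    for i in order:
        t += kick[i]
        p += phase[i]
        traj.append(t)
        ph.append(p)
    return w_traj * rms(traj) + w_phase * rms(ph)

def sort_magnets(kick, phase, iters=2000, seed=0):
    # Greedy swap search: try a random pair swap, keep it only if cost drops.
    rng = random.Random(seed)
    n = len(kick)
    order = list(range(n))
    best = cost(order, kick, phase)
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        order[i], order[j] = order[j], order[i]
        c = cost(order, kick, phase)
        if c < best:
            best = c
        else:
            order[i], order[j] = order[j], order[i]   # revert the swap
    return order, best

rng = random.Random(42)
kick = [rng.gauss(0, 1) for _ in range(16)]    # hypothetical signatures
phase = [rng.gauss(0, 1) for _ in range(16)]
order, final_cost = sort_magnets(kick, phase)
```

Because each cost evaluation is only a pair of running sums rather than a full field model, thousands of trial swaps are cheap; any preferred optimizer (simulated annealing, a GA) can replace the greedy acceptance rule.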
Energy efficient model based algorithm for control of building HVAC systems.
Kirubakaran, V; Sahu, Chinmay; Radhakrishnan, T K; Sivakumaran, N
2015-11-01
Energy efficient designs are receiving increasing attention in various fields of engineering. Heating, ventilation and air conditioning (HVAC) control system designs involve improved energy usage with an acceptable relaxation in thermal comfort. In this paper, real time data from a building HVAC system provided by BuildingLAB is considered. A resistor-capacitor (RC) framework representing the thermal dynamics of the building is estimated using a particle swarm optimization (PSO) algorithm. With thermal comfort (deviation of room temperature from the required temperature) and an energy measure (Ecm) as objective costs, an explicit MPC design for this building model is executed based on a state space representation of the supply water temperature (input)/room temperature (output) dynamics. The controllers are subjected to servo tracking, and an external disturbance (ambient temperature) is supplied from the real time data during closed loop control. The control strategies are ported to a PIC32mx series microcontroller platform. The building model is implemented in MATLAB, and hardware in loop (HIL) testing of the strategies is executed over a USB port. Results indicate that, compared to traditional proportional integral (PI) controllers, the explicit MPCs improve both energy efficiency and thermal comfort significantly. PMID:25869418
Dietrich, Arne; Haider, Hilde
2015-08-01
Creative thinking is arguably the pinnacle of cerebral functionality. Like no other mental faculty, it has been omnipotent in transforming human civilizations. Probing the neural basis of this most extraordinary capacity, however, has been doggedly frustrated. Despite a flurry of activity in cognitive neuroscience, recent reviews have shown that there is no coherent picture emerging from the neuroimaging work. Based on this, we take a different route and apply two well established paradigms to the problem. First is the evolutionary framework that, despite being part and parcel of creativity research, has not informed experimental work in cognitive neuroscience. Second is the emerging prediction framework that recognizes predictive representations as an integrating principle of all cognition. We show here how the prediction imperative revealingly synthesizes a host of new insights into the way brains process variation-selection thought trials and present a new neural mechanism for the partial sightedness in human creativity. Our ability to run offline simulations of expected future environments and action outcomes can account for some of the characteristic properties of cultural evolutionary algorithms running in brains, such as degrees of sightedness, the formation of scaffolds to jump over unviable intermediate forms, or how fitness criteria are set for a selection process that is necessarily hypothetical. Prospective processing in the brain also sheds light on how human creating and designing - as opposed to biological creativity - can be accompanied by intentions and foresight. This paper raises questions about the nature of creative thought that, as far as we know, have never been asked before. PMID:25304474
NASA Astrophysics Data System (ADS)
Gair, Jonathan R.; Porter, Edward K.
2009-11-01
We describe a hybrid evolutionary algorithm that can simultaneously search for multiple supermassive black hole binary (SMBHB) inspirals in LISA data. The algorithm mixes evolutionary computation, Metropolis-Hastings methods and Nested Sampling. The inspiral of SMBHBs presents an interesting problem for gravitational wave data analysis since, due to the LISA response function, the sources have a bi-modal sky solution. We show here that it is possible not only to detect multiple SMBHBs in the data stream, but also to investigate simultaneously all the various modes of the global solution. In all cases, the algorithm returns parameter determinations within 5σ (as estimated from the Fisher matrix) of the true answer, for both the actual and antipodal sky solutions.
NASA Astrophysics Data System (ADS)
Tom, Brian C.
Intensity Modulated Radiation Therapy (IMRT) has enjoyed success in the clinic by achieving dose escalation to the target while sparing nearby critical structures. For DMLC plans, regularization is introduced in order to smooth the fluence maps. In this dissertation, regularization is used to smooth the fluence profiles. Since SMLC plans have a limited number of intensity levels, smoothing is not a problem. However, in many treatment planning systems, the plans are optimized with beam weights that are continuous; only after the optimization is complete are the fluence maps quantized. This dissertation will study the effects, if any, of quantizing the beam weights. In order to study both the smoothing of DMLC plans and the quantization of SMLC plans, a multi-objective evolutionary algorithm is employed as the optimization method. The main advantage of using these stochastic algorithms is that the beam weights can be represented either as binary or real strings. Clearly, a binary representation is suited for SMLC delivery (discrete intensity levels), while a real representation is more suited for DMLC. Further, in the case of real beam weights, multi-objective evolutionary algorithms can handle conflicting objective functions very well. In fact, regularization can be thought of as having two competing objectives: to maintain fidelity to the data, and to smooth the data. The main disadvantage of regularization is the need to specify the regularization parameter, which controls how important the two objectives are relative to one another. Multi-objective evolutionary algorithms do not need such a parameter. In addition, such algorithms yield a set of solutions, each solution representing differing importance factors of the two (or more) objective functions. Multi-objective evolutionary algorithms can thus be used to study the effects of quantizing the beam weights for SMLC delivery systems as well as studying how regularization can reduce the difference between the
Evolutionary Design of one-dimensional Rule Changing cellular automata using genetic algorithms
NASA Astrophysics Data System (ADS)
Yun, Wu; Kanoh, Hitoshi
In this paper we propose a new method to obtain transition rules of one-dimensional two-state cellular automata (CAs) using genetic algorithms (GAs). CAs have the advantages of producing complex systems from the interaction of simple elements, and have attracted increased research interest. However, the difficulty of designing CAs' transition rules to perform a particular task has severely limited their applications. The evolutionary design of CA rules has been studied by the EVCA group in detail. A GA was used to evolve CAs for two tasks: density classification and synchronization problems. That GA was shown to have discovered rules that gave rise to sophisticated emergent computational strategies. Sipper has studied a cellular programming algorithm for 2-state non-uniform CAs, in which each cell may contain a different rule. Meanwhile, Land and Belew proved that the perfect two-state rule for performing the density classification task does not exist. However, Fukś showed that a pair of human written rules performs the task perfectly when the size of the neighborhood is one. In this paper, we consider a pair of rules and the number of rule iterations as a chromosome, whereas the EVCA group considers a rule as a chromosome. The present method is meant to reduce the complexity of a given problem by dividing the problem into smaller ones and assigning a distinct rule to each one. Experimental results for the two tasks show that our method is more efficient than a conventional method. Some of the obtained rules agree with the human written rules shown by Fukś. We also grouped 1000 rules with high fitness into 4 classes according to Langton's λ parameter. The rules obtained by the proposed method belong to Class-I, II, III or IV, whereas most of the rules obtained by the conventional method belong to Class-IV only. This result shows that the combination of simple rules can perform complex tasks.
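As an illustration of the chromosome described above (a pair of rules plus an iteration count), the sketch below evaluates such a chromosome on the density classification task. The lattice size, trial count, and the settling budget given to the second rule are hypothetical choices; the pair (184, 232) is the human written rule pair attributed to Fukś, in which traffic rule 184 spreads the cells out and majority rule 232 then absorbs them into the uniform state:

```python
import random

def ca_step(state, rule):
    # One synchronous update of an elementary (radius-1) CA on a periodic ring.
    n = len(state)
    return [(rule >> (4 * state[(i - 1) % n] + 2 * state[i] + state[(i + 1) % n])) & 1
            for i in range(n)]

def classify(state, chrom):
    # Chromosome = (rule_a, rule_b, steps_a): iterate rule_a for steps_a steps,
    # then let rule_b settle for one sweep per cell (a hypothetical budget).
    rule_a, rule_b, steps_a = chrom
    for _ in range(steps_a):
        state = ca_step(state, rule_a)
    for _ in range(len(state)):
        state = ca_step(state, rule_b)
    return state

def fitness(chrom, trials=30, n=21, seed=0):
    # Fraction of random initial lattices driven to the uniform majority state.
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        state = [rng.randint(0, 1) for _ in range(n)]
        target = 1 if 2 * sum(state) > n else 0
        ok += all(c == target for c in classify(state, chrom))
    return ok / trials

# Fuks' human written pair: rule 184 followed by rule 232.
quality = fitness((184, 232, 21))
```

In a GA over such chromosomes, `fitness` would drive selection; in this setup the (184, 232) pair classifies every random trial correctly, consistent with the perfect classification result cited in the abstract, while single evolved rules cannot.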
NASA Astrophysics Data System (ADS)
Asouti, V. G.; Giannakoglou, K. C.
2012-07-01
This article presents a solution method to the unit commitment problem with probabilistic unit failures and repairs, which is based on evolutionary algorithms and Monte Carlo simulations. Regarding the latter, thousands of availability-unavailability trial time patterns along the scheduling horizon are generated. The objective function to be minimised is the expected total operating cost, computed after adapting any candidate solution, i.e. any series of generating/non-generating (ON/OFF) unit states, to the availability-unavailability patterns and performing evaluations by considering fuel, start-up and shutdown costs as well as the cost for buying electricity from external resources, if necessary. The proposed method introduces a new efficient chromosome representation: the decision variables are integer IDs corresponding to the binary-to-decimal converted ON/OFF (1/0) scenarios that cover the demand in each hour. In contrast to previous methods using binary strings as chromosomes, the new chromosome must be penalised only if any of the constraints regarding start-up, shutdown and ramp times cannot be met, chromosome repair is avoided and, consequently, the dispatch problems are solved once in the preparatory phase instead of during the evolution. For all these reasons, with or without probabilistic outages, the proposed algorithm has much lower CPU cost. In addition, if probabilistic outages are taken into account, a hierarchical evaluation scheme offers extra noticeable gain in CPU cost: the population members are approximately pre-evaluated using a small 'representative' set of the Monte Carlo simulations and only a few top population members undergo evaluations through the full Monte Carlo simulations. The hierarchical scheme makes the proposed method about one order of magnitude faster than its conventional counterpart.
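The binary-to-decimal chromosome encoding described above can be sketched as follows: for each hour, only the ON/OFF unit combinations that cover demand are enumerated and stored by their integer IDs, so a chromosome of per-hour indices decodes to a demand-covering schedule by construction, without repair. The unit capacities and hourly demands are hypothetical:

```python
from itertools import product

capacities = [100, 80, 50]      # hypothetical generating unit capacities (MW)
demand = [120, 200, 90]         # hypothetical hourly demand (MW)

def feasible_states(capacity, load):
    # All ON/OFF combinations whose total capacity covers the load,
    # each encoded as its binary-to-decimal converted integer ID.
    states = []
    for bits in product((0, 1), repeat=len(capacity)):
        if sum(c for c, b in zip(capacity, bits) if b) >= load:
            states.append(sum(b << i for i, b in enumerate(reversed(bits))))
    return states

# One gene per hour: an integer index into that hour's feasible-state list.
per_hour = [feasible_states(capacities, d) for d in demand]
chromosome = [0, 0, 0]                      # e.g. first feasible ID each hour
schedule = [per_hour[h][g] for h, g in enumerate(chromosome)]
```

With this representation, penalties are needed only for inter-hour constraints (start-up, shutdown, ramp times); the per-hour demand constraint can never be violated, which is what removes the chromosome-repair step from the evolution loop.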
NASA Astrophysics Data System (ADS)
Holdsworth, C. H.; Corwin, D.; Stewart, R. D.; Rockne, R.; Trister, A. D.; Swanson, K. R.; Phillips, M.
2012-12-01
We demonstrate a patient-specific method of adaptive IMRT treatment for glioblastoma using a multiobjective evolutionary algorithm (MOEA). The MOEA generates spatially optimized dose distributions using an iterative dialogue between the MOEA and a mathematical model of tumor cell proliferation, diffusion and response. Dose distributions optimized on a weekly basis using biological metrics have the potential to substantially improve and individualize treatment outcomes. Optimized dose distributions were generated using three different decision criteria for the tumor and compared with plans utilizing a standard dose of 1.8 Gy/fraction to the CTV (T2-visible MRI region plus a 2.5 cm margin). The sets of optimal dose distributions generated using the MOEA approach the Pareto front (the set of IMRT plans that delineate optimal tradeoffs amongst the clinical goals of tumor control and normal tissue sparing). In simulated results, MOEA optimized doses demonstrated superior performance as judged by the three biological metrics. The predicted number of reproductively viable cells 12 weeks after treatment was found to be the best target objective for use in the MOEA.
Analysis of high resolution FTIR spectra from synchrotron sources using evolutionary algorithms
NASA Astrophysics Data System (ADS)
van Wijngaarden, Jennifer; Desmond, Durell; Leo Meerts, W.
2015-09-01
Room temperature Fourier transform infrared spectra of the four-membered heterocycle trimethylene sulfide were collected with a resolution of 0.00096 cm-1 using synchrotron radiation from the Canadian Light Source from 500 to 560 cm-1. The in-plane ring deformation mode (ν13) at ∼529 cm-1 exhibits dense rotational structure due to the presence of ring inversion tunneling and leads to a doubling of all transitions. Preliminary analysis of the experimental spectrum was pursued via traditional methods involving assignment of quantum numbers to individual transitions in order to conduct least squares fitting to determine the spectroscopic parameters. Following this approach, the assignment of 2358 transitions led to the experimental determination of an effective Hamiltonian. This model describes transitions in the P and R branches to J′ = 60 and Ka′ = 10 that connect the tunneling split ground and vibrationally excited states of the ν13 band, although a small number of low intensity features remained unassigned. The use of evolutionary algorithms (EA) for automated assignment was explored in tandem and yielded a set of spectroscopic constants that re-create this complex experimental spectrum to a similar degree. The EA routine was also applied to the previously well-understood ring puckering vibration of another four-membered ring, azetidine (Zaporozan et al., 2010). This test provided further evidence of the robust nature of the EA method when applied to spectra for which the underlying physics is well understood.
Enhancements of evolutionary algorithm for the complex requirements of a nurse scheduling problem
NASA Astrophysics Data System (ADS)
Tein, Lim Huai; Ramli, Razamin
2014-12-01
Over the years, nurse scheduling has been a prominent problem, aggravated by the global nurse turnover crisis: the more dissatisfied nurses are with their working environment, the more likely they are to leave. The current undesirable work schedules are partly due to those working conditions. Basically, there is a lack of complementary requirements between the head nurse's liability and the nurses' needs. In particular, given the issue of strong nurse preferences, the sophisticated challenge in nurse scheduling is the failure to stimulate tolerant behaviour between both parties during shift assignment in real working scenarios. Inevitably, flexibility in shift assignment is hard to achieve while satisfying nurses' diverse requests and upholding imperative ward coverage. Hence, an Evolutionary Algorithm (EA) is proposed to cater for this complexity in the nurse scheduling problem (NSP). The restrictions of the EA are discussed, and enhancements to the EA operators are suggested so that the EA has the characteristics of a flexible search. This paper considers three types of constraints, namely hard, semi-hard and soft constraints, which can be handled by the EA with enhanced parent selection and specialized mutation operators. These operators, and the EA as a whole, contribute to efficient constraint handling and fitness computation, as well as flexibility in the search, corresponding to the employment of exploration and exploitation principles.
Exploring PtSO4 and PdSO4 phases: an evolutionary algorithm based investigation.
Sharma, Hom; Sharma, Vinit; Huan, Tran Doan
2015-07-21
Metal sulfate formation is one of the major challenges to the emission aftertreatment catalysts. Unlike the incredibly sulfation prone nature of Pd to form PdSO4, no experimental evidence exists for PtSO4 formation. Given the mystery of nonexistence of PtSO4, we explore PtSO4 using a combined approach of an evolutionary algorithm based search technique and quantum mechanical computations. Experimentally known PdSO4 is considered for the comparison and validation of our results. We predict many possible low-energy phases of PtSO4 and PdSO4 at 0 K, which are further investigated in a wide range of temperature-pressure conditions. An entirely new low-energy (tetragonal P42/m) structure of PtSO4 and PdSO4 is predicted, which appears to be the most stable phase of PtSO4 and a competing phase of the experimentally known monoclinic C2/c phase of PdSO4. Phase stability at a finite temperature is further examined and verified by Gibbs free energy calculations of sulfates towards their possible decomposition products. Finally, temperature-pressure phase diagrams are computationally established for both PtSO4 and PdSO4. PMID:26103206
NASA Astrophysics Data System (ADS)
Ong, Zhiyang; Lo, Andy Hao-Wei; Berryman, Matthew; Abbott, Derek
2005-12-01
The trade-off between pleiotropy and redundancy in telecommunications networks is analyzed in this paper. Networks are optimized to reduce installation costs and propagation delays. The pleiotropy of a server in a telecommunications network is defined as the number of clients and servers that it can service, whilst redundancy is described as the number of servers servicing a client. Telecommunications networks containing many servers with large pleiotropy are cost-effective but vulnerable to network failures and attacks. Conversely, those networks containing many servers with high redundancy are reliable but costly. Several key issues regarding the choice of cost functions and techniques in evolutionary computation (such as the modeling of Darwinian evolution, and mutualism and commensalism) are discussed, and a future research agenda is outlined. Experimental results indicate that the pleiotropy of servers in the optimum network does improve, whilst the redundancy of clients does not vary significantly, as expected, with evolving networks. This is due to the controlled evolution of networks modeled by the steady-state genetic algorithm; changes in telecommunications networks that occur drastically over a very short period of time are rare.
NASA Astrophysics Data System (ADS)
Smith, R.; Kasprzyk, J. R.; Zagona, E. A.
2013-12-01
Population growth and climate change, combined with difficulties in building new infrastructure, motivate portfolio-based solutions to ensuring sufficient water supply. Powerful simulation models with graphical user interfaces (GUI) are often used to evaluate infrastructure portfolios; these GUI based models require manual modification of the system parameters, such as reservoir operation rules, water transfer schemes, or system capacities. Multiobjective evolutionary algorithm (MOEA) based optimization can be employed to balance multiple objectives and automatically suggest designs for infrastructure systems, but MOEA based decision support typically uses a fixed problem formulation (i.e., a single set of objectives, decisions, and constraints). This presentation suggests a dynamic framework for linking GUI-based infrastructure models with MOEA search. The framework begins with an initial formulation which is solved using a MOEA. Then, stakeholders can interact with candidate solutions, viewing their properties in the GUI model. This is followed by changes in the formulation which represent users' evolving understanding of exigent system properties. Our case study is built using RiverWare, an object-oriented, data-centered model that facilitates the representation of a diverse array of water resources systems. Results suggest that assumptions within the initial MOEA search are violated after investigating tradeoffs and reveal how formulations should be modified to better capture stakeholders' preferences.
NASA Astrophysics Data System (ADS)
Friedel, Michael; Buscema, Massimo
2016-04-01
Aquatic ecosystem models can potentially be used to understand the influence of stresses on catchment resource quality. Given that catchment responses are functions of natural and anthropogenic stresses reflected in sparse and spatiotemporal biological, physical, and chemical measurements, an ecosystem is difficult to model using statistical or numerical methods. We propose an artificial adaptive systems approach to model ecosystems. First, an unsupervised machine-learning (ML) network is trained using the set of available sparse and disparate data variables. Second, an evolutionary algorithm with genetic doping is applied to reduce the number of ecosystem variables to an optimal set. Third, the optimal set of ecosystem variables is used to retrain the ML network. Fourth, a stochastic cross-validation approach is applied to quantify and compare the nonlinear uncertainty in selected predictions of the original and reduced models. Results are presented for aquatic ecosystems (tens of thousands of square kilometers) undergoing landscape change in the USA: Upper Illinois River Basin and Central Colorado Assessment Project Area, and Southland region, NZ.
NASA Astrophysics Data System (ADS)
Sarjaš, Andrej; Chowdhury, Amor; Svečko, Rajko
2016-09-01
This paper presents the synthesis of an optimal robust controller design using the polynomial pole placement technique and a multi-criteria optimisation procedure via an evolutionary computation algorithm - differential evolution. The main idea of the design is to provide a reliable fixed-order robust controller structure and an efficient closed-loop performance with a preselected nominal characteristic polynomial. The multi-criteria objective functions have quasi-convex properties that significantly improve convergence and the regularity of the optimal/sub-optimal solution. The fundamental aim of the proposed design is to optimise those quasi-convex functions with fixed closed-loop characteristic polynomials, the properties of which are unrelated and hard to present within formal algebraic frameworks. The objective functions are derived from different closed-loop criteria, such as robustness with the H∞ metric, time performance indices, controller structures, stability properties, etc. Finally, the design results from the example verify the efficiency of the controller design and also indicate broader possibilities for different optimisation criteria and control structures.
Parameter extraction from experimental PEFC data using an evolutionary optimization algorithm
NASA Astrophysics Data System (ADS)
Zaglio, M.; Schuler, G.; Wokaun, A.; Mantzaras, J.; Büchi, F. N.
2011-05-01
The accurate characterization of the parameters related to charge and water transport in the ionomer membrane of polymer electrolyte fuel cells (PEFC) is highly important for the understanding and interpretation of the overall cell behavior. Despite substantial efforts to determine these parameters experimentally, a large scatter of data is reported in the literature due to the inherent experimental difficulties. Likewise, the porosity and tortuosity of the gas diffusion layers affect the membrane water content and the local cell performance, but the published data are usually measured ex situ, without accounting for the effect of clamping pressure. Using a quasi-two-dimensional model and experimental current density data from a linear cell of technical size, a multiparameter optimization procedure based on an evolutionary algorithm has been applied to determine eight material properties that strongly influence cell performance. The optimization procedure converges towards a well-defined solution, and the resulting parameter values are compared to those available in the literature. The quality of the set of parameters extracted by the optimization procedure is assessed by a sensitivity analysis.
Optimization of thin noise barrier designs using Evolutionary Algorithms and a Dual BEM Formulation
NASA Astrophysics Data System (ADS)
Toledo, R.; Aznárez, J. J.; Maeso, O.; Greiner, D.
2015-01-01
This work aims at assessing the acoustic efficiency of different thin noise barrier models. These designs frequently feature complex profiles, and their implementation in shape optimization processes may not always be easy in terms of determining their topological feasibility. A methodology is proposed to conduct both overall-shape and top-edge optimizations of acoustic barriers with thin cross sections by idealizing them as profiles with null boundary thickness. This procedure is based on the maximization of the insertion loss of candidate profiles proposed by an evolutionary algorithm. The special nature of these sorts of barriers makes necessary the implementation of a formulation complementary to the classical Boundary Element Method (BEM). Numerical simulations of the barriers' performance are conducted by using a 2D Dual BEM code on eight different barrier configurations (covering overall-shape and top-edge configurations; spline-curved and polynomial-shaped designs; rigid and noise-absorbing boundary materials). While results are achieved by using a specific receivers' scheme, the influence of the receivers' location on the acoustic performance is addressed first. To test the methodology presented here, a numerical model validation against experimental results from a scale model test [34] is conducted. The results obtained show the usefulness of representing complex thin barrier configurations as null-boundary-thickness models.
NASA Astrophysics Data System (ADS)
Karakostas, Spiros
2015-05-01
The multi-objective nature of most spatial planning initiatives and the numerous constraints that are introduced in the planning process by decision makers, stakeholders, etc., synthesize a complex spatial planning context in which the concept of solid and meaningful optimization is a unique challenge. This article investigates new approaches to enhance the effectiveness of multi-objective evolutionary algorithms (MOEAs) via the adoption of a well-known metaheuristic: the non-dominated sorting genetic algorithm II (NSGA-II). In particular, the contribution of a sophisticated crossover operator coupled with an enhanced initialization heuristic is evaluated against a series of metrics measuring the effectiveness of MOEAs. Encouraging results emerge for both the convergence rate of the evolutionary optimization process and the occupation of valuable regions of the objective space by non-dominated solutions, facilitating the work of spatial planners and decision makers. Based on the promising behaviour of both heuristics, topics for further research are proposed to improve their effectiveness.
NASA Astrophysics Data System (ADS)
Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.
2016-05-01
The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC, and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method inspired by bird flocking. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and the basic ABC algorithm for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
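As a rough illustration of the extraction procedure, the sketch below uses a minimal PSO to reduce the sum-of-squares error between "measured" and modelled data for a toy linear device model; the inertia and acceleration coefficients are common textbook values, not those of the study, and the model is invented for illustration.

```python
import random

def pso_fit(model, xs, measured, bounds, n_particles=30, iters=200, seed=0):
    """Minimal PSO: particles search the parameter space to minimize the
    sum-of-squares error between measured and modelled data."""
    rng = random.Random(seed)
    dim = len(bounds)

    def err(p):  # extraction objective: measured-vs-modelled error
        return sum((model(x, p) - y) ** 2 for x, y in zip(xs, measured))

    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=err)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.72 * vel[i][d]
                             + 1.49 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.49 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if err(pos[i]) < err(pbest[i]):
                pbest[i] = pos[i][:]
                if err(pbest[i]) < err(gbest):
                    gbest = pbest[i][:]
    return gbest

# recover slope/intercept of a toy "device model" y = a*x + b
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
measured = [1.0, 3.0, 5.0, 7.0, 9.0]  # generated by y = 2x + 1
a, b = pso_fit(lambda x, p: p[0] * x + p[1], xs, measured, [(-10, 10), (-10, 10)])
```

A real MOSFET extraction would replace the lambda with the surface-potential model evaluated over bias sweeps; the optimisation loop itself is unchanged.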
NASA Astrophysics Data System (ADS)
Timoshenko, Janis; Anspoks, Andris; Kalinko, Aleksandr; Kuzmin, Alexei
2016-05-01
Extended x-ray absorption fine structure (EXAFS) spectroscopy combined with reverse Monte Carlo (RMC) and evolutionary algorithm (EA) modelling is used to advance the understanding of the local structure and lattice dynamics of copper nitride (Cu3N). The RMC/EA-EXAFS method provides a possibility to probe correlations in the motion of neighboring atoms and allows us to analyze the influence of anisotropic motion of copper atoms in Cu3N.
NASA Astrophysics Data System (ADS)
Sánchez-Escobar, Juan Jaime; Barbosa Santillán, Liliana Ibeth
2015-09-01
This paper describes the use of a hybrid evolutionary optimization algorithm (HEOA) for computing the wavefront aberration from real interferometric data. By finding the near-optimal solution to an optimization problem, this algorithm calculates the Zernike polynomial expansion coefficients from a Fizeau interferogram, showing the validity for the reconstruction of the wavefront aberration. The proposed HEOA incorporates the advantages of both a multimember evolution strategy and locally weighted linear regression in order to minimize an objective function while avoiding premature convergence to a local minimum. The numerical results demonstrate that our HEOA is robust for analyzing real interferograms degraded by noise.
NASA Astrophysics Data System (ADS)
Tsoukalas, Ioannis; Kossieris, Panagiotis; Efstratiadis, Andreas; Makropoulos, Christos
2015-04-01
In water resources optimization problems, calculating the objective function usually requires first running a simulation model and then evaluating its outputs. In several cases, however, long simulation times may pose significant barriers to the optimization procedure. Often, to obtain a solution within a reasonable time, the user has to substantially restrict the allowable number of function evaluations, thus terminating the search much earlier than the problem's complexity requires. A promising novel strategy to address these shortcomings is the use of surrogate modelling techniques within global optimization algorithms. Here we introduce the Surrogate-Enhanced Evolutionary Annealing-Simplex (SE-EAS) algorithm that couples the strengths of surrogate modelling with the effectiveness and efficiency of the EAS method. The algorithm combines three different optimization approaches (evolutionary search, simulated annealing and the downhill simplex search scheme), in which key decisions are partially guided by numerical approximations of the objective function. The performance of the proposed algorithm is benchmarked against other surrogate-assisted algorithms in both theoretical and practical applications (i.e. test functions and hydrological calibration problems, respectively), within a limited budget of trials (from 100 to 1000). Results reveal the significant potential of using SE-EAS in challenging optimization problems involving time-consuming simulations.
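A minimal sketch of the surrogate idea, assuming a 1-nearest-neighbour approximation screens candidate points so that only the most promising one consumes a unit of the expensive-evaluation budget. SE-EAS itself couples its surrogate with evolutionary annealing-simplex search, which is not reproduced here; all names and settings below are assumptions.

```python
import math
import random

def surrogate_opt(expensive_f, bounds, budget=60, pool=20, seed=3):
    """Surrogate-screened search: a cheap approximation of the objective
    decides which one of many candidates earns a true (expensive) evaluation."""
    rng = random.Random(seed)
    archive = []  # (point, true objective value) pairs from expensive runs

    def surrogate(p):  # cheap 1-nearest-neighbour approximation
        return min(archive, key=lambda a: math.dist(a[0], p))[1]

    for _ in range(budget):
        cands = [tuple(rng.uniform(lo, hi) for lo, hi in bounds)
                 for _ in range(pool)]
        # seed the archive with a few true evaluations, then let the
        # surrogate pick which single candidate earns an expensive run
        pick = cands[0] if len(archive) < 5 else min(cands, key=surrogate)
        archive.append((pick, expensive_f(pick)))
    return min(archive, key=lambda a: a[1])[0]

# toy "time-consuming simulation": a sphere function with optimum at (1, -2)
opt = surrogate_opt(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                    [(-5, 5), (-5, 5)])
```

The point of the design is the budget accounting: twenty candidates are screened per round, but only one expensive simulation is spent, matching the limited trial budgets (100 to 1000) discussed in the abstract.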
Holmes, Tim; Zanker, Johannes M.
2013-01-01
Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioral measures that directly reflect subjective choice. To determine individual preferences for simple composition rules, we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been demonstrated as a tool to identify aesthetic preferences (Holmes and Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combination of color and shape which have been promoted in the Bauhaus arts school. We used the same three shapes (square, circle, triangle) used by Kandinsky (1923), with the three-color palette from the original experiment (A), an extended seven-color palette (B), and eight different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with eight stimuli of different shapes, colors and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in such an unbiased experimental design and with an extended set of possible combinations. We tested six participants extensively on the different conditions and found consistent preferences for color-shape combinations for individuals, but little evidence at the group level for clear color/shape preference consistent with Kandinsky's claims, apart from some weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of color and shapes, but also that these associations are robust within a single individual. These individual differences go some way toward challenging the claims of the universal preference for color/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the
NASA Astrophysics Data System (ADS)
Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.
2016-04-01
Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, the development of tools which regulators can use for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to the Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently established, non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates, followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the
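The error criterion defined in the abstract can be illustrated with a toy stand-in: inverse-distance weighting replaces Ordinary Kriging, and the 2-norm between the full-network map and each reduced-network map scores a candidate well removal. All well coordinates and levels below are invented for illustration.

```python
import math

def idw_map(wells, grid, power=2):
    """Inverse-distance-weighted water-level map (a simple stand-in for
    the Ordinary Kriging mapping used in the study)."""
    surface = []
    for g in grid:
        num = den = 0.0
        for x, y, level in wells:
            w = (math.dist((x, y), g) or 1e-9) ** -power
            num += w * level
            den += w
        surface.append(num / den)
    return surface

def removal_error(wells, removed, grid):
    """2-norm between the full-network map and the reduced-network map."""
    full = idw_map(wells, grid)
    reduced = idw_map([w for i, w in enumerate(wells) if i != removed], grid)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(full, reduced)))

# invented data: well 3 nearly duplicates well 0, so it is cheap to remove
wells = [(0.0, 0.0, 10.0), (1.0, 0.0, 12.0), (0.0, 1.0, 11.0), (0.05, 0.05, 10.1)]
grid = [(x / 4, y / 4) for x in range(5) for y in range(5)]
errors = [removal_error(wells, i, grid) for i in range(len(wells))]
```

The genetic algorithm in the study searches over subsets of wells with exactly this kind of error as its objective; removing the near-duplicate well perturbs the map far less than removing either isolated corner well.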
NASA Astrophysics Data System (ADS)
Ketabchi, Hamed; Ataie-Ashtiani, Behzad
2015-01-01
This paper surveys the literature on the application of evolutionary algorithms (EAs) to coastal groundwater management problems (CGMPs). The review demonstrates that previous studies mostly relied on a limited set of particular EAs, mainly the genetic algorithm (GA) and its variants, applied to a number of specific problems. Investigating only these problems often fails to represent the variety of processes that may occur in coastal aquifers. In this study, eight EAs are evaluated for CGMPs: GA, continuous ant colony optimization (CACO), particle swarm optimization (PSO), differential evolution (DE), artificial bee colony optimization (ABC), harmony search (HS), shuffled complex evolution (SCE), and simplex simulated annealing (SIMPSA). The first application of PSO, ABC, HS, and SCE to CGMPs is reported here. Moreover, four benchmark problems of varying difficulty and type are considered to address the important issues of groundwater resources in coastal regions. Hence, a wide range of popular objective functions and constraints, with the number of decision variables ranging from 4 to 15, is included. These benchmark problems are applied in a combined simulation-optimization model to examine the optimization scenarios. Some preliminary experiments are performed to select the most efficient parameter values for each EA to ensure a fair comparison. The specific capabilities of each EA for CGMPs, in terms of solution quality and required computational time, are compared. The evaluation highlights the applicability of EAs to CGMPs, as well as their remarkable strengths and weaknesses. The comparisons show that SCE, CACO, and PSO yield superior solutions in terms of quality, whereas ABC performs poorly. CACO provides better solutions (by up to 17%) than the worst EA (ABC) for the problem with the highest decision
NASA Astrophysics Data System (ADS)
Ma, Zhanshan (Sam)
Competition, cooperation and communication are the three fundamental relationships upon which natural selection acts in the evolution of life. Evolutionary game theory (EGT) is a 'marriage' between game theory and Darwin's evolution theory; it gains additional modeling power and flexibility by adopting population dynamics theory. In EGT, natural selection acts as an optimization agent and produces inherent strategies, which eliminates some essential assumptions of traditional game theory, such as rationality, and allows more realistic modeling of many problems. The Prisoner's Dilemma (PD) and Sir Philip Sidney (SPS) games are two well-known examples of EGT, formulated to study cooperation and communication, respectively. Despite its huge success, EGT exposes a certain degree of weakness in dealing with time-, space- and covariate-dependent (i.e., dynamic) uncertainty, vulnerability and deception. In this paper, I propose to extend EGT in two ways to overcome this weakness. First, I introduce survival analysis modeling to describe the lifetime or fitness of game players. This extension allows more flexible and powerful modeling of dynamic uncertainty and vulnerability (collectively equivalent to the dynamic frailty in survival analysis). Secondly, I introduce agreement algorithms, which can be the agreement algorithms of distributed computing (e.g., the Byzantine Generals Problem [6][8], Dynamic Hybrid Fault Models [12]) or any algorithms that set and enforce the rules by which players determine their consensus. The second extension is particularly useful for modeling dynamic deception (e.g., asymmetric faults in fault tolerance and deception in animal communication). From a computational perspective, extended evolutionary game theory (EEGT) modeling, when implemented in simulation, is equivalent to an optimization methodology similar to evolutionary computing approaches such as genetic algorithms with dynamic populations [15][17].
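The population-dynamics machinery that EGT adds to game theory can be sketched with the replicator equation applied to the Prisoner's Dilemma under the standard payoff ordering. This illustrates only the baseline EGT model, not the survival-analysis or agreement-algorithm extensions proposed in the paper.

```python
def replicator_step(x, payoff, dt=0.01):
    """One Euler step of the replicator equation x_i' = x_i * (f_i - f_avg),
    the standard population-dynamics engine of evolutionary game theory."""
    n = len(x)
    f = [sum(payoff[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(xi * fi for xi, fi in zip(x, f))
    return [xi + dt * xi * (fi - avg) for xi, fi in zip(x, f)]

# Prisoner's Dilemma row payoffs with the usual ordering T=5 > R=3 > P=1 > S=0
pd = [[3, 0],   # cooperate, against (C, D)
      [5, 1]]   # defect, against (C, D)
x = [0.9, 0.1]  # start with 90% cooperators
for _ in range(5000):
    x = replicator_step(x, pd)
# defection strictly dominates, so cooperators are driven to extinction
```

No rationality assumption is made anywhere: the strategy shares change purely because fitter strategies reproduce faster, which is exactly the substitution of selection for rational choice described above.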
Dunet, Vincent; Hachulla, Anne-Lise; Grimm, Jochen; Beigelman-Aubry, Catherine
2016-01-01
Background: Model-based iterative reconstruction (MBIR) reduces image noise and improves image quality (IQ), but its influence on post-processing tools including maximal intensity projection (MIP) and minimal intensity projection (mIP) remains unknown. Purpose: To evaluate the influence of MBIR on the IQ of native, mIP, and MIP axial and coronal reformats of reduced-dose computed tomography (RD-CT) chest acquisitions. Material and Methods: Raw data of 50 patients, who underwent a standard-dose CT (SD-CT) and a follow-up RD-CT with a CT dose index (CTDI) of 2–3 mGy, were reconstructed by MBIR and FBP. Native slices, 4-mm-thick MIP, and 3-mm-thick mIP axial and coronal reformats were generated. The relative IQ, subjective IQ, image noise, and number of artifacts were determined in order to compare the different reconstructions of RD-CT with the reference SD-CT. Results: The lowest noise was observed with MBIR. RD-CT reconstructed by MBIR exhibited the best relative and subjective IQ on coronal views regardless of the post-processing tool. MBIR generated the lowest rate of artifacts on coronal mIP/MIP reformats and the highest on axial reformats, mainly represented by distortion and stair-step artifacts. Conclusion: The MBIR algorithm reduces image noise but generates more artifacts than FBP on axial mIP and MIP reformats of RD-CT. Conversely, it significantly improves IQ on coronal views, without increasing artifacts, regardless of the post-processing technique.
NASA Astrophysics Data System (ADS)
Carlsson Tedgren, Åsa; Alm Carlsson, Gudrun
2013-04-01
Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have heretofore been reported as dose to water, and water may still be preferred as a dose specification medium. Dose to tissue medium Dmed then needs to be converted into dose to water in tissue Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, infinitesimal (used for the definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50, 300 keV photons and photons from 125I, 169Yb and 192Ir sources; ratios of mass-collision-stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. The choice of an appropriate conversion coefficient Dw,med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA-bound water of nanometre dimensions contribute to DNA damage and cell killing and may be the most important water compartment in cells, implying use of ratios of mass-collision-stopping powers for converting Dmed into Dw,med.
NASA Astrophysics Data System (ADS)
Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.
2016-03-01
Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications, including combinatorial optimization, control, clustering, robotics, scheduling, image processing, and video analysis. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.
NASA Astrophysics Data System (ADS)
Ramli, Razamin; Tein, Lim Huai
2016-08-01
A good work schedule can improve hospital operations by providing better coverage with appropriate staffing levels in managing nurse personnel. Hence, constructing the best nurse work schedule is a worthwhile effort. To this end, an improved selection operator in the Evolutionary Algorithm (EA) strategy for the nurse scheduling problem (NSP) is proposed. Smart and efficient scheduling procedures were considered. The performance of each potential solution or schedule was computed through fitness evaluation. The best-so-far solution was obtained via a special Maximax&Maximin (MM) parent selection operator embedded in the EA, which fulfilled all constraints considered in the NSP.
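The abstract does not define the MM operator, so the sketch below is only one plausible reading: "maximax" selects the schedule with the best best-case criterion score, and "maximin" the schedule whose worst criterion score is highest (the most robust parent). The schedules and criteria are invented for illustration.

```python
def mm_parents(population, scores):
    """Hypothetical Maximax&Maximin pairing: one optimistic parent (best
    best-case score) and one robust parent (best worst-case score)."""
    maximax = max(population, key=lambda s: max(scores(s)))
    maximin = max(population, key=lambda s: min(scores(s)))
    return maximax, maximin

# toy schedules scored on two criteria, e.g. (coverage, fairness)
schedules = {"A": (9, 2), "B": (6, 6), "C": (5, 5)}
p1, p2 = mm_parents(list(schedules), lambda s: schedules[s])
# "A" wins on best-case coverage; "B" wins on worst-case robustness
```

Pairing an optimistic parent with a robust one is a plausible way to mix exploitation of high-scoring schedules with constraint-safe ones, which may be the intuition behind the operator.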
Yi, Jianbing; Yang, Xuan; Li, Yan-Ran; Chen, Guoliang
2015-10-15
Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performances of the authors’ method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the
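The Euclidean-distance part of the correspondence step can be sketched as a nearest-neighbour matching of landmark points; the local-image-similarity weighting and the virtual-point shifting of the authors' method are omitted, and the landmark coordinates below are invented.

```python
import math

def match_points(source, target):
    """Pair each source landmark with its Euclidean-nearest target landmark."""
    return [min(range(len(target)), key=lambda j: math.dist(s, target[j]))
            for s in source]

# invented 2D landmarks; in 4D CT these would be 3D points per phase
src = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]
tgt = [(0.2, 0.1), (9.8, 0.3), (5.1, 4.9)]
corr = match_points(src, tgt)  # index into tgt for each src point
```

In the paper, this geometric term is combined with local image similarity so that points with similar surroundings are preferred even when a purely geometric neighbour is slightly closer.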
NASA Astrophysics Data System (ADS)
Cody, B. M.; Gonzalez-Nicolas, A.; Bau, D. A.
2011-12-01
Carbon capture and storage (CCS) has been proposed as a method of reducing global carbon dioxide (CO2) emissions. Although CCS has the potential to greatly retard greenhouse gas loading to the atmosphere while cleaner, more sustainable energy solutions are developed, there is a possibility that sequestered CO2 may leak and intrude into and adversely affect groundwater resources. It has been reported [1] that, while CO2 intrusion typically does not directly threaten underground drinking water resources, it may cause secondary effects, such as the mobilization of hazardous inorganic constituents present in aquifer minerals and changes in pH values. These risks must be fully understood and minimized before CCS project implementation. Combined management of project resources and leakage risk is crucial for the implementation of CCS. In this work, we present a method of: (a) minimizing the total CCS cost, the summation of major project costs with the cost associated with CO2 leakage; and (b) maximizing the mass of injected CO2, for a given proposed sequestration site. Optimization decision variables include the number of CO2 injection wells, injection rates, and injection well locations. The capital and operational costs of injection wells are directly related to injection well depth, location, injection flow rate, and injection duration. The cost of leakage is directly related to the mass of CO2 leaked through weak areas, such as abandoned oil wells, in the cap rock layers overlying the injected formation. Additional constraints on fluid overpressure caused by CO2 injection are imposed to maintain predefined effective stress levels that prevent cap rock fracturing. Here, both mass leakage and fluid overpressure are estimated using two semi-analytical models based upon work by [2,3]. A multi-objective evolutionary algorithm coupled with these semi-analytical leakage flow models is used to determine Pareto-optimal trade-off sets giving minimum total cost vs. maximum mass
NASA Astrophysics Data System (ADS)
Marghany, M.
2015-06-01
Oil spill pollution plays a substantial role in damaging the marine ecosystem. Oil that floats on top of the water not only decreases fauna populations but also affects the food chain in the ecosystem. In fact, an oil spill reduces the sunlight that penetrates the water, limiting the photosynthesis of marine plants and phytoplankton. Moreover, marine mammals exposed to oil spills have reduced insulating capacities, making them more vulnerable to temperature variations and much less buoyant in seawater. This study demonstrates a design tool for oil spill detection in SAR satellite data using an optimized entropy-based Multi-Objective Evolutionary Algorithm (E-MMGA) based on Pareto-optimal solutions. The study also shows that the optimized entropy-based Multi-Objective Evolutionary Algorithm provides an accurate pattern of the oil slick in SAR data, as shown by rates of 85% for oil spill, 10% for look-alikes, and 5% for sea roughness using the receiver-operating characteristic (ROC) curve. The E-MMGA also shows excellent performance on SAR data. In conclusion, E-MMGA can be used as an entropy optimization approach to perform automatic detection of oil spills in SAR satellite data.
NASA Astrophysics Data System (ADS)
Gosálvez, M. A.; Ferrando, N.; Xing, Y.; Pal, Prem; Sato, K.; Cerdá, J.; Gadea, R.
2011-06-01
An evolutionary algorithm is presented for the automated calibration of the continuous cellular automaton for the simulation of isotropic and anisotropic wet chemical etching of silicon in as many as 31 widely different and technologically relevant etchants, including KOH, KOH+IPA, TMAH and TMAH+Triton, in various concentrations and temperatures. Based on state-of-the-art evolutionary operators, we implement a robust algorithm for the simultaneous optimization of roughly 150 microscopic removal rates based on the minimization of a cost function with four quantitative error measures, including (i) the error between simulated and experimental macroscopic etch rates for numerous surface orientations all over the unit sphere, (ii) the error due to underetching asymmetries and floor corrugation features observed in simulated silicon samples masked using a circular pattern, (iii) the error associated with departures from a step-flow-based hierarchy in the values of the microscopic removal rates, and (iv) the error associated with deviations from a step-flow-based clustering of the microscopic removal rates. For the first time, we present the calibration and successful simulation of two technologically relevant CMOS compatible etchants, namely TMAH and, especially, TMAH+Triton, providing several comparisons between simulated and experimental MEMS structures based on multi-step etching in these etchants.
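The minimization of a multi-term cost function can be sketched with a simple (1+1) evolution strategy; the paper uses more sophisticated evolutionary operators and four etching-specific error measures, for which two toy quadratic terms stand in here. All names and settings are assumptions.

```python
import random

def calibrate(error_terms, x0, sigma=0.5, iters=2000, seed=7):
    """(1+1) evolution strategy: mutate the current rate set with Gaussian
    noise, keep the mutant only if the combined cost does not worsen, and
    slowly shrink the mutation step for fine convergence."""
    rng = random.Random(seed)
    cost = lambda v: sum(term(v) for term in error_terms)
    x, cx = list(x0), cost(x0)
    for _ in range(iters):
        y = [xi + rng.gauss(0.0, sigma) for xi in x]
        cy = cost(y)
        if cy <= cx:
            x, cx = y, cy
        sigma *= 0.998  # gradual step-size decay
    return x, cx

# two toy quadratic error measures stand in for the paper's four
terms = [lambda r: (r[0] - 1.0) ** 2, lambda r: (r[1] - 0.25) ** 2]
rates, final_cost = calibrate(terms, [0.0, 0.0])
```

The structural idea carries over directly: the calibration in the paper likewise sums heterogeneous error measures (etch-rate mismatch, underetching asymmetry, step-flow hierarchy and clustering penalties) into one scalar cost that the evolutionary search minimizes over roughly 150 microscopic removal rates.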
Comparison of Algorithms for Prediction of Protein Structural Features from Evolutionary Data
Bywater, Robert P.
2016-01-01
Proteins have many functions, and predicting these is still one of the major challenges in theoretical biophysics and bioinformatics. Foremost among these functions is the need to fold correctly, thereby allowing the other genetically dictated tasks that the protein has to carry out to proceed efficiently. In this work, some earlier algorithms for predicting protein domain folds are revisited and compared with more recently developed methods. In dealing with intractable problems such as fold prediction, when different algorithms show convergence onto the same result there is every reason to take all algorithms into account so that a consensus result can be arrived at. In this work it is shown that the application of different algorithms in protein structure prediction leads to results that do not converge as such but rather collude in a striking and useful way that has never been considered before. PMID:26963911
Mutual information image registration based on improved bee evolutionary genetic algorithm
NASA Astrophysics Data System (ADS)
Xu, Gang; Tu, Jingzhi
2009-07-01
In recent years, mutual information has been regarded as one of the more effective similarity metrics in image registration. Given the features of mutual information image registration, the Bee Evolutionary Genetic Algorithm (BEGA), which imitates swarm mating, is chosen to optimize the registration parameters. In addition, we adaptively set the initial parameters to improve the BEGA. Experimental results show the high precision of the algorithm.
NASA Astrophysics Data System (ADS)
Wu, J.; Yang, Y.; Luo, Q.; Wu, J.
2012-12-01
This study presents a new hybrid multi-objective evolutionary algorithm, the niched Pareto tabu search combined with a genetic algorithm (NPTSGA), whereby the global search ability of niched Pareto tabu search (NPTS) is improved by the diversification of candidate solutions arising from the evolving nondominated sorting genetic algorithm II (NSGA-II) population. Also, the NPTSGA coupled with the commonly used groundwater flow and transport codes, MODFLOW and MT3DMS, is developed for multi-objective optimal design of groundwater remediation systems. The proposed methodology is then applied to a large-scale field groundwater remediation system for cleanup of a large trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. Furthermore, a master-slave (MS) parallelization scheme based on the Message Passing Interface (MPI) is incorporated into the NPTSGA to implement objective function evaluations in a distributed processor environment, which can greatly improve the efficiency of the NPTSGA in finding Pareto-optimal solutions to the real-world application. This study shows that the MS parallel NPTSGA, in comparison with the original NPTS and NSGA-II, can balance the tradeoff between diversity and optimality of solutions during the search process and is an efficient and effective tool for optimizing the multi-objective design of groundwater remediation systems under complicated hydrogeologic conditions.
Holdsworth, Clay; Kim, Minsun; Liao, Jay; Phillips, Mark
2012-01-01
Purpose: To evaluate how a more flexible and thorough multiobjective search of feasible IMRT plans affects performance in IMRT optimization. Methods: A multiobjective evolutionary algorithm (MOEA) was used as a tool to investigate how expanding the search space to include a wider range of penalty functions affects the quality of the set of IMRT plans produced. The MOEA uses a population of IMRT plans to generate new IMRT plans through deterministic minimization of recombined penalty functions that are weighted sums of multiple, tissue-specific objective functions. The quality of the generated plans is judged by an independent set of nonconvex, clinically relevant decision criteria, and all dominated plans are eliminated. As this process repeats itself, better plans are produced so that the population of IMRT plans will approach the Pareto front. Three different approaches were used to explore the effects of expanding the search space. First, the evolutionary algorithm used genetic optimization principles to search by simultaneously optimizing both the weights and tissue-specific dose parameters in penalty functions. Second, penalty function parameters were individually optimized for each voxel in all organs at risk (OARs) in the MOEA. Finally, a heuristic voxel-specific improvement (VSI) algorithm that can be used on any IMRT plan was developed that incrementally improves voxel-specific penalty function parameters for all structures (OARs and targets). Different approaches were compared using the concept of domination comparison applied to the sets of plans obtained by multiobjective optimization. Results: MOEA optimizations that simultaneously searched both importance weights and dose parameters generated sets of IMRT plans that were superior to sets of plans produced when either type of parameter was fixed for four example prostate plans. The amount of improvement increased with greater overlap between OARs and targets. Allowing the MOEA to search for voxel
Kyriacou, Theocharis
2012-04-01
A biologically inspired model of head direction cells is presented and tested on a small mobile robot. Head direction cells (discovered in the brain of rats in 1984) encode the head orientation of their host irrespective of the host's location in the environment. The head direction system thus acts as a biological compass (though not a magnetic one) for its host. Head direction cells are influenced in different ways by idiothetic (host-centred) and allothetic (not host-centred) cues. The model presented here uses the visual, vestibular and kinesthetic inputs that are simulated by robot sensors. Real robot-sensor data has been used in order to train the model's artificial neural network connections. The main contribution of this paper lies in the use of an evolutionary algorithm in order to determine the values of parameters that determine the behaviour of the model. More importantly, the objective function of the evolutionary strategy used takes into consideration quantitative biological observations reported in the literature. PMID:21785973
Alagar, Ananda Giri Babu; Kadirampatti Mani, Ganesh; Karunakaran, Kaviarasu
2016-01-01
Small fields smaller than 4 × 4 cm2 are used in stereotactic and conformal treatments, where heterogeneity is normally present. Since dose calculation in both small fields and heterogeneous media is often less accurate, the algorithms used by treatment planning systems (TPS) should be evaluated to achieve better treatment results. This report aims at evaluating the accuracy of four model-based algorithms: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-Xio, and AcurosXB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse, tested against measurement. Measurements are done using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons at square field sizes ranging from 1 to 4 cm2. Each heterogeneity is introduced individually at two different depths from the depth of dose maximum (Dmax), one setup nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup is measured separately and compared with that calculated by each TPS algorithm for the same setup. The percentage normalized root mean squared deviation (%NRMSD), which represents the deviation of the whole CADD curve from the measured curve, is calculated. It is found that for air and lung heterogeneity, for both 6 and 15 MV, all algorithms show maximum deviation at field size 1 × 1 cm2, which gradually reduces as field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller at 15 MV irrespective of setup. In all heterogeneity setups, the 1 × 1 cm2 field shows maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when a heterogeneity is nearer to Dmax than when the same heterogeneity is far from Dmax. All algorithms also show maximum deviation in lower-density materials compared with high-density materials. PMID:26894345
Srinivasan, D.; Tettamanzi, A.G.B.
1997-02-01
An integrated framework for modeling and evaluating the economic impacts of environmental dispatching and fuel switching is presented in this paper. It explores the potential for operational changes in utility commitment and dispatching to achieve least-cost operation while complying with rigorous environmental standards. The work reported here employs a heuristics-guided evolutionary algorithm to solve this multiobjective constrained optimization problem, and provides the decision maker with a whole range of alternatives along the Pareto-optimal frontier. Heuristics are used to ensure the feasibility of each solution and to reduce the computation time. The capabilities of this approach are illustrated via tests on a 19-unit system. Various emission compliance strategies are considered to reveal the economic trade-offs that come into play.
NASA Astrophysics Data System (ADS)
Al-Aqtash, Nabil; Sabirianov, Renat
2014-03-01
An evolutionary algorithm coupled with the first-principles Density Functional Theory (DFT) method is used to identify the global energy minimum structure of Fe3Se4. The structure is processed by free-energy-based evolutionary crystal structure optimization algorithms, as implemented in the USPEX and XtalOpt codes, which predict the structure of the system based solely on the chemical formula, without prior experimental information. Fe3Se4 is a very challenging test case for verifying the validity of this approach: it has a highly anisotropic structure that demonstrates an ordering of vacancies, making the system ``open'', i.e. breaking traditional coordination rules. By using USPEX and XtalOpt we identify the global minimum of the Fe3Se4 structure. The randomly generated initial population had 20 structures. The enthalpy (tolerance of 0.002 eV) and space group were used for niching. The enthalpy of the lowest-energy structure, out of 700 generated structures, is -81.126 eV. Bulk Fe3Se4 has a monoclinic structure with space group I2/m and a = 6.208 Å, b = 3.541 Å, and c = 11.281 Å. The crystal structure and lattice parameters of Fe3Se4 optimized in our calculations are similar to the existing experimental structure parameters. Fe3Se4 exhibits large magnetocrystalline anisotropy of 6 × 10^6 erg/cm3 and coercivity up to 40 kOe due to its unusual properties.
Zhou, Jianzhong; Song, Lixiang; Kursan, Suncana; Liu, Yi
2015-05-01
A two-dimensional coupled water quality model is developed for modeling the flow-mass transport in shallow water. To simulate shallow flows on complex topography with wetting and drying, an unstructured grid, well-balanced, finite volume algorithm is proposed for numerical resolution of a modified formulation of two-dimensional shallow water equations. The slope-limited linear reconstruction method is used to achieve second-order accuracy in space. The algorithm adopts a HLLC-based integrated solver to compute the flow and mass transport fluxes simultaneously, and uses Hancock's predictor-corrector scheme for efficient time stepping as well as second-order temporal accuracy. The continuity and momentum equations are updated in both wet and dry cells. A new hybrid method, which can preserve the well-balanced property of the algorithm for simulations involving flooding and recession, is proposed for bed slope term approximation. The effectiveness and robustness of the proposed algorithm are validated by the reasonably good agreement between numerical and reference results for several benchmark test cases. Results show that the proposed coupled flow-mass transport model can simulate complex flows and mass transport in shallow water. PMID:25686488
Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu
2013-01-01
Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have been already proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak-matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms gave different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ . PMID:23163785
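The abstract describes a binomial probability model over peak matches but does not give ProVerB's actual scoring function; the following is only a hedged sketch of that general idea, in which `p_random` (the chance probability of a single peak match) and the tail-probability form are illustrative assumptions:

```python
from math import comb, log10

def binomial_match_score(n_theoretical, n_matched, p_random):
    """Score a peptide-spectrum match as the -log10 probability of
    observing at least n_matched of n_theoretical peak matches by
    chance, each peak matching with probability p_random."""
    p_tail = sum(
        comb(n_theoretical, k) * p_random**k * (1 - p_random)**(n_theoretical - k)
        for k in range(n_matched, n_theoretical + 1)
    )
    return -log10(p_tail)
```

Under such a model, spectra with more matched peaks than expected by chance receive sharply higher scores, which is the statistical backbone that intensity-aware scoring functions like ProVerB's refine further.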
Constructing large-scale genetic maps using an evolutionary strategy algorithm.
Mester, D; Ronin, Y; Minkov, D; Nevo, E; Korol, A
2003-01-01
This article is devoted to the problem of ordering in linkage groups with many dozens or even hundreds of markers. The ordering problem belongs to the field of discrete optimization on a set of all possible orders, amounting to n!/2 for n loci; hence it is considered an NP-hard problem. Several authors attempted to employ the methods developed in the well-known traveling salesman problem (TSP) for multilocus ordering, using the assumption that for a set of linked loci the true order will be the one that minimizes the total length of the linkage group. A novel, fast, and reliable algorithm developed for the TSP and based on evolution-strategy discrete optimization was applied in this study for multilocus ordering on the basis of pairwise recombination frequencies. The quality of derived maps under various complications (dominant vs. codominant markers, marker misclassification, negative and positive interference, and missing data) was analyzed using simulated data with approximately 50-400 markers. High performance of the employed algorithm allows systematic treatment of the problem of verification of the obtained multilocus orders on the basis of computing-intensive bootstrap and/or jackknife approaches for detecting and removing questionable marker scores, thereby stabilizing the resulting maps. Parallel calculation technology can easily be adopted for further acceleration of the proposed algorithm. Real data analysis (on maize chromosome 1 with 230 markers) is provided to illustrate the proposed methodology. PMID:14704202
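The paper's exact evolution-strategy operators are not specified in the abstract; the sketch below only illustrates the underlying TSP-style idea under simplifying assumptions: order markers so as to minimize total map length, using a (1+1) strategy with segment-reversal (2-opt-style) mutation. The `dist` matrix of pairwise recombination distances and all parameter values are hypothetical:

```python
import random

def map_length(order, dist):
    # Total linkage-group length = sum of distances between adjacent markers.
    return sum(dist[order[i]][order[i + 1]] for i in range(len(order) - 1))

def order_markers(dist, iters=20000, seed=0):
    """(1+1) evolution strategy: mutate the current order by reversing a
    random segment and keep the child if it does not lengthen the map."""
    rng = random.Random(seed)
    n = len(dist)
    order = list(range(n))
    rng.shuffle(order)
    best = map_length(order, dist)
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        child = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
        length = map_length(child, dist)
        if length <= best:  # accept equal moves to cross plateaus
            order, best = child, length
    return order, best
```

For markers simulated on a line (distance |i - j|), the recovered order approaches the true one; real data add the complications the abstract lists (misclassification, interference, missing scores).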
Liu, Rong; Li, Xi; Zhang, Wei; Zhou, Hong-Hao
2015-01-01
Objective Multiple linear regression (MLR) and machine learning techniques in pharmacogenetic algorithm-based warfarin dosing have been reported. However, the performances of these algorithms in racially diverse groups have never been objectively evaluated and compared. In this literature-based study, we compared the performances of eight machine learning techniques with those of MLR in a large, racially diverse cohort. Methods MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied in warfarin dose algorithms in a cohort from the International Warfarin Pharmacogenetics Consortium database. Covariates obtained by stepwise regression from 80% of randomly selected patients were used to develop algorithms. To compare the performances of these algorithms, the mean percentage of patients whose predicted dose fell within 20% of the actual dose (mean percentage within 20%) and the mean absolute error (MAE) were calculated in the remaining 20% of patients. The performances of these techniques in different races, as well as across the dose ranges of therapeutic warfarin, were compared. Robust results were obtained after 100 rounds of resampling. Results BART, MARS and SVR were statistically indistinguishable and significantly outperformed all the other approaches in the whole cohort (MAE: 8.84–8.96 mg/week, mean percentage within 20%: 45.88%–46.35%). In the White population, MARS and BART showed higher mean percentage within 20% and lower mean MAE than those of MLR (all p values < 0.05). In the Asian population, SVR, BART, MARS and LAR performed the same as MLR. MLR and LAR performed optimally in the Black population. When patients were grouped in terms of warfarin dose range, all machine learning techniques except ANN and LAR showed significantly
Osaba, E.; Carballedo, R.; Diaz, F.; Onieva, E.; de la Iglesia, I.; Perallos, A.
2014-01-01
Since their first formulation, genetic algorithms (GAs) have been one of the most widely used techniques to solve combinatorial optimization problems. The basic structure of the GAs is known by the scientific community, and thanks to their easy application and good performance, GAs are the focus of a lot of research works annually. Although throughout history there have been many studies analyzing various concepts of GAs, in the literature there are few studies that analyze objectively the influence of using blind crossover operators for combinatorial optimization problems. For this reason, in this paper a deep study on the influence of using them is conducted. The study is based on a comparison of nine techniques applied to four well-known combinatorial optimization problems. Six of the techniques are GAs with different configurations, and the remaining three are evolutionary algorithms that focus exclusively on the mutation process. Finally, to perform a reliable comparison of these results, a statistical study of them is made, performing the normal distribution z-test. PMID:25165731
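A "blind" crossover operator recombines parents without using any problem-specific knowledge. As an illustration only (the paper's nine concrete configurations are not detailed here), a classic blind operator for permutation-coded combinatorial problems is order crossover (OX):

```python
import random

def order_crossover(p1, p2, rng):
    """Blind order crossover (OX): copy a random slice from parent 1,
    then fill the remaining positions with parent 2's genes in order,
    preserving permutation validity."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    fill = [g for g in p2 if g not in child[i:j + 1]]
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child
```

The operator is "blind" in that it consults no fitness or domain information, which is precisely the property whose influence the paper studies.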
Ingle, Atul; Varghese, Tomy
2014-01-01
Tissue stiffness estimation plays an important role in cancer detection and treatment. The presence of stiffer regions in healthy tissue can be used as an indicator of possible pathological changes. Electrode vibration elastography involves tracking of a mechanical shear wave in tissue using radio-frequency ultrasound echoes. Based on appropriate assumptions on tissue elasticity, this approach provides a direct way of measuring tissue stiffness from shear wave velocity, enabling visualization in the form of tissue stiffness maps. In this study, two algorithms for shear wave velocity reconstruction in an electrode vibration setup are presented. The first method models the wave arrival time data using a hidden Markov model whose hidden states are local wave velocities, estimated using a particle filter implementation. This is compared to a direct optimization-based function fitting approach that uses sequential quadratic programming to estimate the unknown velocities and locations of interfaces. The mean shear wave velocities obtained using the two algorithms are within 10% of each other. Moreover, the Young’s modulus estimates obtained from an incompressibility assumption are within 15 kPa of the true stiffness values obtained from mechanical testing. Based on visual inspection, the particle filtering method produces smoother velocity maps. PMID:25285187
NASA Astrophysics Data System (ADS)
Palacin, J.; Salleras, M.; Puig, M.; Samitier, J.; Marco, S.
2004-07-01
In this work, we approach the problem of extracting a dynamic multiport thermal compact model from thermal impedance transients of microsystems using genetic algorithms. The model takes the form of a unique RC network, using a thermal-electrical analogy. The model topology is codified in a binary chromosome, and nonlinear least squares is used for sizing its components. The compact model topology evolution is genetically controlled to obtain the RC network that minimizes the reconstruction error of the thermal impedance transients. As an example, the proposed methodology is applied to an innovative silicon microthruster and compared with random search and sequential forward selection.
On the Effects of Migration on the Fitness Distribution of Parallel Evolutionary Algorithms
Cantu-Paz, E.
2000-04-25
Migration of individuals between populations may increase the selection pressure. This has the desirable consequence of speeding up convergence, but it may result in an excessively rapid loss of variation that may cause the search to fail. This paper investigates the effects of migration on the distribution of fitness. It considers arbitrary migration rates and topologies with different numbers of neighbors, and it compares algorithms that are configured to have the same selection intensity. The results suggest that migration preserves more diversity as the number of neighbors of a deme increases.
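As a hedged illustration of the kind of migration policy under study, the sketch below implements one generic scheme, best-migrate-replace-worst on a unidirectional ring; the actual rates, topologies, and replacement rules examined in the paper vary:

```python
def migrate_ring(demes, n_migrants, fitness):
    """Ring-topology migration: each deme sends copies of its n_migrants
    best individuals to the next deme, replacing that deme's worst."""
    n = len(demes)
    migrants = [sorted(d, key=fitness, reverse=True)[:n_migrants] for d in demes]
    new_demes = []
    for i, deme in enumerate(demes):
        incoming = migrants[(i - 1) % n]          # from the previous deme
        survivors = sorted(deme, key=fitness, reverse=True)[:len(deme) - n_migrants]
        new_demes.append(survivors + incoming)
    return new_demes
```

Because good individuals both stay in their home deme and spread to a neighbor each epoch, migration of the best raises selection pressure, matching the diversity-loss concern the paper analyzes.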
NASA Astrophysics Data System (ADS)
Lu, Yujie; Zhu, Banghe; Darne, Chinmay; Tan, I.-Chih; Rasmussen, John C.; Sevick-Muraca, Eva M.
2011-12-01
The goal of preclinical fluorescence-enhanced optical tomography (FEOT) is to provide three-dimensional fluorophore distribution for a myriad of drug and disease discovery studies in small animals. Effective measurements, as well as fast and robust image reconstruction, are necessary for extensive applications. Compared to bioluminescence tomography (BLT), FEOT may result in improved image quality through higher detected photon count rates. However, background signals that arise from excitation illumination affect the reconstruction quality, especially when tissue fluorophore concentration is low and/or fluorescent target is located deeply in tissues. We show that near-infrared fluorescence (NIRF) imaging with an optimized filter configuration significantly reduces the background noise. Model-based reconstruction with a high-order approximation to the radiative transfer equation further improves the reconstruction quality compared to the diffusion approximation. Improvements in FEOT are demonstrated experimentally using a mouse-shaped phantom with targets of pico- and subpico-mole NIR fluorescent dye.
Hsieh, PingHsun; Woerner, August E.; Wall, Jeffrey D.; Lachance, Joseph; Tishkoff, Sarah A.; Gutenkunst, Ryan N.; Hammer, Michael F.
2016-01-01
Comparisons of whole-genome sequences from ancient and contemporary samples have pointed to several instances of archaic admixture through interbreeding between the ancestors of modern non-Africans and now extinct hominids such as Neanderthals and Denisovans. One implication of these findings is that some adaptive features in contemporary humans may have entered the population via gene flow with archaic forms in Eurasia. Within Africa, fossil evidence suggests that anatomically modern humans (AMH) and various archaic forms coexisted for much of the last 200,000 yr; however, the absence of ancient DNA in Africa has limited our ability to make a direct comparison between archaic and modern human genomes. Here, we use statistical inference based on high coverage whole-genome data (greater than 60×) from contemporary African Pygmy hunter-gatherers as an alternative means to study the evolutionary history of the genus Homo. Using whole-genome simulations that consider demographic histories that include both isolation and gene flow with neighboring farming populations, our inference method rejects the hypothesis that the ancestors of AMH were genetically isolated in Africa, thus providing the first whole genome-level evidence of African archaic admixture. Our inferences also suggest a complex human evolutionary history in Africa, which involves at least a single admixture event from an unknown archaic population into the ancestors of AMH, likely within the last 30,000 yr. PMID:26888264
NASA Astrophysics Data System (ADS)
Hu, Yan-Yan; Li, Dong-Sheng
2016-01-01
Hyperspectral images (HSI) consist of many closely spaced bands carrying most of the object information. Due to their high dimensionality and data volume, however, it is hard to obtain satisfactory classification performance. To reduce HSI data dimensionality in preparation for high classification accuracy, we propose combining a band selection method based on artificial immune systems (AIS) with a hybrid-kernel support vector machine (SVM-HK) algorithm. After comparing different kernels for hyperspectral analysis, the approach mixes the radial basis function kernel (RBF-K) with the sigmoid kernel (Sig-K) and applies the optimized hybrid kernels in the SVM classifiers. The SVM-HK algorithm is then used to drive the band selection of an improved version of the AIS. The AIS is composed of clonal selection and elite antibody mutation, including an evaluation process with an optional index factor (OIF). Experiments on an HRS dataset over San Diego Naval Base acquired by AVIRIS show that the method is able to efficiently remove band redundancy while outperforming the traditional SVM classifier.
NASA Astrophysics Data System (ADS)
Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.
2014-12-01
As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish if an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using 6 algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a
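All of the benchmarked MOEAs (Borg, NSGA-II, etc.) rely on the notion of Pareto dominance among objective vectors. A minimal sketch, using a maximization convention for all objectives (the lake problem itself mixes maximized and minimized objectives, which would be sign-flipped first):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective (maximization)
    and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nondominated(points):
    """Keep only the objective vectors not dominated by any other point;
    the survivors approximate the Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Diagnostics like those in this study measure how closely and how reliably each algorithm's final population approximates the true nondominated set.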
NASA Astrophysics Data System (ADS)
Liu, Jiulong; Zhang, Xue; Zhang, Xiaoqun; Zhao, Hongkai; Gao, Yu; Thomas, David; Low, Daniel A.; Gao, Hao
2015-11-01
4D cone-beam computed tomography (4DCBCT) reconstructs a temporal sequence of CBCT images for the purpose of motion management or 4D treatment in radiotherapy. However, the image reconstruction often involves binning the projection data to each temporal phase, and therefore suffers from deteriorated image quality due to inaccurate or uneven binning in phase, e.g., under non-periodic breathing. A 5D model has been developed as an accurate model of (periodic and non-periodic) respiratory motion. That is, given the measurements of breathing amplitude and its time derivative, the 5D model parametrizes the respiratory motion by three time-independent variables, i.e., one reference image and two vector fields. In this work we aim to develop a new 4DCBCT reconstruction method based on the 5D model. Instead of reconstructing a temporal sequence of images after projection binning, the new method reconstructs the time-independent reference image and vector fields with no binning required. The image reconstruction is formulated as an optimization problem with total-variation regularization on both the reference image and the vector fields, and the problem is solved by the proximal alternating minimization algorithm, during which the split Bregman method is used to reconstruct the reference image, and Chambolle's duality-based algorithm is used to reconstruct the vector fields. A convergence analysis of the proposed algorithm is provided for this nonconvex problem. Validated by simulation studies, the new method has significantly improved image reconstruction accuracy owing to the absence of binning and the reduced number of unknowns via the use of the 5D model.
NASA Astrophysics Data System (ADS)
Knapmeyer-Endrun, B.; Hammer, C.
2014-12-01
The seismometers that the Apollo astronauts deployed on the Moon provide the only recordings of seismic events from any extra-terrestrial body so far. These lunar events are significantly different from ones recorded on Earth, in terms of both signal shape and source processes. Thus they are a valuable test case for any experiment in planetary seismology. In this study, we analyze Apollo 16 data with a single-station event detection and classification algorithm in view of NASA's upcoming InSight mission to Mars. InSight, scheduled for launch in early 2016, has the goal to investigate Mars' internal structure by deploying a seismometer on its surface. As the mission does not feature any orbiter, continuous data will be relayed to Earth at a reduced rate. Full range data will only be available by requesting specific time-windows within a few days after the receipt of the original transmission. We apply a recently introduced algorithm based on hidden Markov models that requires only a single example waveform of each event class for training appropriate models. After constructing the prototypes we detect and classify impacts and deep and shallow moonquakes. Initial results for 1972 (year of station installation with 8 months of data) indicate a high detection rate of over 95% for impacts, of which more than 80% are classified correctly. Deep moonquakes, which occur in large amounts, but often show only very weak signals, are detected with less certainty (~70%). As there is only one weak shallow moonquake covered, results for this event class are not statistically significant. Daily adjustments of the background noise model help to reduce false alarms, which are mainly erroneous deep moonquake detections, by about 25%. The algorithm enables us to classify events that were previously listed in the catalog without classification, and, through the combined use of long period and short period data, identify some unlisted local impacts as well as at least two yet unreported
Searching good strategies in evolutionary minority game using variable length genetic algorithm
NASA Astrophysics Data System (ADS)
Yang, Wei-Song; Wang, Bing-Hong; Wu, Yi-Lin; Xie, Yan-Bo
2004-08-01
We propose and study a new adaptive minority game for understanding the complex dynamical behavior characterized by agents competing for limited resources in many natural and social systems. We compare the strategy of an agent in the model to a chromosome in biology. In our model, agents with poor performance during a certain time period may modify their strategies via a variable-length genetic algorithm consisting of cut and splice operators, imitating similar processes in biology. The performance of the agents in our model is calculated for different parameter conditions and different evolution mechanisms. It is found that the system may evolve into a much more ideal equilibrium state, which implies much stronger cooperation among agents and much more effective utilization of the social resources. It is also found that the distribution of the strategies held by agents tends towards a state concentrated in the small-m region.
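The cut and splice operators can be sketched directly. Unlike one-point crossover, each parent is cut at its own independent point, so offspring strategy lengths drift over the run; a minimal sketch (function and variable names are illustrative):

```python
import random

def cut_and_splice(parent_a, parent_b, rng=random):
    """Variable-length crossover: cut each parent at an independent point
    and splice the pieces, so offspring lengths can differ from the parents'.
    Both parents must have length >= 2 so an interior cut point exists."""
    ca = rng.randint(1, len(parent_a) - 1)   # cut point in parent A
    cb = rng.randint(1, len(parent_b) - 1)   # independent cut point in B
    child1 = parent_a[:ca] + parent_b[cb:]
    child2 = parent_b[:cb] + parent_a[ca:]
    return child1, child2
```

Because the two cut points are chosen independently, the total genetic material is conserved while individual strategy tables grow or shrink, which is what lets the strategy-length distribution itself evolve.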
NASA Astrophysics Data System (ADS)
Amian, M.; Setarehdan, S. Kamaledin; Yousefi, H.
2014-09-01
Functional near-infrared spectroscopy (fNIRS) is a relatively new noninvasive way to measure oxy-hemoglobin and deoxy-hemoglobin concentration changes in the human brain. Safer and more affordable than other functional imaging techniques such as fMRI, it is widely used for special applications such as infant examinations and pilot brain monitoring. In such applications, fNIRS data sometimes suffer from undesirable movements of the subject's head, called motion artifacts, which lead to signal corruption. Motion artifacts in fNIRS data may result in erroneous conclusions or diagnoses. In this work we try to reduce these artifacts with a novel Kalman filtering algorithm based on an autoregressive moving average (ARMA) model of the fNIRS system. Our proposed method does not require any additional hardware or sensors, nor does it need the whole data set at once, both of which were unavoidable necessities in older algorithms such as adaptive filtering and Wiener filtering. Results show that our approach is successful in cleaning contaminated fNIRS data.
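The appeal of a Kalman approach is that it runs sample-by-sample, with no extra sensors and no need for the full record. The paper's filter is built on an ARMA state-space model; the sketch below uses the simplest possible stand-in, a scalar random-walk state, just to show the predict/update recursion (parameter values are illustrative, not the paper's):

```python
def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model.
    q: process noise variance, r: measurement noise variance."""
    x, p, out = x0, p0, []
    for z in measurements:
        # predict: state assumed constant, uncertainty grows by q
        p = p + q
        # update with measurement z
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        out.append(x)
    return out
```

A motion spike in the input is heavily down-weighted by the gain, while the slow hemodynamic baseline passes through, which is the mechanism the paper exploits with a richer ARMA state.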
Mei, Xue; Li, Zhenhua; Xu, Songsong; Guo, Xiaoyan
2014-01-01
Multimodality image registration and fusion is of complementary significance for guiding dental implant surgery. To meet the need to register images of different resolutions, we develop an improved Iterative Closest Point (ICP) algorithm that focuses on the registration of Cone Beam Computed Tomography (CT) images and high-resolution blue-light scanner images. The proposed algorithm includes two major phases, coarse and precise registration. First, to reduce the matching interference of subjective human factors, we extract feature points based on curvature characteristics and use an improved three-point translational transformation method to realize coarse registration. Then, the feature point set and the reference point set, obtained by the initial registration transformation, are processed in the precise registration step. Even with unsatisfactory initial values, this two-step registration method can guarantee global convergence and convergence precision. Experimental results demonstrate that the method successfully registers the Cone Beam CT dental model and the blue-light scanner model with high accuracy. The method could thus provide a research foundation for relevant software development concerning the registration of multimodality medical data. PMID:24511309
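The core ICP loop is nearest-neighbour matching followed by a closed-form rigid-body fit. This 2-D pure-Python reduction (the authors work with 3-D CT and scanner models; the 2-D case and all names here are illustrative) shows why a good coarse registration matters: the nearest-neighbour step only finds the right correspondences when the initial pose is already close:

```python
import math

def best_fit_transform(src, dst):
    """Least-squares rigid transform (rotation + translation) taking
    matched 2-D points src onto dst (a 2-D Procrustes step)."""
    n = len(src)
    sx = sum(p[0] for p in src) / n; sy = sum(p[1] for p in src) / n
    dx = sum(p[0] for p in dst) / n; dy = sum(p[1] for p in dst) / n
    s = c = 0.0                       # dot / cross accumulators, centred
    for (ax, ay), (bx, by) in zip(src, dst):
        ax, ay, bx, by = ax - sx, ay - sy, bx - dx, by - dy
        s += ax * bx + ay * by
        c += ax * by - ay * bx
    theta = math.atan2(c, s)
    ct, st = math.cos(theta), math.sin(theta)
    return theta, (dx - (ct * sx - st * sy), dy - (st * sx + ct * sy))

def apply(theta, t, pts):
    ct, st = math.cos(theta), math.sin(theta)
    return [(ct * x - st * y + t[0], st * x + ct * y + t[1]) for x, y in pts]

def icp(src, dst, iters=20):
    """Point-to-point ICP: match each source point to its nearest
    destination point, then apply the best-fit rigid transform."""
    cur = list(src)
    for _ in range(iters):
        matched = [min(dst, key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2)
                   for p in cur]
        theta, t = best_fit_transform(cur, matched)
        cur = apply(theta, t, cur)
    return cur
```

The paper's coarse phase (curvature features plus a three-point transform) exists precisely to put the point sets inside this basin of convergence before the precise ICP phase runs.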
2013-01-01
Background Proteins are essential biological molecules which play vital roles in nearly all biological processes. It is the tertiary structure of a protein that determines its functions. Therefore the prediction of a protein's tertiary structure based on its primary amino acid sequence has long been the most important and challenging subject in biochemistry, molecular biology and biophysics. In the past, the HP lattice model was one of the ab initio methods that many researchers used to forecast the protein structure. Although these kinds of simplified methods could not achieve high resolution, they provided a macrocosm-optimized protein structure. The model has been employed to investigate general principles of protein folding, and plays an important role in the prediction of protein structures. Methods In this paper, we present an improved evolutionary algorithm for the protein folding problem. We study the problem on the 3D FCC lattice HP model which has been widely used in previous research. Our focus is to develop evolutionary algorithms (EA) which are robust, easy to implement and can handle various energy functions. We propose to combine three different local search methods, including lattice rotation for crossover, K-site move for mutation, and generalized pull move; these form our key components to improve previous EA-based approaches. Results We have carried out experiments over several data sets which were used in previous research. The results of the experiments show that our approach is able to find optimal conformations which were not found by previous EA-based approaches. Conclusions We have investigated the geometric properties of the 3D FCC lattice and developed several local search techniques to improve traditional EA-based approaches to the protein folding problem. It is known that EA-based approaches are robust and can handle arbitrary energy functions. Our results further show that by extensive development of local searches, EA can also be very
NASA Astrophysics Data System (ADS)
Tang, Shihua; Li, Feida; Liu, Yintao; Lan, Lan; Zhou, Conglin; Huang, Qing
2015-12-01
With the advantages of high speed, large transport capacity, low energy consumption, good economic benefits, and so on, high-speed railway is becoming more and more popular all over the world. It can reach 350 kilometers per hour, which requires high safety performance. Research on predicting high-speed railway settlement, one of the important factors affecting the safety of high-speed railway, therefore becomes particularly important. This paper uses a genetic algorithm to search the data for the best result and combines it with the strong learning ability and high accuracy of wavelet neural networks to build a genetic wavelet neural network model for the prediction of high-speed railway settlement. Experiments with a back-propagation neural network, a wavelet neural network, and the genetic wavelet neural network show that the absolute value of the residual errors in the prediction of high-speed railway settlement based on the genetic algorithm is the smallest, which proves that the genetic wavelet neural network is better than the other two methods. The correlation coefficient of predicted and observed values is 99.9%. Furthermore, the maximum absolute value of residual error, minimum absolute value of residual error, mean value of relative error, and root mean squared error (RMSE) predicted by the genetic wavelet neural network are all smaller than those of the other two methods. The genetic wavelet neural network is thus both more stable and more accurate in the prediction of high-speed railway settlement.
Beyer, Hans-Georg
2014-01-01
The convergence behaviors of so-called natural evolution strategies (NES) and of the information-geometric optimization (IGO) approach are considered. After a review of the NES/IGO ideas, which are based on information geometry, the implications of this philosophy w.r.t. optimization dynamics are investigated considering the optimization performance on the class of positive quadratic objective functions (the ellipsoid model). Exact differential equations describing the approach to the optimizer are derived and solved. It is rigorously shown that the original NES philosophy optimizing the expected value of the objective functions leads to very slow (i.e., sublinear) convergence toward the optimizer. This is the real reason why state of the art implementations of IGO algorithms optimize the expected value of transformed objective functions, for example, by utility functions based on ranking. It is shown that these utility functions are localized fitness functions that change during the IGO flow. The governing differential equations describing this flow are derived. In the case of convergence, the solutions to these equations exhibit an exponentially fast approach to the optimizer (i.e., linear convergence order). Furthermore, it is proven that the IGO philosophy leads to an adaptation of the covariance matrix that equals in the asymptotic limit-up to a scalar factor-the inverse of the Hessian of the objective function considered. PMID:24922548
NASA Astrophysics Data System (ADS)
An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu
2016-07-01
This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump has already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to improve its performance further with respect to two goals. In order to limit the number of design variables without losing geometric information, the impeller is parametrized using a Bézier curve and a B-spline. Numerical simulation based on a Reynolds-averaged Navier-Stokes (RANS) turbulence model is run in parallel to evaluate the flow field. A back-propagating neural network is constructed as a surrogate for performance prediction to save computing time, with initial samples selected according to an orthogonal array. Global Pareto-optimal solutions are then obtained and analysed. The results show that unwanted flow structures, such as the secondary flow on the meridian plane, have diminished or vanished in the optimized pump.
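NSGA-II's two-goal search hinges on Pareto dominance: a design survives into the first front if no other design beats it on both objectives at once. A minimal sketch of the dominance test and front ranking under a minimisation convention (this naive O(n²)-per-front version is for illustration; NSGA-II proper uses faster bookkeeping plus a crowding-distance tiebreak):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Rank points into successive Pareto fronts, as in NSGA-II's first phase."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

Selection then proceeds front by front, which is how the algorithm returns a whole set of Pareto-optimal pump shapes rather than a single compromise design.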
Knowledge-based function optimization using fuzzy cultural algorithms with evolutionary programming.
Reynolds, R G; Zhu, S
2001-01-01
In this paper, the advantages of a fuzzy representation in problem solving and search are investigated using the framework of cultural algorithms (CAs). Since all natural languages contain a fuzzy component, the natural question is "Does this fuzzy representation facilitate the problem-solving process within these systems?" In order to investigate this question we use the CA framework of Reynolds (1996). CAs are a computational model of cultural evolution derived from, and used to express, basic anthropological models of culture and its development. A mathematical model of a full fuzzy CA is developed here, in which the problem-solving knowledge is represented using a fuzzy framework. Several theoretical results concerning its properties are presented. The model is then applied to the solution of a set of 12 difficult benchmark problems in nonlinear real-valued function optimization. The performance of the full fuzzy model is compared with 8 other fuzzy and crisp architectures. The results suggest that a fuzzy approach can produce a statistically significant improvement in search efficiency over nonfuzzy versions for the entire set of functions. We then investigate the class of performance functions for which the full fuzzy system exhibits the greatest improvements over nonfuzzy systems. In general, these are functions which require some preliminary investigation in order to embark on an effective search. PMID:18244764
NASA Astrophysics Data System (ADS)
Arnaout, A.; Fruhwirth, R.; Winter, M.; Esmael, B.; Thonhauser, G.
2012-04-01
The use of neural networks and advanced machine learning techniques in the oil & gas industry is a growing trend in the market. Especially in drilling oil & gas wells, predicting and monitoring different drilling parameters is an essential task to prevent serious problems like "Kick", "Lost Circulation" or "Stuck Pipe", among others. The hookload represents the weight load of the drill string at the crane hook and is one of the most important parameters. During drilling, the parameter "Weight on Bit" is controlled by the driller, and the hookload is the only measure of how much weight on bit is applied to the bit to generate the hole. Any changes in weight on bit will be directly reflected in the hookload. Furthermore, any unwanted contact between the drill string and the wellbore - potentially leading to a stuck pipe problem - will appear directly in the measurements of the hookload. Therefore, comparison of the measured to the predicted hookload will not only give a clear idea of what is happening down-hole, it also enables the prediction of a number of important events that may cause problems in the borehole and result, in some fortunately rare cases, in catastrophes like blow-outs. Heuristic models using highly sophisticated neural networks were designed for hookload prediction; the training data sets were prepared in cooperation with drilling experts. Sensor measurements as well as a set of derived feature channels were used as input to the models. The contents of the final data set can be separated into (1) features based on rig operation states, (2) real-time sensor features and (3) features based on physics. A combination of a novel neural network architecture - the Completely Connected Perceptron - and parallel learning techniques which avoid trapping into local error minima was used for building the models. In addition, automatic network growing algorithms and highly sophisticated stopping criteria offer robust and efficient estimation of the
NASA Astrophysics Data System (ADS)
Smith, R.; Kasprzyk, J. R.; Zagona, E. A.
2015-12-01
Instead of building new infrastructure to increase their supply reliability, water resource managers are often tasked with better management of current systems. The managers often have existing simulation models that aid their planning, but lack methods for efficiently generating and evaluating planning alternatives. This presentation discusses how multiobjective evolutionary algorithm (MOEA) decision support can be used with the sophisticated water infrastructure model, RiverWare, in highly constrained water planning environments. We first discuss a study that performed a many-objective tradeoff analysis of water supply in the Tarrant Regional Water District (TRWD) in Texas. RiverWare is combined with the Borg MOEA to solve a seven-objective problem that includes systemwide performance objectives and individual reservoir storage reliability. Decisions within the formulation balance supply in multiple reservoirs and control pumping between the eastern and western parts of the system. The RiverWare simulation model is forced by two stochastic hydrology scenarios to explore how management changes in wet versus dry conditions. The second part of the presentation suggests how a broader set of RiverWare-MOEA studies can inform tradeoffs in other systems, especially in political situations where multiple actors are in conflict over finite water resources. By incorporating quantitative representations of diverse parties' objectives during the search for solutions, MOEAs may provide support for negotiations and lead to more widely beneficial water management outcomes.
NASA Astrophysics Data System (ADS)
Grimminck, Dennis L. A. G.; Polman, Ben J. W.; Kentgens, Arno P. M.; Leo Meerts, W.
2011-08-01
A fast and accurate fit program is presented for deconvolution of one-dimensional solid-state quadrupolar NMR spectra of powdered materials. Computational costs of the synthesis of theoretical spectra are reduced by the use of libraries containing simulated time/frequency domain data. These libraries are calculated once, with the use of second-party simulation software readily available in the NMR community, to ensure maximum flexibility and accuracy with respect to experimental conditions. EASY-GOING deconvolution (EGdeconv) is equipped with evolutionary algorithms that provide robust many-parameter fitting, and it offers efficient parallelised computing. The program supports quantification of relative chemical site abundances and (dis)order in the solid state by incorporation of (extended) Czjzek and order parameter models. To illustrate EGdeconv's current capabilities, we provide three case studies. Given the program's simple concept, it allows a straightforward extension to include other NMR interactions. The program is available as is for 64-bit Linux operating systems.
Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P
2015-11-01
Determining the activity concentration of a specific radionuclide in a sample by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) for the energy of interest. The difficulties related to experimental calibration make it advisable to have alternative methods for FEPE determination, such as simulating the transport of photons in the crystal by the Monte Carlo method, which requires an accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the parameters characterizing the detector through a computational procedure which could be reproduced at a standard research lab. This method consists in determining the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which have been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. PMID:26188622
NASA Astrophysics Data System (ADS)
Ott, Julien G.; Becce, Fabio; Monnin, Pascal; Schmidt, Sabine; Bochud, François O.; Verdun, Francis R.
2014-08-01
The state of the art for describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit under various acquisition conditions. The NPW model observer usually requires the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to accurately express the system resolution even with non-linear algorithms, we decided to tune the NPW model observer by replacing the standard MTF with the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition, and mathematical transformations of these data led to the TTF. As expected, the first results showed a dependency of the TTF on image contrast and noise levels for both ASIR and MBIR. Moreover, FBP also proved to be dependent on contrast and noise when using the lung kernel. These results were then introduced into the NPW model observer. We observed an enhancement of the SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
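With the TTF standing in for the MTF, the NPW figure of merit reduces to a ratio of integrals over the task function W and the noise power spectrum. A plausible form, consistent with standard task-based image-quality assessment (shown only as a reminder of the ingredients; the abstract itself does not print the formula):

```latex
% NPW detectability with the TTF replacing the usual MTF:
d'^2_{\mathrm{NPW}} =
  \frac{\left[\iint W^2(u,v)\,\mathrm{TTF}^2(u,v)\,du\,dv\right]^2}
       {\iint W^2(u,v)\,\mathrm{TTF}^2(u,v)\,\mathrm{NPS}(u,v)\,du\,dv}
```

The numerator carries the resolution-weighted task signal and the denominator the noise seen through the same template, which is why both the TTF and the NPS measurements above are needed before the SNR comparison across FBP, ASIR, and MBIR can be made.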
NASA Astrophysics Data System (ADS)
Kotegawa, Tatsuya
Complexity in the Air Transportation System (ATS) arises from the intermingling of many independent physical resources, operational paradigms, and stakeholder interests, as well as the dynamic variation of these interactions over time. Currently, trade-offs and cost-benefit analyses of new ATS concepts are carried out with system-wide evaluation simulations driven by air traffic forecasts that assume fixed airline routes. However, this does not reflect reality well, as airlines regularly add and remove routes. An airline service route network evolution model that projects route addition and removal was created and combined with state-of-the-art air traffic forecast methods to better reflect the dynamic properties of the ATS in system-wide simulations. Guided by a system-of-systems framework, network theory metrics and machine learning algorithms were applied to develop the route network evolution models based on patterns extracted from historical data. Constructing the route addition section of the model posed the greatest challenge due to the large pool of new link candidates compared to the actual number of routes historically added to the network. Of the models explored, algorithms based on logistic regression, random forests, and support vector machines showed the best route addition and removal forecast accuracies at approximately 20% and 40%, respectively, when validated with historical data. The combination of network evolution models and a system-wide evaluation tool quantified the impact of airline route network evolution on air traffic delay. The expected delay minutes when considering network evolution increased approximately 5% for a forecasted schedule on 3/19/2020. Performance trade-off studies between several airline route network topologies from the perspectives of passenger travel efficiency, fuel burn, and robustness were also conducted to provide bounds that could serve as targets for ATS transformation efforts. The series of analyses revealed that high
Wang, Lilie; Ding, George X
2014-07-01
The out-of-field dose can be clinically important as it relates to the dose of the organ-at-risk, although the accuracy of its calculation in commercial radiotherapy treatment planning systems (TPSs) receives less attention. This study evaluates the uncertainties of out-of-field dose calculated with a model based dose calculation algorithm, anisotropic analytical algorithm (AAA), implemented in a commercial radiotherapy TPS, Varian Eclipse V10, by using Monte Carlo (MC) simulations, in which the entire accelerator head is modeled including the multi-leaf collimators. The MC calculated out-of-field doses were validated by experimental measurements. The dose calculations were performed in a water phantom as well as CT based patient geometries and both static and highly modulated intensity-modulated radiation therapy (IMRT) fields were evaluated. We compared the calculated out-of-field doses, defined as lower than 5% of the prescription dose, in four H&N cancer patients and two lung cancer patients treated with volumetric modulated arc therapy (VMAT) and IMRT techniques. The results show that the discrepancy of calculated out-of-field dose profiles between AAA and the MC depends on the depth and is generally less than 1% for in water phantom comparisons and in CT based patient dose calculations for static field and IMRT. In cases of VMAT plans, the difference between AAA and MC is <0.5%. The clinical impact resulting from the error on the calculated organ doses were analyzed by using dose-volume histograms. Although the AAA algorithm significantly underestimated the out-of-field doses, the clinical impact on the calculated organ doses in out-of-field regions may not be significant in practice due to very low out-of-field doses relative to the target dose. PMID:24925858
Clausen, Rudy; Ma, Buyong; Nussinov, Ruth; Shehu, Amarda
2015-01-01
An important goal in molecular biology is to understand functional changes upon single-point mutations in proteins. Doing so through a detailed characterization of structure spaces and underlying energy landscapes is desirable but continues to challenge methods based on Molecular Dynamics. In this paper we propose a novel algorithm, SIfTER, which is based instead on stochastic optimization to circumvent the computational challenge of exploring the breadth of a protein’s structure space. SIfTER is a data-driven evolutionary algorithm, leveraging experimentally-available structures of wildtype and variant sequences of a protein to define a reduced search space from where to efficiently draw samples corresponding to novel structures not directly observed in the wet laboratory. The main advantage of SIfTER is its ability to rapidly generate conformational ensembles, thus allowing mapping and juxtaposing landscapes of variant sequences and relating observed differences to functional changes. We apply SIfTER to variant sequences of the H-Ras catalytic domain, due to the prominent role of the Ras protein in signaling pathways that control cell proliferation, its well-studied conformational switching, and abundance of documented mutations in several human tumors. Many Ras mutations are oncogenic, but detailed energy landscapes have not been reported until now. Analysis of SIfTER-computed energy landscapes for the wildtype and two oncogenic variants, G12V and Q61L, suggests that these mutations cause constitutive activation through two different mechanisms. G12V directly affects binding specificity while leaving the energy landscape largely unchanged, whereas Q61L has pronounced, starker effects on the landscape. An implementation of SIfTER is made available at http://www.cs.gmu.edu/~ashehu/?q=OurTools. We believe SIfTER is useful to the community to answer the question of how sequence mutations affect the function of a protein, when there is an abundance of experimental
Huang, Lei; Liao, Li; Wu, Cathy H.
2016-01-01
Revealing the underlying evolutionary mechanism plays an important role in understanding protein interaction networks in the cell. While many evolutionary models have been proposed, applying these models to real network data, and especially determining which model better describes the evolutionary process behind an observed network, remains a challenge. The traditional way is to use a model with presumed parameters to generate a network and then evaluate the fitness by summary statistics, which however cannot capture the complete network structure information or estimate the parameter distribution. In this work we developed a novel method based on Approximate Bayesian Computation and modified Differential Evolution (ABC-DEP) that is capable of conducting model selection and parameter estimation simultaneously and of detecting the underlying evolutionary mechanisms more accurately. We tested our method for its power in differentiating models and estimating parameters on simulated data and found significant improvement in the performance benchmark compared with a previous method. We further applied our method to real data of protein interaction networks in human and yeast. Our results show the Duplication Attachment model as the predominant evolutionary mechanism for human PPI networks and the Scale-Free model as the predominant mechanism for yeast PPI networks. PMID:26357273
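ABC-DEP layers differential-evolution moves on top of the basic ABC idea. The basic idea alone is easy to sketch: draw parameters from the prior, simulate, and keep the draws whose summary statistic lands close to the observed one (the Bernoulli toy simulator below is illustrative; the paper's simulators are network-growth models and its comparison captures full network structure):

```python
import random

def abc_rejection(observed_stat, simulate, prior_sample, tol,
                  n_draws=2000, rng=random):
    """Approximate Bayesian Computation by rejection: keep parameter draws
    whose simulated summary statistic lies within tol of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed_stat) <= tol:
            accepted.append(theta)
    return accepted

def simulate_bernoulli(theta, rng):
    """Toy simulator: fraction of successes in 100 Bernoulli(theta) trials."""
    return sum(rng.random() < theta for _ in range(100)) / 100
```

The accepted draws approximate the posterior, so model selection can proceed by comparing acceptance across competing simulators; the DEP refinement replaces blind prior sampling with evolutionary proposal moves.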
Guturu, Parthasarathy; Dantu, Ram
2008-06-01
Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by the researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) to the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of the maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite. PMID:18558530
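The unified approach works because these problems reduce cleanly to maximum clique: for example, an independent set in G is exactly a clique in G's complement. The sketch below shows that reduction, with a brute-force clique finder standing in for the IEA-PTS (exhaustive search is fine only for tiny graphs; all names are illustrative):

```python
from itertools import combinations

def max_clique_bruteforce(n, edges):
    """Largest clique in an n-vertex graph (exhaustive; tiny n only)."""
    adj = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):
        for cand in combinations(range(n), size):
            if all(frozenset(p) in adj for p in combinations(cand, 2)):
                return set(cand)
    return set()

def max_independent_set(n, edges):
    """Reduce maximum independent set to maximum clique on the complement
    graph, mirroring the unified mapping described above."""
    present = {frozenset(e) for e in edges}
    complement = [(u, v) for u, v in combinations(range(n), 2)
                  if frozenset((u, v)) not in present]
    return max_clique_bruteforce(n, complement)
```

Minimum vertex cover follows for free, since the complement of a maximum independent set is a minimum vertex cover; in the paper the same mapping funnels all seven problems into one clique solver.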
Patton, Robert M; Cui, Xiaohui; Jiao, Yu; Potok, Thomas E
2008-01-01
The rate at which information overwhelms humans is significantly greater than the rate at which humans have learned to process, analyze, and leverage this information. To overcome this challenge, new methods of computing must be formulated, and scientists and engineers have looked to nature for inspiration in developing these new methods. Consequently, evolutionary computing has emerged as a new paradigm for computing, and has rapidly demonstrated its ability to solve real-world problems where traditional techniques have failed. This field of work has now become quite broad and encompasses areas ranging from artificial life to neural networks. This chapter focuses specifically on two sub-areas of nature-inspired computing: evolutionary algorithms and swarm intelligence.
Cao, Buwen; Luo, Jiawei; Liang, Cheng; Wang, Shulin; Song, Dan
2015-10-01
The identification of protein complexes in protein-protein interaction (PPI) networks has greatly advanced our understanding of biological organisms. Existing computational methods to detect protein complexes are usually based on specific network topological properties of PPI networks. However, due to the inherent complexity of the network structures, the identification of protein complexes may not be fully addressed by a single network topological property. In this study, we propose a novel MultiObjective Evolutionary Programming Genetic Algorithm (MOEPGA), which integrates multiple network topological features to detect biologically meaningful protein complexes. Our approach first systematically analyzes the multiobjective problem in terms of identifying protein complexes from PPI networks, then constructs the objective function of the iterative algorithm based on three common topological properties of protein complexes from the benchmark dataset, and finally describes our algorithm, which mainly consists of three steps: population initialization, subgraph mutation, and subgraph selection. To show the utility of our method, we compared MOEPGA with several state-of-the-art algorithms on two yeast PPI datasets. The experimental results demonstrate that the proposed method can not only find more protein complexes but also achieve higher accuracy in terms of F-score. Moreover, our approach can cover a certain number of proteins in the input PPI network in terms of the normalized clustering score. Taken together, our method can serve as a powerful framework to detect protein complexes in yeast PPI networks, thereby facilitating the identification of the underlying biological functions. PMID:26298638
Samei, Ehsan; Richard, Samuel
2015-01-15
Purpose: Different computed tomography (CT) reconstruction techniques offer different image quality attributes of resolution and noise, challenging the ability to compare their dose reduction potential against each other. The purpose of this study was to evaluate and compare the task-based imaging performance of CT systems to enable the assessment of the dose performance of a model-based iterative reconstruction (MBIR) against that of an adaptive statistical iterative reconstruction (ASIR) and a filtered back projection (FBP) technique. Methods: The ACR CT phantom (model 464) was imaged across a wide range of mA settings on a 64-slice CT scanner (GE Discovery CT750 HD, Waukesha, WI). Based on previous work, resolution was evaluated in terms of a task-based modulation transfer function (MTF) using a circular-edge technique and images from the contrast inserts located in the ACR phantom. Noise performance was assessed in terms of the noise-power spectrum (NPS) measured from the uniform section of the phantom. The task-based MTF and NPS were combined with a task function to yield a task-based estimate of imaging performance, the detectability index (d′). The detectability index was computed as a function of dose for two imaging tasks corresponding to the detection of a relatively small and a relatively large feature (1.5 and 25 mm, respectively). The performance of MBIR in terms of the d′ was compared with that of ASIR and FBP to assess its dose reduction potential. Results: Results indicated that MBIR exhibits variable spatial resolution with respect to object contrast and noise while significantly reducing image noise. The NPS measurements for MBIR indicated a noise texture with a low-pass quality compared to the typical midpass noise found in FBP-based CT images. At comparable dose, the d′ for MBIR was higher than those of FBP and ASIR by at least 61% and 19% for the small feature and the large feature tasks, respectively. Compared to FBP and ASIR, MBIR
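The detectability index in the abstract combines the task function, MTF, and NPS. One common form is the non-prewhitening (NPW) model observer; the sketch below evaluates it on radially sampled data with illustrative, made-up curves (the paper's exact observer model and measured data are not reproduced here).

```python
import numpy as np

def detectability_npw(f, task_w, mtf, nps):
    """Non-prewhitening detectability index d' from 1-D radial samples;
    the 2D frequency integrals reduce to 1D with weight 2*pi*f:
    d'^2 = [int W^2 MTF^2 df]^2 / int W^2 MTF^2 NPS df."""
    df = f[1] - f[0]                       # assumes uniform spacing
    w2m2 = (task_w * mtf) ** 2 * 2.0 * np.pi * f
    signal = np.sum(w2m2) * df
    noise = np.sum(w2m2 * nps) * df
    return signal / np.sqrt(noise)

f = np.linspace(0.01, 1.0, 200)            # spatial frequency, cycles/mm
task = np.exp(-(f / 0.2) ** 2)             # illustrative large-feature task
mtf = np.exp(-3.0 * f)                     # illustrative system MTF
nps = f * np.exp(-2.0 * f)                 # illustrative mid-pass NPS
d_prime = detectability_npw(f, task, mtf, nps)
```

Because d′ scales as the inverse square root of the noise magnitude, scaling the NPS down by a factor of 4 (e.g., a lower-noise reconstruction at the same dose) doubles d′.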
Rogošić, Marko; Šimović, Ena; Tišler, Vesna; Bolanča, Tomislav
2013-01-01
Gradient ion chromatography was used for the separation of eight sugars: arabitol, cellobiose, fructose, fucose, lactulose, melibiose, N-acetyl-D-glucosamine, and raffinose. The separation method was optimized using a combination of a simplex or genetic algorithm with isocratic-to-gradient retention modeling. Both the simplex and genetic algorithms provided well-separated chromatograms in a similar analysis time. However, the simplex methodology showed severe drawbacks when dealing with local minima. Thus, the genetic algorithm methodology proved to be the method of choice for gradient optimization in this case. All the calculated/predicted chromatograms were compared with the real sample data, showing more than satisfactory agreement. PMID:24349824
NASA Astrophysics Data System (ADS)
Baturin, V. S.; Lepeshkin, S. V.; Matsko, N. L.; Uspenskii, Yu A.
2016-02-01
We investigate the structural and thermodynamic properties of small silicon clusters. Using graph theory applied to previously obtained structures of Si10H2m clusters, we trace the connection between geometry and passivation degree. The existing data on these clusters and the structures of Si10O4n clusters obtained here using evolutionary calculations allowed us to analyze the features of Si10H2m clusters in a hydrogen atmosphere and Si10O4n clusters in an oxygen atmosphere. We show the basic differences between the structures and thermodynamic properties of hydrogen-passivated silicon clusters and silicon oxide clusters.
Pan, Indranil; Das, Saptarshi; Gupta, Amitava
2011-10-01
The issues of stochastically varying network delays and packet dropouts in Networked Control System (NCS) applications have been simultaneously addressed by time-domain optimal tuning of fractional-order (FO) PID controllers. Different variants of evolutionary algorithms are used for the tuning process and their performances are compared. The effectiveness of fractional-order PI(λ)D(μ) controllers over their integer-order counterparts is also examined. Two standard test-bench plants with time delay and unstable poles, as encountered in process control applications, are tuned with the proposed method to establish the validity of the tuning methodology. The proposed tuning methodology is independent of the specific choice of plant and is also applicable to less complicated systems, making it useful in a wide variety of scenarios. The paper also shows the superiority of FOPID controllers over their conventional PID counterparts for NCS applications. PMID:21621208
NASA Astrophysics Data System (ADS)
Tillett, Jason C.; Rao, Raghuveer; Sahin, Ferat; Rao, T. M.
2004-08-01
When wireless sensors are capable of variable transmit power and are battery powered, it is important to select the appropriate transmit power level for each node. Lowering the transmit power of the sensor nodes imposes a natural clustering on the network and has been shown to improve network throughput. However, a common transmit power level is not appropriate for inhomogeneous networks. A fitness-based approach, motivated by the evolutionary optimization technique of Particle Swarm Optimization (PSO), is proposed and extended in a novel way to determine the appropriate transmit power of each sensor node. A distributed version of PSO is developed and explored using experimental fitness to achieve an approximation of least-cost connectivity.
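The fitness-based idea can be sketched with a minimal global-best PSO minimizing a hypothetical per-node power cost; the paper's contribution is a distributed PSO variant, which this sketch does not implement, and the cost function and threshold below are made up for illustration.

```python
import random

random.seed(0)  # deterministic illustration

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=0.0, hi=1.0):
    """Minimal global-best PSO (synchronous, centralized)."""
    X = [[random.uniform(lo, hi) for _ in range(dim)]
         for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                    # personal best positions
    pbest = [cost(x) for x in X]             # personal best costs
    g = min(range(n_particles), key=pbest.__getitem__)
    G, gbest = P[g][:], pbest[g]             # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            c = cost(X[i])
            if c < pbest[i]:
                P[i], pbest[i] = X[i][:], c
                if c < gbest:
                    G, gbest = X[i][:], c
    return G, gbest

# Hypothetical node cost: total transmit power plus a penalty whenever a
# node's power drops below a made-up connectivity threshold of 0.2.
def cost(powers):
    return sum(powers) + sum(10.0 for p in powers if p < 0.2)

best, val = pso(cost, dim=5)
```

The swarm drives each node's power toward the lowest level that still keeps the (here, fake) connectivity penalty at zero.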
NASA Astrophysics Data System (ADS)
Kim, Young-Joon
2000-09-01
An algorithm is introduced to remove the directional ambiguities in ocean surface winds measured by scatterometers, which requires scatterometer data only. It is based on two versions of PBL (planetary boundary layer) models and a low-pass filter. A pressure field is first derived from the median-filtered scatterometer winds, is then noise-filtered, and is finally converted back to the winds, respectively, by an inverted PBL model, a smoothing algorithm, and a PBL model. The derived wind field is used to remove the directional ambiguities in the scatterometer data. This new algorithm is applied to Hurricane Eugene and produces results comparable to those from the current standard ambiguity removal algorithm for NASA/JPL SeaWinds project, which requires external numerical weather forecast/analyses data.
NASA Astrophysics Data System (ADS)
Primorac, E.; Kuhlenbeck, H.; Freund, H.-J.
2016-07-01
The structure of a thin MoO3 layer on Au(111) with a c(4 × 2) superstructure was studied with LEED I/V analysis. As proposed previously (Quek et al., Surf. Sci. 577 (2005) L71), the atomic structure of the layer is similar to that of a MoO3 single layer as found in regular α-MoO3. The layer on Au(111) has a glide plane parallel to the short unit vector of the c(4 × 2) unit cell, and the molybdenum atoms are bridge-bonded to two surface gold atoms, with the structure of the gold surface being slightly distorted. The structural refinement was performed with the CMA-ES evolutionary strategy algorithm, which reached a Pendry R-factor of ∼ 0.044. In the second part, the performance of CMA-ES is compared with that of the differential evolution method, a genetic algorithm, and the Powell optimization algorithm, employing I/V curves calculated with tensor LEED.
NASA Astrophysics Data System (ADS)
Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre
2010-06-01
The use of numerical simulation of material processing allows a trial-and-error strategy to improve virtual processes without incurring material costs or interrupting production, and therefore saves considerable money, but it requires user time to analyze the results, adjust the operating conditions, and restart the simulation. Automatic optimization is the perfect complement to simulation. An Evolutionary Algorithm coupled with metamodelling makes it possible to obtain industrially relevant results over a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners have been selected to cover the different areas of the mechanical forging industry and provide different examples of forming simulation tools, with the aim of demonstrating that such results are achievable across this range of applications. The large computational time is handled by a metamodel approach, which interpolates the objective function over the entire parameter space from the exact function values at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel, and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization. The set of solutions corresponding to the best possible compromises between the different objectives is then computed in the same way. The population-based approach exploits the parallel capabilities of the utilized computer with high efficiency. An optimization module, fully embedded within the Forge2009 IHM, makes it possible to cover all the defined examples, and the use of new multi-core hardware to compute several simulations at the same time reduces the needed time dramatically. The presented examples
NASA Astrophysics Data System (ADS)
Shen, Xin; Zhang, Jing; Yao, Huang
2015-12-01
Remote sensing satellites play an increasingly prominent role in environmental monitoring and disaster rescue. Taking advantage of nearly identical sunshine conditions over a given place and of global coverage, most of these satellites are operated in sun-synchronous orbits. However, this inevitably brings some problems; the most significant is that the temporal resolution of a sun-synchronous satellite cannot satisfy the demands of specific region-monitoring missions. To overcome these disadvantages, two methods are exploited: the first is to build a satellite constellation containing multiple sun-synchronous satellites, as the CHARTER mechanism has done; the second is to design a non-predetermined orbit based on the concrete mission demand. An effective method for remote sensing satellite orbit design based on a multi-objective evolutionary algorithm is presented in this paper. The orbit design problem is converted into a multi-objective optimization problem, and a fast and elitist multi-objective genetic algorithm is utilized to solve it. First, the demands of the mission are transformed into multiple objective functions, and the six orbit elements of the satellite are taken as genes in the design space; then a simulated evolution process is performed. An optimal solution can be obtained after a specified number of generations via the evolution operations (selection, crossover, and mutation). To examine the validity of the proposed method, a case study is introduced: orbit design of an optical satellite for regional disaster monitoring, where the mission demand includes minimizing the average revisit time as one of two objectives. The simulation result shows that the solution obtained by our method meets the users' demand. We conclude that the method presented in this paper is efficient for remote sensing orbit design.
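The "fast and elitist multi-objective genetic algorithm" referenced here is built on non-dominated sorting, whose core operation is identifying the Pareto front of a population. A minimal sketch of that operation (for minimization objectives; the orbit-design objectives themselves are not modeled here):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective (minimization)
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy population of (objective-1, objective-2) values, e.g.
# (revisit time, some second mission cost) -- purely illustrative.
pts = [(1, 5), (2, 2), (3, 1), (4, 4), (2, 3)]
front = pareto_front(pts)   # (4, 4) and (2, 3) are dominated by (2, 2)
```

NSGA-II repeats this sorting on the remaining points to rank the whole population into successive fronts, then uses crowding distance to break ties within a front.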
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Saxen, Abhinav; Goebel, Kai
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.
2014-01-01
Background The ability of science to produce experimental data has outpaced the ability to effectively visualize and integrate the data into a conceptual framework that can further higher-order understanding. Multidimensional and shape-based observational data of regenerative biology present a particularly daunting challenge in this regard. Large amounts of data are available in regenerative biology, but little progress has been made in understanding how organisms such as planaria robustly achieve and maintain body form. An example of this kind of data can be found in a new repository (PlanformDB) that encodes descriptions of planaria experiments and morphological outcomes using a graph formalism. Results We are developing a model discovery framework that uses a cell-based modeling platform combined with evolutionary search to automatically search for and identify plausible mechanisms for the biological behavior described in PlanformDB. To automate the evolutionary search, we developed a way to compare the output of the modeling platform to the morphological descriptions stored in PlanformDB. We used a flexible connected-component algorithm to create a graph representation of the virtual worm from the robust, cell-based simulation data. These graphs can then be validated and compared with target data from PlanformDB using the well-known graph edit distance calculation, which provides a quantitative metric of similarity between graphs. The graph edit distance calculation was integrated into a fitness function that was able to guide automated searches for unbiased models of planarian regeneration. We present a cell-based model of planaria that can regenerate anatomical regions following bisection of the organism, and show that the automated model discovery framework is capable of searching for and finding models of planarian regeneration that match experimental data stored in PlanformDB. Conclusion The work presented here, including our algorithm for converting cell
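The fitness computation described above, comparing a simulated-worm graph against a PlanformDB target graph via graph edit distance, can be sketched with a brute-force unit-cost version that is feasible only for tiny graphs; the paper's actual algorithm and edit costs may differ, and all names here are illustrative.

```python
from itertools import permutations

def graph_edit_distance(n1, e1, n2, e2):
    """Brute-force graph edit distance for tiny graphs, with unit costs
    for node insertion/deletion and edge insertion/deletion. A graph is
    given as a node count and a set of undirected edges on 0..n-1."""
    if n1 > n2:                               # always embed smaller into larger
        n1, e1, n2, e2 = n2, e2, n1, e1
    best = float("inf")
    for perm in permutations(range(n2), n1):  # injective node mapping
        mapped = {frozenset((perm[u], perm[v])) for u, v in e1}
        real = {frozenset(e) for e in e2}
        cost = (n2 - n1) + len(mapped ^ real)  # node inserts + edge edits
        best = min(best, cost)
    return best

def fitness(sim_graph, target_graph):
    """Hypothetical fitness: smaller edit distance = better model."""
    return -graph_edit_distance(*sim_graph, *target_graph)

# Triangle vs 3-node path: one edge deletion, so distance 1.
d = graph_edit_distance(3, {(0, 1), (1, 2), (0, 2)}, 3, {(0, 1), (1, 2)})
```

Exact GED is NP-hard, which is why practical pipelines use approximate or bounded GED solvers rather than this factorial-time enumeration.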
Hastie, David I.; Zeller, Tanja; Liquet, Benoit; Newcombe, Paul; Yengo, Loic; Wild, Philipp S.; Schillert, Arne; Ziegler, Andreas; Nielsen, Sune F.; Butterworth, Adam S.; Ho, Weang Kee; Castagné, Raphaële; Munzel, Thomas; Tregouet, David; Falchi, Mario; Cambien, François; Nordestgaard, Børge G.; Fumeron, Fredéric; Tybjærg-Hansen, Anne; Froguel, Philippe; Danesh, John; Petretto, Enrico; Blankenberg, Stefan; Tiret, Laurence; Richardson, Sylvia
2013-01-01
Genome-wide association studies (GWAS) yielded significant advances in defining the genetic architecture of complex traits and disease. Still, a major hurdle of GWAS is narrowing down multiple genetic associations to a few causal variants for functional studies. This becomes critical in multi-phenotype GWAS where detection and interpretability of complex SNP(s)-trait(s) associations are complicated by complex Linkage Disequilibrium patterns between SNPs and correlation between traits. Here we propose a computationally efficient algorithm (GUESS) to explore complex genetic-association models and maximize genetic variant detection. We integrated our algorithm with a new Bayesian strategy for multi-phenotype analysis to identify the specific contribution of each SNP to different trait combinations and study genetic regulation of lipid metabolism in the Gutenberg Health Study (GHS). Despite the relatively small size of GHS (n = 3,175), when compared with the largest published meta-GWAS (n>100,000), GUESS recovered most of the major associations and was better at refining multi-trait associations than alternative methods. Amongst the new findings provided by GUESS, we revealed a strong association of SORT1 with TG-APOB and LIPC with TG-HDL phenotypic groups, which were overlooked in the larger meta-GWAS and not revealed by competing approaches, associations that we replicated in two independent cohorts. Moreover, we demonstrated the increased power of GUESS over alternative multi-phenotype approaches, both Bayesian and non-Bayesian, in a simulation study that mimics real-case scenarios. We showed that our parallel implementation based on Graphics Processing Units outperforms alternative multi-phenotype methods. Beyond multivariate modelling of multi-phenotypes, our Bayesian model employs a flexible hierarchical prior structure for genetic effects that adapts to any correlation structure of the predictors and increases the power to identify associated variants. This
NASA Astrophysics Data System (ADS)
Rodrigo, Deepal
2007-12-01
This dissertation introduces a novel approach for optimally operating a day-ahead electricity market not only by economically dispatching the generation resources but also by minimizing the influences of market manipulation attempts by the individual generator-owning companies while ensuring that the power system constraints are not violated. Since economic operation of the market conflicts with the individual profit maximization tactics such as market manipulation by generator-owning companies, a methodology that is capable of simultaneously optimizing these two competing objectives has to be selected. Although numerous previous studies have been undertaken on the economic operation of day-ahead markets and other independent studies have been conducted on the mitigation of market power, the operation of a day-ahead electricity market considering these two conflicting objectives simultaneously has not been undertaken previously. These facts provided the incentive and the novelty for this study. A literature survey revealed that many of the traditional solution algorithms convert multi-objective functions into either a single-objective function using weighting schemas or undertake optimization of one function at a time. Hence, these approaches do not truly optimize the multi-objectives concurrently. Due to these inherent deficiencies of the traditional algorithms, the use of alternative non-traditional solution algorithms for such problems has become popular and widely used. Of these, multi-objective evolutionary algorithms (MOEA) have received wide acceptance due to their solution quality and robustness. In the present research, three distinct algorithms were considered: a non-dominated sorting genetic algorithm II (NSGA II), a multi-objective tabu search algorithm (MOTS) and a hybrid of multi-objective tabu search and genetic algorithm (MOTS/GA). The accuracy and quality of the results from these algorithms for applications similar to the problem investigated here
NASA Astrophysics Data System (ADS)
Ayala, Helon Vicente Hultmann; Coelho, Leandro dos Santos
2016-02-01
The present work introduces a procedure for input selection and parameter estimation in system identification based on Radial Basis Function Neural Network (RBFNN) models with an improved objective function based on the residuals and their correlation function coefficients. We show the results when the proposed methodology is applied to model a magnetorheological damper, with real acquired data, and two other well-known benchmarks. The canonical genetic and differential evolution algorithms are used in cascade to decompose the problem of defining the lags taken as the inputs of the model and its related parameters, based on the simultaneous minimization of the residuals and higher-order correlation functions. The inner layer of the cascaded approach is composed of a population that represents the lags on the inputs and outputs of the system, and an outer layer represents the corresponding parameters of the RBFNN. The approach is able to define both the inputs of the model and its parameters. This is valuable because it frees the designer from the manual procedures, time consuming and prone to error, usually used to define the model inputs. We compare the proposed methodology with other works found in the literature, showing overall better results for the cascaded approach.
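An objective that penalizes both the residual magnitude and residual autocorrelation, in the spirit of the abstract, can be sketched as follows; the weighting, lag count, and normalization here are assumptions, not the paper's exact formulation.

```python
import numpy as np

def identification_objective(residuals, max_lag=5, alpha=0.5):
    """Hypothetical composite objective: mean squared residual plus the
    summed absolute normalized autocorrelation of the residuals over the
    first max_lag lags, so dynamics left unexplained in the residuals
    (which show up as autocorrelation) are penalized."""
    e = np.asarray(residuals, dtype=float)
    e = e - e.mean()
    mse = float(np.mean(e ** 2))
    denom = float(np.sum(e ** 2))
    acorr = [abs(float(np.sum(e[:-k] * e[k:])) / denom)
             for k in range(1, max_lag + 1)]
    return mse + alpha * sum(acorr)

rng = np.random.default_rng(0)
white = rng.standard_normal(500)                     # uncorrelated residuals
sine = np.sqrt(2.0) * np.sin(0.1 * np.arange(500))   # correlated residuals
# Despite similar variance, the correlated residuals score worse, which is
# what drives the search toward models that capture all the dynamics.
```

A plain MSE objective would rate these two residual sequences almost identically; the correlation term is what distinguishes them.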
NASA Astrophysics Data System (ADS)
Moradi, M.; Delavar, M. R.; Moradi, A.
2015-12-01
Earthquakes can seriously damage buildings and urban facilities and cause road blockage. Post-earthquake route planning is a problem that has been addressed in numerous studies. The main aim of this research is to present a post-earthquake route planning model. It is assumed in this research that no damage data are available. The presented model tries to find the optimum route based on a number of contributing factors that mainly indicate the length, width, and safety of the road. The safety of the road is represented by a number of criteria such as distance to faults, percentage of non-standard buildings, and percentage of high buildings around the route. An integration of a genetic algorithm and the ordered weighted averaging operator is employed in the model. The former searches the problem space among all alternatives, while the latter aggregates the scores of road segments to compute an overall score for each alternative. The ordered weighted averaging operator enables the users of the system to evaluate alternative routes based on their decision strategy. Based on the proposed model, an optimistic user tries to find the shortest path between the two points, whereas a pessimistic user pays more attention to safety parameters even if this entails a longer route. The results show that the decision strategy can considerably alter the optimum route. Moreover, post-earthquake route planning is a function not only of the length of the route but also of the probability of road blockage.
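The ordered weighted averaging (OWA) operator at the heart of the model applies its weights not to fixed criteria but to the scores after sorting, which is exactly how the weight vector encodes optimism or pessimism. A minimal sketch (the per-criterion scores are made up):

```python
def owa(scores, weights):
    """Ordered weighted averaging: weights apply to the scores after
    sorting them in descending order; weights must sum to 1."""
    assert len(scores) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * s for w, s in zip(weights, sorted(scores, reverse=True)))

segment_scores = [0.9, 0.4, 0.7]          # hypothetical per-criterion scores
optimistic = owa(segment_scores, [1.0, 0.0, 0.0])    # -> max = 0.9
pessimistic = owa(segment_scores, [0.0, 0.0, 1.0])   # -> min = 0.4
neutral = owa(segment_scores, [1/3, 1/3, 1/3])       # -> plain mean
```

Putting the weight mass on the top-ranked scores recovers the optimistic (max-like) strategy, mass on the bottom-ranked scores the pessimistic (min-like) one, and uniform weights the ordinary average.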
Evolutionary tree reconstruction
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Kanefsky, Bob
1990-01-01
It is described how Minimum Description Length (MDL) can be applied to the problem of DNA and protein evolutionary tree reconstruction. If there is a set of mutations that transform a common ancestor into a set of the known sequences, and this description is shorter than the information to encode the known sequences directly, then strong evidence for an evolutionary relationship has been found. A heuristic algorithm is described that searches for the simplest tree (smallest MDL) that finds close to optimal trees on the test data. Various ways of extending the MDL theory to more complex evolutionary relationships are discussed.
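The MDL comparison can be made concrete with a toy calculation. The encoding costs below are illustrative simplifications (2 bits per DNA base, log2(L) bits per mutation position, and no cost for tree topology), not the paper's exact code lengths.

```python
import math

def direct_cost(seqs):
    """Bits to encode each DNA sequence independently (2 bits/base)."""
    return sum(2 * len(s) for s in seqs)

def tree_cost(ancestor, seqs):
    """Bits to encode one ancestor plus, per sequence, its list of point
    mutations (position: log2(L) bits, new base: 2 bits)."""
    L = len(ancestor)
    bits = 2 * L                              # encode the ancestor itself
    for s in seqs:
        muts = sum(1 for a, b in zip(ancestor, s) if a != b)
        bits += muts * (math.log2(L) + 2)
    return bits

anc = "ACGTACGTACGTACGT"                      # hypothetical common ancestor
leaves = ["ACGTACGTACGTACGA",                 # each leaf: 1 mutation
          "ACGTACGAACGTACGT",
          "TCGTACGTACGTACGT"]
# Few mutations -> the tree encoding is shorter than the direct one,
# which under MDL is evidence of an evolutionary relationship.
```

Here the direct encoding costs 96 bits while the ancestor-plus-mutations encoding costs 50, so the tree hypothesis wins; with unrelated sequences the mutation lists grow until the direct encoding is cheaper.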
Herman, Matthew R; Nejadhashemi, A Pouyan; Daneshvar, Fariborz; Abouali, Mohammad; Ross, Dennis M; Woznicki, Sean A; Zhang, Zhen
2016-10-01
The emission of greenhouse gases continues to amplify the impacts of global climate change. This has led to an increased focus on using renewable energy sources, such as biofuels, due to their lower impact on the environment. However, the production of biofuels can still have negative impacts on water resources. This study introduces a new strategy to optimize bioenergy landscapes while improving stream health for the region. To accomplish this, several hydrological models, including the Soil and Water Assessment Tool, the Hydrologic Integrity Tool, and an Adaptive Neuro-Fuzzy Inference System, were linked to develop stream health predictor models. These models are capable of estimating stream health scores based on the Index of Biological Integrity. The coupling of the aforementioned models was used to guide a genetic algorithm to design watershed-scale bioenergy landscapes. Thirteen bioenergy managements were considered based on the high probability of adoption by farmers in the study area. Results from two thousand runs identified an optimum bioenergy crop placement that maximized stream health for the Flint River Watershed in Michigan. The final overall stream health score was 50.93, improved from the current stream health score of 48.19. This was shown to be a significant improvement at the 1% significance level. For this final bioenergy landscape, the most often used management was miscanthus (27.07%), followed by corn-soybean-rye (19.00%), corn stover-soybean (18.09%), and corn-soybean (16.43%). The technique introduced in this study can be successfully modified for use in different regions and can be used by stakeholders and decision makers to develop bioenergy landscapes that maximize stream health in the area of interest. PMID:27420165
NASA Astrophysics Data System (ADS)
Paton, F. L.; Maier, H. R.; Dandy, G. C.
2014-08-01
Cities around the world are increasingly involved in climate action and mitigating greenhouse gas (GHG) emissions. However, in the context of responding to climate pressures in the water sector, very few studies have investigated the impacts of changing water use on GHG emissions, even though water resource adaptation often requires greater energy use. Consequently, reducing GHG emissions, and thus focusing on both mitigation and adaptation responses to climate change in planning and managing urban water supply systems, is necessary. Furthermore, the minimization of GHG emissions is likely to conflict with other objectives. Thus, applying a multiobjective evolutionary algorithm (MOEA), which can evolve an approximation of entire trade-off (Pareto) fronts of multiple objectives in a single run, would be beneficial. Consequently, the main aim of this paper is to incorporate GHG emissions into a MOEA framework to take into consideration both adaptation and mitigation responses to climate change for a city's water supply system. The approach is applied to a case study based on Adelaide's southern water supply system to demonstrate the framework's practical management implications. Results indicate that trade-offs exist between GHG emissions and risk-based performance, as well as GHG emissions and economic cost. Solutions containing rainwater tanks are expensive, while GHG emissions greatly increase with increased desalinated water supply. Consequently, while desalination plants may be good adaptation options to climate change due to their climate-independence, rainwater may be a better mitigation response, albeit more expensive.
NASA Astrophysics Data System (ADS)
Kyriacou, S; Kontoleontos, E; Weissenberger, S; Mangani, L; Casartelli, E; Skouteropoulou, I; Gattringer, M; Gehrer, A; Buchmayr, M
2014-03-01
An efficient hydraulic optimization procedure, suitable for industrial use, requires an advanced optimization tool (EASY software), a fast solver (block coupled CFD) and a flexible geometry generation tool. EASY optimization software is a PCA-driven metamodel-assisted Evolutionary Algorithm (MAEA (PCA)) that can be used in both single- (SOO) and multiobjective optimization (MOO) problems. In MAEAs, low cost surrogate evaluation models are used to screen out non-promising individuals during the evolution and exclude them from the expensive, problem specific evaluation, here the solution of Navier-Stokes equations. For additional reduction of the optimization CPU cost, the PCA technique is used to identify dependences among the design variables and to exploit them in order to efficiently drive the application of the evolution operators. To further enhance the hydraulic optimization procedure, a very robust and fast Navier-Stokes solver has been developed. This incompressible CFD solver employs a pressure-based block-coupled approach, solving the governing equations simultaneously. This method, apart from being robust and fast, also provides a big gain in terms of computational cost. In order to optimize the geometry of hydraulic machines, an automatic geometry and mesh generation tool is necessary. The geometry generation tool used in this work is entirely based on b-spline curves and surfaces. In what follows, the components of the tool chain are outlined in some detail and the optimization results of hydraulic machine components are shown in order to demonstrate the performance of the presented optimization procedure.
Long-Boyle, Janel; Savic, Rada; Yan, Shirley; Bartelink, Imke; Musick, Lisa; French, Deborah; Law, Jason; Horn, Biljana; Cowan, Morton J.; Dvorak, Christopher C.
2014-01-01
Background Population pharmacokinetic (PK) studies of busulfan in children have shown that individualized model-based algorithms provide improved targeted busulfan therapy compared to conventional dosing. The adoption of population PK models into routine clinical practice has been hampered by the tendency of pharmacologists to develop models too complex to be practical for clinicians to use. The authors aimed to develop a population PK model for busulfan in children that can reliably achieve therapeutic exposure (concentration at steady state, Css) and to implement a simple, model-based tool for the initial dosing of busulfan in children undergoing HCT. Patients and Methods Model development was conducted using retrospective data available for 90 pediatric and young adult patients who had undergone HCT with busulfan conditioning. Busulfan drug levels and potential covariates influencing drug exposure were analyzed using the non-linear mixed-effects modeling software NONMEM. The final population PK model was implemented in a clinician-friendly, Microsoft Excel-based tool and used to recommend initial doses of busulfan in a group of 21 pediatric patients prospectively dosed based on the population PK model. Results Modeling of busulfan time-concentration data indicates that busulfan CL displays non-linearity in children, decreasing up to approximately 20% between concentrations of 250–2000 ng/mL. Important patient-specific covariates found to significantly impact busulfan CL were actual body weight and age. The percentage of individuals achieving a therapeutic Css was significantly higher in subjects receiving initial doses based on the population PK model (81%) versus historical controls dosed on conventional guidelines (52%) (p = 0.02). Conclusion Compared to the conventional dosing guidelines, the model-based algorithm demonstrates significant improvement in providing targeted busulfan therapy in children and young adults. PMID:25162216
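Once a model predicts an individual's clearance, the dose that targets a given Css follows from the generic steady-state relation (average Css equals dose rate divided by clearance). The sketch below uses only that textbook relation, not the authors' NONMEM covariate model; the patient values are hypothetical and this is an illustration, not dosing guidance.

```python
def initial_dose_mg(css_target_ng_ml, cl_l_per_hr, interval_hr):
    """Generic steady-state dosing relation:
    average Css = dose rate / CL, so dose per interval
    = Css_target * CL * interval.
    Css in ng/mL equals ug/L, so Css * CL (L/h) gives ug/h;
    divide by 1000 to report mg."""
    return css_target_ng_ml * cl_l_per_hr * interval_hr / 1000.0

# Hypothetical patient: target Css 800 ng/mL, model-predicted
# CL 3.2 L/h, dosing every 6 h.
dose = initial_dose_mg(800, 3.2, 6)   # 15.36 mg per dose
```

The value of the population PK model is in predicting CL from covariates such as weight and age (and capturing its non-linearity in concentration) before any levels are measured; the arithmetic above is the trivial final step.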
Kurkcuoglu, Zeynep; Doruker, Pemra
2016-01-01
Incorporating receptor flexibility in small ligand-protein docking still poses a challenge for proteins undergoing large conformational changes. In the absence of bound structures, sampling conformers that are accessible by the apo state may facilitate docking and drug design studies. For this aim, we developed an unbiased conformational search algorithm, by integrating global modes from the elastic network model, clustering, and energy minimization with implicit solvation. Our dataset consists of five diverse proteins with apo to complex RMSDs 4.7-15 Å. Applying this iterative algorithm on apo structures, conformers close to the bound state (RMSD 1.4-3.8 Å), as well as the intermediate states, were generated. Dockings to a sequence of conformers consisting of a closed structure and its "parents" up to the apo were performed to compare binding poses on different states of the receptor. For two periplasmic binding proteins and biotin carboxylase that exhibit hinge-type closure of two dynamic domains, the best pose was obtained for the conformer closest to the bound structure (ligand RMSDs 1.5-2 Å). In contrast, the best pose for adenylate kinase corresponded to an intermediate state with a partially closed LID domain and an open NMP domain, in line with recent studies (ligand RMSD 2.9 Å). The docking of a helical peptide to calmodulin was the most challenging case due to the complexity of its 15 Å transition, for which a two-stage procedure was necessary. The technique was first applied on the extended calmodulin to generate intermediate conformers; then peptide docking and a second generation stage on the complex were performed, which in turn yielded a final peptide RMSD of 2.9 Å. Our algorithm is effective in producing conformational states based on the apo state. This study underlines the importance of such intermediate states for ligand docking to proteins undergoing large transitions. PMID:27348230
Paduszyński, Kamil
2016-08-22
The aim of the paper is to address all the disadvantages of currently available models for calculating infinite dilution activity coefficients (γ(∞)) of molecular solutes in ionic liquids (ILs)-a relevant property from the point of view of many applications of ILs, particularly in separations. Three new models are proposed, each of them based on a distinct machine learning algorithm: stepwise multiple linear regression (SWMLR), feed-forward artificial neural network (FFANN), and least-squares support vector machine (LSSVM). The models were established based on the most comprehensive γ(∞) data bank reported so far (>34 000 data points for 188 ILs and 128 solutes). Following the paper published previously [J. Chem. Inf. Model 2014, 54, 1311-1324], the ILs were treated in terms of group contributions, whereas the Abraham solvation parameters were used to quantify the impact of solute structure. Temperature is also included in the input data of the models so that they can be utilized to obtain temperature-dependent data and thus related thermodynamic functions. Both internal and external validation techniques were applied to assess the statistical significance and explanatory power of the final correlations. A comparative study of the overall performance of the investigated SWMLR/FFANN/LSSVM approaches is presented in terms of root-mean-square error and average absolute relative deviation between calculated and experimental γ(∞), evaluated for different families of ILs and solutes, as well as between calculated and experimental infinite dilution selectivity for the separation problems of benzene from n-hexane and thiophene from n-heptane. LSSVM is shown to be the method with the lowest values of both training and generalization errors. It is finally demonstrated that the established models exhibit an improved accuracy compared to the state-of-the-art model, namely, temperature-dependent group contribution linear solvation energy relationship, published in 2011 [J. Chem
Ott, Julien G; Becce, Fabio; Monnin, Pascal; Schmidt, Sabine; Bochud, François O; Verdun, Francis R
2014-08-01
The state of the art to describe image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit in various acquisition conditions. The NPW model observer usually requires the use of the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already shown it could accurately express the system resolution even with non-linear algorithms, we decided to tune the NPW model observer, replacing the standard MTF by the TTF. It was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition. Then, mathematical transformations were performed leading to the TTF. As expected, the first results showed a dependency of the image contrast and noise levels on the TTF for both ASIR and MBIR. Moreover, FBP also proved to be dependent on the contrast and noise when using the lung kernel. Those results were then introduced in the NPW model observer. We observed an enhancement of SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications. PMID:24990844
Model-based tomographic reconstruction
Chambers, David H.; Lehman, Sean K.; Goodman, Dennis M.
2012-06-26
A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.
Bishop, Christopher M.
2013-01-01
Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612
NASA Astrophysics Data System (ADS)
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2016-04-01
This contribution presents a framework, which enables the use of an Evolutionary Algorithm (EA) for the calibration and regionalization of the hydrological model COSEROreg. COSEROreg uses an updated version of the HBV-type model COSERO (Kling et al. 2014) for the modelling of hydrological processes and is embedded in a parameter regionalization scheme based on Samaniego et al. (2010). The latter uses subscale information to estimate model parameters via a priori chosen transfer functions (often derived from pedotransfer functions). However, the transferability of the regionalization scheme to different model-concepts and the integration of new forms of subscale information is not straightforward. (i) The usefulness of (new) single sub-scale information layers is unknown beforehand. (ii) Additionally, the establishment of functional relationships between these (possibly meaningless) sub-scale information layers and the distributed model parameters remains a central challenge in the implementation of a regionalization procedure. The proposed method theoretically provides a framework to overcome this challenge. The implementation of the EA encompasses the following procedure: First, a formal grammar is specified (Ryan et al., 1998). The construction of the grammar thereby defines the set of possible transfer functions and also allows hydrological domain knowledge to be incorporated into the search itself. The EA iterates over the given space by combining parameterized basic functions (e.g., linear or exponential functions) and sub-scale information layers into transfer functions, which are then used in COSEROreg. However, a pre-selection model is applied beforehand to sort out infeasible proposals by the EA and to reduce the necessary model runs. A second optimization routine is used to optimize the parameters of the transfer functions proposed by the EA. This concept, namely using two nested optimization loops, is inspired by the idea of Lamarckian Evolution and Baldwin Effect
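The grammar-guided search sketched above can be made concrete with a tiny grammatical-evolution-style example. The grammar, the codon genome, and the layer names (`slope`, `texture`) are hypothetical stand-ins, not the actual COSEROreg configuration: the idea shown is only that integer codons deterministically select productions, so an EA over integer genomes searches the space of transfer-function expressions that the grammar defines.

```python
# Hypothetical stand-in grammar: integer codons pick productions, mapping a
# genome to a transfer-function expression. Grammar and layer names are
# illustrative assumptions, not the COSEROreg setup.
GRAMMAR = {
    'expr': [['expr', '+', 'expr'], ['func', '(', 'var', ')'], ['var']],
    'func': [['lin'], ['exp']],
    'var':  [['slope'], ['texture']],
}

def derive(codons, symbol='expr', i=0, depth=0, max_depth=3):
    """Expand `symbol`, consuming one codon per grammar decision."""
    if symbol not in GRAMMAR:
        return symbol, i                      # terminal: emit as-is
    rules = GRAMMAR[symbol]
    if depth >= max_depth:                    # curb unbounded recursion
        rules = [r for r in rules if 'expr' not in r] or rules
    choice = rules[codons[i % len(codons)] % len(rules)]
    out, i = [], i + 1
    for sym in choice:
        text, i = derive(codons, sym, i, depth + 1, max_depth)
        out.append(text)
    return ''.join(out), i

expression, _ = derive([7, 2, 5, 1, 3])       # -> 'lin(texture)'
```

An outer EA would mutate and recombine the codon lists, while the inner optimization loop (as in the nested scheme described above) would tune the numeric parameters of whichever expression each genome decodes to.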
Van Uytven, Eric; Van Beek, Timothy; McCowan, Peter M.; Chytyk-Praznik, Krista; Greer, Peter B.; McCurdy, Boyd M. C.
2015-12-15
Purpose: Radiation treatments are trending toward delivering higher doses per fraction under stereotactic radiosurgery and hypofractionated treatment regimens. There is a need for accurate 3D in vivo patient dose verification using electronic portal imaging device (EPID) measurements. This work presents a model-based technique to compute full three-dimensional patient dose reconstructed from on-treatment EPID portal images (i.e., transmission images). Methods: EPID dose is converted to incident fluence entering the patient using a series of steps which include converting measured EPID dose to fluence at the detector plane and then back-projecting the primary source component of the EPID fluence upstream of the patient. Incident fluence is then recombined with predicted extra-focal fluence and used to calculate 3D patient dose via a collapsed-cone convolution method. This method is implemented in an iterative manner, although in practice it provides accurate results in a single iteration. The robustness of the dose reconstruction technique is demonstrated with several simple slab phantom and nine anthropomorphic phantom cases. Prostate, head and neck, and lung treatments are all included as well as a range of delivery techniques including VMAT and dynamic intensity modulated radiation therapy (IMRT). Results: Results indicate that the patient dose reconstruction algorithm compares well with treatment planning system computed doses for controlled test situations. For simple phantom and square field tests, agreement was excellent with a 2%/2 mm 3D chi pass rate ≥98.9%. On anthropomorphic phantoms, the 2%/2 mm 3D chi pass rates ranged from 79.9% to 99.9% in the planning target volume (PTV) region and 96.5% to 100% in the low dose region (>20% of prescription, excluding PTV and skin build-up region). Conclusions: An algorithm to reconstruct delivered patient 3D doses from EPID exit dosimetry measurements was presented. The method was applied to phantom and patient
Hunt, Tam
2014-01-01
Evolution as an idea has a lengthy history, even though the idea of evolution is generally associated with Darwin today. Rebecca Stott provides an engaging and thoughtful overview of this history of evolutionary thinking in her 2013 book, Darwin's Ghosts: The Secret History of Evolution. Since Darwin, the debate over evolution—both how it takes place and, in a long war of words with religiously-oriented thinkers, whether it takes place—has been sustained and heated. A growing share of this debate is now devoted to examining how evolutionary thinking affects areas outside of biology. How do our lives change when we recognize that all is in flux? What can we learn about life more generally if we study change instead of stasis? Carter Phipps’ book, Evolutionaries: Unlocking the Spiritual and Cultural Potential of Science's Greatest Idea, delves deep into this relatively new development. Phipps generally takes as a given the validity of the Modern Synthesis of evolutionary biology. His story takes us into, as the subtitle suggests, the spiritual and cultural implications of evolutionary thinking. Can religion and evolution be reconciled? Can evolutionary thinking lead to a new type of spirituality? Is our culture already being changed in ways that we don't realize by evolutionary thinking? These are all important questions and Phipps’ book is a great introduction to this discussion. Phipps is an author, journalist, and contributor to the emerging “integral” or “evolutionary” cultural movement that combines the insights of Integral Philosophy, evolutionary science, developmental psychology, and the social sciences. He has served as the Executive Editor of EnlightenNext magazine (no longer published) and more recently is the co-founder of the Institute for Cultural Evolution, a public policy think tank addressing the cultural roots of America's political challenges. What follows is an email interview with Phipps. PMID:26478766
Gabora, Liane; Kauffman, Stuart
2016-04-01
Dietrich and Haider (Psychonomic Bulletin & Review, 21 (5), 897-915, 2014) justify their integrative framework for creativity founded on evolutionary theory and prediction research on the grounds that "theories and approaches guiding empirical research on creativity have not been supported by the neuroimaging evidence." Although this justification is controversial, the general direction holds promise. This commentary clarifies points of disagreement and unresolved issues, and addresses mis-applications of evolutionary theory that lead the authors to adopt a Darwinian (versus Lamarckian) approach. To say that creativity is Darwinian is not to say that it consists of variation plus selection - in the everyday sense of the term - as the authors imply; it is to say that evolution is occurring because selection is affecting the distribution of randomly generated heritable variation across generations. In creative thought the distribution of variants is not key, i.e., one is not inclined toward idea A because 60 % of one's candidate ideas are variants of A while only 40 % are variants of B; one is inclined toward whichever seems best. The authors concede that creative variation is partly directed; however, the greater the extent to which variants are generated non-randomly, the greater the extent to which the distribution of variants can reflect not selection but the initial generation bias. Since each thought in a creative process can alter the selective criteria against which the next is evaluated, there is no demarcation into generations as assumed in a Darwinian model. We address the authors' claim that reduced variability and individuality are more characteristic of Lamarckism than Darwinian evolution, and note that a Lamarckian approach to creativity has addressed the challenge of modeling the emergent features associated with insight. PMID:26527351
Gorelik, Gregory; Shackelford, Todd K
2014-01-01
In this article, we advance the concept of "evolutionary awareness," a metacognitive framework that examines human thought and emotion from a naturalistic, evolutionary perspective. We begin by discussing the evolution and current functioning of the moral foundations on which our framework rests. Next, we discuss the possible applications of such an evolutionarily-informed ethical framework to several domains of human behavior, namely: sexual maturation, mate attraction, intrasexual competition, culture, and the separation between various academic disciplines. Finally, we discuss ways in which an evolutionary awareness can inform our cross-generational activities-which we refer to as "intergenerational extended phenotypes"-by helping us to construct a better future for ourselves, for other sentient beings, and for our environment. PMID:25300054
NASA Astrophysics Data System (ADS)
Imani, Moslem; You, Rey-Jer; Kuo, Chung-Yen
2014-10-01
Sea level forecasting at various time intervals is of great importance in water supply management. Evolutionary artificial intelligence (AI) approaches have been accepted as an appropriate tool for modeling complex nonlinear phenomena in water bodies. In this study, we investigated the ability of two AI techniques, support vector machine (SVM), which is mathematically well-founded and provides new insights into function approximation, and gene expression programming (GEP), to forecast Caspian Sea level anomalies using satellite altimetry observations from June 1992 to December 2013. SVM demonstrates the best performance in predicting Caspian Sea level anomalies, given the minimum root mean square error (RMSE = 0.035) and maximum coefficient of determination (R2 = 0.96) during the prediction periods. A comparison between the proposed AI approaches and the cascade correlation neural network (CCNN) model also shows the superiority of the GEP and SVM models over the CCNN.
Model based manipulator control
NASA Technical Reports Server (NTRS)
Petrosky, Lyman J.; Oppenheim, Irving J.
1989-01-01
The feasibility of using model based control (MBC) for robotic manipulators was investigated. A double inverted pendulum system was constructed as the experimental system for a general study of dynamically stable manipulation. The original interest in dynamically stable systems was driven by the objective of high vertical reach (balancing), and the planning of inertially favorable trajectories for force and payload demands. The model-based control approach is described and the results of experimental tests are summarized. Results directly demonstrate that MBC can provide stable control at all speeds of operation and support operations requiring dynamic stability such as balancing. The application of MBC to systems with flexible links is also discussed.
Evolutionary Dynamics of Biological Games
NASA Astrophysics Data System (ADS)
Nowak, Martin A.; Sigmund, Karl
2004-02-01
Darwinian dynamics based on mutation and selection form the core of mathematical models for adaptation and coevolution of biological populations. The evolutionary outcome is often not a fitness-maximizing equilibrium but can include oscillations and chaos. For studying frequency-dependent selection, game-theoretic arguments are more appropriate than optimization algorithms. Replicator and adaptive dynamics describe short- and long-term evolution in phenotype space and have found applications ranging from animal behavior and ecology to speciation, macroevolution, and human language. Evolutionary game theory is an essential component of a mathematical and computational approach to biology.
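The replicator dynamics mentioned above can be sketched in a few lines. The Hawk-Dove payoff matrix (resource V = 2, fight cost C = 4) is an illustrative assumption, not taken from the abstract: each strategy's share grows in proportion to how its payoff compares with the population average, and for Hawk-Dove the stable interior equilibrium sits at a Hawk share of V/C.

```python
# Euler integration of the replicator equation dx_i/dt = x_i (f_i - f_bar)
# for a symmetric two-strategy game. The Hawk-Dove payoffs below
# (resource V = 2, fight cost C = 4) are an illustrative assumption.

def replicator_step(x, payoff, dt=0.01):
    # fitness of each strategy: f_i = sum_j payoff[i][j] * x_j
    f = [sum(payoff[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    f_bar = sum(xi * fi for xi, fi in zip(x, f))          # mean fitness
    return [xi + dt * xi * (fi - f_bar) for xi, fi in zip(x, f)]

def evolve(x, payoff, steps=5000):
    for _ in range(steps):
        x = replicator_step(x, payoff)
    return x

hawk_dove = [[(2 - 4) / 2, 2],    # Hawk vs Hawk, Hawk vs Dove
             [0, 1]]              # Dove vs Hawk, Dove vs Dove

state = evolve([0.1, 0.9], hawk_dove)   # Hawk share converges toward V/C = 0.5
```

Note that the fixed point here is not fitness-maximizing for the population (all-Dove would earn more on average), which is exactly the distinction between equilibrium outcomes and optimization that the abstract draws.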
Evolutionary software for autonomous path planning
Couture, S; Hage, M
1999-02-10
This research project demonstrated the effectiveness of using evolutionary software techniques in the development of path-planning algorithms and control programs for mobile vehicles in radioactive environments. The goal was to take maximum advantage of the programmer's intelligence by tasking the programmer with encoding the measures of success for a path-planning algorithm, rather than developing the path-planning algorithms themselves. Evolutionary software development techniques could then be used to develop algorithms most suitable to the particular environments of interest. The measures of path-planning success were encoded in the form of a fitness function for an evolutionary software development engine. The task for the evolutionary software development engine was to evaluate the performance of individual algorithms, select the best performers for the population based on the fitness function, and breed them to evolve the next generation of algorithms. The process continued for a set number of generations or until the algorithm converged to an optimal solution. The task environment was the navigation of a rover from an initial location to a goal, then to a processing point, in an environment containing physical and radioactive obstacles. Genetic algorithms were developed for a variety of environmental configurations. Algorithms were simple and non-robust strings of behaviors, but they could be evolved to be nearly optimal for a given environment. In addition, a genetic program was evolved in the form of a control algorithm that operates at every motion of the robot. Programs were more complex than algorithms and less optimal in a given environment. However, after training in a variety of different environments, they were more robust and could perform acceptably in environments they were not trained in. This paper describes the evolutionary software development engine and the performance of algorithms and programs evolved by it for the chosen task.
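The evolutionary engine described above can be sketched minimally as follows. The grid size, goal, obstacle position, and all GA settings are illustrative assumptions, not the project's actual values; the point shown is that the programmer encodes only the fitness function, and the engine evolves fixed-length strings of moves against it.

```python
# Toy sketch of the evolutionary engine described above: candidate paths are
# fixed-length move strings, and the programmer supplies only the fitness
# function. Grid, goal, obstacle, and GA settings are illustrative assumptions.
import random

MOVES = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}
GOAL = (5, 5)
OBSTACLE = (2, 2)               # stand-in for a radioactive hazard cell

def fitness(path):
    """Score a path: closer to the goal is better; entering the hazard costs."""
    x = y = penalty = 0
    for m in path:
        dx, dy = MOVES[m]
        x, y = x + dx, y + dy
        if (x, y) == OBSTACLE:
            penalty += 10
    return -(abs(GOAL[0] - x) + abs(GOAL[1] - y) + penalty)

def evolve(pop_size=80, length=12, generations=150, seed=1):
    rng = random.Random(seed)
    pop = [[rng.choice('UDLR') for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:               # occasional point mutation
                child[rng.randrange(length)] = rng.choice('UDLR')
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Changing the environment means changing only `GOAL`, `OBSTACLE`, or the penalty terms in `fitness`, and re-running the loop, which mirrors the abstract's division of labor between the programmer and the evolutionary engine.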
Model Based Reconstruction of UT Array Data
NASA Astrophysics Data System (ADS)
Calmon, P.; Iakovleva, E.; Fidahoussen, A.; Ribay, G.; Chatillon, S.
2008-02-01
Beyond the detection of defects, their characterization (identification, positioning, sizing) is one goal of great importance often assigned to the analysis of NDT data. The first step of such analysis in the case of ultrasonic testing amounts to image in the part the detected echoes. This operation is in general achieved by considering time of flights and by applying simplified algorithms which are often valid only on canonical situations. In this communication we present an overview of different imaging techniques studied at CEA LIST and based on the exploitation of direct models which enable to address complex configurations and are available in the CIVA software plat-form. We discuss in particular ray-model based algorithms, algorithms derived from classical synthetic focusing and processing of the full inter-element matrix (MUSIC algorithm).
Neurocontroller analysis via evolutionary network minimization.
Ganon, Zohar; Keinan, Alon; Ruppin, Eytan
2006-01-01
This study presents a new evolutionary network minimization (ENM) algorithm. Neurocontroller minimization is beneficial for finding small parsimonious networks that permit a better understanding of their workings. The ENM algorithm is specifically geared to an evolutionary agent setup, as it does not require any explicit supervised training error, and is very easily incorporated in current evolutionary algorithms. ENM is based on a standard genetic algorithm with an additional step during reproduction in which synaptic connections are irreversibly eliminated. It receives as input a successfully evolved neurocontroller and aims to output a pruned neurocontroller, while maintaining the original fitness level. The small neurocontrollers produced by ENM provide upper bounds on the neurocontroller size needed to perform a given task successfully, and can provide for more efficient hardware implementations. PMID:16859448
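The core ENM idea, a genetic algorithm over pruning masks in which mutation can only remove connections, can be illustrated with a toy example. The "neurocontroller" here is just a linear map with two superfluous weights, a stand-in assumption so the sketch stays small; it is not the paper's experimental setup.

```python
# Illustrative ENM-style sketch: a GA over pruning masks where mutation is
# irreversible elimination (1 -> 0 only). The linear "neurocontroller",
# its weights, and the sample inputs are stand-in assumptions.
import random

WEIGHTS = [2.0, -1.0, 1e-8, 0.0]    # last two connections are superfluous
SAMPLES = [[0.3, 1.2, -0.7, 0.5], [1.0, -1.0, 2.0, 0.1], [-0.4, 0.8, 0.6, -1.3]]

def output(mask, x):
    return sum(w * m * xi for w, m, xi in zip(WEIGHTS, mask, x))

TARGETS = [output([1, 1, 1, 1], x) for x in SAMPLES]   # original behaviour

def fitness(mask):
    err = sum(abs(output(mask, x) - t) for x, t in zip(SAMPLES, TARGETS))
    if err > 1e-3:                     # must maintain the original fitness
        return -err
    return sum(1 - m for m in mask)    # otherwise reward smaller networks

def enm(generations=40, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [[1, 1, 1, 1] for _ in range(pop_size)]      # start fully connected
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        for parent in survivors:
            child = list(parent)
            child[rng.randrange(len(child))] = 0       # eliminate-only mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

mask = enm()
```

Because elimination is irreversible and survivors are kept, the search can only move toward sparser networks that still reproduce the original outputs, ending at the mask that keeps just the two meaningful connections.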
Model-based vision using geometric hashing
NASA Astrophysics Data System (ADS)
Akerman, Alexander, III; Patton, Ronald
1991-04-01
The Geometric Hashing technique developed by the NYU Courant Institute has been applied to various automatic target recognition applications. In particular, I-MATH has extended the hashing algorithm to perform automatic target recognition of synthetic aperture radar (SAR) imagery. For this application, the hashing is performed upon the geometric locations of dominant scatterers. In addition to being a robust model-based matching algorithm -- invariant under translation, scale, and 3D rotations of the target -- hashing is of particular utility because it can still perform effective matching when the target is partially obscured. Moreover, hashing is very amenable to a SIMD parallel processing architecture, and thus potentially implementable in real time.
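A minimal 2-D geometric-hashing sketch may help fix the idea (the model points, quantization step, and the similarity transform are illustrative assumptions; the SAR application above works on dominant-scatterer locations and 3-D pose). Expressing each point in the frame of an ordered basis pair makes its coordinates invariant to translation, rotation, and uniform scale, so a hash table built offline over all bases lets a single scene basis vote for the matching model basis.

```python
# Minimal 2-D geometric hashing using complex arithmetic: coordinates in the
# frame of an ordered basis pair are similarity-invariant. Model points and
# quantization are illustrative assumptions.
from collections import defaultdict

def invariant(p, b0, b1):
    """Coordinates of p in the similarity frame defined by basis (b0, b1)."""
    u = (p - b0) / (b1 - b0)
    return (round(u.real, 2), round(u.imag, 2))   # quantize for hashing

def build_table(model):
    """Offline stage: hash every point against every ordered basis pair."""
    table = defaultdict(list)
    for i, b0 in enumerate(model):
        for j, b1 in enumerate(model):
            if i == j:
                continue
            for p in model:
                if p not in (b0, b1):
                    table[invariant(p, b0, b1)].append((i, j))
    return table

def vote(scene, table):
    """Online stage: pick one scene basis pair and tally model-basis votes."""
    votes = defaultdict(int)
    b0, b1 = scene[0], scene[1]
    for p in scene[2:]:
        for basis in table.get(invariant(p, b0, b1), []):
            votes[basis] += 1
    return max(votes, key=votes.get) if votes else None

model = [0 + 0j, 4 + 0j, 4 + 3j, 1 + 2j]
table = build_table(model)

# The same shape translated, rotated 90 degrees, and scaled by 2:
scene = [(p * 2j) + (5 - 1j) for p in model]
best = vote(scene, table)
```

Occlusion tolerance falls out of the voting: missing scene points simply contribute no votes, so a correct basis can still win on the points that remain.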
Model-Based Fault Tolerant Control
NASA Technical Reports Server (NTRS)
Kumar, Aditya; Viassolo, Daniel
2008-01-01
The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms two approaches were selected for further analysis: (i) multiple-hypothesis testing, and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability thereby enabling continued engine operation in the presence of these faults. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
NASA Technical Reports Server (NTRS)
Frisch, Harold P.
2007-01-01
Engineers who design systems using text specification documents focus their work on the completed system to meet performance, time, and budget goals. Consistency and integrity are difficult to maintain within text documents for a single complex system and more difficult to maintain as several systems are combined into higher-level systems, are maintained over decades, and evolve technically and in performance through updates. This system design approach frequently results in major changes during the system integration and test phase, and in time and budget overruns. Engineers who build system specification documents within a model-based systems environment go a step further and aggregate all of the data. They interrelate all of the data to ensure consistency and integrity. After the model is constructed, the various system specification documents are prepared, all from the same database. The consistency and integrity of the model is assured; therefore, the consistency and integrity of the various specification documents is ensured. This article attempts to define model-based systems relative to such an environment. The intent is to expose the complexity of the enabling problem by outlining what is needed, why it is needed and how needs are being addressed by international standards writing teams.
Toward a unifying framework for evolutionary processes
Paixão, Tiago; Badkobeh, Golnaz; Barton, Nick; Çörüş, Doğan; Dang, Duc-Cuong; Friedrich, Tobias; Lehre, Per Kristian; Sudholt, Dirk; Sutton, Andrew M.; Trubenová, Barbora
2015-01-01
The theories of population genetics and evolutionary computation have been developing separately for nearly 30 years. Many results have been obtained independently in both fields, and many others are unique to their respective field. We aim to bridge this gap by developing a unifying framework for evolutionary processes that allows both evolutionary algorithms and population genetics models to be cast in the same formal framework. The framework we present here decomposes the evolutionary process into its several components in order to facilitate the identification of similarities between different models. In particular, we propose a classification of evolutionary operators based on the defining properties of the different components. We cast several commonly used operators from both fields into this common framework. Using this, we map different evolutionary and genetic algorithms to different evolutionary regimes and identify candidates with the most potential for the translation of results between the fields. This provides a unified description of evolutionary processes and represents a stepping stone towards new tools and results for both fields. PMID:26215686
An inquiry into evolutionary inquiry
NASA Astrophysics Data System (ADS)
Donovan, Samuel S.
2005-11-01
While evolution education has received a great deal of attention within the science education research community, it still poses difficult teaching and learning challenges. Understanding evolutionary biology has been given high priority in national science education policy because of its role in coordinating our understanding of the life sciences, its importance in our intellectual history, its role in the perception of humans' position in nature, and its impact on our current medical, agricultural, and conservation practices. The rhetoric used in evolution education policy statements emphasizes familiarity with the nature of scientific inquiry as an important learning outcome associated with understanding evolution, but provides little guidance with respect to how one might achieve this goal. This dissertation project explores the nature of evolutionary inquiry and how understanding the details of disciplinary reasoning can inform evolution education. The first analysis recasts the existing evolution education research literature to assess educational outcomes related to students' ability to reason about data using evolutionary biology methods and models. This is followed in the next chapter by a detailed historical and philosophical characterization of evolutionary biology, with the goal of providing a richer context for considering what exactly it is we want students to know about evolution as a discipline. Chapter 4 describes the development and implementation of a high school evolution curriculum that engages students with many aspects of model-based reasoning. The final component of this reframing of evolution education is an empirical study characterizing students' understanding of evolutionary biology as a modeling enterprise. Each chapter addresses a different aspect of evolution education and explores the implications of foregrounding disciplinary reasoning as an educational outcome. The analyses are coordinated with one another in the sense
Evolutionary Multiobjective Design Targeting a Field Programmable Transistor Array
NASA Technical Reports Server (NTRS)
Aguirre, Arturo Hernandez; Zebulum, Ricardo S.; Coello, Carlos Coello
2004-01-01
This paper introduces the ISPAES algorithm for circuit design targeting a Field Programmable Transistor Array (FPTA). The use of evolutionary algorithms is common in circuit design problems, where a single fitness function drives the evolution process. Frequently, the design problem is subject to several goals or operating constraints; thus, designing a suitable fitness function that captures all requirements becomes an issue. Such a problem is amenable to multi-objective optimization; however, evolutionary algorithms lack an inherent mechanism for constraint handling. This paper introduces ISPAES, an evolutionary optimization algorithm enhanced with a constraint handling technique. Several design problems targeting an FPTA show the potential of our approach.
Model-based reconfiguration: Diagnosis and recovery
NASA Technical Reports Server (NTRS)
Crow, Judy; Rushby, John
1994-01-01
We extend Reiter's general theory of model-based diagnosis to a theory of fault detection, identification, and reconfiguration (FDIR). The generality of Reiter's theory readily supports an extension in which the problem of reconfiguration is viewed as a close analog of the problem of diagnosis. Using a reconfiguration predicate 'rcfg' analogous to the abnormality predicate 'ab,' we derive a strategy for reconfiguration by transforming the corresponding strategy for diagnosis. There are two obvious benefits of this approach: algorithms for diagnosis can be exploited as algorithms for reconfiguration and we have a theoretical framework for an integrated approach to FDIR. As a first step toward realizing these benefits we show that a class of diagnosis engines can be used for reconfiguration and we discuss algorithms for integrated FDIR. We argue that integrating recovery and diagnosis is an essential next step if this technology is to be useful for practical applications.
NASA Technical Reports Server (NTRS)
Rowe, Sidney E.
2010-01-01
In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage, and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS), based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings as auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release, allowing the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments, and initial application to the Upper Stage design completion. Some of the high-value examples are reviewed.
Evolutionary optimization of cooperative heterogeneous teams
NASA Astrophysics Data System (ADS)
Soule, Terence; Heckendorn, Robert B.
2007-04-01
There is considerable interest in developing teams of autonomous, unmanned vehicles that can function in hostile environments without endangering human lives. However, heterogeneous teams, teams of units with specialized roles and/or specialized capabilities, have received relatively little attention. Specialized roles and capabilities can significantly increase team effectiveness and efficiency. Unfortunately, developing effective cooperation mechanisms is much more difficult in heterogeneous teams. Units with specialized roles or capabilities require specialized software that takes into account the roles and capabilities of both itself and its neighbors. Evolutionary algorithms, algorithms modeled on the principles of natural selection, have a proven track record in generating successful teams for a wide variety of problem domains. Using classification problems as a prototype, we have shown that typical evolutionary algorithms generate either highly effective team members that cooperate poorly or poorly performing individuals that cooperate well. To overcome these weaknesses we have developed a novel class of evolutionary algorithms. In this paper we apply these algorithms to the problem of controlling simulated, heterogeneous teams of "scouts" and "investigators". Our test problem requires producing a map of an area and further investigating "areas of interest". We compare several evolutionary algorithms on their ability to generate individually effective members and high levels of cooperation.
Model-based target and background characterization
NASA Astrophysics Data System (ADS)
Mueller, Markus; Krueger, Wolfgang; Heinze, Norbert
2000-07-01
Up to now, most approaches to target and background characterization (and exploitation) concentrate solely on the information given by pixels. In many cases this is a complex and unprofitable task. During the development of automatic exploitation algorithms, the main goal is the optimization of certain performance parameters. These parameters are measured during test runs in which one algorithm with one parameter set is applied to images that consist of image domains with very different characteristics (targets and various types of background clutter). Model-based geocoding and registration approaches provide means for utilizing the information stored in GIS (Geographical Information Systems). The geographical information stored in the various GIS layers can define ROE (Regions of Expectation) and may allow for dedicated algorithm parametrization and development. ROI (Region of Interest) detection algorithms (in most cases MMO (Man-Made Object) detection) use implicit target and/or background models. The ROI detection algorithms utilize gradient direction models that have to be matched with transformed image domain data. In most cases, simple threshold calculations on the match results discriminate target object signatures from the background. The geocoding approaches extract line-like structures (street signatures) from the image domain and match the graph constellation against a vector model extracted from a GIS (Geographical Information System) database. Apart from geocoding, the algorithms can also be used for image-to-image registration (multi-sensor and data fusion) and may be used for the creation and validation of geographical maps.
Evolutionary Tracks for Betelgeuse
NASA Astrophysics Data System (ADS)
Dolan, Michelle M.; Mathews, Grant J.; Lam, Doan Duc; Quynh Lan, Nguyen; Herczeg, Gregory J.; Dearborn, David S. P.
2016-03-01
We have constructed a series of nonrotating quasi-hydrostatic evolutionary models for the M2 Iab supergiant Betelgeuse (α Orionis). Our models are constrained by multiple observed values for the temperature, luminosity, surface composition, and mass loss for this star, along with the parallax distance and high-resolution imagery that determines its radius. We have then applied our best-fit models to analyze the observed variations in surface luminosity and the size of detected surface bright spots as the result of up-flowing convective material from regions of high temperature in the surface convective zone. We also attempt to explain the intermittently observed periodic variability in a simple radial linear adiabatic pulsation model. Based on the best fit to all observed data, we suggest a best progenitor mass estimate of 20 (+5/−3) M⊙ and a current age from the start of the zero-age main sequence of 8.0-8.5 Myr based on the observed ejected mass while on the giant branch.
Principles of models based engineering
Dolin, R.M.; Hefele, J.
1996-11-01
This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.
Efficient Model-Based Diagnosis Engine
NASA Technical Reports Server (NTRS)
Fijany, Amir; Vatan, Farrokh; Barrett, Anthony; James, Mark; Mackey, Ryan; Williams, Colin
2009-01-01
An efficient diagnosis engine - a combination of mathematical models and algorithms - has been developed for identifying faulty components in a possibly complex engineering system. This model-based diagnosis engine embodies a twofold approach to reducing, relative to prior model-based diagnosis engines, the amount of computation needed to perform a thorough, accurate diagnosis. The first part of the approach involves a reconstruction of the general diagnostic engine to reduce the complexity of the mathematical-model calculations and of the software needed to perform them. The second part of the approach involves algorithms for computing a minimal diagnosis (the term "minimal diagnosis" is defined below). A somewhat lengthy background discussion is prerequisite to a meaningful summary of the innovative aspects of the present efficient model-based diagnosis engine. In model-based diagnosis, the function of each component and the relationships among all the components of the engineering system to be diagnosed are represented as a logical system denoted the system description (SD). Hence, the expected normal behavior of the engineering system is the set of logical consequences of the SD. Faulty components lead to inconsistencies between the observed behaviors of the system and the SD (see figure). Diagnosis - the task of finding faulty components - is reduced to finding those components, the abnormalities of which could explain all the inconsistencies. The solution of the diagnosis problem should be a minimal diagnosis, which is a minimal set of faulty components. A minimal diagnosis stands in contradistinction to the trivial solution, in which all components are deemed to be faulty, and which, therefore, always explains all inconsistencies.
Hierarchical model-based interferometric synthetic aperture radar image registration
NASA Astrophysics Data System (ADS)
Wang, Yang; Huang, Haifeng; Dong, Zhen; Wu, Manqing
2014-01-01
With the rapid development of spaceborne interferometric synthetic aperture radar technology, classical image registration methods cannot deliver the efficiency and accuracy required for processing large volumes of real data. Based on this fact, we propose a new method. This method consists of two steps: coarse registration, realized by a cross-correlation algorithm, and fine registration, realized by a hierarchical model-based algorithm. The hierarchical model-based algorithm is a high-efficiency optimization algorithm. Its key features are a global model that constrains the overall structure of the motion estimate, a local model used in the estimation process, and a coarse-to-fine refinement strategy. Experimental results from different kinds of simulated and real data have confirmed that the proposed method is very fast and has high accuracy. Compared with a conventional cross-correlation method, the proposed method provides markedly improved performance.
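The coarse-registration stage can be illustrated with a minimal sketch: estimate the integer offset between two signals as the argmax of their cross-correlation. This is a generic 1-D illustration under assumed data, not the paper's implementation; real InSAR registration works on 2-D complex imagery.

```python
import numpy as np

def coarse_shift(ref, moving):
    """Coarse registration: integer lag maximizing the cross-correlation
    of `moving` against `ref` (1-D for brevity; images use the same idea)."""
    corr = np.correlate(moving, ref, mode="full")
    # 'full' mode index j corresponds to lag j - (len(ref) - 1)
    return int(np.argmax(corr)) - (len(ref) - 1)

ref = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])
moving = np.array([0.0, 0.0, 0.0, 1.0, 2.0, 1.0])  # ref delayed by one sample
```

Fine registration would then refine this integer estimate to sub-pixel accuracy, which is the role of the hierarchical model-based step.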
Vector space model based on semantic relatedness
NASA Astrophysics Data System (ADS)
Bondarchuk, Dmitry; Timofeeva, Galina
2015-11-01
Most data-mining methods are based on the vector space model of knowledge representation. The vector space model uses the frequency of a term to determine its relevance in a document. Terms can be similar in semantic meaning but lexicographically different, so classification based on term frequency does not give the desired results in some subject areas, such as vacancy selection. A modified vector space model based on semantic relatedness is suggested for data mining in this area. Evaluation results show that the proposed algorithm is better than one based on the standard vector space model.
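The baseline the abstract refers to, the frequency-based vector space model, can be sketched as term-frequency vectors compared by cosine similarity. This is the standard textbook model, not the authors' semantic modification; the example documents are hypothetical.

```python
import math
from collections import Counter

def tf_vector(text):
    """Term-frequency vector for a document (standard vector space model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

doc = tf_vector("java developer with spring experience")
query = tf_vector("java spring developer")
```

The weakness the authors address is visible here: synonyms that never share a surface form contribute nothing to the dot product, which is why a semantic-relatedness term is added.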
Evolutionary design of corrugated horn antennas
NASA Technical Reports Server (NTRS)
Hoorfar, F.; Manshadi, V.; Jamnejad, A.
2002-01-01
An evolutionary programming (EP) algorithm is used to optimize the pattern of a corrugated circular horn subject to various constraints on return loss, antenna beamwidth, pattern circularity, and low cross-polarization. The EP algorithm uses a Gaussian mutation operator. Examples of the design synthesis of a 45-section corrugated horn, with a total of 90 optimization parameters, are presented. The results show excellent and efficient optimization of the desired horn parameters.
An Evolutionary Optimization System for Spacecraft Design
NASA Technical Reports Server (NTRS)
Fukunaga, A.; Stechert, A.
1997-01-01
Spacecraft design optimization is a domain that can benefit from the application of optimization algorithms such as genetic algorithms. In this paper, we describe DEVO, an evolutionary optimization system that addresses this need and provides a tool that can be applied to a number of real-world spacecraft design applications. We describe two current applications of DEVO: physical design of a Mars Microprobe Soil Penetrator, and system configuration optimization for a Neptune Orbiter.
Model-based vision for space applications
NASA Technical Reports Server (NTRS)
Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald
1992-01-01
This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels every image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.
An evolutionary approach to simulated football free kick optimisation
NASA Astrophysics Data System (ADS)
Rhodes, Martin; Coupland, Simon
We present a genetic algorithm-based evolutionary computing approach to the optimisation of simulated football free kick situations. A detailed physics model is implemented in order to apply evolutionary computing techniques to the creation of strategic offensive shots and defensive player locations.
New Improved Fractional Order Differentiator Models Based on Optimized Digital Differentiators
Gupta, Maneesha
2014-01-01
Different evolutionary algorithms (EAs), namely particle swarm optimization (PSO), the genetic algorithm (GA), and PSO-GA hybrid optimization, have been used to optimize digital differential operators so that they better fit their new improved fractional-order differentiator counterparts. First, the paper aims to provide efficient 2nd- and 3rd-order operators by minimizing an error fitness function, registering mean, median, and standard deviation values over different random iterations to ascertain the best results among them, using all the abovementioned EAs. These optimized operators are then discretized into half-differentiator models to exploit the improved qualities obtained from their optimization. Simulation results compare the proposed half differentiators with existing ones and among the different models based on the 2nd- and 3rd-order optimized operators. The proposed half differentiators are observed to approximate the ideal half differentiator and to outperform the existing ones reasonably well over the complete Nyquist frequency range. PMID:24688426
Evolutionary stability on graphs
Ohtsuki, Hisashi; Nowak, Martin A.
2008-01-01
Evolutionary stability is a fundamental concept in evolutionary game theory. A strategy is called an evolutionarily stable strategy (ESS), if its monomorphic population rejects the invasion of any other mutant strategy. Recent studies have revealed that population structure can considerably affect evolutionary dynamics. Here we derive the conditions of evolutionary stability for games on graphs. We obtain analytical conditions for regular graphs of degree k > 2. Those theoretical predictions are compared with computer simulations for random regular graphs and for lattices. We study three different update rules: birth-death (BD), death-birth (DB), and imitation (IM) updating. Evolutionary stability on sparse graphs does not imply evolutionary stability in a well-mixed population, nor vice versa. We provide a geometrical interpretation of the ESS condition on graphs. PMID:18295801
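For contrast with the graph-structured conditions the paper derives, the classical well-mixed ESS condition can be checked directly from a payoff matrix: strategy i is an ESS if every mutant j earns strictly less against i, or ties against i but loses to i in a j-population. A minimal sketch; the game matrices below are illustrative assumptions.

```python
def is_ess(payoff, i):
    """Classical (well-mixed) ESS check for pure strategy i in a symmetric
    game, where payoff[row][col] is the payoff to the row strategy
    against the column strategy."""
    for j in range(len(payoff)):
        if j == i:
            continue
        if payoff[i][i] < payoff[j][i]:      # mutant invades outright
            return False
        if payoff[i][i] == payoff[j][i] and payoff[i][j] <= payoff[j][j]:
            return False                     # tie, and i fails the second condition
    return True

# Hawk-Dove with V=2, C=4: neither pure strategy is an ESS.
hawk_dove = [[-1, 2], [0, 1]]
# Prisoner's dilemma: defection (index 1) is an ESS.
pd = [[3, 0], [5, 1]]
```

The paper's point is precisely that this well-mixed verdict can flip on sparse graphs under BD, DB, or IM updating.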
An evolutionary approach for searching metabolic pathways.
Gerard, Matias F; Stegmayer, Georgina; Milone, Diego H
2013-11-01
Searching metabolic pathways that relate two compounds is a common task in bioinformatics. This is of particular interest when trying, for example, to discover metabolic relations among compounds clustered with a data mining technique. Search strategies find sequences to relate two or more states (compounds) using an appropriate set of transitions (reactions). Evolutionary algorithms carry out the search guided by a fitness function and explore multiple candidate solutions using stochastic operators. In this work we propose an evolutionary algorithm for searching metabolic pathways between two compounds. The operators and fitness function employed are described and the effect of mutation rate is studied. Performance of this algorithm is compared with two classical search strategies. Source code and dataset are available at http://sourceforge.net/projects/sourcesinc/files/eamp/ PMID:24209916
Bell-Curve Based Evolutionary Strategies for Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.
2000-01-01
Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity (Reeves 1997). However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPs) have gained a significant foothold (Glover 1998). One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general, and ES or EP in specific, see Back (1996) and Dasgupta and Michalewicz (1997). We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.
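The real-valued, bell-curve mutation that distinguishes an ES from a binary-encoded GA can be sketched with a minimal (1+1) evolution strategy. This is a generic textbook sketch, not the BCB operator, which draws from two normal distributions; the step size and test function are assumptions.

```python
import random

def one_plus_one_es(f, x, sigma=0.3, iters=2000, seed=1):
    """Minimal (1+1) evolution strategy: perturb every real-valued design
    variable with a normal (bell-curve) mutation, keep the child only if
    it does not worsen the objective."""
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        child = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(child)
        if fc <= fx:
            x, fx = child, fc
    return x, fx

sphere = lambda v: sum(t * t for t in v)   # simple continuous test problem
best, val = one_plus_one_es(sphere, [5.0, -3.0])
```

Because mutation acts directly on the real design variables, no binary encoding or decoding step is needed, the shift the abstract highlights.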
Bell-Curve Based Evolutionary Strategies for Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.
2001-01-01
Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity. However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPs) have gained a significant foothold. One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general, and ES or EP in specific, see Back, and Dasgupta and Michalewicz. We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Model-based multiple patterning layout decomposition
NASA Astrophysics Data System (ADS)
Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.
2015-10-01
As one of the most promising next-generation lithography technologies, multiple patterning lithography (MPL) plays an important role in the attempts to keep pace with the 10 nm technology node and beyond. As feature sizes keep shrinking, it has become impossible to print dense layouts within one single exposure. As a result, MPL such as double patterning lithography (DPL) and triple patterning lithography (TPL) has been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: layout features (polygons) are represented by vertices in a graph G, and there is an edge between two vertices if and only if the distance between the two corresponding features is less than a minimum distance threshold value dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which imposes a geometric distance as a minimum constraint to simply decompose polygons within that distance into different masks. It is not desirable in practice because this criterion cannot completely capture the behavior of the optics. For example, it lacks sufficient information such as the optical source characteristics and the effects between polygons outside the minimum distance. To remedy the deficiency, a model-based layout decomposition approach that bases the decomposition criteria on simulation results was first introduced at SPIE 2013.1 However, that algorithm1 rests on simplified assumptions about the optical simulation model, and therefore its usage on real layouts is limited. Recently, AMSL2 also proposed a model-based approach to layout decomposition that iteratively simulates the layout, which requires excessive computational resources and may lead to sub-optimal solutions. That approach2 also potentially generates too many stitches. In this
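The rule-based DPL formulation described above reduces to testing whether the conflict graph is 2-colorable (bipartite). A minimal sketch under assumed inputs: vertices are features, and an edge joins any two features closer than dmin.

```python
from collections import deque

def dpl_decompose(n, conflicts):
    """Rule-based double-patterning decomposition: features closer than
    dmin share a conflict edge and must land on different masks, so the
    conflict graph must be bipartite. Returns a mask assignment (0/1 per
    feature) or None if an odd cycle makes the layout undecomposable."""
    adj = [[] for _ in range(n)]
    for u, v in conflicts:
        adj[u].append(v)
        adj[v].append(u)
    mask = [None] * n
    for s in range(n):                       # BFS each connected component
        if mask[s] is not None:
            continue
        mask[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if mask[v] is None:
                    mask[v] = 1 - mask[u]
                    q.append(v)
                elif mask[v] == mask[u]:
                    return None              # odd cycle: DPL infeasible
    return mask

# A 4-feature conflict chain decomposes; a 3-feature triangle does not.
chain = dpl_decompose(4, [(0, 1), (1, 2), (2, 3)])
tri = dpl_decompose(3, [(0, 1), (1, 2), (2, 0)])
```

The abstract's criticism is that the edge set here is purely geometric; a model-based decomposer would instead derive the conflicts from optical simulation.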
Pelletier, F.; Garant, D.; Hendry, A.P.
2009-01-01
Evolutionary ecologists and population biologists have recently considered that ecological and evolutionary changes are intimately linked and can occur on the same time-scale. Recent theoretical developments have shown how the feedback between ecological and evolutionary dynamics can be linked, and there are now empirical demonstrations showing that ecological change can lead to rapid evolutionary change. We also have evidence that microevolutionary change can leave an ecological signature. We are at a stage where the integration of ecology and evolution is a necessary step towards major advances in our understanding of the processes that shape and maintain biodiversity. This special feature about ‘eco-evolutionary dynamics’ brings together biologists from empirical and theoretical backgrounds to bridge the gap between ecology and evolution and provide a series of contributions aimed at quantifying the interactions between these fundamental processes. PMID:19414463
Polymorphic Evolutionary Games.
Fishman, Michael A
2016-06-01
In this paper, I present an analytical framework for polymorphic evolutionary games suitable for explicitly modeling evolutionary processes in diploid populations with sexual reproduction. The principal aspect of the proposed approach is adding diploid genetics cum sexual recombination to a traditional evolutionary game and switching from phenotypes to haplotypes as the new game's pure strategies. Here, each pure strategy's payoff is derived by summing the payoffs of all the phenotypes capable of producing gametes containing that particular haplotype, weighted by the pertinent probabilities. The resulting game is structurally identical to the familiar evolutionary games with non-linear pure strategy payoffs (Hofbauer and Sigmund, 1998, Cambridge University Press) and can be analyzed in terms of an established analytical framework for such games. These results can then be translated into terms of genotypic, and hence phenotypic, evolutionary stability pertinent to the original game. PMID:27016340
Model-Based Reasoning in Humans Becomes Automatic with Training
Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J.
2015-01-01
Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load—a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders. PMID:26379239
Evolutionary biology of language.
Nowak, M A
2000-01-01
Language is the most important evolutionary invention of the last few million years. It was an adaptation that helped our species to exchange information, make plans, express new ideas and totally change the appearance of the planet. How human language evolved from animal communication is one of the most challenging questions for evolutionary biology. The aim of this paper is to outline the major principles that guided language evolution in terms of mathematical models of evolutionary dynamics and game theory. I will discuss how natural selection can lead to the emergence of arbitrary signs, the formation of words and syntactic communication. PMID:11127907
Evolutionary Optimization of a Quadrifilar Helical Antenna
NASA Technical Reports Server (NTRS)
Lohn, Jason D.; Kraus, William F.; Linden, Derek S.; Clancy, Daniel (Technical Monitor)
2002-01-01
Automated antenna synthesis via evolutionary design has recently garnered much attention in the research literature. Evolutionary algorithms show promise because, among search algorithms, they are able to effectively search large, unknown design spaces. NASA's Mars Odyssey spacecraft is due to reach final Martian orbit insertion in January, 2002. Onboard the spacecraft is a quadrifilar helical antenna that provides telecommunications in the UHF band with landed assets, such as robotic rovers. Each helix is driven by the same signal, which is phase-delayed in 90 deg increments. A small ground plane is provided at the base. It is designed to operate in the frequency band of 400-438 MHz. Based on encouraging previous results in automated antenna design using evolutionary search, we wanted to see whether such techniques could improve upon the Mars Odyssey antenna design. Specifically, a co-evolutionary genetic algorithm is applied to optimize the gain and size of the quadrifilar helical antenna. The optimization was performed in-situ in the presence of a neighboring spacecraft structure. On the spacecraft, a large aluminum fuel tank is adjacent to the antenna. Since this fuel tank can dramatically affect the antenna's performance, we leave it to the evolutionary process to see if it can exploit the fuel tank's properties advantageously. Optimizing in the presence of surrounding structures would be quite difficult for human antenna designers, and thus the actual antenna was designed for free space (with a small ground plane). In fact, when flying on the spacecraft, surrounding structures that are moveable (e.g., solar panels) may be moved during the mission in order to improve the antenna's performance.
NASA Astrophysics Data System (ADS)
Senkerik, Roman; Zelinka, Ivan; Davendra, Donald; Oplatkova, Zuzana
2010-06-01
This research deals with the optimization of the control of chaos by means of evolutionary algorithms. This work aims to explain how to use evolutionary algorithms (EAs) and how to properly define the advanced targeting cost function (CF) securing very fast and precise stabilization of the desired state for any initial conditions. As a model of a deterministic chaotic system, the one-dimensional logistic equation was used. The evolutionary algorithm Self-Organizing Migrating Algorithm (SOMA) was used in four versions. For each version, repeated simulations were conducted to outline the effectiveness and robustness of the used method and targeting CF.
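The targeting problem in this abstract can be illustrated with a minimal sketch: stabilizing the fixed point of the logistic map by minimizing a targeting cost function over a control parameter. The additive feedback form, all parameter values, and the plain random search (standing in for SOMA) are illustrative assumptions, not the paper's method.

```python
# Minimal sketch: stabilize the fixed point x* = 1 - 1/a of the logistic
# map x_{n+1} = a*x_n*(1 - x_n) by searching for a feedback gain F in the
# controlled map x_{n+1} = a*x_n*(1 - x_n) + F*(x* - x_n).  A plain random
# search stands in for SOMA; parameter values are illustrative assumptions.
import random

def targeting_cost(F, a=4.0, x0=0.6, settle=50, horizon=50):
    """Mean distance from the fixed point after a settling period."""
    x_star = 1.0 - 1.0 / a
    x = x0
    total = 0.0
    for n in range(settle + horizon):
        x = a * x * (1.0 - x) + F * (x_star - x)
        if abs(x) > 1e6:                       # diverged: hopeless gain
            return float("inf")
        if n >= settle:
            total += abs(x - x_star)
    return total / horizon

def search_gain(trials=2000, seed=1):
    """Random search over candidate gains; return the best gain and its cost."""
    rng = random.Random(seed)
    best = min((rng.uniform(-4.0, 4.0) for _ in range(trials)),
               key=targeting_cost)
    return best, targeting_cost(best)
```

Linearizing the controlled map at x* gives the multiplier 2 - a - F, so for a = 4 the fixed point is stable for F between -3 and -1; the search settles near F = -2, where the multiplier vanishes and stabilization is fastest.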
Rethinking evolutionary individuality
Ereshefsky, Marc; Pedroso, Makmiller
2015-01-01
This paper considers whether multispecies biofilms are evolutionary individuals. Numerous multispecies biofilms have characteristics associated with individuality, such as internal integrity, division of labor, coordination among parts, and heritable adaptive traits. However, such multispecies biofilms often fail standard reproductive criteria for individuality: they lack reproductive bottlenecks, are comprised of multiple species, do not form unified reproductive lineages, and fail to have a significant division of reproductive labor among their parts. If such biofilms are good candidates for evolutionary individuals, then evolutionary individuality is achieved through other means than frequently cited reproductive processes. The case of multispecies biofilms suggests that standard reproductive requirements placed on individuality should be reconsidered. More generally, the case of multispecies biofilms indicates that accounts of individuality that focus on single-species eukaryotes are too restrictive and that a pluralistic and open-ended account of evolutionary individuality is needed. PMID:26039982
Evolutionary behavioral genetics
Zietsch, Brendan P.; de Candia, Teresa R; Keller, Matthew C.
2014-01-01
We describe the scientific enterprise at the intersection of evolutionary psychology and behavioral genetics—a field that could be termed Evolutionary Behavioral Genetics—and how modern genetic data is revolutionizing our ability to test questions in this field. We first explain how genetically informative data and designs can be used to investigate questions about the evolution of human behavior, and describe some of the findings arising from these approaches. Second, we explain how evolutionary theory can be applied to the investigation of behavioral genetic variation. We give examples of how new data and methods provide insight into the genetic architecture of behavioral variation and what this tells us about the evolutionary processes that acted on the underlying causal genetic variants. PMID:25587556
Evolutionary Mechanisms for Loneliness
Cacioppo, John T.; Cacioppo, Stephanie; Boomsma, Dorret I.
2013-01-01
Robert Weiss (1973) conceptualized loneliness as perceived social isolation, which he described as a gnawing, chronic disease without redeeming features. On the scale of everyday life, it is understandable how something as personally aversive as loneliness could be regarded as a blight on human existence. However, evolutionary time and evolutionary forces operate at such a different scale of organization than we experience in everyday life that personal experience is not sufficient to understand the role of loneliness in human existence. Research over the past decade suggests a very different view of loneliness than suggested by personal experience, one in which loneliness serves a variety of adaptive functions in specific habitats. We review evidence on the heritability of loneliness and outline an evolutionary theory of loneliness, with an emphasis on its potential adaptive value in an evolutionary timescale. PMID:24067110
Eco-evolutionary feedbacks, adaptive dynamics and evolutionary rescue theory
Ferriere, Regis; Legendre, Stéphane
2013-01-01
Adaptive dynamics theory has been devised to account for feedbacks between ecological and evolutionary processes. Doing so opens new dimensions to and raises new challenges about evolutionary rescue. Adaptive dynamics theory predicts that successive trait substitutions driven by eco-evolutionary feedbacks can gradually erode population size or growth rate, thus potentially raising the extinction risk. Even a single trait substitution can suffice to degrade population viability drastically at once and cause ‘evolutionary suicide’. In a changing environment, a population may track a viable evolutionary attractor that leads to evolutionary suicide, a phenomenon called ‘evolutionary trapping’. Evolutionary trapping and suicide are commonly observed in adaptive dynamics models in which the smooth variation of traits causes catastrophic changes in ecological state. In the face of trapping and suicide, evolutionary rescue requires that the population overcome evolutionary threats generated by the adaptive process itself. Evolutionary repellors play an important role in determining how variation in environmental conditions correlates with the occurrence of evolutionary trapping and suicide, and what evolutionary pathways rescue may follow. In contrast with standard predictions of evolutionary rescue theory, low genetic variation may attenuate the threat of evolutionary suicide and small population sizes may facilitate escape from evolutionary traps. PMID:23209163
NASA Technical Reports Server (NTRS)
Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.
2006-01-01
System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.
Kramers problem in evolutionary strategies
NASA Astrophysics Data System (ADS)
Dunkel, J.; Ebeling, W.; Schimansky-Geier, L.; Hänggi, P.
2003-06-01
We calculate the escape rates of different dynamical processes for the case of a one-dimensional symmetric double-well potential. In particular, we compare the escape rates of a Smoluchowski process, i.e., a corresponding overdamped Brownian motion dynamics in a metastable potential landscape, with the escape rates obtained for a biologically motivated model known as the Fisher-Eigen process. The main difference between the two models is that the dynamics of the Smoluchowski process is determined by local quantities, whereas the Fisher-Eigen process is based on a global coupling (nonlocal interaction). If considered in the context of numerical optimization algorithms, both processes can be interpreted as archetypes of physically or biologically inspired evolutionary strategies. In this sense, the results discussed in this work are useful for evaluating the efficiency of such strategies with regard to the problem of surmounting various barriers. We find that a combination of both scenarios, starting with the Fisher-Eigen strategy, provides a most effective evolutionary strategy.
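The Smoluchowski half of this comparison can be sketched directly: an overdamped Brownian particle in a symmetric double well, with the escape time estimated by simulation. The specific potential U(x) = x^4/4 - x^2/2, the temperature, and the step size are illustrative assumptions, not parameters taken from the paper.

```python
# Minimal sketch of overdamped (Smoluchowski) escape dynamics in the
# symmetric double well U(x) = x**4/4 - x**2/2, integrated with the
# Euler-Maruyama scheme.  Potential, temperature, and step size are
# illustrative assumptions.
import math
import random

def escape_time(temp=0.15, dt=1e-3, seed=0, max_steps=10**6):
    """First-passage time from the left minimum x = -1 to the right one."""
    rng = random.Random(seed)
    x = -1.0
    sigma = math.sqrt(2.0 * temp * dt)        # noise amplitude per step
    for step in range(1, max_steps + 1):
        force = x - x**3                      # -dU/dx
        x += force * dt + sigma * rng.gauss(0.0, 1.0)
        if x >= 1.0:                          # reached the other minimum
            return step * dt
    return math.inf

def mean_escape_time(n_runs=10, temp=0.15):
    return sum(escape_time(temp=temp, seed=s) for s in range(n_runs)) / n_runs
```

As a rough consistency check, the classical Kramers estimate for this well gives a rate of order (sqrt(2)/2π) · exp(-ΔU/T) with barrier height ΔU = 1/4, so at T = 0.15 the mean escape time should be on the order of tens of time units.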
Dynamics of a Simple Evolutionary Process
NASA Astrophysics Data System (ADS)
Stauffer, Dietrich; Newman, M. E. J.
We study the simple evolutionary process in which we repeatedly find the least fit agent in a population of agents and give it a new fitness, which is chosen independently at random from a specified distribution. We show that many of the average properties of this process can be calculated exactly using analytic methods. In particular, we find the distribution of fitnesses at arbitrary time, and the distribution of the lengths of runs of hits on the same agent, the latter being found to follow a power law with exponent -1, similar to the distribution of times between evolutionary events in the Bak-Sneppen model and models based on the so-called record dynamics. We confirm our analytic results with extensive numerical simulations.
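The process described above is simple enough to simulate directly. A small sketch, assuming a uniform fitness distribution on [0, 1) and a fixed population size (both assumptions for illustration):

```python
# Direct simulation of the least-fit replacement process: repeatedly find
# the least fit agent and redraw its fitness from a uniform distribution,
# recording the lengths of runs of hits on the same agent.
import random

def evolve(n_agents=100, steps=10000, seed=0):
    """Run the process; return final fitnesses and run lengths of hits."""
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n_agents)]
    run_lengths = []                   # lengths of runs of hits on one agent
    last_hit, run = None, 0
    for _ in range(steps):
        i = min(range(n_agents), key=fitness.__getitem__)  # least fit agent
        if i == last_hit:
            run += 1
        else:
            if last_hit is not None:
                run_lengths.append(run)
            last_hit, run = i, 1
        fitness[i] = rng.random()      # redraw its fitness
    run_lengths.append(run)
    return fitness, run_lengths
```

After many steps nearly all fitness values sit just below 1: only the current minimum is ever replaced, so high draws persist and the fitness distribution is pushed toward the top of the interval, consistent with the moving-threshold picture in the abstract.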
Automated Hardware Design via Evolutionary Search
NASA Technical Reports Server (NTRS)
Lohn, Jason D.; Colombano, Silvano P.
2000-01-01
The goal of this research is to investigate the application of evolutionary search to the process of automated engineering design. Evolutionary search techniques involve the simulation of Darwinian mechanisms by computer algorithms. In recent years, such techniques have attracted much attention because they are able to tackle a wide variety of difficult problems and frequently produce acceptable solutions. The results obtained are usually functional, often surprising, and typically "messy" because the algorithms are told to concentrate on the overriding objective and not elegance or simplicity. Automated design offers several advantages. First, faster design cycles translate into time and, hence, cost savings. Second, automated design techniques can be made to scale well and hence better deal with increasing amounts of design complexity. Third, design quality can increase because design properties can be specified a priori. For example, size and weight specifications of a device, smaller and lighter than the best known design, might be optimized by the automated design technique. The domain of electronic circuit design is an advantageous platform in which to study automated design techniques because it is a rich design space that is well understood, permitting human-created designs to be compared to machine-generated designs. The goal of the approach developed for circuit design was to automatically produce high-level integrated electronic circuit designs whose properties permit physical implementation in silicon. This process entailed designing an effective evolutionary algorithm and solving a difficult multiobjective optimization problem. FY 99 saw many accomplishments in this effort.
Learning Intelligent Genetic Algorithms Using Japanese Nonograms
ERIC Educational Resources Information Center
Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen
2012-01-01
An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…
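The canonical GA ingredients named above (selection, crossover, mutation, a fitness function) can be shown in minimal form. This sketch applies them to the toy one-max bit-string problem rather than to nonograms, and the operator choices (tournament selection, one-point crossover, per-bit mutation) are generic textbook assumptions, not the IGA of the paper.

```python
# Minimal canonical genetic algorithm on the one-max problem
# (maximize the number of 1 bits in a fixed-length bit string).
import random

def canonical_ga(n_bits=20, pop_size=30, generations=60, seed=3):
    rng = random.Random(seed)

    def fitness(ind):
        return sum(ind)                        # one-max: count the 1 bits

    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # tournament selection of two parents
            a = max(rng.sample(pop, 3), key=fitness)
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_bits):            # per-bit mutation
                if rng.random() < 1.0 / n_bits:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

A solver for an actual nonogram would replace `fitness` with a function scoring how well a candidate grid matches the row and column clues; the surrounding loop stays the same.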
Proteomics in evolutionary ecology.
Baer, B; Millar, A H
2016-03-01
Evolutionary ecologists are traditionally gene-focused, as genes propagate phenotypic traits across generations and mutations and recombination in the DNA generate genetic diversity required for evolutionary processes. As a consequence, the inheritance of changed DNA provides a molecular explanation for the functional changes associated with natural selection. A direct focus on proteins on the other hand, the actual molecular agents responsible for the expression of a phenotypic trait, receives far less interest from ecologists and evolutionary biologists. This is partially due to the central dogma of molecular biology that appears to define proteins as the 'dead-end of molecular information flow' as well as technical limitations in identifying and studying proteins and their diversity in the field and in many of the more exotic genera often favored in ecological studies. Here we provide an overview of a newly forming field of research that we refer to as 'Evolutionary Proteomics'. We point out that the origins of cellular function are related to the properties of polypeptide and RNA and their interactions with the environment, rather than DNA descent, and that the critical role of horizontal gene transfer in evolution is more about coopting new proteins to impact cellular processes than it is about modifying gene function. Furthermore, post-transcriptional and post-translational processes generate a remarkable diversity of mature proteins from a single gene, and the properties of these mature proteins can also influence inheritance through genetic and perhaps epigenetic mechanisms. The influence of post-transcriptional diversification on evolutionary processes could provide a novel mechanistic underpinning for elements of rapid, directed evolutionary changes and adaptations as observed for a variety of evolutionary processes. Modern state-of-the-art technologies based on mass spectrometry are now available to identify and quantify peptides, proteins, protein
Paleoanthropology and evolutionary theory.
Tattersall, Ian
2012-01-01
Paleoanthropologists of the first half of the twentieth century were little concerned either with evolutionary theory or with the technicalities and broader implications of zoological nomenclature. In consequence, the paleoanthropological literature of the period consisted largely of a series of descriptions accompanied by authoritative pronouncements, together with a huge excess of hominid genera and species. Given the intellectual flimsiness of the resulting paleoanthropological framework, it is hardly surprising that in 1950 the ornithologist Ernst Mayr met little resistance when he urged the new postwar generation of paleoanthropologists to accept not only the elegant reductionism of the Evolutionary Synthesis but a vast oversimplification of hominid phylogenetic history and nomenclature. Indeed, the impact of Mayr's onslaught was so great that even when developments in evolutionary biology during the last quarter of the century brought other paleontologists to the realization that much more has been involved in evolutionary histories than the simple action of natural selection within gradually transforming lineages, paleoanthropologists proved highly reluctant to follow. Even today, paleoanthropologists are struggling to reconcile an intuitive realization that the burgeoning hominid fossil record harbors a substantial diversity of species (bringing hominid evolutionary patterns into line with that of other successful mammalian families), with the desire to cram a huge variety of morphologies into an unrealistically minimalist systematic framework. As long as this theoretical ambivalence persists, our perception of events in hominid phylogeny will continue to be distorted. PMID:23272602
Applying Evolutionary Anthropology
Gibson, Mhairi A; Lawson, David W
2015-01-01
Evolutionary anthropology provides a powerful theoretical framework for understanding how both current environments and legacies of past selection shape human behavioral diversity. This integrative and pluralistic field, combining ethnographic, demographic, and sociological methods, has provided new insights into the ultimate forces and proximate pathways that guide human adaptation and variation. Here, we present the argument that evolutionary anthropological studies of human behavior also hold great, largely untapped, potential to guide the design, implementation, and evaluation of social and public health policy. Focusing on the key anthropological themes of reproduction, production, and distribution we highlight classic and recent research demonstrating the value of an evolutionary perspective to improving human well-being. The challenge now comes in transforming relevance into action and, for that, evolutionary behavioral anthropologists will need to forge deeper connections with other applied social scientists and policy-makers. We are hopeful that these developments are underway and that, with the current tide of enthusiasm for evidence-based approaches to policy, evolutionary anthropology is well positioned to make a strong contribution. PMID:25684561
Model-based ocean acoustic passive localization
Candy, J.V.; Sullivan, E.J.
1994-01-24
The detection, localization and classification of acoustic sources (targets) in a hostile ocean environment is a difficult problem -- especially in light of the improved design of modern submarines and the continual improvement in quieting technology. Further, the advent of more and more diesel-powered vessels makes the detection problem even more formidable than ever before. It has recently been recognized that the incorporation of a mathematical model that accurately represents the phenomenology under investigation can vastly improve the performance of any processor, assuming, of course, that the model is accurate. Therefore, it is necessary to incorporate more knowledge about the ocean environment into detection and localization algorithms in order to enhance the overall signal-to-noise ratios and improve performance. An alternative methodology to matched-field/matched-mode processing is the so-called model-based processor which is based on a state-space representation of the normal-mode propagation model. If state-space solutions can be accomplished, then many of the current ocean acoustic processing problems can be analyzed and solved using this framework to analyze performance results based on firm statistical and system theoretic grounds. The model-based approach is (simply) "incorporating mathematical models of both physical phenomenology and the measurement processes, including noise, into the processor to extract the desired information." In this application, we seek techniques to incorporate the: (1) ocean acoustic propagation model; (2) sensor array measurement model; and (3) noise models (ambient, shipping, surface and measurement) into a processor to solve the associated localization/detection problems.
Model-based reasoning: Troubleshooting
NASA Astrophysics Data System (ADS)
Davis, Randall; Hamscher, Walter C.
1988-07-01
To determine why something has stopped working, it's useful to know how it was supposed to work in the first place. That simple observation underlies some of the considerable interest generated in recent years on the topic of model-based reasoning, particularly its application to diagnosis and troubleshooting. This paper surveys the current state of the art, reviewing areas that are well understood and exploring areas that present challenging research topics. It views the fundamental paradigm as the interaction of prediction and observation, and explores it by examining three fundamental subproblems: generating hypotheses by reasoning from a symptom to a collection of components whose misbehavior may plausibly have caused that symptom; testing each hypothesis to see whether it can account for all available observations of device behavior; then discriminating among the ones that survive testing. We analyze each of these independently at the knowledge level, i.e., attempting to understand what reasoning capabilities arise from the different varieties of knowledge available to the program. We find that while a wide range of apparently diverse model-based systems have been built for diagnosis and troubleshooting, they are for the most part variations on the central theme outlined here. Their diversity lies primarily in the varying amounts and kinds of knowledge they bring to bear at each stage of the process; the underlying paradigm is fundamentally the same.
Human nutrition: evolutionary perspectives.
Barnicot, N A
2005-01-01
In recent decades, much new evidence relating to the ape forerunners of modern humans has come to hand and diet appears to be an important factor. At some stage, there must have been a transition from a largely vegetarian ape diet to a modern human hunting economy providing significant amounts of meat. On an even longer evolutionary time scale the change was more complex. The mechanisms of evolutionary change are now better understood than they were in Darwin's time, thanks largely to great advances in genetics, both experimental and theoretical. It is virtually certain that diet, as a major component of the human environment, must have exerted evolutionary effects, but researchers still have little good evidence. PMID:17393680
Ecological and evolutionary traps
Schlaepfer, Martin A.; Runge, M.C.; Sherman, P.W.
2002-01-01
Organisms often rely on environmental cues to make behavioral and life-history decisions. However, in environments that have been altered suddenly by humans, formerly reliable cues might no longer be associated with adaptive outcomes. In such cases, organisms can become 'trapped' by their evolutionary responses to the cues and experience reduced survival or reproduction. Ecological traps occur when organisms make poor habitat choices based on cues that correlated formerly with habitat quality. Ecological traps are part of a broader phenomenon, evolutionary traps, involving a dissociation between cues that organisms use to make any behavioral or life-history decision and outcomes normally associated with that decision. A trap can lead to extinction if a population falls below a critical size threshold before adaptation to the novel environment occurs. Conservation and management protocols must be designed in light of, rather than in spite of, the behavioral mechanisms and evolutionary history of populations and species to avoid 'trapping' them.
Archaeogenetics in evolutionary medicine.
Bouwman, Abigail; Rühli, Frank
2016-09-01
Archaeogenetics is the study of ancient DNA (aDNA) more than 70 years old. It is an important part of the wider studies of many different areas of our past, including animal, plant and pathogen evolution and domestication events. Hereby, we address specifically the impact of research in archaeogenetics in the broader field of evolutionary medicine. Studies on ancient hominid genomes help to understand even modern health patterns. Human genetic microevolution, e.g. related to abilities of post-weaning milk consumption, and specifically genetic adaptation in disease susceptibility, e.g. towards malaria and other infectious diseases, are of the utmost importance in the contributions of archaeogenetics to the evolutionary understanding of human health and disease. With the increase in both the understanding of modern medical genetics and the ability to deep sequence ancient genetic information, the field of archaeogenetic evolutionary medicine is blossoming. PMID:27289479
Evolutionary Debunking Arguments
Kahane, Guy
2011-01-01
Evolutionary debunking arguments (EDAs) are arguments that appeal to the evolutionary origins of evaluative beliefs to undermine their justification. This paper aims to clarify the premises and presuppositions of EDAs—a form of argument that is increasingly put to use in normative ethics. I argue that such arguments face serious obstacles. It is often overlooked, for example, that they presuppose the truth of metaethical objectivism. More importantly, even if objectivism is assumed, the use of EDAs in normative ethics is incompatible with a parallel and more sweeping global evolutionary debunking argument that has been discussed in recent metaethics. After examining several ways of responding to this global debunking argument, I end by arguing that even if we could resist it, this would still not rehabilitate the current targeted use of EDAs in normative ethics given that, if EDAs work at all, they will in any case lead to a truly radical revision of our evaluative outlook. PMID:21949447
The fastest evolutionary trajectory
Traulsen, Arne; Iwasa, Yoh; Nowak, Martin A.
2008-01-01
Given two mutants, A and B, separated by n mutational steps, what is the evolutionary trajectory which allows a homogeneous population of A to reach B in the shortest time? We show that the optimum evolutionary trajectory (fitness landscape) has the property that the relative fitness increase between any two consecutive steps is constant. Hence, the optimum fitness landscape between A and B is given by an exponential function. Our result is precise for small mutation rates and excluding back mutations. We discuss deviations for large mutation rates and including back mutations. For very large mutation rates, the optimum fitness landscape is flat and has a single peak at type B. PMID:17900629
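The stated result has a direct constructive form: the optimal landscape keeps the ratio between consecutive fitness values constant, so it interpolates exponentially between the endpoint fitnesses over the n mutational steps. A small illustration of that property (not the paper's derivation):

```python
# Optimal fitness landscape between mutants A and B separated by n steps:
# constant relative fitness increase, i.e. exponential interpolation
# f_k = f_A * r**k with r = (f_B / f_A)**(1/n).
def optimal_landscape(f_a, f_b, n):
    """Fitness values f_0..f_n with a constant ratio between steps."""
    r = (f_b / f_a) ** (1.0 / n)      # constant ratio f_{k+1} / f_k
    return [f_a * r**k for k in range(n + 1)]
```

For example, `optimal_landscape(1.0, 8.0, 3)` yields the geometric ladder 1, 2, 4, 8, where every step multiplies fitness by the same factor.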
Evolutionary families of peptidases.
Rawlings, N D; Barrett, A J
1993-01-01
The available amino acid sequences of peptidases have been examined, and the enzymes have been allocated to evolutionary families. Some of the families can be grouped together in 'clans' that show signs of distant relationship, but nevertheless, it appears that there may be as many as 60 evolutionary lines of peptidases with separate origins. Some of these contain members with quite diverse peptidase activities, and yet there are some striking examples of convergence. We suggest that the classification by families could be used as an extension of the current classification by catalytic type. PMID:8439290
Investigating human evolutionary history
WOOD, BERNARD
2000-01-01
We rely on fossils for the interpretation of more than 95% of our evolutionary history. Fieldwork resulting in the recovery of fresh fossil evidence is an important component of reconstructing human evolutionary history, but advances can also be made by extracting additional evidence for the existing fossil record, and by improving the methods used to interpret the fossil evidence. This review shows how information from imaging and dental microstructure has contributed to improving our understanding of the hominin fossil record. It also surveys recent advances in the use of the fossil record for phylogenetic inference. PMID:10999269
Hybridization facilitates evolutionary rescue
Stelkens, Rike B; Brockhurst, Michael A; Hurst, Gregory D D; Greig, Duncan
2014-01-01
The resilience of populations to rapid environmental degradation is a major concern for biodiversity conservation. When environments deteriorate to lethal levels, species must evolve to adapt to the new conditions to avoid extinction. Here, we test the hypothesis that evolutionary rescue may be enabled by hybridization, because hybridization increases genetic variability. Using experimental evolution, we show that interspecific hybrid populations of Saccharomyces yeast adapt to grow in more highly degraded environments than intraspecific and parental crosses, resulting in survival rates far exceeding those of their ancestors. We conclude that hybridization can increase evolutionary responsiveness and that taxa able to exchange genes with distant relatives may better survive rapid environmental change. PMID:25558281
Passive localization in ocean acoustics: A model-based approach
Candy, J.V.; Sullivan, E.J.
1995-09-01
A model-based approach is developed to solve the passive localization problem in ocean acoustics using the state-space formulation for the first time. It is shown that the inherent structure of the resulting processor consists of a parameter estimator coupled to a nonlinear optimization scheme. The parameter estimator is designed using the model-based approach in which an ocean acoustic propagation model is used in developing the model-based processor required for localization. Recall that model-based signal processing is a well-defined methodology enabling the inclusion of environmental (propagation) models, measurement (sensor arrays) models, and noise (shipping, measurement) models into a sophisticated processing algorithm. Here the parameter estimator, or more appropriately the model-based identifier (MBID), is designed for a propagation model developed from a shallow-water ocean experiment. After simulation, it is then applied to a set of experimental data demonstrating the applicability of this approach. © 1995 Acoustical Society of America.
Model Based Autonomy for Robust Mars Operations
NASA Technical Reports Server (NTRS)
Kurien, James A.; Nayak, P. Pandurang; Williams, Brian C.; Lau, Sonie (Technical Monitor)
1998-01-01
Space missions have historically relied upon a large ground staff, numbering in the hundreds for complex missions, to maintain routine operations. When an anomaly occurs, this small army of engineers attempts to identify and work around the problem. A piloted Mars mission, with its multiyear duration, cost pressures, half-hour communication delays and two-week blackouts cannot be closely controlled by a battalion of engineers on Earth. Flight crew involvement in routine system operations must also be minimized to maximize science return. It also may be unrealistic to require that the crew have the expertise in each mission subsystem needed to diagnose a system failure and effect a timely repair, as engineers did for Apollo 13. Enter model-based autonomy, which allows complex systems to autonomously maintain operation despite failures or anomalous conditions, contributing to safe, robust, and minimally supervised operation of spacecraft, life support, In Situ Resource Utilization (ISRU) and power systems. Autonomous reasoning is central to the approach. A reasoning algorithm uses a logical or mathematical model of a system to infer how to operate the system, diagnose failures and generate appropriate behavior to repair or reconfigure the system in response. The 'plug and play' nature of the models enables low cost development of autonomy for multiple platforms. Declarative, reusable models capture relevant aspects of the behavior of simple devices (e.g. valves or thrusters). Reasoning algorithms combine device models to create a model of the system-wide interactions and behavior of a complex, unique artifact such as a spacecraft. Rather than requiring engineers to anticipate all possible interactions and failures at design time or perform analysis during the mission, the reasoning engine generates the appropriate response to the current situation, taking into account its system-wide knowledge, the current state, and even sensor failures or unexpected behavior.
EVOLUTIONARY FOUNDATIONS FOR MOLECULAR MEDICINE
Nesse, Randolph M.; Ganten, Detlev; Gregory, T. Ryan; Omenn, Gilbert S.
2015-01-01
Evolution has long provided a foundation for population genetics, but many major advances in evolutionary biology from the 20th century are only now being applied in molecular medicine. They include the distinction between proximate and evolutionary explanations, kin selection, evolutionary models for cooperation, and new strategies for tracing phylogenies and identifying signals of selection. Recent advances in genomics are further transforming evolutionary biology and creating yet more opportunities for progress at the interface of evolution with genetics, medicine, and public health. This article reviews 15 evolutionary principles and their applications in molecular medicine in hopes that readers will use them and others to speed the development of evolutionary molecular medicine. PMID:22544168
Evolutionary Theories of Detection
Fitch, J P
2005-04-29
Current, mid-term and long range technologies for detection of pathogens and toxins are briefly described in the context of performance metrics and operational scenarios. Predictive (evolutionary) and speculative (revolutionary) assessments are given with trade-offs identified, where possible, among competing performance goals.
Learning: An Evolutionary Analysis
ERIC Educational Resources Information Center
Swann, Joanna
2009-01-01
This paper draws on the philosophy of Karl Popper to present a descriptive evolutionary epistemology that offers philosophical solutions to the following related problems: "What happens when learning takes place?" and "What happens in human learning?" It provides a detailed analysis of how learning takes place without any direct transfer of…
Evolutionary Theory under Fire.
ERIC Educational Resources Information Center
Lewin, Roger
1980-01-01
Summarizes events of a conference on evolutionary biology in Chicago entitled: "Macroevolution." Reviews the theory of modern synthesis, a term used to explain Darwinism in terms of population biology and genetics. Issues presented at the conference are discussed in detail. (CS)
Evolutionary Developmental Psychology.
ERIC Educational Resources Information Center
Geary, David C.; Bjorklund, David F.
2000-01-01
Describes evolutionary developmental psychology as the study of the genetic and ecological mechanisms that govern the development of social and cognitive competencies common to all human beings and the epigenetic (gene-environment interactions) processes that adapt these competencies to local conditions. Outlines basic assumptions and domains of…
Evolutionary Optimization of Yagi-Uda Antennas
NASA Technical Reports Server (NTRS)
Lohn, Jason D.; Kraus, William F.; Linden, Derek S.; Colombano, Silvano P.
2001-01-01
Yagi-Uda antennas are known to be difficult to design and optimize due to their sensitivity at high gain, and the inclusion of numerous parasitic elements. We present a genetic algorithm-based automated antenna optimization system that uses a fixed Yagi-Uda topology and a byte-encoded antenna representation. The fitness calculation allows the implicit relationship between power gain and sidelobe/backlobe loss to emerge naturally, a technique that is less complex than previous approaches. The genetic operators used are also simpler. Our results include Yagi-Uda antennas that have excellent bandwidth and gain properties with very good impedance characteristics. Results exceeded previous Yagi-Uda antennas produced via evolutionary algorithms by at least 7.8% in mainlobe gain. We also present encouraging preliminary results where a coevolutionary genetic algorithm is used.
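The core loop of such a genetic algorithm can be sketched as follows. This is a hedged illustration, not the authors' system: their fitness comes from an antenna simulator combining power gain and sidelobe/backlobe loss, whereas here a toy fitness (distance of a byte-encoded genome to a made-up target pattern, `TARGET`) stands in, and the operators and population sizes are illustrative defaults.

```python
import random

random.seed(42)

# Toy stand-in fitness: the paper's real fitness comes from an antenna
# simulator; here we simply reward genomes whose bytes are close to a
# made-up target pattern.
TARGET = [128, 64, 200, 30, 90, 170]

def fitness(genome):
    return -sum(abs(g - t) for g, t in zip(genome, TARGET))

def random_genome(n=6):
    return [random.randrange(256) for _ in range(n)]

def mutate(genome, rate=0.1):
    # each byte is redrawn uniformly with probability `rate`
    return [random.randrange(256) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))  # one-point crossover
    return a[:cut] + b[cut:]

def evolve(pop_size=40, generations=100):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]           # truncation selection
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children                 # elitist replacement
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # negated total byte error; improves monotonically thanks to elitism
```

The byte encoding mirrors the abstract's "byte-encoded antenna representation"; in the real system each byte would decode to an element length or spacing rather than being compared to a target directly.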
Model-Based Method for Sensor Validation
NASA Technical Reports Server (NTRS)
Vatan, Farrokh
2012-01-01
Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. However, these methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work takes a model-based approach and identifies the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). It is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts from a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concept of analytical redundancy relations (ARRs).
SCOPmap: Automated assignment of protein structures to evolutionary superfamilies
Cheek, Sara; Qi, Yuan; Krishna, S Sri; Kinch, Lisa N; Grishin, Nick V
2004-01-01
Background: Inference of remote homology between proteins is very challenging and remains the prerogative of experts. Thus a significant drawback to the use of evolutionary-based protein structure classifications is the difficulty of assigning new proteins to unique positions in the classification scheme with automatic methods. To address this issue, we have developed an algorithm to map protein domains to an existing structural classification scheme and have applied it to the SCOP database. Results: The general strategy employed by this algorithm is to combine the results of several existing sequence and structure comparison tools, applied to a query protein of known structure, in order to find the homologs already classified in the SCOP database and thus determine classification assignments. The algorithm is able to map domains within newly solved structures to the appropriate SCOP superfamily level with ~95% accuracy. Examples of correctly mapped remote homologs are discussed. The algorithm is also capable of identifying potential evolutionary relationships not specified in the SCOP database, thus helping to improve the database. The strategy of the mapping algorithm is not limited to SCOP and can be applied to any other evolutionary-based classification scheme as well. SCOPmap is available for download. Conclusion: The SCOPmap program is useful for assigning domains in newly solved structures to appropriate superfamilies and for identifying evolutionary links between different superfamilies. PMID:15598351
MODEL-BASED CLUSTERING OF LARGE NETWORKS
Vu, Duy Q.; Hunter, David R.; Schweinberger, Michael
2015-01-01
We describe a network clustering framework, based on finite mixture models, that can be applied to discrete-valued networks with hundreds of thousands of nodes and billions of edge variables. Relative to other recent model-based clustering work for networks, we introduce a more flexible modeling framework, improve the variational-approximation estimation algorithm, discuss and implement standard error estimation via a parametric bootstrap approach, and apply these methods to much larger data sets than those seen elsewhere in the literature. The more flexible framework is achieved through introducing novel parameterizations of the model, giving varying degrees of parsimony, using exponential family models whose structure may be exploited in various theoretical and algorithmic ways. The algorithms are based on variational generalized EM algorithms, where the E-steps are augmented by a minorization-maximization (MM) idea. The bootstrapped standard error estimates are based on an efficient Monte Carlo network simulation idea. Last, we demonstrate the usefulness of the model-based clustering framework by applying it to a discrete-valued network with more than 131,000 nodes and 17 billion edge variables. PMID:26605002
A modified evolutionary minority game with local imitation
NASA Astrophysics Data System (ADS)
Shang, Lihui; Wang, Xiao Fan
2006-03-01
In this work, we consider a network of agents who compete for limited resources through a modified evolutionary minority game model based on a Kauffman network. The properties of such a system are studied for different values of the mean connectivity K of the network. Simulation results suggest that the agents tend to self-segregate into opposing groups characterized by extreme actions, and show that the agents can coordinate their behavior effectively when K=2. Enhanced cooperation also occurs in a generalization of the model to multiple-choice evolutionary minority games, in which the agents are allowed to choose among several options.
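A bare-bones minority game with evolutionary strategy replacement can be sketched as below. This is a simplified illustration, not the paper's model: it omits the Kauffman-network wiring and the connectivity parameter K, represents each agent's strategy as a single biased coin `p`, and the score threshold and round count are invented.

```python
import random

random.seed(3)

# N agents repeatedly pick side 0 or 1; whoever is in the minority wins.
# Each agent's strategy is a biased coin p = P(pick side 1); an agent whose
# score drops below a threshold is replaced by a fresh random strategy.
N, ROUNDS = 101, 2000
probs = [random.random() for _ in range(N)]
scores = [0] * N

for _ in range(ROUNDS):
    choices = [1 if random.random() < p else 0 for p in probs]
    minority = 1 if sum(choices) < N / 2 else 0
    for i, c in enumerate(choices):
        scores[i] += 1 if c == minority else -1
        if scores[i] < -10:            # evolutionary replacement rule
            probs[i] = random.random()
            scores[i] = 0

# In published evolutionary minority games, surviving strategies tend to
# concentrate near p = 0 and p = 1 (self-segregation into extreme actions).
extreme = sum(1 for p in probs if p < 0.2 or p > 0.8)
print(extreme, "of", N, "agents hold near-extreme strategies")
```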
Model based matching using simulated annealing and a minimum representation size criterion
NASA Technical Reports Server (NTRS)
Ravichandran, B.; Sanderson, A. C.
1992-01-01
We define the model based matching problem in terms of the correspondence and transformation that relate the model and scene, and the search and evaluation measures needed to find the best correspondence and transformation. Simulated annealing is proposed as a method for search and optimization, and the minimum representation size criterion is used as the evaluation measure in an algorithm that finds the best correspondence. An algorithm based on simulated annealing is presented and evaluated. This algorithm is viewed as a part of an adaptive, hierarchical approach which provides robust results for a variety of model based matching problems.
Product Mix Selection Using AN Evolutionary Technique
NASA Astrophysics Data System (ADS)
Tsoulos, Ioannis G.; Vasant, Pandian
2009-08-01
This paper proposes an evolutionary technique for the solution of a real-life industrial problem, in particular the product mix selection problem. The evolutionary technique is a combination of a genetic algorithm, which preserves the feasibility of the trial solutions with penalties, and a local optimization method. The goal of this paper, finding the best near-optimal solution for the profit fitness function with respect to the vagueness factor and level of satisfaction, has been achieved. The profit values found will be very useful to decision makers in the industrial engineering sector for implementation purposes. It is possible to improve the solutions obtained in this study by employing other meta-heuristic methods such as simulated annealing, tabu search, ant colony optimization, particle swarm optimization, and artificial immune systems.
Active Shape Model-Based Gait Recognition Using Infrared Images
NASA Astrophysics Data System (ADS)
Kim, Daehee; Lee, Seungwon; Paik, Joonki
We present a gait recognition system using infrared (IR) images. Since an IR camera is not affected by the intensity of illumination, it provides constant recognition performance regardless of the amount of illumination. Model-based object tracking algorithms enable robust tracking under partial occlusions or dynamic illumination. However, these algorithms often fail to track objects if a strong edge exists near the object. Replacing the input image with an IR image guarantees robust object region extraction because background edges do not affect the IR image. In conclusion, the proposed gait recognition algorithm improves accuracy in object extraction by using IR images, and this improvement in turn increases the gait recognition rate.
NASA Astrophysics Data System (ADS)
Hibbard, Bill
2012-05-01
Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.
An evolutionary tabu search for cell image segmentation.
Jiang, Tianzi; Yang, Faguo
2002-01-01
Many engineering problems can be formulated as optimization problems. It has become more and more important to develop an efficient global optimization technique for solving these problems. In this paper, we propose an evolutionary tabu search (ETS) for cell image segmentation. The advantages of genetic algorithms (GA) and TS algorithms are incorporated into the proposed method. More precisely, we incorporate "the survival of the fittest" from evolutionary algorithms into TS. The method has been applied to the segmentation of several kinds of cell images. The experimental results show that the new algorithm is a practical and effective one for global optimization; it can yield good, near-optimal solutions and has better convergence and robustness than other global optimization approaches. PMID:18244872
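The "survival of the fittest plus tabu list" idea can be illustrated on a toy problem. This sketch is not the paper's segmentation method: it minimizes a simple one-dimensional multimodal function instead of a cell-image criterion, and the tabu radius, list length, and step size are invented parameters.

```python
import math
import random

random.seed(7)

def f(x):
    # simple multimodal objective standing in for a segmentation criterion
    return (x - 3) ** 2 + math.sin(5 * x)

def ets(pop_size=10, iters=200, step=0.5, tabu_radius=0.05, tabu_len=20):
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    tabu = []
    best = min(pop, key=f)
    for _ in range(iters):
        cand = []
        for x in pop:
            y = x + random.uniform(-step, step)       # neighborhood move
            if all(abs(y - t) > tabu_radius for t in tabu):
                cand.append(y)                        # not tabu: keep it
        pop = sorted(pop + cand, key=f)[:pop_size]    # survival of the fittest
        tabu = (tabu + [pop[0]])[-tabu_len:]          # bounded tabu list
        best = min(best, pop[0], key=f)
    return best

b = ets()
print(round(b, 2), round(f(b), 3))
```

The tabu list blocks new candidates near recently visited optima, forcing exploration, while the truncation step carries the GA-style selection pressure the abstract describes.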
Evolutionary Design in Embryogeny
NASA Astrophysics Data System (ADS)
Ashlock, Daniel
In biology texts embryogeny is defined as "the development or production of an embryo." An embryo is a living creature in its first stage of life, from the fertilized egg cell through the initial development of its morphology and its chemical networks. The study of embryogeny is part of developmental biology [1, 2]. The reader may wonder why a book on evolutionary design should have a section on embryogeny. Computational embryogeny is the study of representations for evolutionary computation that mimic biological embryogeny. These representations contain analogs to the complex biological processes that steer a single cell to become a rose, a mouse, or a man. The advantage of using embryogenic representations is their richness of expression. A small seed of information can be expanded, through a developmental process, into a complex and potentially useful object. This richness of expression comes at a substantial price: the developmental process is sufficiently complex to be unpredictable.
Evolutionary Determinants of Cancer
Greaves, Mel
2015-01-01
‘Nothing in biology makes sense except in the light of evolution’ (Th. Dobzhansky, 1973). Our understanding of cancer is being transformed by exploring clonal diversity, drug resistance and causation within an evolutionary framework. The therapeutic resilience of advanced cancer is a consequence of its character as a complex, dynamic and adaptive ecosystem engendering robustness, underpinned by genetic diversity and epigenetic plasticity. The risk of mutation-driven escape by self-renewing cells is intrinsic to multicellularity but is countered by multiple restraints facilitating the increasing complexity and longevity of species. Our own species, however, has disrupted this historical narrative by rapidly escalating intrinsic risk. Evolutionary principles illuminate these challenges and provide new avenues to explore for more effective control. PMID:26193902
Predicting evolutionary dynamics
NASA Astrophysics Data System (ADS)
Balazsi, Gabor
We developed an ordinary differential equation-based model to predict the evolutionary dynamics of yeast cells carrying a synthetic gene circuit. The predicted aspects included the speed at which the ancestral genotype disappears from the population, as well as the types of mutant alleles that establish in each environmental condition. We validated these predictions by experimental evolution. The agreement between our predictions and experimental findings suggests that cellular and population fitness landscapes can be useful for predicting short-term evolution.
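A minimal version of such an ODE-based prediction can be sketched as follows; this is an illustrative stand-in, not the authors' model. It tracks the frequency `x` of the ancestral genotype competing against a mutant with selective advantage `s`, using replicator-style dynamics integrated by forward Euler.

```python
# Replicator-style competition between an ancestral genotype (frequency x)
# and a mutant with selective advantage s, integrated by forward Euler.
def ancestral_frequency(s=0.1, x0=0.99, dt=0.01, t_end=200.0):
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (-s * x * (1 - x))  # dx/dt = -s x (1 - x)
        t += dt
    return x

# With selection the ancestral type is almost completely displaced;
# with s = 0 its frequency does not move at all.
print(ancestral_frequency(), ancestral_frequency(s=0.0))
```

The time at which `x` drops below a chosen cutoff is the kind of "speed of disappearance" quantity that the study predicts and validates against experimental evolution.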
A Model-Based Prognostics Approach Applied to Pneumatic Valves
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Goebel, Kai
2011-01-01
Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction, therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
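The particle-filtering idea behind such model-based prognostics can be sketched on a toy degradation process (not the paper's physics-based pneumatic valve model): a hidden damage level grows at an unknown rate, noisy observations arrive, and a bootstrap particle filter estimates the rate and a remaining-useful-life figure. All parameters here (`N`, `THRESHOLD`, the noise levels) are illustrative.

```python
import math
import random

random.seed(0)

N = 500          # number of particles
THRESHOLD = 1.0  # damage level regarded as failure

def simulate_truth(steps=40, rate=0.02, noise=0.05):
    # hidden damage grows linearly; we only see noisy measurements
    d, obs = 0.0, []
    for _ in range(steps):
        d += rate
        obs.append(d + random.gauss(0, noise))
    return obs

def particle_filter(observations, noise=0.05):
    # each particle carries a (damage, growth-rate) hypothesis
    particles = [(0.0, random.uniform(0.0, 0.1)) for _ in range(N)]
    for z in observations:
        particles = [(d + r, r) for d, r in particles]       # propagate
        weights = [math.exp(-((z - d) ** 2) / (2 * noise ** 2))
                   for d, _ in particles]                    # likelihood
        if sum(weights) == 0:
            weights = [1.0] * N                              # degeneracy guard
        particles = random.choices(particles, weights=weights, k=N)  # resample
    return particles

post = particle_filter(simulate_truth())
mean_damage = sum(d for d, _ in post) / N
mean_rate = sum(r for _, r in post) / N
rul_steps = (THRESHOLD - mean_damage) / mean_rate  # remaining useful life
print(round(mean_rate, 3), round(rul_steps, 1))
```

In the paper's setting the propagation step would be the physics-based valve model and the prediction step would propagate the particle cloud forward to the failure threshold, yielding a full distribution over remaining useful life rather than a point estimate.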
NASA Astrophysics Data System (ADS)
Szabó, György; Fáth, Gábor
2007-07-01
Game theory is one of the key paradigms behind many scientific disciplines from biology to behavioral sciences to economics. In its evolutionary form and especially when the interacting agents are linked in a specific social network the underlying solution concepts and methods are very similar to those applied in non-equilibrium statistical physics. This review gives a tutorial-type overview of the field for physicists. The first four sections introduce the necessary background in classical and evolutionary game theory from the basic definitions to the most important results. The fifth section surveys the topological complications implied by non-mean-field-type social network structures in general. The next three sections discuss in detail the dynamic behavior of three prominent classes of models: the Prisoner's Dilemma, the Rock-Scissors-Paper game, and Competing Associations. The major theme of the review is in what sense and how the graph structure of interactions can modify and enrich the picture of long term behavioral patterns emerging in evolutionary games.
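The well-mixed baseline that network structure modifies can be written down directly. The sketch below integrates the replicator equation for the Prisoner's Dilemma with illustrative payoffs satisfying T > R > P > S, under which defection dominates and cooperation dies out; the spatial and networked versions surveyed in the review can change this outcome.

```python
# Replicator dynamics for the Prisoner's Dilemma in a well-mixed population.
# Payoffs satisfy T > R > P > S, so defection dominates and the cooperator
# share x decays to zero; structured populations can behave differently.
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def cooperator_share(x0=0.9, dt=0.01, steps=5000):
    x = x0
    for _ in range(steps):
        fc = R * x + S * (1 - x)        # cooperator payoff
        fd = T * x + P * (1 - x)        # defector payoff
        fbar = x * fc + (1 - x) * fd    # population mean payoff
        x += dt * x * (fc - fbar)       # replicator equation, Euler step
    return x

print(cooperator_share())  # effectively 0: defectors take over
```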
Evolutionary mysteries in meiosis.
Lenormand, Thomas; Engelstädter, Jan; Johnston, Susan E; Wijnker, Erik; Haag, Christoph R
2016-10-19
Meiosis is a key event of sexual life cycles in eukaryotes. Its mechanistic details have been uncovered in several model organisms, and most of its essential features have received various and often contradictory evolutionary interpretations. In this perspective, we present an overview of these often 'weird' features. We discuss the origin of meiosis (origin of ploidy reduction and recombination, two-step meiosis), its secondary modifications (in polyploids or asexuals, inverted meiosis), its importance in punctuating life cycles (meiotic arrests, epigenetic resetting, meiotic asymmetry, meiotic fairness) and features associated with recombination (disjunction constraints, heterochiasmy, crossover interference and hotspots). We present the various evolutionary scenarios and selective pressures that have been proposed to account for these features, and we highlight that their evolutionary significance often remains largely mysterious. Resolving these mysteries will likely provide decisive steps towards understanding why sex and recombination are found in the majority of eukaryotes.This article is part of the themed issue 'Weird sex: the underappreciated diversity of sexual reproduction'. PMID:27619705
McAvoy, Alex; Hauert, Christoph
2015-01-01
Evolutionary game theory is a powerful framework for studying evolution in populations of interacting individuals. A common assumption in evolutionary game theory is that interactions are symmetric, which means that the players are distinguished by only their strategies. In nature, however, the microscopic interactions between players are nearly always asymmetric due to environmental effects, differing baseline characteristics, and other possible sources of heterogeneity. To model these phenomena, we introduce into evolutionary game theory two broad classes of asymmetric interactions: ecological and genotypic. Ecological asymmetry results from variation in the environments of the players, while genotypic asymmetry is a consequence of the players having differing baseline genotypes. We develop a theory of these forms of asymmetry for games in structured populations and use the classical social dilemmas, the Prisoner’s Dilemma and the Snowdrift Game, for illustrations. Interestingly, asymmetric games reveal essential differences between models of genetic evolution based on reproduction and models of cultural evolution based on imitation that are not apparent in symmetric games. PMID:26308326
Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems
NASA Technical Reports Server (NTRS)
Terrile, Richard J.; Guillaume, Alexandre
2011-01-01
A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
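One of the evolutionary algorithms named above, differential evolution, fits in a few lines. The sketch below is a generic DE/rand/1 variant minimizing a toy sphere function, not the ECM system itself; `F`, `CR`, and the population size are common illustrative defaults.

```python
import random

random.seed(5)

def sphere(v):
    return sum(x * x for x in v)  # toy objective with minimum at the origin

def de(dim=5, pop_size=20, F=0.8, CR=0.9, gens=200):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # pick three distinct partners and build a DE/rand/1 trial vector
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [a[k] + F * (b[k] - c[k]) if random.random() < CR
                     else pop[i][k] for k in range(dim)]
            if sphere(trial) < sphere(pop[i]):  # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=sphere)

best = de()
print(round(sphere(best), 6))
```

In the ECM setting, `sphere` would be replaced by a call into a computational engineering model or simulator, with candidate evaluations farmed out across a parallel cluster.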
DeLong, John P; Gibert, Jean P
2016-02-01
Heritable trait variation is a central and necessary ingredient of evolution. Trait variation also directly affects ecological processes, generating a clear link between evolutionary and ecological dynamics. Despite the changes in variation that occur through selection, drift, mutation, and recombination, current eco-evolutionary models usually fail to track how variation changes through time. Moreover, eco-evolutionary models assume fitness functions for each trait and each ecological context, which often do not have empirical validation. We introduce a new type of model, Gillespie eco-evolutionary models (GEMs), that resolves these concerns by tracking distributions of traits through time as eco-evolutionary dynamics progress. This is done by allowing change to be driven by the direct fitness consequences of model parameters within the context of the underlying ecological model, without having to assume a particular fitness function. GEMs work by adding a trait distribution component to the standard Gillespie algorithm - an approach that models stochastic systems in nature that are typically approximated through ordinary differential equations. We illustrate GEMs with the Rosenzweig-MacArthur consumer-resource model. We show not only how heritable trait variation fuels trait evolution and influences eco-evolutionary dynamics, but also how the erosion of variation through time may hinder eco-evolutionary dynamics in the long run. GEMs can be developed for any parameter in any ordinary differential equation model and, furthermore, can enable modeling of multiple interacting traits at the same time. We expect GEMs will open the door to a new direction in eco-evolutionary and evolutionary modeling by removing long-standing modeling barriers, simplifying the link between traits, fitness, and dynamics, and expanding eco-evolutionary treatment of a greater diversity of ecological interactions. These factors make GEMs more than a modeling advance.
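The standard Gillespie algorithm that GEMs build on can be sketched for a logistic birth-death process. This shows only the base stochastic simulation, not the trait-distribution extension of GEMs, and all rates are illustrative.

```python
import random

random.seed(1)

# Logistic birth-death process simulated exactly with Gillespie's method.
def gillespie(n0=10, birth=1.0, death=0.1, K=100, t_end=50.0):
    n, t = n0, 0.0
    while t > 0 or True:
        if t >= t_end or n <= 0:
            break
        b = birth * n * max(0.0, 1 - n / K)  # density-dependent birth rate
        d = death * n                        # death rate
        total = b + d
        if total == 0.0:
            break
        t += random.expovariate(total)       # exponential waiting time
        n += 1 if random.random() < b / total else -1  # pick the event
    return n

final_n = gillespie()
print(final_n)  # fluctuates around the stochastic equilibrium near n = 90
```

A GEM would additionally assign each individual a heritable trait value affecting its own `birth` or `death` rate, so that each sampled event reshapes the population's trait distribution; here the deterministic limit simply settles near K(1 - death/birth) = 90.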
Expediting model-based optoacoustic reconstructions with tomographic symmetries
Lutzweiler, Christian; Deán-Ben, Xosé Luís; Razansky, Daniel
2014-01-15
Purpose: Image quantification in optoacoustic tomography implies the use of accurate forward models of excitation, propagation, and detection of optoacoustic signals, while inversions with high spatial resolution usually involve very large matrices, leading to unreasonably long computation times. The development of fast and memory-efficient model-based approaches thus represents an important challenge in advancing the quantitative and dynamic imaging capabilities of tomographic optoacoustic imaging. Methods: Herein, a method for the simplification and acceleration of model-based inversions, relying on inherent symmetries present in common tomographic acquisition geometries, has been introduced. The method is showcased for the case of cylindrical symmetries by using polar image discretization of the time-domain optoacoustic forward model combined with efficient storage and inversion strategies. Results: The suggested methodology is shown to render fast and accurate model-based inversions in both numerical simulations and post mortem small animal experiments. In the case of a full-view detection scheme, the memory requirements are reduced by one order of magnitude while high-resolution reconstructions are achieved at video rate. Conclusions: By considering the rotational symmetry present in many tomographic optoacoustic imaging systems, the proposed methodology allows exploiting the advantages of model-based algorithms with feasible computational requirements and fast reconstruction times, so that its convenience and general applicability in optoacoustic imaging systems with tomographic symmetries is anticipated.
Adaptive noise cancellation based on beehive pattern evolutionary digital filter
NASA Astrophysics Data System (ADS)
Zhou, Xiaojun; Shao, Yimin
2014-01-01
Evolutionary digital filtering (EDF) exhibits the advantage of avoiding the local optimum problem by using cloning and mating searching rules in an adaptive noise cancellation system. However, convergence performance is restricted by the large population of individuals and the low level of information communication among them. The special beehive structure enables the individuals on neighbour beehive nodes to communicate with each other and thus enhance the information spread and random search ability of the algorithm. By introducing the beehive pattern evolutionary rules into the original EDF, this paper proposes an improved beehive pattern evolutionary digital filter (BP-EDF) to overcome the defects of the original EDF. In the proposed algorithm, a new evolutionary rule which combines competing cloning, complete cloning and assistance mating methods is constructed to enable the individuals distributed on the beehive to communicate with their neighbours. Simulation results are used to demonstrate the improved performance of the proposed algorithm in terms of convergence speed to the global optimum compared with the original methods. Experimental results also verify the effectiveness of the proposed algorithm in extracting feature signals that are contaminated by significant amounts of noise during the fault diagnosis task.
3-D model-based tracking for UAV indoor localization.
Teulière, Céline; Marchand, Eric; Eck, Laurent
2015-05-01
This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of the standard model-based approach lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple-hypothesis tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem, where GPS signal is not available, we validate the algorithm on real image sequences from UAV flights. PMID:25099967
Model based control of a rehabilitation robot for lower extremities.
Xie, Xiao-Liang; Hou, Zeng-Guang; Li, Peng-Feng; Ji, Cheng; Zhang, Feng; Tan, Min; Wang, Hongbo; Hu, Guoqing
2010-01-01
This paper mainly focuses on the trajectory tracking control of a lower extremity rehabilitation robot during passive training process of patients. Firstly, a mathematical model of the rehabilitation robot is introduced by using Lagrangian analysis. Then, a model based computed-torque control scheme is designed to control the constrained four-link robot (with patient's foot fixed on robot's end-effector) to track a predefined trajectory. Simulation results are provided to illustrate the effectiveness of the proposed model based computed-torque algorithm. In the simulation, a multi-body dynamics and motion software named ADAMS is used. The combined simulation of ADAMS and MATLAB is able to produce more realistic results of this complex integrated system. PMID:21097222
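The computed-torque idea can be illustrated on a single link rather than the constrained four-link robot of the paper; the plant parameters, gains, and reference trajectory below are invented for the sketch.

```python
import math

# Plant: I q'' + b q' + m g l sin(q) = tau  (single link with gravity).
# The computed-torque law cancels these dynamics and imposes linear error
# dynamics e'' + kd e' + kp e = 0 on the tracking error e = qr - q.
I, b, m, g, l = 1.0, 0.5, 2.0, 9.81, 0.3
kp, kd = 100.0, 20.0

def simulate(dt=0.001, t_end=5.0):
    q, qd, t = 0.0, 0.0, 0.0
    while t < t_end:
        qr = 0.5 * math.sin(t)          # desired position
        qrd = 0.5 * math.cos(t)         # desired velocity
        qrdd = -0.5 * math.sin(t)       # desired acceleration
        v = qrdd + kd * (qrd - qd) + kp * (qr - q)      # outer PD loop
        tau = I * v + b * qd + m * g * l * math.sin(q)  # computed torque
        qdd = (tau - b * qd - m * g * l * math.sin(q)) / I
        q += dt * qd                    # forward Euler integration
        qd += dt * qdd
        t += dt
    return abs(0.5 * math.sin(t) - q)   # final tracking error

err = simulate()
print(err)  # small: the reference sinusoid is tracked closely
```

In the paper the same structure applies with the four-link Lagrangian model supplying the inertia, damping, and gravity terms that the control law cancels.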
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representations. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
NASA Astrophysics Data System (ADS)
Martinez-Vaquero, Luis A.; Cuesta, José A.
2013-05-01
Indirect reciprocity is one of the main mechanisms to explain the emergence and sustainment of altruism in societies. The standard approach to indirect reciprocity is reputation models. These are games in which players base their decisions on their opponent's reputation gained in past interactions with other players (moral assessment). The combination of actions and moral assessment leads to a large diversity of strategies; thus determining the stability of any of them against invasions by all the others is a difficult task. We use a variant of a previously introduced reputation-based model that lets us systematically analyze all these invasions and determine which ones are successful. Accordingly, we are able to identify the third-order strategies (those which, apart from the action, judge considering both the reputation of the donor and that of the recipient) that are evolutionarily stable. Our results reveal that if a strategy resists the invasion of any other one sharing its same moral assessment, it can resist the invasion of any other strategy. However, if actions are not always witnessed, cheaters (i.e., individuals with a probability of defecting regardless of the opponent's reputation) have a chance to defeat the stable strategies for some choices of the probabilities of cheating and of being witnessed. Remarkably, by analyzing this issue with adaptive dynamics we find that whether an honest population resists the invasion of cheaters is determined by a Hamilton-like rule, with the probability that the cheater is discovered playing the role of the relatedness parameter.
NASA Astrophysics Data System (ADS)
Healy, Thomas J.
1993-04-01
The paper describes an evolutionary approach to the development of aerospace systems, represented by the introduction of integrated product teams (IPTs), which are now used at Rockwell's Space Systems Division on all new programs and are introduced into existing projects after demonstrations of increases in quality and reductions in cost and schedule due to IPTs. Each IPT is unique, reflects its own program, and lasts for the life of the program. An IPT includes customers, suppliers, subcontractors, and associate contractors, and has a charter, mission, scope of authority, budget, and schedule. Functional management is responsible for the staffing, training, method development, and generic technology development.
A model-based multisensor data fusion knowledge management approach
NASA Astrophysics Data System (ADS)
Straub, Jeremy
2014-06-01
A variety of approaches exist for combining data from multiple sensors. The model-based approach combines data based on its support for or refutation of elements of the model which in turn can be used to evaluate an experimental thesis. This paper presents a collection of algorithms for mapping various types of sensor data onto a thesis-based model and evaluating the truth or falsity of the thesis, based on the model. The use of this approach for autonomously arriving at findings and for prioritizing data are considered. Techniques for updating the model (instead of arriving at a true/false assertion) are also discussed.
Spring-Model-Based Wireless Localization in Cooperative User Environments
NASA Astrophysics Data System (ADS)
Ke, Wei; Wu, Lenan; Qi, Chenhao
To overcome the shortcomings of conventional cellular positioning, a novel cooperative location algorithm that uses the available peer-to-peer communication between the mobile terminals (MTs) is proposed. The main idea behind the proposed approach is to incorporate the long- and short-range location information to improve the estimation of the MT's coordinates. Since short-range communications among MTs are characterized by high line-of-sight (LOS) probability, an improved spring-model-based cooperative location method can be exploited to provide low-cost improvement for cellular-based location in the non-line-of-sight (NLOS) environments.
Optimization of semiconductor quantum devices by evolutionary search.
Goldoni, G; Rossi, F
2000-07-15
A novel simulation strategy is proposed for searching for semiconductor quantum devices that are optimized with respect to required performances. Based on evolutionary programming, a technique that implements the paradigm of genetic algorithms in more-complex data structures than strings of bits, the proposed algorithm is able to deal with quantum devices with preset nontrivial constraints (e.g., transition energies, geometric requirements). Therefore our approach allows for automatic design, thus avoiding costly by-hand optimizations. We demonstrate the advantages of the proposed algorithm through a relevant and nontrivial application, the optimization of a second-harmonic-generation device working in resonance conditions. PMID:18064261
Evolutionary status of Polaris
NASA Astrophysics Data System (ADS)
Fadeyev, Yu. A.
2015-05-01
Hydrodynamic models of short-period Cepheids were computed to determine the pulsation period as a function of evolutionary time during the first and third crossings of the instability strip. The equations of radiation hydrodynamics and turbulent convection for radial stellar pulsations were solved with initial conditions obtained from evolutionary models of Population I stars (X = 0.7, Z = 0.02) with masses from 5.2 to 6.5 M⊙ and a convective core overshooting parameter 0.1 ≤ αov ≤ 0.3. In Cepheids with a period of 4 d, the rate of pulsation-period change during the first crossing of the instability strip is over 50 times larger than that during the third crossing. Polaris is shown to be crossing the instability strip for the first time and to be a fundamental-mode pulsator. The best agreement between the predicted and observed rates of period change was obtained for the model with a mass of 5.4 M⊙ and overshooting parameter αov = 0.25. The bolometric luminosity and radius are L = 1.26 × 10^3 L⊙ and R = 37.5 R⊙, respectively. In the HR diagram, Polaris is located at the red edge of the instability strip.
Alvarez de Lorenzana, J M; Ward, L M
1987-01-01
This paper develops a metatheoretical framework for understanding evolutionary systems (systems that develop in ways that increase their own variety). The framework addresses shortcomings seen in other popular systems theories. It concerns both living and nonliving systems, and proposes a metahierarchy of hierarchical systems. Thus, it potentially addresses systems at all descriptive levels. We restrict our definition of system to that of a core system whose parts have a different ontological status than the system, and characterize the core system in terms of five global properties: minimal length interval, minimal time interval, system cycle, total receptive capacity, and system potential. We propose two principles through the interaction of which evolutionary systems develop. The Principle of Combinatorial Expansion describes how a core system realizes its developmental potential through a process of progressive differentiation of the single primal state up to a limit stage. The Principle of Generative Condensation describes how the components of the last stage of combinatorial expansion condense and become the environment for and components of new, enriched systems. The early evolution of the Universe after the "big bang" is discussed in light of these ideas as an example of the application of the framework. PMID:3689299
Evolutionary models of binaries
NASA Astrophysics Data System (ADS)
van Rensbergen, Walter; Mennekens, Nicki; de Greve, Jean-Pierre; Jansen, Kim; de Loore, Bert
2011-07-01
We have put on CDS a catalog containing 561 evolutionary models of binaries: J/A+A/487/1129 (Van Rensbergen+, 2008). The catalog covers a grid of binaries with a B-type primary at birth, different values of the initial mass ratio, and a wide range of initial orbital periods. The evolution was calculated with the Brussels code, in which we introduced the spin-up of the gainer and the creation of a hot spot on the gainer or its accretion disk, caused by mass impacting from the donor. When the kinetic energy of fast rotation added to the radiative energy of the hot spot exceeds the binding energy, a fraction of the transferred matter leaves the system: the evolution is liberal during a short-lived era of rapid mass transfer. The spin-up of the gainer was modulated using both strong and weak tides, and the catalog shows the results for both types. For comparison, we included the evolutionary tracks calculated under the conservative assumption. Binaries with an initial primary below 6 M⊙ show hardly any mass loss from the system and thus evolve conservatively. Above this limit, differences between liberal and conservative evolution grow with increasing initial mass of the primary star.
An Evolutionary Solution for Cooperative and Competitive Mobile Agents
NASA Astrophysics Data System (ADS)
Fan, Jiancong; Ruan, Jiuhong; Liang, Yongquan
The cooperation and competition among mobile agents using evolutionary strategies is an important domain in agent theory and application. With an evolutionary strategy, the cooperation process is achieved by repeated training and iteration. In this evolutionary solution for cooperative and competitive mobile agents (CCMA), a group of mobile agents is partitioned into two populations: a cooperative agent group and a competitive agent group. Cooperative agents are treated as pursuers, while a competitive agent is viewed as the pursuers' competitor, called the evader. Cooperative actions take place among the pursuers in order to capture the evader as rapidly as possible. An agent individual (chromosome) is encoded based on a kind of two-dimensional random movement: the next moving direction is encoded as the chromosome, which can be crossed over and mutated according to the designed operators and fitness function. An evolutionary algorithm for the cooperation and competition of mobile agents is proposed. Experiments show that the algorithm for this evolutionary solution is effective and has good time performance and convergence.
Adaptive evolutionary artificial neural networks for pattern classification.
Oong, Tatt Hee; Isa, Nor Ashidi Mat
2011-11-01
This paper presents a new evolutionary approach called the hybrid evolutionary artificial neural network (HEANN) for simultaneously evolving an artificial neural network's (ANN's) topology and weights. Evolutionary algorithms (EAs), with their strong global search capabilities, are likely to locate the most promising region of the search space, but they are less efficient at fine-tuning the search locally. HEANN emphasizes balancing global and local search during the evolutionary process by adapting the mutation probability and the step size of the weight perturbation. This distinguishes it from most previous studies, which incorporate an EA to search for the network topology and gradient learning for weight updating. Four benchmark functions were used to test the evolutionary framework of HEANN. In addition, HEANN was tested on seven classification benchmark problems from the UCI machine learning repository. Experimental results show the superior performance of HEANN in fine-tuning the network complexity within a small number of generations while preserving the generalization capability, compared with other algorithms. PMID:21968733
Topological evolutionary computing in the optimal design of 2D and 3D structures
NASA Astrophysics Data System (ADS)
Burczynski, T.; Poteralski, A.; Szczepanik, M.
2007-10-01
An application of evolutionary algorithms and the finite-element method to the topology optimization of 2D structures (plane stress, bending plates, and shells) and 3D structures is described. The basis of the topological evolutionary optimization is the direct control of the density material distribution (or thickness for 2D structures) by the evolutionary algorithm. The structures are optimized for stress, mass, and compliance criteria. The numerical examples demonstrate that this method is an effective technique for solving problems in computer-aided optimal design.
Transition matrix model for evolutionary game dynamics.
Ermentrout, G Bard; Griffin, Christopher; Belmonte, Andrew
2016-03-01
We study an evolutionary game model based on a transition matrix approach, in which the total change in the proportion of a population playing a given strategy is summed directly over contributions from all other strategies. This general approach combines aspects of the traditional replicator model, such as preserving unpopulated strategies, with mutation-type dynamics, which allow for nonzero switching to unpopulated strategies, in terms of a single transition function. Under certain conditions, this model yields an endemic population playing non-Nash-equilibrium strategies. In addition, a Hopf bifurcation with a limit cycle may occur in the generalized rock-scissors-paper game, unlike the replicator equation. Nonetheless, many of the Folk Theorem results are shown to hold for this model. PMID:27078323
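The transition-matrix idea can be sketched as follows (an illustrative discrete-time form, not the paper's exact functional form: the payoff-difference switching rule, the mutation floor `eps`, and the revision fraction `mu` are assumptions here):

```python
import numpy as np

def transition_step(x, A, eps=0.01, k=1.0, mu=0.5):
    """One discrete-time step of a transition-matrix dynamic.

    T[i, j] is the probability of switching from strategy i to strategy j;
    here it grows with the payoff advantage of j over i, with a small floor
    eps so unpopulated strategies can still be entered (mutation-like term),
    in contrast to the pure replicator equation."""
    f = A @ x                                    # expected payoff of each strategy
    T = eps + k * np.maximum(f[None, :] - f[:, None], 0.0)
    np.fill_diagonal(T, 0.0)
    T = T / T.sum(axis=1, keepdims=True)         # row-normalize switching probabilities
    # A fraction mu of the population revises its strategy each step.
    return (1 - mu) * x + mu * (T.T @ x)

# Generalized rock-scissors-paper payoff matrix
A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
x = np.array([0.6, 0.3, 0.1])
for _ in range(200):
    x = transition_step(x, A)
```

Because each row of `T` sums to one, the update is a convex combination of probability vectors, so `x` stays on the simplex while all three strategies remain populated.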
Airlines Network Optimization using Evolutionary Computation
NASA Astrophysics Data System (ADS)
Inoue, Hiroki; Kato, Yasuhiko; Sakagami, Tomoya
In recent years, various networks have come to exist in our surroundings. Not only the internet and airline routes can be thought of as networks: protein interactions are also networks. An "economic network design problem" can be formulated by treating each vertex as an economic player and each link as a connection between economic players. In this paper, the airline network is taken up as an example of an economic network design problem, and the airline network that maximizes the profit of the entire airline industry is identified. The airline network is modeled on the connections model proposed by Jackson and Wolinsky, and a utility function of the network is defined. In addition, an optimization simulation using evolutionary computation is shown for a domestic airline in Japan.
Optimizing a reconfigurable material via evolutionary computation
NASA Astrophysics Data System (ADS)
Wilken, Sam; Miskin, Marc Z.; Jaeger, Heinrich M.
2015-08-01
Rapid prototyping by combining evolutionary computation with simulations is becoming a powerful tool for solving complex design problems in materials science. This method of optimization operates in a virtual design space that simulates potential material behaviors and after completion needs to be validated by experiment. However, in principle an evolutionary optimizer can also operate on an actual physical structure or laboratory experiment directly, provided the relevant material parameters can be accessed by the optimizer and information about the material's performance can be updated by direct measurements. Here we provide a proof of concept of such direct, physical optimization by showing how a reconfigurable, highly nonlinear material can be tuned to respond to impact. We report on an entirely computer controlled laboratory experiment in which a 6 × 6 grid of electromagnets creates a magnetic field pattern that tunes the local rigidity of a concentrated suspension of ferrofluid and iron filings. A genetic algorithm is implemented and tasked to find field patterns that minimize the force transmitted through the suspension. Searching within a space of roughly 10^10 possible configurations, after testing only 1500 independent trials the algorithm identifies an optimized configuration of layered rigid and compliant regions.
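A genetic algorithm of the kind described can be sketched over 36-bit on/off field patterns. Since the transmitted force is only measurable in the actual experiment, the objective below is a hypothetical stand-in (rewarding a checkerboard layout), and the population size, mutation rate, and selection scheme are all illustrative assumptions:

```python
import random

random.seed(1)
N = 36            # 6 x 6 grid of electromagnets, each on or off

def fitness(pattern):
    # Stand-in objective: the real experiment measured transmitted force;
    # here we reward agreement with a target checkerboard layout.
    target = [(i // 6 + i % 6) % 2 for i in range(N)]
    return sum(p == t for p, t in zip(pattern, target))

def evolve(pop_size=30, generations=60, p_mut=1 / N):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)         # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In the physical version, `fitness` would be replaced by an actual force measurement on the apparatus, which is what makes the search "direct" rather than simulation-based.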
Evolutionary structure search of efficient thermoelectric compounds
NASA Astrophysics Data System (ADS)
Núñez Valdez, Maribel; Oganov, Artem
Thermoelectric materials, which harvest waste heat to generate power, are playing an important role in energy solutions. However, obtaining a cost-effective, high-efficiency thermoelectric device requires optimizing a variety of conflicting properties. The efficiency, or figure of merit (ZT), depends on the Seebeck coefficient, electrical resistivity, and thermal conductivity, and is restricted by currently available materials and fabrication technologies. The main objective of our study is therefore to identify thermodynamically stable compounds and their crystal structures with high ZT, given just a set of elements, by using an evolutionary algorithm in which the figure of merit is a degree of freedom to be optimized. We test the performance of our methods on the Bi2Te3-Sb2Te3 system; these compounds are well known for their large ZT's and their use in technological applications. Our results indicate that our evolutionary search can be employed with a wide variety of elements to optimize and design new thermoelectric materials.
Model-based condition monitoring for lithium-ion batteries
NASA Astrophysics Data System (ADS)
Kim, Taesic; Wang, Yebin; Fang, Huazhen; Sahinoglu, Zafer; Wada, Toshihiro; Hara, Satoshi; Qiao, Wei
2015-11-01
Condition monitoring for batteries involves tracking changes in physical parameters and operational states such as state of health (SOH) and state of charge (SOC), and is fundamentally important for building high-performance and safety-critical battery systems. A model-based condition monitoring strategy is developed in this paper for lithium-ion batteries on the basis of an electrical circuit model incorporating hysteresis effect. It systematically integrates 1) a fast upper-triangular and diagonal recursive least squares algorithm for parameter identification of the battery model, 2) a smooth variable structure filter for the SOC estimation, and 3) a recursive total least squares algorithm for estimating the maximum capacity, which indicates the SOH. The proposed solution enjoys advantages including high accuracy, low computational cost, and simple implementation, and therefore is suitable for deployment and use in real-time embedded battery management systems (BMSs). Simulations and experiments validate the effectiveness of the proposed strategy.
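The parameter-identification step can be illustrated with plain exponentially weighted recursive least squares (the paper uses an upper-triangular-and-diagonal factorized variant; the first-order battery-like model and all numbers below are illustrative assumptions):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One step of exponentially weighted recursive least squares.

    theta: (p, 1) parameter estimate, P: (p, p) covariance, phi: (p,)
    regressor, y: scalar measurement, lam: forgetting factor in (0, 1]."""
    phi = phi.reshape(-1, 1)
    gain = P @ phi / (lam + (phi.T @ P @ phi).item())
    err = y - (phi.T @ theta).item()          # one-step prediction error
    theta = theta + gain * err
    P = (P - gain @ phi.T @ P) / lam          # covariance update with forgetting
    return theta, P

# Identify a first-order model y_k = a*y_{k-1} + b*u_k from noisy data.
rng = np.random.default_rng(0)
a_true, b_true = 0.95, 0.4
theta, P, y_prev = np.zeros((2, 1)), np.eye(2) * 100.0, 0.0
for _ in range(500):
    u = rng.standard_normal()
    y = a_true * y_prev + b_true * u + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, np.array([y_prev, u]), y)
    y_prev = y
```

The forgetting factor `lam` weights recent samples more heavily, which is what lets such estimators track slow parameter drift as a battery ages.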
Qualitative model-based diagnostics for rocket systems
NASA Technical Reports Server (NTRS)
Maul, William; Meyer, Claudia; Jankovsky, Amy; Fulton, Christopher
1993-01-01
A diagnostic software package is currently being developed at NASA LeRC that utilizes qualitative model-based reasoning techniques. These techniques can provide diagnostic information about the operational condition of the modeled rocket engine system or subsystem. The diagnostic package combines a qualitative model solver with a constraint suspension algorithm. The constraint suspension algorithm directs the solver's operation to provide valuable fault isolation information about the modeled system. A qualitative model of the Space Shuttle Main Engine's oxidizer supply components was generated. A diagnostic application based on this qualitative model was constructed to process four test cases: three numerical simulations and one actual test firing. The diagnostic tool's fault isolation output compared favorably with the input fault condition.
Practical Application of Model-based Programming and State-based Architecture to Space Missions
NASA Technical Reports Server (NTRS)
Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian
2006-01-01
A viewgraph presentation to develop models from systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 11) Background: Titan Model-based Executive; 12) Model-based Execution Architecture; 13) Compatibility Analysis of MDS and Titan Architectures; 14) Integrating Model-based Programming and Execution into the Architecture; 15) State Analysis and Modeling; 16) IMU Subsystem State Effects Diagram; 17) Titan Subsystem Model: IMU Health; 18) Integrating Model-based Programming and Execution into the Software IMU; 19) Testing Program; 20) Computationally Tractable State Estimation & Fault Diagnosis; 21) Diagnostic Algorithm Performance; 22) Integration and Test Issues; 23) Demonstrated Benefits; and 24) Next Steps.
Improved shape-signature and matching methods for model-based robotic vision
NASA Technical Reports Server (NTRS)
Schwartz, J. T.; Wolfson, H. J.
1987-01-01
Researchers describe new techniques for curve matching and model-based object recognition based on the notion of a shape signature. The signature used is an approximation of pointwise curvature. Described here is a curve-matching algorithm that generalizes a previous algorithm developed using this signature, allowing improvement and generalization of a previous model-based object recognition scheme. The results and experiments described relate to 2-D images; however, natural extensions to the 3-D case exist and are being developed.
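A curvature-based signature of this kind can be sketched with finite differences (a simplified stand-in, not the authors' algorithm); the formula is the standard planar curvature κ = (x′y″ − y′x″)/(x′² + y′²)^{3/2}, which is invariant under reparameterization of the curve:

```python
import numpy as np

def curvature_signature(points):
    """Approximate pointwise curvature along a 2-D curve given as an (n, 2)
    array of samples, using np.gradient for the derivatives (central
    differences in the interior, one-sided at the endpoints)."""
    x, y = points[:, 0], points[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# A counterclockwise circle of radius 2 should have nearly constant curvature 1/2.
t = np.linspace(0, 2 * np.pi, 400)
circle = np.column_stack([2 * np.cos(t), 2 * np.sin(t)])
kappa = curvature_signature(circle)
```

Matching two curves then reduces to comparing such one-dimensional signature sequences, e.g. by sliding correlation, rather than comparing the raw 2-D point sets.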
Specialization and evolutionary branching within migratory populations
Torney, Colin J.; Levin, Simon A.; Couzin, Iain D.
2010-01-01
Understanding the mechanisms that drive specialization and speciation within initially homogeneous populations is a fundamental challenge for evolutionary theory. It is an issue of relevance for significant open questions in biology concerning the generation and maintenance of biodiversity, the origins of reciprocal cooperation, and the efficient division of labor in social or colonial organisms. Several mathematical frameworks have been developed to address this question and models based on evolutionary game theory or the adaptive dynamics of phenotypic mutation have demonstrated the emergence of polymorphic, specialized populations. Here we focus on a ubiquitous biological phenomenon, migration. Individuals in our model may evolve the capacity to detect and follow an environmental cue that indicates a preferred migration route. The strategy space is defined by the level of investment in acquiring personal information about this route or the alternative tendency to follow the direction choice of others. The result is a relation between the migratory process and a game theoretic dynamic that is generally applicable to situations where information may be considered a public good. Through the use of an approximation of social interactions, we demonstrate the emergence of a stable, polymorphic population consisting of an uninformed subpopulation that is dependent upon a specialized group of leaders. The branching process is classified using the techniques of adaptive dynamics. PMID:21059935
Emergence of structured communities through evolutionary dynamics.
Shtilerman, Elad; Kessler, David A; Shnerb, Nadav M
2015-10-21
Species-rich communities, in which many competing species coexist in a single trophic level, are quite frequent in nature, but pose a formidable theoretical challenge. In particular, it is known that complex competitive systems become unstable and unfeasible when the number of species is large. Recently, many studies have attributed the stability of natural communities to the structure of the interspecific interaction network, yet the nature of such structures and the underlying mechanisms responsible for them remain open questions. Here we introduce an evolutionary model, based on the generic Lotka-Volterra competitive framework, from which a stable, structured, diverse community emerges spontaneously. The modular structure of the competition matrix reflects the phylogeny of the community, in agreement with the hierarchical taxonomic classification. Closely related species tend to have stronger niche overlap and weaker fitness differences, as opposed to pairs of species from different modules. The competitive-relatedness hypothesis and the idea of emergent neutrality are discussed in the context of this evolutionary model. PMID:26231415
Spore: Spawning Evolutionary Misconceptions?
NASA Astrophysics Data System (ADS)
Bean, Thomas E.; Sinatra, Gale M.; Schrader, P. G.
2010-10-01
The use of computer simulations as educational tools may afford the means to develop understanding of evolution as a natural, emergent, and decentralized process. However, special consideration of developmental constraints on learning may be necessary when using these technologies. Specifically, the essentialist (biological forms possess an immutable essence), teleological (assignment of purpose to living things and/or parts of living things that may not be purposeful), and intentionality (assumption that events are caused by an intelligent agent) biases may be reinforced through the use of computer simulations, rather than addressed with instruction. We examine the video game Spore for its depiction of evolutionary content and its potential to reinforce these cognitive biases. In particular, we discuss three pedagogical strategies to mitigate weaknesses of Spore and other computer simulations: directly targeting misconceptions through refutational approaches, targeting specific principles of scientific inquiry, and directly addressing issues related to models as cognitive tools.
Quantitative evolutionary design
Diamond, Jared
2002-01-01
The field of quantitative evolutionary design uses evolutionary reasoning (in terms of natural selection and ultimate causation) to understand the magnitudes of biological reserve capacities, i.e. excesses of capacities over natural loads. Ratios of capacities to loads, defined as safety factors, fall in the range 1.2-10 for most engineered and biological components, even though engineered safety factors are specified intentionally by humans while biological safety factors arise through natural selection. Familiar examples of engineered safety factors include those of buildings, bridges and elevators (lifts), while biological examples include factors of bones and other structural elements, of enzymes and transporters, and of organ metabolic performances. Safety factors serve to minimize the overlap zone (resulting in performance failure) between the low tail of capacity distributions and the high tail of load distributions. Safety factors increase with coefficients of variation of load and capacity, with capacity deterioration with time, and with cost of failure, and decrease with costs of initial construction, maintenance, operation, and opportunity. Adaptive regulation of many biological systems involves capacity increases with increasing load; several quantitative examples suggest sublinear increases, such that safety factors decrease towards 1.0. Unsolved questions include safety factors of series systems, parallel or branched pathways, elements with multiple functions, enzyme reaction chains, and equilibrium enzymes. The modest sizes of safety factors imply the existence of costs that penalize excess capacities. Those costs are likely to involve wasted energy or space for large or expensive components, but opportunity costs of wasted space at the molecular level for minor components. PMID:12122135
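The overlap-zone argument can be made concrete for the idealized case of independent, normally distributed load and capacity; the numbers below are illustrative, not data from the paper:

```python
import math

def failure_probability(mu_load, sd_load, mu_cap, sd_cap):
    """P(capacity < load) for independent normal load and capacity:
    the overlap between the high tail of the load distribution and the
    low tail of the capacity distribution. Uses the normal CDF via erf."""
    z = (mu_load - mu_cap) / math.sqrt(sd_load**2 + sd_cap**2)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Safety factor = mean capacity / mean load; raising it shrinks the overlap.
p_sf2 = failure_probability(100, 15, 200, 30)   # safety factor 2
p_sf4 = failure_probability(100, 15, 400, 60)   # safety factor 4
```

This also shows why safety factors must grow with the coefficients of variation: widening either distribution at fixed means pushes the tails together and raises the failure probability.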
Evolutionary Design of Controlled Structures
NASA Technical Reports Server (NTRS)
Masters, Brett P.; Crawley, Edward F.
1997-01-01
Basic physical concepts of structural delay and transmissibility are provided for simple rod and beam structures. Investigations show the sensitivity of these concepts to differing controlled-structures variables, and to rational system modeling effects. An evolutionary controls/structures design method is developed. The basis of the method is an accurate model formulation for dynamic compensator optimization and Genetic Algorithm based updating of sensor/actuator placement and structural attributes. One and three dimensional examples from the literature are used to validate the method. Frequency domain interpretation of these controlled structure systems provides physical insight as to how the objective is optimized and consequently what is important in the objective. Several disturbance rejection type controls-structures systems are optimized for a stellar interferometer spacecraft application. The interferometric designs include closed loop tracking optics. Designs are generated for differing structural aspect ratios, differing disturbance attributes, and differing sensor selections. Physical limitations in achieving performance are given in terms of average system transfer function gains and system phase loss. A spacecraft-like optical interferometry system is investigated experimentally over several different optimized controlled structures configurations. Configurations represent common and not-so-common approaches to mitigating pathlength errors induced by disturbances of two different spectra. Results show that an optimized controlled structure for low frequency broadband disturbances achieves modest performance gains over a mass equivalent regular structure, while an optimized structure for high frequency narrow band disturbances is four times better in terms of root-mean-square pathlength. These results are predictable given the nature of the physical system and the optimization design variables. Fundamental limits on controlled performance are discussed.
Multiple Damage Progression Paths in Model-Based Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Goebel, Kai Frank
2011-01-01
Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in its own damage progression path, overlapping to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active.
Open Issues in Evolutionary Robotics.
Silva, Fernando; Duarte, Miguel; Correia, Luís; Oliveira, Sancho Moura; Christensen, Anders Lyhne
2016-01-01
One of the long-term goals in evolutionary robotics is to be able to automatically synthesize controllers for real autonomous robots based only on a task specification. While a number of studies have shown the applicability of evolutionary robotics techniques for the synthesis of behavioral control, researchers have consistently been faced with a number of issues preventing the widespread adoption of evolutionary robotics for engineering purposes. In this article, we review and discuss the open issues in evolutionary robotics. First, we analyze the benefits and challenges of simulation-based evolution and subsequent deployment of controllers versus evolution on real robotic hardware. Second, we discuss specific evolutionary computation issues that have plagued evolutionary robotics: (1) the bootstrap problem, (2) deception, and (3) the role of genomic encoding and genotype-phenotype mapping in the evolution of controllers for complex tasks. Finally, we address the absence of standard research practices in the field. We also discuss promising avenues of research. Our underlying motivation is the reduction of the current gap between evolutionary robotics and mainstream robotics, and the establishment of evolutionary robotics as a canonical approach for the engineering of autonomous robots. PMID:26581015
Evolutionary dynamics in structured populations
Nowak, Martin A.; Tarnita, Corina E.; Antal, Tibor
2010-01-01
Evolutionary dynamics shape the living world around us. At the centre of every evolutionary process is a population of reproducing individuals. The structure of that population affects evolutionary dynamics. The individuals can be molecules, cells, viruses, multicellular organisms or humans. Whenever the fitness of individuals depends on the relative abundance of phenotypes in the population, we are in the realm of evolutionary game theory. Evolutionary game theory is a general approach that can describe the competition of species in an ecosystem, the interaction between hosts and parasites, between viruses and cells, and also the spread of ideas and behaviours in the human population. In this perspective, we review the recent advances in evolutionary game dynamics with a particular emphasis on stochastic approaches in finite sized and structured populations. We give simple, fundamental laws that determine how natural selection chooses between competing strategies. We study the well-mixed population, evolutionary graph theory, games in phenotype space and evolutionary set theory. We apply these results to the evolution of cooperation. The mechanism that leads to the evolution of cooperation in these settings could be called ‘spatial selection’: cooperators prevail against defectors by clustering in physical or other spaces. PMID:20008382
Child Development and Evolutionary Psychology.
ERIC Educational Resources Information Center
Bjorklund, David F.; Pellegrini, Anthony D.
2000-01-01
Argues that an evolutionary account provides insight into developmental function and individual differences. Outlines some assumptions of evolutionary psychology related to development. Introduces the developmental systems approach, differential influence of natural selection at different points in ontogeny, and development of evolved…
Observability in dynamic evolutionary models.
López, I; Gámez, M; Carreño, R
2004-02-01
In the paper observability problems are considered in basic dynamic evolutionary models for sexual and asexual populations. Observability means that from the (partial) knowledge of certain phenotypic characteristics the whole evolutionary process can be uniquely recovered. Sufficient conditions are given to guarantee observability for both sexual and asexual populations near an evolutionarily stable state. PMID:15013222
How competition affects evolutionary rescue
Osmond, Matthew Miles; de Mazancourt, Claire
2013-01-01
Populations facing novel environments can persist by adapting. In nature, the ability to adapt and persist will depend on interactions between coexisting individuals. Here we use an adaptive dynamic model to assess how the potential for evolutionary rescue is affected by intra- and interspecific competition. Intraspecific competition (negative density-dependence) lowers abundance, which decreases the supply rate of beneficial mutations, hindering evolutionary rescue. On the other hand, interspecific competition can aid evolutionary rescue when it speeds adaptation by increasing the strength of selection. Our results clarify this point and give an additional requirement: competition must increase selection pressure enough to overcome the negative effect of reduced abundance. We therefore expect evolutionary rescue to be most likely in communities which facilitate rapid niche displacement. Our model, which aligns to previous quantitative and population genetic models in the absence of competition, provides a first analysis of when competitors should help or hinder evolutionary rescue. PMID:23209167
NASA Astrophysics Data System (ADS)
Hortos, William S.
2011-06-01
Published studies have focused on the application of one bio-inspired or evolutionary computational method to the functions of a single protocol layer in a wireless ad hoc sensor network (WSN). For example, swarm intelligence in the form of ant colony optimization (ACO), has been repeatedly considered for the routing of data/information among nodes, a network-layer function, while genetic algorithms (GAs) have been used to select transmission frequencies and power levels, physical-layer functions. Similarly, artificial immune systems (AISs) as well as trust models of quantized data reputation have been invoked for detection of network intrusions that cause anomalies in data and information; these act on the application and presentation layers. Most recently, a self-organizing scheduling scheme inspired by frog-calling behavior for reliable data transmission in wireless sensor networks, termed anti-phase synchronization, has been applied to realize collision-free transmissions between neighboring nodes, a function of the MAC layer. In a novel departure from previous work, the cross-layer approach to WSN protocol design suggests applying more than one evolutionary computational method to the functions of the appropriate layers to improve the QoS performance of the cross-layer design beyond that of one method applied to a single layer's functions. A baseline WSN protocol design, embedding GAs, anti-phase synchronization, ACO, and a trust model based on quantized data reputation at the physical, MAC, network, and application layers, respectively, is constructed. Simulation results demonstrate that the synergies among the bio-inspired/evolutionary methods of the proposed baseline design improve the overall QoS performance of networks beyond that of a single computational method.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
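As a minimal illustration of the basic concepts this report introduces (selection, crossover, mutation), the following sketch evolves bit-string genomes against the standard OneMax toy objective. The operators and parameter values are illustrative, not those of the project's software tool:

```python
import random

random.seed(1)

GENOME_LEN, POP_SIZE, GENERATIONS = 32, 40, 60
MUTATION_RATE = 1.0 / GENOME_LEN

def fitness(genome):
    """OneMax: count of 1-bits (a standard toy objective)."""
    return sum(genome)

def tournament(pop, k=3):
    """Darwinian selection: the fittest of k random individuals wins."""
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome):
    """Flip each bit independently with a small probability."""
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP_SIZE)]

best = max(pop, key=fitness)
```

After 60 generations the population is typically at or near the all-ones optimum of 32.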
Kitaev models based on unitary quantum groupoids
Chang, Liang
2014-04-15
We establish a generalization of Kitaev models based on unitary quantum groupoids. In particular, when inputting a Kitaev-Kong quantum groupoid H_C, we show that the ground state manifold of the generalized model is canonically isomorphic to that of the Levin-Wen model based on a unitary fusion category C. Therefore, the generalized Kitaev models provide realizations of the target space of the Turaev-Viro topological quantum field theory based on C.
RNA based evolutionary optimization
NASA Astrophysics Data System (ADS)
Schuster, Peter
1993-12-01
The notion of an RNA world has been introduced for a prebiotic scenario that is dominated by RNA molecules and their properties, in particular their capabilities to act as templates for reproduction and as catalysts for several cleavage and ligation reactions of polynucleotides and polypeptides. This notion is used here also for simple experimental assays which are well suited to study evolution in the test tube. In molecular evolution experiments fitness is determined in essence by the molecular structures of RNA molecules. Evidence is presented for adaptation to environment in cell-free media. RNA based molecular evolution experiments have led to interesting spin-offs in biotechnology, commonly called ‘applied molecular evolution’, which make use of Darwinian trial-and-error strategies in order to synthesize new pharmacological compounds and other advanced materials on a biological basis. Error-propagation in RNA replication leads to formation of mutant spectra called ‘quasispecies’. An increase in the error rate broadens the mutant spectrum. There exists a sharply defined threshold beyond which heredity breaks down and evolutionary adaptation becomes impossible. Almost all RNA viruses studied so far operate at conditions close to this error threshold. Quasispecies and error thresholds are important for an understanding of RNA virus evolution, and they may help to develop novel antiviral strategies. Evolution of RNA molecules can be studied and interpreted by considering secondary structures. The notion of sequence space introduces a distance between pairs of RNA sequences which is tantamount to counting the minimal number of point mutations required to convert the sequences into each other. The mean sensitivity of RNA secondary structures to mutation depends strongly on the base pairing alphabet: structures from sequences which contain only one base pair (GC or AU) are much less stable against mutation than those derived from the natural (AUGC) sequences.
MEGA6: Molecular Evolutionary Genetics Analysis version 6.0.
Tamura, Koichiro; Stecher, Glen; Peterson, Daniel; Filipski, Alan; Kumar, Sudhir
2013-12-01
We announce the release of an advanced version of the Molecular Evolutionary Genetics Analysis (MEGA) software, which currently contains facilities for building sequence alignments, inferring phylogenetic histories, and conducting molecular evolutionary analysis. In version 6.0, MEGA now enables the inference of timetrees, as it implements the RelTime method for estimating divergence times for all branching points in a phylogeny. A new Timetree Wizard in MEGA6 facilitates this timetree inference by providing a graphical user interface (GUI) to specify the phylogeny and calibration constraints step-by-step. This version also contains enhanced algorithms to search for the optimal trees under evolutionary criteria and implements a more advanced memory management that can double the size of sequence data sets to which MEGA can be applied. Both GUI and command-line versions of MEGA6 can be downloaded from www.megasoftware.net free of charge. PMID:24132122
NASA Technical Reports Server (NTRS)
Kumar, Vivek; Horio, Brant M.; DeCicco, Anthony H.; Hasan, Shahab; Stouffer, Virginia L.; Smith, Jeremy C.; Guerreiro, Nelson M.
2015-01-01
This paper presents a search algorithm based framework to calibrate origin-destination (O-D) market specific airline ticket demands and prices for the Air Transportation System (ATS). This framework is used for calibrating an agent-based model of the air ticket buy-sell process - Airline Evolutionary Simulation (Airline EVOS) - that has fidelity of detail that accounts for airline and consumer behaviors and the interdependencies they share between themselves and the NAS. More specifically, this algorithm simultaneously calibrates demand and airfares for each O-D market, to within a specified threshold of a pre-specified target value. The proposed algorithm is illustrated with market data targets provided by the Transportation System Analysis Model (TSAM) and Airline Origin and Destination Survey (DB1B). Although we specify these models and data sources for this calibration exercise, the methods described in this paper are applicable to calibrating any low-level model of the ATS against other demand forecast model-based data. We argue that using a calibration algorithm such as the one we present here to synchronize ATS models with specialized forecast demand models is a powerful tool for establishing credible baseline conditions in experiments analyzing the effects of proposed policy changes to the ATS.
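The calibrate-to-target idea can be sketched with a toy constant-elasticity demand model. The market names, fares, and elasticity below are hypothetical stand-ins, not TSAM or DB1B data; for this linear-in-scale model a single proportional update already lands within threshold, and the loop stands in for the general iterative search against a black-box simulator:

```python
# Hypothetical toy market model (names, fares, and elasticity are
# illustrative assumptions, not TSAM/DB1B values): demand responds to
# fare with constant elasticity, and a per-market scale factor is the
# quantity calibration must recover.
def simulate_demand(scale, fare, elasticity=-1.3):
    return scale * fare ** elasticity

# (target demand, observed average fare) per O-D market
targets = {("IAD", "SFO"): (1200.0, 350.0),
           ("BOS", "ORD"): (800.0, 180.0)}

threshold = 0.01                        # calibrate to within 1% of target
scales = {od: 1.0 for od in targets}

for od, (target_q, fare) in targets.items():
    for _ in range(100):                # search loop; a general simulator
        q = simulate_demand(scales[od], fare)   # may need many iterations
        if abs(q - target_q) / target_q <= threshold:
            break
        scales[od] *= target_q / q      # proportional multiplicative update
```

In the paper's setting, `simulate_demand` would be the full Airline EVOS buy-sell simulation and both demand and fare would be adjusted jointly.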
Aristotelous, Andreas C.; Durrett, Richard
2014-01-01
Inspired by the use of hybrid cellular automata in modeling cancer, we introduce a generalization of evolutionary games in which cells produce and absorb chemicals, and the chemical concentrations dictate the death rates of cells and their fitnesses. Our long term aim is to understand how the details of the interactions in a system with n species and m chemicals translate into the qualitative behavior of the system. Here, we study two simple 2 × 2 games with two chemicals and revisit the two and three species versions of the one chemical colicin system studied earlier by Durrett and Levin [28]. We find that in the 2 × 2 examples, the behavior of our new spatial model can be predicted from that of the mean field differential equation using ideas of [12]. However, in the three species colicin model, the system with diffusion does not have the coexistence which occurs in the lattice model in which sites interact only with their nearest neighbors. PMID:24513098
Modeling tumor evolutionary dynamics
Stransky, Beatriz; de Souza, Sandro J.
2013-01-01
Tumorigenesis can be seen as an evolutionary process, in which the transformation of a normal cell into a tumor cell involves a number of limiting genetic and epigenetic events, occurring in a series of discrete stages. However, not all mutations in a cell are directly involved in cancer development and it is likely that most of them (passenger mutations) do not contribute in any way to tumorigenesis. Moreover, the process of tumor evolution is punctuated by selection of advantageous (driver) mutations and clonal expansions. Regarding these driver mutations, it is uncertain how many limiting events are required and/or sufficient to promote a tumorigenic process or what are the values associated with the adaptive advantage of different driver mutations. In spite of the availability of high-quality cancer data, several assumptions about the mechanistic process of cancer initiation and development remain largely untested, both mathematically and statistically. Here we review the development of recent mathematical/computational models and discuss their impact in the field of tumor biology. PMID:23420281
Evolutionary model with Turing machines
NASA Astrophysics Data System (ADS)
Feverati, Giovanni; Musso, Fabio
2008-06-01
The development of a large noncoding fraction in eukaryotic DNA and the phenomenon of the code bloat in the field of evolutionary computations show a striking similarity. This seems to suggest that (in the presence of mechanisms of code growth) the evolution of a complex code cannot be attained without maintaining a large inactive fraction. To test this hypothesis we performed computer simulations of an evolutionary toy model for Turing machines, studying the relations among fitness and coding versus noncoding ratio while varying mutation and code growth rates. The results suggest that, in our model, having a large reservoir of noncoding states constitutes a great (long term) evolutionary advantage.
Evolutionary reconstruction of networks
NASA Astrophysics Data System (ADS)
Ipsen, Mads; Mikhailov, Alexander S.
2002-10-01
Can a graph specifying the pattern of connections of a dynamical network be reconstructed from statistical properties of a signal generated by such a system? In this model study, we present a Metropolis algorithm for reconstruction of graphs from their Laplacian spectra. Through a stochastic process of mutations and selection, evolving test networks converge to a reference graph. Applying the method to several examples of random graphs, clustered graphs, and small-world networks, we show that the proposed stochastic evolution allows exact reconstruction of relatively small networks and yields good approximations in the case of large sizes.
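A minimal sketch of this Metropolis reconstruction, assuming a single edge-flip mutation operator and a stdlib Jacobi eigensolver for the Laplacian spectra; the paper's exact proposal scheme, spectral distance, and acceptance schedule are not reproduced here:

```python
import math
import random

random.seed(3)

def eigenvalues(mat):
    """Eigenvalues of a small symmetric matrix via classical Jacobi
    rotations (stdlib-only; adequate for the tiny graphs used here)."""
    n = len(mat)
    a = [row[:] for row in mat]
    for _ in range(300):
        p, q, big = 0, 1, 0.0
        for i in range(n):                 # locate largest off-diagonal
            for j in range(i + 1, n):
                if abs(a[i][j]) > big:
                    big, p, q = abs(a[i][j]), i, j
        if big < 1e-10:
            break
        theta = 0.5 * math.atan2(2.0 * a[p][q], a[q][q] - a[p][p])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):                 # rotate rows p, q
            apk, aqk = a[p][k], a[q][k]
            a[p][k], a[q][k] = c * apk - s * aqk, s * apk + c * aqk
        for k in range(n):                 # rotate columns p, q
            akp, akq = a[k][p], a[k][q]
            a[k][p], a[k][q] = c * akp - s * akq, s * akp + c * akq
    return sorted(a[i][i] for i in range(n))

def laplacian_spectrum(adj):
    n = len(adj)
    lap = [[(sum(adj[i]) if i == j else -adj[i][j]) for j in range(n)]
           for i in range(n)]
    return eigenvalues(lap)

N = 6
ref = [[0] * N for _ in range(N)]          # reference graph: a 6-cycle
for i in range(N):
    ref[i][(i + 1) % N] = ref[(i + 1) % N][i] = 1
ref_spec = laplacian_spectrum(ref)

def distance(g):
    """Squared distance between sorted Laplacian spectra."""
    return sum((a - b) ** 2 for a, b in zip(laplacian_spectrum(g), ref_spec))

g = [[0] * N for _ in range(N)]            # random initial test network
for i in range(N):
    for j in range(i + 1, N):
        g[i][j] = g[j][i] = random.randint(0, 1)

beta = 20.0                                # inverse "temperature"
d = distance(g)
for _ in range(2000):
    i, j = random.sample(range(N), 2)      # mutation: flip one edge
    g[i][j] ^= 1
    g[j][i] ^= 1
    d_new = distance(g)
    if d_new <= d or random.random() < math.exp(-beta * (d_new - d)):
        d = d_new                          # accept the mutation
    else:
        g[i][j] ^= 1                       # reject: undo the flip
        g[j][i] ^= 1
```

The test network's spectrum is driven toward the reference spectrum; for small graphs such as this, an exact or near-exact spectral match is typically reached.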
Model-Based Diagnosis in a Power Distribution Test-Bed
NASA Technical Reports Server (NTRS)
Scarl, E.; McCall, K.
1998-01-01
The Rodon model-based diagnosis shell was applied to a breadboard test-bed, modeling an automated power distribution system. The constraint-based modeling paradigm and diagnostic algorithm were found to adequately represent the selected set of test scenarios.
Discovery and Optimization of Materials Using Evolutionary Approaches.
Le, Tu C; Winkler, David A
2016-05-25
Materials science is undergoing a revolution, generating valuable new materials such as flexible solar panels, biomaterials and printable tissues, new catalysts, polymers, and porous materials with unprecedented properties. However, the number of potentially accessible materials is immense. Artificial evolutionary methods such as genetic algorithms, which explore large, complex search spaces very efficiently, can be applied to the identification and optimization of novel materials more rapidly than by physical experiments alone. Machine learning models can augment experimental measurements of materials fitness to accelerate identification of useful and novel materials in vast materials composition or property spaces. This review discusses the problems of large materials spaces, the types of evolutionary algorithms employed to identify or optimize materials, and how materials can be represented mathematically as genomes, describes fitness landscapes and mutation operators commonly employed in materials evolution, and provides a comprehensive summary of published research on the use of evolutionary methods to generate new catalysts, phosphors, and a range of other materials. The review identifies the potential for evolutionary methods to revolutionize a wide range of manufacturing, medical, and materials-based industries. PMID:27171499
Algorithmic methods to infer the evolutionary trajectories in cancer progression.
Caravagna, Giulio; Graudenzi, Alex; Ramazzotti, Daniele; Sanz-Pamplona, Rebeca; De Sano, Luca; Mauri, Giancarlo; Moreno, Victor; Antoniotti, Marco; Mishra, Bud
2016-07-12
The genomic evolution inherent to cancer relates directly to a renewed focus on the voluminous next-generation sequencing data and machine learning for the inference of explanatory models of how the (epi)genomic events are choreographed in cancer initiation and development. However, despite the increasing availability of multiple additional -omics data, this quest has been frustrated by various theoretical and technical hurdles, mostly stemming from the dramatic heterogeneity of the disease. In this paper, we build on our recent work on the "selective advantage" relation among driver mutations in cancer progression and investigate its applicability to the modeling problem at the population level. Here, we introduce PiCnIc (Pipeline for Cancer Inference), a versatile, modular, and customizable pipeline to extract ensemble-level progression models from cross-sectional sequenced cancer genomes. The pipeline has many translational implications because it combines state-of-the-art techniques for sample stratification, driver selection, identification of fitness-equivalent exclusive alterations, and progression model inference. We demonstrate PiCnIc's ability to reproduce much of the current knowledge on colorectal cancer progression as well as to suggest novel experimentally verifiable hypotheses. PMID:27357673
PARALLEL MULTIOBJECTIVE EVOLUTIONARY ALGORITHMS FOR WASTE SOLVENT RECYCLING
Waste solvents are of great concern to the chemical process industries and to the public, and many technologies have been suggested and implemented in the chemical process industries to reduce waste and associated environmental impacts. In this article we have developed a novel p...
Evolutionary algorithm for the neutrino factory front end design
Poklonskiy, Alexey A.; Neuffer, David; /Fermilab
2009-01-01
The Neutrino Factory is an important tool in the long-term neutrino physics program. Substantial effort is put internationally into designing this facility in order to achieve the desired performance within the allotted budget. This accelerator is a secondary beam machine: neutrinos are produced by means of the decay of muons. Muons, in turn, are produced by the decay of pions, which are themselves produced by striking a target with a beam of accelerated protons. Due to the physics of this process, extra conditioning of the pion beam coming from the target is needed so that the muons are suitable for subsequent acceleration. The subsystem of the Neutrino Factory that performs this conditioning is called the Front End; its main performance characteristic is the number of muons produced.
System Design under Uncertainty: Evolutionary Optimization of the Gravity Probe-B Spacecraft
NASA Technical Reports Server (NTRS)
Pullen, Samuel P.; Parkinson, Bradford W.
1994-01-01
This paper discusses the application of evolutionary random-search algorithms (Simulated Annealing and Genetic Algorithms) to the problem of spacecraft design under performance uncertainty. Traditionally, spacecraft performance uncertainty has been measured by reliability. Published algorithms for reliability optimization are seldom used in practice because they oversimplify reality. The algorithm developed here uses random-search optimization to allow us to model the problem more realistically. Monte Carlo simulations are used to evaluate the objective function for each trial design solution. These methods have been applied to the Gravity Probe-B (GP-B) spacecraft being developed at Stanford University for launch in 1999. Results of the algorithm developed here for GP-B are shown, and their implications for design optimization by evolutionary algorithms are discussed.
Electrochemical model based charge optimization for lithium-ion batteries
NASA Astrophysics Data System (ADS)
Pramanik, Sourav; Anwar, Sohel
2016-05-01
In this paper, we propose the design of a novel optimal strategy for charging the lithium-ion battery based on an electrochemical battery model that is aimed at improved performance. A performance index that aims at minimizing the charging effort along with a minimum deviation from the rated maximum thresholds for cell temperature and charging current has been defined. The method proposed in this paper aims at achieving a faster charging rate while maintaining safe limits for various battery parameters. Safe operation of the battery is achieved by including the battery bulk temperature as a control component in the performance index, which is of critical importance for electric vehicles. Another important aspect of the performance objective proposed here is the efficiency of the algorithm, which would allow higher charging rates without compromising the internal electrochemical kinetics of the battery, preventing abusive conditions and thereby improving long-term durability. A more realistic model, based on battery electrochemistry, has been used for the design of the optimal algorithm, as opposed to the conventional equivalent circuit models. To solve the optimization problem, Pontryagin's principle has been used, which is very effective for constrained optimization problems with both state and input constraints. Simulation results show that the proposed optimal charging algorithm is capable of shortening the charging time of a lithium-ion cell while maintaining the temperature constraint when compared with standard constant-current charging. The designed method also maintains the internal states within limits that can avoid abusive operating conditions.
Evolutionary Computing Methods for Spectral Retrieval
NASA Technical Reports Server (NTRS)
Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seugwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Geivanna
2009-01-01
A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
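A sketch of the simulated-annealing half of this methodology, fitting a toy forward model to a synthetic "observed" spectrum by minimizing a sum-of-squares fitness. The Gaussian absorption-line model, parameter names, and cooling schedule are illustrative assumptions, not the NASA software's models or retrieval targets:

```python
import math
import random

random.seed(4)

def synthetic_spectrum(params, wavelengths):
    """Toy forward model: a single Gaussian absorption line whose depth,
    center, and width are the retrieved parameters (illustrative only)."""
    depth, center, width = params
    return [1.0 - depth * math.exp(-((w - center) / width) ** 2 / 2)
            for w in wavelengths]

wavelengths = [1.0 + 0.01 * i for i in range(200)]
true_params = (0.3, 1.8, 0.05)
observed = synthetic_spectrum(true_params, wavelengths)

def fitness(params):
    """Dissimilarity between observed and synthetic spectra."""
    model = synthetic_spectrum(params, wavelengths)
    return sum((o - m) ** 2 for o, m in zip(observed, model))

# Simulated annealing over the parameter space.
current = (0.5, 1.5, 0.1)                 # deliberately wrong initial guess
f_cur = fitness(current)
temp = 1.0
for _ in range(3000):
    cand = tuple(p + random.gauss(0.0, 0.01) for p in current)
    if cand[2] <= 0.0:                    # keep the line width physical
        continue
    f_new = fitness(cand)
    # Always accept improvements; accept worsenings with a probability
    # that shrinks as the temperature cools.
    if f_new < f_cur or random.random() < math.exp(-(f_new - f_cur) / temp):
        current, f_cur = cand, f_new
    temp *= 0.998                         # geometric cooling schedule
```

In the actual methodology the forward model would be a radiative-transfer or similar physical model, and a genetic algorithm explores the same fitness landscape population-wise.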
Evolutionary Processes and Mental Deficiency
ERIC Educational Resources Information Center
Spitz, Herman H.
1973-01-01
The author hypothesizes that central nervous system damage or deficiency associated with mental retardation affects primarily those cortical processes which developed at a late stage in man's evolutionary history. (Author)
Evolutionary genetics of plant adaptation
Anderson, Jill T.; Willis, John H.; Mitchell-Olds, Thomas
2011-01-01
Plants provide unique opportunities to study the mechanistic basis and evolutionary processes of adaptation to diverse environmental conditions. Complementary laboratory and field experiments are important for testing hypotheses reflecting long-term ecological and evolutionary history. For example, these approaches can infer whether local adaptation results from genetic tradeoffs (antagonistic pleiotropy), where native alleles are best adapted to local conditions, or if local adaptation is caused by conditional neutrality at many loci, where alleles show fitness differences in one environment, but not in the contrasting environment. Ecological genetics in natural populations of perennial or outcrossing plants also may differ substantially from model systems. In this review of the evolutionary genetics of plant adaptation, we emphasize the importance of field studies for understanding the evolutionary dynamics of model and non-model systems, highlight a key life history trait (flowering time), and discuss emerging conservation issues. PMID:21550682
Evolutionary constraints or opportunities?
Sharov, Alexei A.
2014-01-01
Natural selection is traditionally viewed as a leading factor of evolution, whereas variation is assumed to be random and non-directional. Any order in variation is attributed to epigenetic or developmental constraints that can hinder the action of natural selection. In contrast I consider the positive role of epigenetic mechanisms in evolution because they provide organisms with opportunities for rapid adaptive change. Because the term “constraint” has negative connotations, I use the term “regulated variation” to emphasize the adaptive nature of phenotypic variation, which helps populations and species to survive and evolve in changing environments. The capacity to produce regulated variation is a phenotypic property, which is not described in the genome. Instead, the genome acts as a switchboard, where mostly random mutations switch “on” or “off” preexisting functional capacities of organism components. Thus, there are two channels of heredity: informational (genomic) and structure-functional (phenotypic). Functional capacities of organisms most likely emerged in a chain of modifications and combinations of more simple ancestral functions. The role of DNA has been to keep records of these changes (without describing the result) so that they can be reproduced in the following generations. Evolutionary opportunities include adjustments of individual functions, multitasking, connection between various components of an organism, and interaction between organisms. The adaptive nature of regulated variation can be explained by the differential success of lineages in macro-evolution. Lineages with more advantageous patterns of regulated variation are likely to produce more species and secure more resources (i.e., long-term lineage selection). PMID:24769155
Molluscan Evolutionary Genomics
Simison, W. Brian; Boore, Jeffrey L.
2005-12-01
In the last 20 years there have been dramatic advances in techniques of high-throughput DNA sequencing, most recently accelerated by the Human Genome Project, a program that has determined the three billion base pair code on which we are based. Now this tremendous capability is being directed at other genome targets that are being sampled across the broad range of life. This opens up opportunities as never before for evolutionary and organismal biologists to address questions of both processes and patterns of organismal change. We stand at the dawn of a new 'modern synthesis' period, paralleling that of the early 20th century when the fledgling field of genetics first identified the underlying basis for Darwin's theory. We must now unite the efforts of systematists, paleontologists, mathematicians, computer programmers, molecular biologists, developmental biologists, and others in the pursuit of discovering what genomics can teach us about the diversity of life. Genome-level sampling for mollusks to date has mostly been limited to mitochondrial genomes and it is likely that these will continue to provide the best targets for broad phylogenetic sampling in the near future. However, we are just beginning to see an inroad into complete nuclear genome sequencing, with several mollusks and other eutrochozoans having been selected for work about to begin. Here, we provide an overview of the state of molluscan mitochondrial genomics, highlight a few of the discoveries from this research, outline the promise of broadening this dataset, describe upcoming projects to sequence whole mollusk nuclear genomes, and challenge the community to prepare for making the best use of these data.
Evolutionary adaptive eye tracking for low-cost human computer interaction applications
NASA Astrophysics Data System (ADS)
Shen, Yan; Shin, Hak Chul; Sung, Won Jun; Khim, Sarang; Kim, Honglak; Rhee, Phill Kyu
2013-01-01
We present an evolutionary adaptive eye-tracking framework aiming for low-cost human computer interaction. The main focus is to guarantee eye-tracking performance without using high-cost devices and strongly controlled situations. The performance optimization of eye tracking is formulated into the dynamic control problem of deciding on an eye tracking algorithm structure and associated thresholds/parameters, where the dynamic control space is denoted by genotype and phenotype spaces. The evolutionary algorithm is responsible for exploring the genotype control space, and the reinforcement learning algorithm organizes the evolved genotype into a reactive phenotype. The evolutionary algorithm encodes an eye-tracking scheme as a genetic code based on image variation analysis. Then, the reinforcement learning algorithm defines internal states in a phenotype control space limited by the perceived genetic code and carries out interactive adaptations. The proposed method can achieve optimal performance by balancing the real-time performance limitations of the evolutionary algorithm against the huge search space of the reinforcement learning algorithm. Extensive experiments were carried out using webcam image sequences and yielded very encouraging results. The framework can be readily applied to other low-cost vision-based human computer interactions in solving their intrinsic brittleness in unstable operational environments.
Evolutionary status of Be stars
NASA Astrophysics Data System (ADS)
Zorec, J.; Frémat, Y.; Cidale, L.
2004-12-01
Fundamental parameters of nearly 50 field Be stars have been determined. Correcting these parameters for gravity-darkening effects induced by fast rotation, we deduced the evolutionary phase of the studied stars. We show that the evolutionary phase at which the Be phenomenon appears is mass-dependent: the smaller the stellar mass, the later the phase in the main sequence at which the Be phenomenon seems to appear.
Model-Based Prognostics of Hybrid Systems
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhury, Indranil; Bregon, Anibal
2015-01-01
Model-based prognostics has become a popular approach to solving the prognostics problem. However, almost all work has focused on prognostics of systems with continuous dynamics. In this paper, we extend the model-based prognostics framework to hybrid systems models that combine both continuous and discrete dynamics. In general, most systems are hybrid in nature, including those that combine physical processes with software. We generalize the model-based prognostics formulation to hybrid systems, and describe the challenges involved. We present a general approach for modeling hybrid systems, and overview methods for solving estimation and prediction in hybrid systems. As a case study, we consider the problem of conflict (i.e., loss of separation) prediction in the National Airspace System, in which the aircraft models are hybrid dynamical systems.
Testing Strategies for Model-Based Development
NASA Technical Reports Server (NTRS)
Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.
2006-01-01
This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.
Model-based satellite acquisition and tracking
NASA Technical Reports Server (NTRS)
Casasent, David; Lee, Andrew J.
1988-01-01
A model-based optical processor is introduced for the acquisition and tracking of a satellite in close proximity to an imaging sensor of a space robot. The type of satellite is known in advance, and a model of the satellite (which exists from its design) is used in this task. The model base is used to generate multiple smart filters of the various parts of the satellite, which are used in a symbolic multi-filter optical correlator. The output from the correlator is then treated as a symbolic description of the object, which is operated upon by an optical inference processor to determine the position and orientation of the satellite and to track it as a function of time. The knowledge and model base also serves to generate the rules used by the inference machine. The inference machine allows for feedback to optical correlators or feature extractors to locate the individual parts of the satellite and their orientations.
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: genetic algorithms and differential evolution to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
Evolutionary optimization of a Genetically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, Patrick V.; Tinker, Michael L.; Dozier, Gerry
2005-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This paper presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: Genetic Algorithms and Differential Evolution to successfully optimize a benchmark structural optimization problem. A non-traditional solution to the benchmark problem is presented in this paper, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
Virus-Evolutionary Linear Genetic Programming
NASA Astrophysics Data System (ADS)
Tamura, Kenji; Mutoh, Atsuko; Nakamura, Tsuyoshi; Itoh, Hidenori
Many kinds of evolutionary methods have been proposed; GA and GP in particular have demonstrated their effectiveness on various problems, and many systems based on them have been developed. One is the Virus-Evolutionary Genetic Algorithm (VE-GA), and another is Linear Genetic Programming in C (LGPC); the performance of each has been reported. VE-GA is a coevolutionary system of host individuals and virus individuals that can spread schemata effectively among the host individuals through virus infection and virus incorporation. LGPC implements GP by representing individuals in one dimension, as in GA, which reduces the search cost of pointers, saves machine memory, and reduces the time needed to implement GP programs. We propose a system that introduces virus individuals into LGPC, and we analyze its performance on two problems. Our system can spread schemata among the population and search for solutions effectively. Computer simulation results show that, compared with plain LGPC, the system can find solutions in a way that depends on the character of the problem to which LGPC is applied.
Subwavelength lattice optics by evolutionary design.
Huntington, Mark D; Lauhon, Lincoln J; Odom, Teri W
2014-12-10
This paper describes a new class of structured optical materials--lattice opto-materials--that can manipulate the flow of visible light into a wide range of three-dimensional profiles using evolutionary design principles. Lattice opto-materials are based on the discretization of a surface into a two-dimensional (2D) subwavelength lattice whose individual lattice sites can be controlled to achieve a programmed optical response. To access a desired optical property, we designed a lattice evolutionary algorithm that includes and optimizes contributions from every element in the lattice. Lattice opto-materials can exhibit simple properties, such as on- and off-axis focusing, and can also concentrate light into multiple, discrete spots. We expanded the unit cell shapes of the lattice to achieve distinct, polarization-dependent optical responses from the same 2D patterned substrate. Finally, these lattice opto-materials can also be combined into architectures that resemble a new type of compound flat lens. PMID:25380062
Model-based fault detection and identification with online aerodynamic model structure selection
NASA Astrophysics Data System (ADS)
Lombaerts, T.
2013-12-01
This publication describes a recursive algorithm for the approximation of time-varying nonlinear aerodynamic models by means of a joint adaptive selection of the model structure and parameter estimation. This procedure is called adaptive recursive orthogonal least squares (AROLS) and is an extension and modification of the previously developed ROLS procedure. This algorithm is particularly useful for model-based fault detection and identification (FDI) of aerospace systems. After the failure, a completely new aerodynamic model can be elaborated recursively with respect to structure as well as parameter values. The performance of the identification algorithm is demonstrated on a simulation data set.
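The abstract gives no algorithmic detail of AROLS itself; purely as an illustration of the recursive parameter-estimation half of such a scheme, here is a minimal scalar recursive least-squares sketch with a forgetting factor (the model, data, and parameter values are illustrative assumptions, not the paper's method):

```python
import random

def rls_scalar(data, lam=0.98, theta0=0.0, p0=1000.0):
    """Recursive least squares for y = theta * x + noise.  The forgetting
    factor lam < 1 discounts old data, which lets the estimate track a
    time-varying parameter, e.g. aerodynamics changing after a failure."""
    theta, p = theta0, p0
    for x, y in data:
        k = p * x / (lam + x * p * x)   # gain
        theta += k * (y - theta * x)    # correct the estimate with the residual
        p = (p - k * x * p) / lam       # update the (scalar) covariance
    return theta

random.seed(1)
# the true parameter jumps from 2.0 to -1.0 halfway through (a "failure")
xs = [random.uniform(-1, 1) for _ in range(400)]
data = [(x, (2.0 if i < 200 else -1.0) * x + random.gauss(0, 0.05))
        for i, x in enumerate(xs)]
theta_hat = rls_scalar(data)
print(round(theta_hat, 2))  # close to -1.0: the estimate has re-converged
```

The same gain/covariance recursion generalizes to vector parameters, which is the setting in which a model-structure selection step like AROLS operates.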
On joint subtree distributions under two evolutionary models.
Wu, Taoyang; Choi, Kwok Pui
2016-04-01
In population and evolutionary biology, hypotheses about micro-evolutionary and macro-evolutionary processes are commonly tested by comparing the shape indices of empirical evolutionary trees with those predicted by neutral models. A key ingredient in this approach is the ability to compute and quantify distributions of various tree shape indices under random models of interest. As a step towards meeting this challenge, in this paper we investigate the joint distribution of cherries and pitchforks (that is, subtrees with two and three leaves) under two widely used null models: the Yule-Harding-Kingman (YHK) model and the proportional to distinguishable arrangements (PDA) model. Based on two novel recursive formulae, we propose a dynamic approach to numerically compute the exact joint distribution (and hence the marginal distributions) for trees of any size. We also obtain insights into the statistical properties of trees generated under these two models, including a constant correlation between the cherry and the pitchfork distributions under the YHK model, and the log-concavity and unimodality of the cherry distributions under both models. In addition, we show that there exists a unique change point for the cherry distributions between these two models. PMID:26607430
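The recursive formulae themselves are not reproduced in the abstract; as a hedged illustration of the quantities involved, the following sketch estimates the cherry and pitchfork counts under the YHK model by direct simulation (the tree representation and sample sizes are my choices, not the paper's method):

```python
import random
from statistics import mean

def yule_tree(n):
    """Grow a YHK (Yule) tree with n leaves: repeatedly pick a uniform
    random leaf and replace it with a cherry (two fresh leaves)."""
    root = {"children": []}          # a node with no children is a leaf
    leaves = [root]
    while len(leaves) < n:
        leaf = leaves.pop(random.randrange(len(leaves)))
        kids = [{"children": []}, {"children": []}]
        leaf["children"] = kids
        leaves.extend(kids)
    return root

def count_leaves(node):
    if not node["children"]:
        return 1
    return sum(count_leaves(c) for c in node["children"])

def count_subtrees_of_size(node, k):
    """Pendant subtrees with exactly k leaves (k=2: cherries, k=3: pitchforks)."""
    if not node["children"]:
        return 0
    here = 1 if count_leaves(node) == k else 0
    return here + sum(count_subtrees_of_size(c, k) for c in node["children"])

random.seed(0)
trees = [yule_tree(20) for _ in range(2000)]
cherries = [count_subtrees_of_size(t, 2) for t in trees]
pitchforks = [count_subtrees_of_size(t, 3) for t in trees]
# the cherry mean should be close to n/3 under YHK
print(round(mean(cherries), 2), round(mean(pitchforks), 2))
```

The paper's dynamic-programming recursions replace this Monte Carlo estimate with an exact joint distribution, but the simulated marginals give a quick sanity check.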
Evolutionary induction of sparse neural trees
Zhang; Ohm; Muhlenbein
1997-01-01
This paper is concerned with the automatic induction of parsimonious neural networks. In contrast to other program induction situations, network induction entails parametric learning as well as structural adaptation. We present a novel representation scheme called neural trees that allows efficient learning of both network architectures and parameters by genetic search. A hybrid evolutionary method is developed for neural tree induction that combines genetic programming and the breeder genetic algorithm under the unified framework of the minimum description length principle. The method is successfully applied to the induction of higher order neural trees while still keeping the resulting structures sparse to ensure good generalization performance. Empirical results are provided on two chaotic time series prediction problems of practical interest. PMID:10021759
Evolutionary complexity for protection of critical assets.
Battaile, Corbett Chandler; Chandross, Michael Evan
2005-01-01
This report summarizes the work performed as part of a one-year LDRD project, 'Evolutionary Complexity for Protection of Critical Assets.' A brief introduction is given to the topics of genetic algorithms and genetic programming, followed by a discussion of relevant results obtained during the project's research, and finally the conclusions drawn from those results. The focus is on using genetic programming to evolve solutions for relatively simple algebraic equations as a prototype application for evolving complexity in computer codes. The results were obtained using the lil-gp genetic program, a C code for evolving solutions to user-defined problems and functions. These results suggest that genetic programs are not well-suited to evolving complexity for critical asset protection because they cannot efficiently evolve solutions to complex problems, and introduce unacceptable performance penalties into solutions for simple ones.
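The report's experiments used lil-gp, a C package; purely as an illustrative stand-in, here is a minimal tree-based genetic program in Python that evolves an expression matching a simple algebraic target (the operator set, parameters, and target function are assumptions, not the lil-gp configuration used in the report):

```python
import random
import operator

OPS = [operator.add, operator.sub, operator.mul]

def rand_tree(depth):
    # a tree is the variable "x", an integer constant, or (op, left, right)
    if depth <= 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.randint(-2, 2)
    return (random.choice(OPS), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if not isinstance(tree, tuple):
        return tree
    fn, left, right = tree
    return fn(evaluate(left, x), evaluate(right, x))

def fitness(tree, xs, target):
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in xs)

def mutate(tree, depth=3):
    # replace a random subtree with a fresh one, otherwise recurse
    if random.random() < 0.2 or not isinstance(tree, tuple):
        return rand_tree(depth)
    fn, left, right = tree
    return (fn, mutate(left, depth - 1), mutate(right, depth - 1))

def evolve(target, gens=60, pop_size=200):
    xs = [i / 4 for i in range(-8, 9)]
    pop = [rand_tree(3) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda t: fitness(t, xs, target))
        elite = pop[: pop_size // 4]               # truncation selection
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=lambda t: fitness(t, xs, target))

random.seed(3)
def target(x):
    return x * x + x
best = evolve(target)
err = fitness(best, [i / 4 for i in range(-8, 9)], target)
print(err)  # 0.0 when an exact form such as x * (x + 1) is found
```

Even this stripped-down version exhibits the behaviour the report discusses: simple algebraic targets are found easily, while fitness stagnates on harder ones.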
Using Evolutionary Computation on GPS Position Correction
2014-01-01
More and more devices are equipped with a global positioning system (GPS) receiver. However, handheld devices with consumer-grade GPS receivers usually have low positioning accuracy, so a position correction algorithm is useful in this case. In this paper, we propose an evolutionary computation based technique that generates a correction function using two GPS receivers and a known reference location. By placing one GPS receiver at the known location and combining its measured longitude and latitude with the exact positioning information, the proposed technique evolves a correction function. The technique can be implemented and executed on handheld devices without hardware reconfiguration. Experiments are conducted to demonstrate its performance: positioning error could be significantly reduced from the order of 10 m to the order of 1 m. PMID:24578657
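The paper's correction function is not specified in the abstract; the following sketch illustrates the general differential idea with a toy (1+1)-style evolutionary search for an additive correction learned at the reference receiver (the coordinates, noise-free bias, and the additive correction form are all illustrative assumptions):

```python
import random
import math

def evolve_correction(ref_known, ref_measured, gens=500):
    """(1+1)-style evolution of an additive correction (dx, dy) that maps
    the reference receiver's measured position onto its known position."""
    def err(c):
        return math.hypot(ref_measured[0] + c[0] - ref_known[0],
                          ref_measured[1] + c[1] - ref_known[1])
    best = (0.0, 0.0)
    for _ in range(gens):
        cand = (best[0] + random.gauss(0, 1.0), best[1] + random.gauss(0, 1.0))
        if err(cand) < err(best):          # keep the mutant only if it improves
            best = cand
    return best

random.seed(7)
bias = (8.5, -6.0)                         # unknown systematic error, metres
ref_known = (100.0, 200.0)
ref_measured = (ref_known[0] + bias[0], ref_known[1] + bias[1])
dx, dy = evolve_correction(ref_known, ref_measured)

# apply the evolved correction to the second (roving) receiver, which is
# assumed to share the same systematic error
rover_true = (130.0, 170.0)
rover_measured = (rover_true[0] + bias[0], rover_true[1] + bias[1])
corrected = (rover_measured[0] + dx, rover_measured[1] + dy)
residual = math.hypot(corrected[0] - rover_true[0], corrected[1] - rover_true[1])
print(round(residual, 2))  # the ~10 m bias is largely removed
```

In practice the measured positions are noisy and the evolved correction would be a richer function of position, but the reference-receiver principle is the same.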
Evolutionary optimization of rotational population transfer
Rouzee, Arnaud; Vrakking, Marc J. J.; Ghafur, Omair; Gijsbertsen, Arjan; Vidma, Konstantin; Meijer, Afric; Zande, Wim J. van der; Parker, David; Shir, Ofer M.; Baeck, Thomas
2011-09-15
We present experimental and numerical studies on control of rotational population transfer of NO(J=1/2) molecules to higher rotational states. We are able to transfer 57% of the population to the J=5/2 state and 46% to J=9/2, in good agreement with quantum mechanical simulations. The optimal pulse shapes are composed of pulse sequences with delays corresponding to the beat frequencies of states on the rotational ladder. The evolutionary algorithm is limited by experimental constraints such as volume averaging and the finite laser intensity used, the latter to circumvent ionization. Without these constraints, near-perfect control (>98%) is possible. In addition, we show that downward control, moving molecules from high to low rotational states, is also possible.
Multimode model based defect characterization in composites
NASA Astrophysics Data System (ADS)
Roberts, R.; Holland, S.; Gregory, E.
2016-02-01
A newly-initiated research program for model-based defect characterization in CFRP composites is summarized. The work utilizes computational models of the interaction of NDE probing energy fields (ultrasound and thermography), to determine 1) the measured signal dependence on material and defect properties (forward problem), and 2) an assessment of performance-critical defect properties from analysis of measured NDE signals (inverse problem). Work is reported on model implementation for inspection of CFRP laminates containing delamination and porosity. Forward predictions of measurement response are presented, as well as examples of model-based inversion of measured data for the estimation of defect parameters.
Model-based internal wave processing
Candy, J.V.; Chambers, D.H.
1995-06-09
A model-based approach is proposed to solve the oceanic internal wave signal processing problem that is based on state-space representations of the normal-mode vertical velocity and plane wave horizontal velocity propagation models. It is shown that these representations can be utilized to spatially propagate the modal (depth) vertical velocity functions given the basic parameters (wave numbers, Brunt-Vaisala frequency profile, etc.) developed from the solution of the associated boundary value problem as well as the horizontal velocity components. Based on this framework, investigations are made of model-based solutions to the signal enhancement problem for internal waves.
Distributed model-based nonlinear sensor fault diagnosis in wireless sensor networks
NASA Astrophysics Data System (ADS)
Lo, Chun; Lynch, Jerome P.; Liu, Mingyan
2016-01-01
Wireless sensors operating in harsh environments have the potential to be error-prone. This paper presents a distributed model-based diagnosis algorithm that identifies nonlinear sensor faults. The diagnosis algorithm has advantages over existing fault diagnosis methods such as centralized model-based and distributed model-free methods. An algorithm is presented for detecting common non-linearity faults without using reference sensors. The study introduces a model-based fault diagnosis framework that is implemented within a pair of wireless sensors. The detection of sensor nonlinearities is shown to be equivalent to solving the largest empty rectangle (LER) problem, given a set of features extracted from an analysis of sensor outputs. A low-complexity algorithm that gives an approximate solution to the LER problem is proposed for embedment in resource constrained wireless sensors. By solving the LER problem, sensors corrupted by non-linearity faults can be isolated and identified. Extensive analysis evaluates the performance of the proposed algorithm through simulation.
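The paper's low-complexity LER algorithm is not detailed in the abstract; as a rough illustration of the underlying problem, here is a small brute-force approximation that restricts rectangle sides to the x-coordinates of the feature points (the feature set and unit region bounds are invented for illustration):

```python
from itertools import combinations

def approx_largest_empty_rectangle(points, width=1.0, height=1.0):
    """Approximate LER: consider axis-aligned rectangles whose left/right
    sides sit at point x-coordinates (or the region borders) and that span
    the largest empty vertical gap between them.  Cubic-time brute force,
    fine for the small feature sets a wireless sensor would produce."""
    xs = sorted({0.0, width} | {x for x, _ in points})
    best = 0.0
    for left, right in combinations(xs, 2):
        inside = sorted(y for x, y in points if left < x < right)
        gaps, prev = [], 0.0
        for y in inside + [height]:          # vertical gaps between points
            gaps.append(y - prev)
            prev = y
        best = max(best, (right - left) * max(gaps))
    return best

pts = [(0.2, 0.3), (0.5, 0.8), (0.7, 0.1), (0.9, 0.6)]
area = approx_largest_empty_rectangle(pts)
print(round(area, 3))  # 0.49 for this layout
```

In the paper's setting the points are features extracted from paired sensor outputs, and a large empty rectangle in that feature space is evidence of a non-linearity fault.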
A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics
NASA Astrophysics Data System (ADS)
Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.
2015-12-01
This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics and designs a food chain algorithm for it. Illustrative examples are selected for simulation and comparison. Numerical results show that the performance of the food chain algorithm is better than that of the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.
Evolutionary conceptual analysis: faith community nursing.
Ziebarth, Deborah
2014-12-01
knowledge and model-based applications for evidence-based practice and research. PMID:25097106
On the Performance of Stochastic Model-Based Image Segmentation
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Sewchand, Wilfred
1989-11-01
A new stochastic model-based image segmentation technique for X-ray CT images has been developed and has been extended to the more general nondiffraction CT images, which include MRI, SPECT, and certain types of ultrasound images [1,2]. The nondiffraction CT image is modeled by a Finite Normal Mixture. The technique utilizes the information theoretic criterion to detect the number of region images, uses the Expectation-Maximization algorithm to estimate the parameters of the image, and uses the Bayesian classifier to segment the observed image. How does this technique over/under-estimate the number of region images? What is the probability of error in the segmentation produced by this technique? This paper addresses these two problems and is a continuation of [1,2].
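As a hedged sketch of the modelling pipeline the abstract describes (a finite normal mixture fitted by EM, followed by Bayesian classification), here is a minimal 1-D version on synthetic pixel intensities; the initialization, region parameters, and iteration count are illustrative choices, and the information-theoretic model-order selection step is omitted:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_mixture(data, k=2, iters=60):
    """EM for a 1-D finite normal mixture (intensities of a k-region image)."""
    lo, hi = min(data), max(data)
    mus = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]  # spread initial means
    sigmas = [(hi - lo) / (2 * k)] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: posterior probability that each pixel came from each region
        resp = []
        for x in data:
            p = [w * normal_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas)]
            tot = sum(p)
            resp.append([pi / tot for pi in p])
        # M-step: re-estimate weights, means, variances from the posteriors
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(data)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-6)
    return weights, mus, sigmas

random.seed(4)
pixels = ([random.gauss(60, 8) for _ in range(400)] +     # dark region
          [random.gauss(160, 12) for _ in range(600)])    # bright region
w, m, s = em_mixture(pixels)
# Bayesian classification: assign each pixel to the most probable region
labels = [max(range(2), key=lambda j: w[j] * normal_pdf(x, m[j], s[j]))
          for x in pixels]
print(sorted(round(mu) for mu in m))  # means recovered near 60 and 160
```

A real CT image would be flattened to such a 1-D intensity sample, and the number of mixture components would itself be chosen by the information-theoretic criterion the paper studies.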
Model-Based Inquiries in Chemistry
ERIC Educational Resources Information Center
Khan, Samia
2007-01-01
In this paper, instructional strategies for sustaining model-based inquiry in an undergraduate chemistry class were analyzed through data collected from classroom observations, a student survey, and in-depth problem-solving sessions with the instructor and students. Analysis of teacher-student interactions revealed a cyclical pattern in which…
What's Missing in Model-Based Teaching
ERIC Educational Resources Information Center
Khan, Samia
2011-01-01
In this study, the author investigated how four science teachers employed model-based teaching (MBT) over a 1-year period. The purpose of the research was to develop a baseline of the fundamental and specific dimensions of MBT that are present and absent in science teaching. Teacher interviews, classroom observations, and pre and post-student…
Sandboxes for Model-Based Inquiry
ERIC Educational Resources Information Center
Brady, Corey; Holbert, Nathan; Soylu, Firat; Novak, Michael; Wilensky, Uri
2015-01-01
In this article, we introduce a class of constructionist learning environments that we call "Emergent Systems Sandboxes" ("ESSs"), which have served as a centerpiece of our recent work in developing curriculum to support scalable model-based learning in classroom settings. ESSs are a carefully specified form of virtual…
A review of estimation of distribution algorithms in bioinformatics
Armañanzas, Rubén; Inza, Iñaki; Santana, Roberto; Saeys, Yvan; Flores, Jose Luis; Lozano, Jose Antonio; Peer, Yves Van de; Blanco, Rosa; Robles, Víctor; Bielza, Concha; Larrañaga, Pedro
2008-01-01
Evolutionary search algorithms have become an essential asset in the algorithmic toolbox for solving high-dimensional optimization problems across a broad range of bioinformatics applications. Genetic algorithms, the most well-known and representative evolutionary search technique, account for the majority of such applications. Estimation of distribution algorithms (EDAs) offer a novel evolutionary paradigm that constitutes a natural and attractive alternative to genetic algorithms. They make use of a probabilistic model, learnt from the promising solutions, to guide the search process. In this paper, we set out a basic taxonomy of EDA techniques, underlining the nature and complexity of the probabilistic model of each EDA variant. We review a set of innovative works that make use of EDA techniques to solve challenging bioinformatics problems, emphasizing the EDA paradigm's potential for further research in this domain. PMID:18822112
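As a concrete (and deliberately minimal) example of the EDA paradigm the review surveys, here is a univariate marginal distribution algorithm (UMDA) on the OneMax toy problem; all parameter choices are illustrative:

```python
import random

def umda_onemax(n=40, pop_size=100, elite_frac=0.3, gens=60):
    """Univariate Marginal Distribution Algorithm (a simple EDA) on OneMax.
    Instead of crossover and mutation, each generation fits an independent
    per-bit probability model to the selected individuals and samples the
    next population from that model."""
    probs = [0.5] * n                               # initial model: uniform
    best = None
    for _ in range(gens):
        pop = [[1 if random.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)             # OneMax fitness = sum of bits
        elite = pop[: int(pop_size * elite_frac)]
        # learn the model: marginal frequency of each bit among the elite,
        # clamped away from 0/1 to keep some exploration
        probs = [min(0.95, max(0.05, sum(ind[i] for ind in elite) / len(elite)))
                 for i in range(n)]
        if best is None or sum(pop[0]) > sum(best):
            best = pop[0]
    return best

random.seed(11)
best = umda_onemax()
print(sum(best))  # typically reaches the optimum of 40
```

More sophisticated EDAs in the taxonomy differ mainly in the probabilistic model: UMDA's independent marginals are replaced by pairwise dependencies or full Bayesian networks.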
Model-Based Diagnostics for Propellant Loading Systems
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Foygel, Michael; Smelyanskiy, Vadim N.
2011-01-01
The loading of spacecraft propellants is a complex, risky operation. Therefore, diagnostic solutions are necessary to quickly identify when a fault occurs, so that recovery actions can be taken or an abort procedure can be initiated. Model-based diagnosis solutions, established using an in-depth analysis and understanding of the underlying physical processes, offer the advanced capability to quickly detect and isolate faults, identify their severity, and predict their effects on system performance. We develop a physics-based model of a cryogenic propellant loading system, which describes the complex dynamics of liquid hydrogen filling from a storage tank to an external vehicle tank, as well as the influence of different faults on this process. The model takes into account the main physical processes such as highly nonequilibrium condensation and evaporation of the hydrogen vapor, pressurization, and also the dynamics of liquid hydrogen and vapor flows inside the system in the presence of helium gas. Since the model incorporates multiple faults in the system, it provides a suitable framework for model-based diagnostics and prognostics algorithms. Using this model, we analyze the effects of faults on the system, derive symbolic fault signatures for the purposes of fault isolation, and perform fault identification using a particle filter approach. We demonstrate the detection, isolation, and identification of a number of faults using simulation-based experiments.
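The paper's cryogenic loading model is far richer than anything shown here; purely as a sketch of the particle-filter fault-identification idea, the following estimates an unknown leak coefficient in a toy tank model from noisy level measurements (the model, noise levels, and priors are invented for illustration):

```python
import math
import random

def simulate_level(level, leak):
    """One step of a toy tank model: constant inflow, leak proportional to level."""
    return level + 0.5 - leak * level

def particle_filter(measurements, n_particles=500):
    """Estimate an unknown leak coefficient from noisy level readings."""
    # each particle pairs a level estimate with a leak-coefficient hypothesis
    particles = [(5.0, random.uniform(0.0, 0.5)) for _ in range(n_particles)]
    for z in measurements:
        # propagate with small process noise so particles keep exploring
        particles = [(simulate_level(lv, k) + random.gauss(0, 0.02),
                      max(0.0, k + random.gauss(0, 0.005)))
                     for lv, k in particles]
        # weight by measurement likelihood (sensor noise sd = 0.1), resample
        weights = [math.exp(-0.5 * ((z - lv) / 0.1) ** 2) for lv, _ in particles]
        particles = random.choices(particles, weights=weights, k=n_particles)
    return sum(k for _, k in particles) / n_particles

random.seed(9)
level, true_leak = 5.0, 0.2          # the "fault" to be identified
zs = []
for _ in range(40):
    level = simulate_level(level, true_leak)
    zs.append(level + random.gauss(0, 0.1))
leak_hat = particle_filter(zs)
print(round(leak_hat, 2))  # close to the injected leak coefficient of 0.2
```

The paper applies the same estimate-weight-resample loop to a physics-based two-phase cryogenic model with multiple candidate fault modes rather than a single scalar leak parameter.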
Model-based approach to real-time target detection
NASA Astrophysics Data System (ADS)
Hackett, Jay K.; Gold, Ed V.; Long, Daniel T.; Cloud, Eugene L.; Duvoisin, Herbert A.
1992-09-01
Land mine detection and extraction from infra-red (IR) scenes using real-time parallel processing is of significant interest to ground-based infantry. The mine detection algorithms consist of several sub-processes to progress from raw input IR imagery to feature-based mine nominations. Image enhancement is first applied; this consists of noise and sensor artifact removal. Edge grouping is used to determine the boundary of the objects. The generalized Hough Transform tuned to the land mine signature acts as a model based matched nomination filter. Once the object is found, the model is used to guide the labeling of each pixel as background, object, or object boundary. Using these labels to identify object regions, feature primitives are extracted in a high speed parallel processor. A feature based screener then compares each object's feature primitives to acceptable values and rejects all objects that do not resemble mines. This operation greatly reduces the number of objects that must be passed from a real-time parallel processor to the classifier. We will discuss details of this model-based approach, including results from actual IR field test imagery.
How cultural evolutionary theory can inform social psychology and vice versa.
Mesoudi, Alex
2009-10-01
Cultural evolutionary theory is an interdisciplinary field in which human culture is viewed as a Darwinian process of variation, competition, and inheritance, and the tools, methods, and theories developed by evolutionary biologists to study genetic evolution are adapted to study cultural change. It is argued here that an integration of the theories and findings of mainstream social psychology and of cultural evolutionary theory can be mutually beneficial. Social psychology provides cultural evolution with a set of empirically verified microevolutionary cultural processes, such as conformity, model-based biases, and content biases, that are responsible for specific patterns of cultural change. Cultural evolutionary theory provides social psychology with ultimate explanations for, and an understanding of the population-level consequences of, many social psychological phenomena, such as social learning, conformity, social comparison, and intergroup processes, as well as linking social psychology with other social science disciplines such as cultural anthropology, archaeology, and sociology. PMID:19839691
The major synthetic evolutionary transitions.
Solé, Ricard
2016-08-19
Evolution is marked by well-defined events involving profound innovations that are known as 'major evolutionary transitions'. They involve the integration of autonomous elements into a new, higher-level organization whereby the former isolated units interact in novel ways, losing their original autonomy. All major transitions, which include the origin of life, cells, multicellular systems, societies or language (among other examples), took place millions of years ago. Are these transitions unique, rare events? Have they instead universal traits that make them almost inevitable when the right pieces are in place? Are there general laws of evolutionary innovation? In order to approach this problem under a novel perspective, we argue that a parallel class of evolutionary transitions can be explored involving the use of artificial evolutionary experiments where alternative paths to innovation can be explored. These 'synthetic' transitions include, for example, the artificial evolution of multicellular systems or the emergence of language in evolved communicating robots. These alternative scenarios could help us to understand the underlying laws that predate the rise of major innovations and the possibility for general laws of evolved complexity. Several key examples and theoretical approaches are summarized and future challenges are outlined. This article is part of the themed issue 'The major synthetic evolutionary transitions'. PMID:27431528
Evolutionary foundations for cancer biology
Aktipis, C Athena; Nesse, Randolph M
2013-01-01
New applications of evolutionary biology are transforming our understanding of cancer. The articles in this special issue provide many specific examples, such as microorganisms inducing cancers, the significance of within-tumor heterogeneity, and the possibility that lower dose chemotherapy may sometimes promote longer survival. Underlying these specific advances is a large-scale transformation, as cancer research incorporates evolutionary methods into its toolkit, and asks new evolutionary questions about why we are vulnerable to cancer. Evolution explains why cancer exists at all, how neoplasms grow, why cancer is remarkably rare, and why it occurs despite powerful cancer suppression mechanisms. Cancer exists because of somatic selection; mutations in somatic cells result in some dividing faster than others, in some cases generating neoplasms. Neoplasms grow, or do not, in complex cellular ecosystems. Cancer is relatively rare because of natural selection; our genomes were derived disproportionally from individuals with effective mechanisms for suppressing cancer. Cancer occurs nonetheless for the same six evolutionary reasons that explain why we remain vulnerable to other diseases. These four principles—cancers evolve by somatic selection, neoplasms grow in complex ecosystems, natural selection has shaped powerful cancer defenses, and the limitations of those defenses have evolutionary explanations—provide a foundation for understanding, preventing, and treating cancer. PMID:23396885
Environmental changes bridge evolutionary valleys
Steinberg, Barrett; Ostermeier, Marc
2016-01-01
In the basic fitness landscape metaphor for molecular evolution, evolutionary pathways are presumed to follow uphill steps of increasing fitness. How evolution can cross fitness valleys is an open question. One possibility is that environmental changes alter the fitness landscape such that low-fitness sequences reside on a hill in alternate environments. We experimentally test this hypothesis on the antibiotic resistance gene TEM-15 β-lactamase by comparing four evolutionary strategies shaped by environmental changes. The strategy that included initial steps of selecting for low antibiotic resistance (negative selection) produced superior alleles compared with the other three strategies. We comprehensively examined possible evolutionary pathways leading to one such high-fitness allele and found that an initially deleterious mutation is key to the allele’s evolutionary history. This mutation is an initial gateway to an otherwise relatively inaccessible area of sequence space and participates in higher-order, positive epistasis with a number of neutral to slightly beneficial mutations. The ability of negative selection and environmental changes to provide access to novel fitness peaks has important implications for natural evolutionary mechanisms and applied directed evolution. PMID:26844293
Evolutionary genetics of maternal effects
Wolf, Jason B.; Wade, Michael J.
2016-01-01
Maternal genetic effects (MGEs), where genes expressed by mothers affect the phenotype of their offspring, are important sources of phenotypic diversity in a myriad of organisms. We use a single‐locus model to examine how MGEs contribute patterns of heritable and nonheritable variation and influence evolutionary dynamics in randomly mating and inbreeding populations. We elucidate the influence of MGEs by examining the offspring genotype‐phenotype relationship, which determines how MGEs affect evolutionary dynamics in response to selection on offspring phenotypes. This approach reveals important results that are not apparent from classic quantitative genetic treatments of MGEs. We show that additive and dominance MGEs make different contributions to evolutionary dynamics and patterns of variation, which are differentially affected by inbreeding. Dominance MGEs make the offspring genotype‐phenotype relationship frequency dependent, resulting in the appearance of negative frequency‐dependent selection, while additive MGEs contribute a component of parent‐of‐origin dependent variation. Inbreeding amplifies the contribution of MGEs to the additive genetic variance and, therefore enhances their evolutionary response. Considering evolutionary dynamics of allele frequency change on an adaptive landscape, we show that this landscape differs from the mean fitness surface, and therefore, under some condition, fitness peaks can exist but not be “available” to the evolving population. PMID:26969266
The major synthetic evolutionary transitions
Solé, Ricard
2016-01-01
Evolution is marked by well-defined events involving profound innovations that are known as ‘major evolutionary transitions'. They involve the integration of autonomous elements into a new, higher-level organization whereby the former isolated units interact in novel ways, losing their original autonomy. All major transitions, which include the origin of life, cells, multicellular systems, societies or language (among other examples), took place millions of years ago. Are these transitions unique, rare events? Have they instead universal traits that make them almost inevitable when the right pieces are in place? Are there general laws of evolutionary innovation? In order to approach this problem under a novel perspective, we argue that a parallel class of evolutionary transitions can be explored involving the use of artificial evolutionary experiments where alternative paths to innovation can be explored. These ‘synthetic’ transitions include, for example, the artificial evolution of multicellular systems or the emergence of language in evolved communicating robots. These alternative scenarios could help us to understand the underlying laws that predate the rise of major innovations and the possibility for general laws of evolved complexity. Several key examples and theoretical approaches are summarized and future challenges are outlined. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. PMID:27431528
Development of X-TOOLSS: Preliminary Design of Space Systems Using Evolutionary Computation
NASA Technical Reports Server (NTRS)
Schnell, Andrew R.; Hull, Patrick V.; Turner, Mike L.; Dozier, Gerry; Alverson, Lauren; Garrett, Aaron; Reneau, Jarred
2008-01-01
Evolutionary computational (EC) techniques such as genetic algorithms (GA) have been identified as promising methods to explore the design space of mechanical and electrical systems at the earliest stages of design. In this paper the authors summarize their research in the use of evolutionary computation to develop preliminary designs for various space systems. An evolutionary computational solver developed over the course of the research, X-TOOLSS (Exploration Toolset for the Optimization of Launch and Space Systems) is discussed. With the success of early, low-fidelity example problems, an outline of work involving more computationally complex models is discussed.
Facial Composite System Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Zahradníková, Barbora; Duchovičová, Soňa; Schreiber, Peter
2014-12-01
The article deals with genetic algorithms and their application in face identification. The purpose of the research is to develop a free and open-source facial composite system using evolutionary algorithms, primarily processes of selection and breeding. The initial testing proved higher quality of the final composites and massive reduction in the composites processing time. System requirements were specified and future research orientation was proposed in order to improve the results.
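The selection-and-breeding loop that the abstract above refers to can be sketched as a minimal generational genetic algorithm. This is a hypothetical illustration, not the article's system: the `evolve` function, its parameters, and the OneMax fitness in the usage line are all assumptions; in a facial composite tool the fitness function would be interactive user ratings of candidate faces rather than a closed-form score.

```python
import random

def evolve(fitness, genome_len, pop_size=30, gens=40, mut_rate=0.05, seed=0):
    """Minimal generational GA sketch: tournament selection,
    one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            # binary tournament: the fitter of two random individuals breeds
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, genome_len)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ (rng.random() < mut_rate) for g in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# usage: OneMax as a stand-in fitness (count of 1-bits in the genome)
best = evolve(sum, 20)
```

In a composite system the genome would encode facial feature parameters, and the user's quality ratings would drive `fitness`; the loop structure stays the same.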
A model-based approach to reactive self-configuring systems
Williams, B.C.; Nayak, P.P.
1996-12-31
This paper describes Livingstone, an implemented kernel for a model-based reactive self-configuring autonomous system. It presents a formal characterization of Livingstone's representation formalism, and reports on our experience with the implementation in a variety of domains. Livingstone provides a reactive system that performs significant deduction in the sense/response loop by drawing on our past experience at building fast propositional conflict-based algorithms for model-based diagnosis, and by framing a model-based configuration manager as a propositional feedback controller that generates focused, optimal responses. Livingstone's representation formalism achieves broad coverage of hybrid hardware/software systems by coupling the transition system models underlying concurrent reactive languages with the qualitative representations developed in model-based reasoning. Livingstone automates a wide variety of tasks using a single model and a single core algorithm, thus making significant progress towards achieving a central goal of model-based reasoning. Livingstone, together with the HSTS planning and scheduling engine and the RAPS executive, has been selected as part of the core autonomy architecture for NASA's first New Millennium spacecraft.
A Convergence Analysis of Unconstrained and Bound Constrained Evolutionary Pattern Search
Hart, W.E.
1999-04-22
The authors present and analyze a class of evolutionary algorithms for unconstrained and bound constrained optimization on R^n: evolutionary pattern search algorithms (EPSAs). EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. They show that EPSAs can be cast as stochastic pattern search methods, and they use this observation to prove that EPSAs have a probabilistic weak stationary point convergence theory. This work provides the first convergence analysis for a class of evolutionary algorithms that guarantees convergence almost surely to a stationary point of a nonconvex objective function.
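The core idea described above, a mutation step size that adapts to the success of previous steps, can be sketched in a few lines. This is a hedged simplification, not Hart's analyzed algorithm class: the function name `epsa`, the coordinate offsets, and the fixed contraction factor are assumptions chosen to keep the sketch short.

```python
import random

def epsa(f, x0, step=1.0, contract=0.5, tol=1e-6, max_evals=10000, seed=0):
    """Simplified evolutionary-pattern-search sketch: mutate along
    coordinate offsets scaled by the current step size, and contract
    the step after a full round of failed offsets."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    offsets = [(i, s) for i in range(len(x0)) for s in (+1.0, -1.0)]
    evals = 0
    while step > tol and evals < max_evals:
        rng.shuffle(offsets)          # stochastic trial order, as in stochastic pattern search
        improved = False
        for i, s in offsets:
            trial = list(x)
            trial[i] += s * step
            ft = f(trial)
            evals += 1
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= contract          # shrink the mutation scale after failure
    return x, fx

# usage: minimize a shifted quadratic from the origin
x, fx = epsa(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
```

The convergence theory cited in the abstract concerns exactly this kind of adaptive contraction: the step size can only shrink near points where no offset improves the objective.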
Deserno, L; Wilbertz, T; Reiter, A; Horstmann, A; Neumann, J; Villringer, A; Heinze, H-J; Schlagenhauf, F
2015-01-01
High impulsivity is an important risk factor for addiction with evidence from endophenotype studies. In addiction, behavioral control is shifted toward the habitual end. Habitual control can be described by retrospective updating of reward expectations in ‘model-free' temporal-difference algorithms. Goal-directed control relies on the prospective consideration of actions and their outcomes, which can be captured by forward-planning ‘model-based' algorithms. So far, no studies have examined behavioral and neural signatures of model-free and model-based control in healthy high-impulsive individuals. Fifty healthy participants were drawn from the upper and lower ends of 452 individuals, completing the Barratt Impulsiveness Scale. All participants performed a sequential decision-making task during functional magnetic resonance imaging (fMRI) and underwent structural MRI. Behavioral and fMRI data were analyzed by means of computational algorithms reflecting model-free and model-based control. Both groups did not differ regarding the balance of model-free and model-based control, but high-impulsive individuals showed a subtle but significant accentuation of model-free control alone. Right lateral prefrontal model-based signatures were reduced in high-impulsive individuals. Effects of smoking, drinking, general cognition or gray matter density did not account for the findings. Irrespectively of impulsivity, gray matter density in the left dorsolateral prefrontal cortex was positively associated with model-based control. The present study supports the idea that high levels of impulsivity are accompanied by behavioral and neural signatures in favor of model-free behavioral control. Behavioral results in healthy high-impulsive individuals were qualitatively different to findings in patients with the same task. The predictive relevance of these results remains an important target for future longitudinal studies. PMID:26460483
A Support Vector Machine Blind Equalization Algorithm Based on Immune Clone Algorithm
NASA Astrophysics Data System (ADS)
Yecai, Guo; Rui, Ding
To address the impact of parameter selection on the application of the support vector machine (SVM) in blind equalization, a SVM constant modulus blind equalization algorithm based on the immune clone selection algorithm (CSA-SVM-CMA) is proposed. In the proposed algorithm, the immune clone algorithm is used to optimize the parameters of the SVM, exploiting its advantages of preventing premature convergence, avoiding local optima, and converging quickly. The proposed algorithm improves the parameter selection efficiency of the SVM constant modulus blind equalization algorithm (SVM-CMA) and overcomes the drawback of manually set parameters. Accordingly, the CSA-SVM-CMA has a faster convergence rate and smaller mean square error than the SVM-CMA. Computer simulations in underwater acoustic channels have verified the validity of the algorithm.
Evolutionary models of interstellar chemistry
NASA Technical Reports Server (NTRS)
Prasad, Sheo S.
1987-01-01
The goal of evolutionary models of interstellar chemistry is to understand how interstellar clouds came to be the way they are, how they will change with time, and to place them in an evolutionary sequence with other celestial objects such as stars. An improved Mark II version of an earlier model of chemistry in dynamically evolving clouds is presented. The Mark II model suggests that the conventional elemental C/O ratio less than one can explain the observed abundances of CI and the nondetection of O2 in dense clouds. Coupled chemical-dynamical models seem to have the potential to generate many observable discriminators of the evolutionary tracks. This is exciting, because, in general, purely dynamical models do not yield enough verifiable discriminators of the predicted tracks.
Neuronal boost to evolutionary dynamics.
de Vladar, Harold P; Szathmáry, Eörs
2015-12-01
Standard evolutionary dynamics is limited by the constraints of the genetic system. A central message of evolutionary neurodynamics is that evolutionary dynamics in the brain can happen in a neuronal niche in real time, despite the fact that neurons do not reproduce. We show that Hebbian learning and structural synaptic plasticity broaden the capacity for informational replication and guided variability provided a neuronally plausible mechanism of replication is in place. The synergy between learning and selection is more efficient than the equivalent search by mutation selection. We also consider asymmetric landscapes and show that the learning weights become correlated with the fitness gradient. That is, the neuronal complexes learn the local properties of the fitness landscape, resulting in the generation of variability directed towards the direction of fitness increase, as if mutations in a genetic pool were drawn such that they would increase reproductive success. Evolution might thus be more efficient within evolved brains than among organisms out in the wild. PMID:26640653
Paradigm change in evolutionary microbiology.
O'Malley, Maureen A; Boucher, Yan
2005-03-01
Thomas Kuhn had little to say about scientific change in biological science, and biologists are ambivalent about how applicable his framework is for their disciplines. We apply Kuhn's account of paradigm change to evolutionary microbiology, where key Darwinian tenets are being challenged by two decades of findings from molecular phylogenetics. The chief culprit is lateral gene transfer, which undermines the role of vertical descent and the representation of evolutionary history as a tree of life. To assess Kuhn's relevance to this controversy, we add a social analysis of the scientists involved to the historical and philosophical debates. We conclude that while Kuhn's account may capture aspects of the pattern (or outcome) of an episode of scientific change, he has little to say about how the process of generating new understandings is occurring in evolutionary microbiology. Once Kuhn's application is limited to that of an initial investigative probe into how scientific problem-solving occurs, his disciplinary scope becomes broader. PMID:16120264
Evolutionary direction of processed pseudogenes.
Liu, Guoqing; Cui, Xiangjun; Li, Hong; Cai, Lu
2016-08-01
While some pseudogenes have been reported to play important roles in gene regulation, little is known about the possible relationship between pseudogene functions and evolutionary process of pseudogenes, or about the forces responsible for the pseudogene evolution. In this study, we characterized human processed pseudogenes in terms of evolutionary dynamics. Our results show that pseudogenes tend to evolve toward: lower GC content, strong dinucleotide bias, reduced abundance of transcription factor binding motifs and short palindromes, and decreased ability to form nucleosomes. We explored possible evolutionary forces that shaped the evolution pattern of pseudogenes, and concluded that mutations in pseudogenes are likely determined, at least partially, by neighbor-dependent mutational bias and recombination-associated selection. PMID:27333782
Development of model-based multispectral controllers for smart material systems
NASA Astrophysics Data System (ADS)
Kim, Byeongil; Washington, Gregory N.
2009-03-01
The primary objective of this research is to develop novel model-based multispectral controllers for smart material systems that can deal with sidebands, higher harmonics, and several frequency components simultaneously. The controller, based on the filtered-X least mean square algorithm, will be integrated with a nonlinear model-based controller called model predictive sliding mode control. Its performance will be verified in simulation and in various applications such as helicopter cabin noise reduction. This research will improve active vibration and noise control systems used in engineering structures and vehicles by effectively handling a wide range of multispectral signals.
Model-based Processing of Micro-cantilever Sensor Arrays
Tringe, J W; Clague, D S; Candy, J V; Lee, C L; Rudd, R E; Burnham, A K
2004-11-17
We develop a model-based processor (MBP) for a micro-cantilever array sensor to detect target species in solution. After discussing the generalized framework for this problem, we develop the specific model used in this study. We perform a proof-of-concept experiment, fit the model parameters to the measured data and use them to develop a Gauss-Markov simulation. We then investigate two cases of interest: (1) averaged deflection data, and (2) multi-channel data. In both cases the evaluation proceeds by first performing a model-based parameter estimation to extract the model parameters, next performing a Gauss-Markov simulation, designing the optimal MBP and finally applying it to measured experimental data. The simulation is used to evaluate the performance of the MBP in the multi-channel case and compare it to a ''smoother'' (''averager'') typically used in this application. It was shown that the MBP not only provides a significant gain ({approx} 80dB) in signal-to-noise ratio (SNR), but also consistently outperforms the smoother by 40-60 dB. Finally, we apply the processor to the smoothed experimental data and demonstrate its capability for chemical detection. The MBP performs quite well, though it includes a correctable systematic bias error. The project's primary accomplishment was the successful application of model-based processing to signals from micro-cantilever arrays: 40-60 dB improvement vs. the smoother algorithm was demonstrated. This result was achieved through the development of appropriate mathematical descriptions for the chemical and mechanical phenomena, and incorporation of these descriptions directly into the model-based signal processor. A significant challenge was the development of the framework which would maximize the usefulness of the signal processing algorithms while ensuring the accuracy of the mathematical description of the chemical-mechanical signal. Experimentally, the difficulty was to identify and characterize the non
Systems Engineering Interfaces: A Model Based Approach
NASA Technical Reports Server (NTRS)
Fosse, Elyse; Delp, Christopher
2013-01-01
Currently: Ops Rev developed and maintains a framework that includes interface-specific language, patterns, and Viewpoints. Ops Rev implements the framework to design MOS 2.0 and its 5 Mission Services. Implementation de-couples interfaces and instances of interaction. Future: A Mission MOSE implements the approach and uses the model-based artifacts for reviews. The framework extends further into the ground data layers and provides a unified methodology.
Automatic sensor placement for model-based robot vision.
Chen, S Y; Li, Y F
2004-02-01
This paper presents a method for automatic sensor placement for model-based robot vision. In such a vision system, the sensor often needs to be moved from one pose to another around the object to observe all features of interest. This allows multiple three-dimensional (3-D) images to be taken from different vantage viewpoints. The task involves determination of the optimal sensor placements and a shortest path through these viewpoints. During the sensor planning, object features are resampled as individual points attached with surface normals. The optimal sensor placement graph is achieved by a genetic algorithm in which a min-max criterion is used for the evaluation. A shortest path is determined by Christofides algorithm. A Viewpoint Planner is developed to generate the sensor placement plan. It includes many functions, such as 3-D animation of the object geometry, sensor specification, initialization of the viewpoint number and their distribution, viewpoint evolution, shortest path computation, scene simulation of a specific viewpoint, parameter amendment. Experiments are also carried out on a real robot vision system to demonstrate the effectiveness of the proposed method. PMID:15369081
Symbolic Processing Combined with Model-Based Reasoning
NASA Technical Reports Server (NTRS)
James, Mark
2009-01-01
A computer program for the detection of present and prediction of future discrete states of a complex, real-time engineering system utilizes a combination of symbolic processing and numerical model-based reasoning. One of the biggest weaknesses of a purely symbolic approach is that it enables prediction of only future discrete states while missing all unmodeled states or leading to incorrect identification of an unmodeled state as a modeled one. A purely numerical approach is based on a combination of statistical methods and mathematical models of the applicable physics and necessitates development of a complete model to the level of fidelity required for prediction. In addition, a purely numerical approach does not afford the ability to qualify its results without some form of symbolic processing. The present software implements numerical algorithms to detect unmodeled events and symbolic algorithms to predict expected behavior, correlate the expected behavior with the unmodeled events, and interpret the results in order to predict future discrete states. The approach embodied in this software differs from that of the BEAM methodology (aspects of which have been discussed in several prior NASA Tech Briefs articles), which provides for prediction of future measurements in the continuous-data domain.
The evolutionary psychology of hunger.
Al-Shawaf, Laith
2016-10-01
An evolutionary psychological perspective suggests that emotions can be understood as coordinating mechanisms whose job is to regulate various psychological and physiological programs in the service of solving an adaptive problem. This paper suggests that it may also be fruitful to approach hunger from this coordinating mechanism perspective. To this end, I put forward an evolutionary task analysis of hunger, generating novel a priori hypotheses about the coordinating effects of hunger on psychological processes such as perception, attention, categorization, and memory. This approach appears empirically fruitful in that it yields a bounty of testable new hypotheses. PMID:27328100
Deep evolutionary origins of neurobiology
Mancuso, Stefano
2009-01-01
It is generally assumed, both in common-sense argumentations and scientific concepts, that brains and neurons represent late evolutionary achievements which are present only in more advanced animals. Here we overview recently published data clearly revealing that our understanding of bacteria, unicellular eukaryotic organisms, plants, brains and neurons, rooted in the Aristotelian philosophy is flawed. Neural aspects of biological systems are obvious already in bacteria and unicellular biological units such as sexual gametes and diverse unicellular eukaryotic organisms. Altogether, processes and activities thought to represent evolutionary ‘recent’ specializations of the nervous system emerge rather to represent ancient and fundamental cell survival processes. PMID:19513267
Evolutionary artificial neural networks for hydrological systems forecasting
NASA Astrophysics Data System (ADS)
Chen, Yung-hsiang; Chang, Fi-John
2009-03-01
The conventional ways of constructing artificial neural network (ANN) for a problem generally presume a specific architecture and do not automatically discover network modules appropriate for specific training data. Evolutionary algorithms are used to automatically adapt the network architecture and connection weights according to the problem environment without substantial human intervention. To improve on the drawbacks of the conventional optimal process, this study presents a novel evolutionary artificial neural network (EANN) for time series forecasting. The EANN has a hybrid procedure, including the genetic algorithm and the scaled conjugate gradient algorithm, where the feedforward ANN architecture and its connection weights of neurons are simultaneously identified and optimized. We first explored the performance of the proposed EANN for the Mackey-Glass chaotic time series. The performance of the different networks was evaluated. The excellent performance in forecasting of the chaotic series shows that the proposed algorithm concurrently possesses efficiency, effectiveness, and robustness. We further explored the applicability and reliability of the EANN in a real hydrological time series. Again, the results indicate the EANN can effectively and efficiently construct a viable forecast module for the 10-day reservoir inflow, and its accuracy is superior to that of the AR and ARMAX models.
A Performance Analysis of Evolutionary Pattern Search with Generalized Mutation Steps
Hart, W.; Hunter, K.
1999-02-10
Evolutionary pattern search algorithms (EPSAs) are a class of evolutionary algorithms (EAs) that have convergence guarantees on a broad class of nonconvex continuous problems. In previous work we have analyzed the empirical performance of EPSAs. This paper revisits that analysis and extends it to a more general model of mutation. We experimentally evaluate how the choice of the set of mutation offsets affects optimization performance for EPSAs. Additionally, we compare EPSAs to self-adaptive EAs with respect to robustness and rate of optimization. All experiments employ a suite of test functions representing a range of modality and number of multiple minima.
An evolutionary approach toward dynamic self-generated fuzzy inference systems.
Zhou, Yi; Er, Meng Joo
2008-08-01
An evolutionary approach toward automatic generation of fuzzy inference systems (FISs), termed evolutionary dynamic self-generated fuzzy inference systems (EDSGFISs), is proposed in this paper. The structure and parameters of an FIS are generated through reinforcement learning, whereas an action set for training the consequents of the FIS is evolved via genetic algorithms (GAs). The proposed EDSGFIS algorithm can automatically create, delete, and adjust fuzzy rules according to the performance of the entire system, as well as evaluation of individual fuzzy rules. Simulation studies on a wall-following task by a mobile robot show that the proposed EDSGFIS approach is superior to other related methods. PMID:18632385
Comparison of the Asynchronous Differential Evolution and JADE Minimization Algorithms
NASA Astrophysics Data System (ADS)
Zhabitsky, Mikhail
2016-02-01
Differential Evolution (DE) is an efficient evolutionary algorithm to solve global optimization problems. In this work we compare performance of the recently proposed Asynchronous Differential Evolution with Adaptive Correlation Matrix (ADEACM) to the widely used JADE algorithm, a DE variant with adaptive control parameters.
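For readers unfamiliar with the baseline being compared, classic DE/rand/1/bin (the scheme JADE and ADEACM build on) can be sketched as below. This is a minimal textbook sketch under assumed fixed control parameters F and CR, not the adaptive variants discussed in the record.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin sketch: scaled-difference mutation,
    binomial crossover, greedy one-to-one replacement."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # three distinct random individuals, none equal to the target i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)    # guarantees at least one mutated gene
            trial = [
                min(max(pop[a][j] + F * (pop[b][j] - pop[c][j]),
                        bounds[j][0]), bounds[j][1])
                if (rng.random() < CR or j == jrand) else pop[i][j]
                for j in range(dim)
            ]
            ft = f(trial)
            if ft <= fit[i]:              # greedy selection against the target
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# usage: minimize the 2-D sphere function on [-5, 5]^2
x, fx = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 2)
```

Adaptive variants such as JADE replace the fixed F and CR with values sampled per individual and updated from successful trials; the surrounding loop is unchanged.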
A new evolutionary system for evolving artificial neural networks.
Yao, X; Liu, Y
1997-01-01
This paper presents a new evolutionary system, i.e., EPNet, for evolving artificial neural networks (ANNs). The evolutionary algorithm used in EPNet is based on Fogel's evolutionary programming (EP). Unlike most previous studies on evolving ANNs, this paper puts its emphasis on evolving ANNs' behaviors. Five mutation operators proposed in EPNet reflect such an emphasis on evolving behaviors. Close behavioral links between parents and their offspring are maintained by various mutations, such as partial training and node splitting. EPNet evolves ANNs' architectures and connection weights (including biases) simultaneously in order to reduce the noise in fitness evaluation. The parsimony of evolved ANNs is encouraged by preferring node/connection deletion to addition. EPNet has been tested on a number of benchmark problems in machine learning and ANNs, such as the parity problem, the medical diagnosis problems, the Australian credit card assessment problem, and the Mackey-Glass time series prediction problem. The experimental results show that EPNet can produce very compact ANNs with good generalization ability in comparison with other algorithms. PMID:18255671
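The "prefer deletion to addition" bias can be sketched as an ordered mutation loop: operators are tried from most to least parsimonious, and the first candidate that improves fitness is kept. This is a toy sketch of the selection logic only; the operator list and the representation below are assumptions, not EPNet's actual five operators.

```python
def epnet_style_mutation(net, evaluate, operators):
    """Try mutation operators in a fixed order and keep the first candidate
    that strictly lowers the evaluated error. Ordering deletion operators
    before addition operators realizes a parsimony bias: smaller networks
    win whenever they perform at least as well."""
    base = evaluate(net)
    for op in operators:
        cand = op(net)
        if evaluate(cand) < base:
            return cand
    return net
```

With a deletion operator listed first, a network only grows when no pruning (or retraining) step can improve it, which is what keeps the evolved networks compact.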
Evolutionary Perspective in Child Growth
Hochberg, Ze’ev
2011-01-01
Hereditary, environmental, and stochastic factors determine a child’s growth in his unique environment, but their relative contribution to the phenotypic outcome and the extent of stochastic programming that is required to alter human phenotypes is not known because few data are available. This is an attempt to use evolutionary life-history theory in understanding child growth in a broad evolutionary perspective, using the data and theory of evolutionary predictive adaptive growth-related strategies. Transitions from one life-history phase to the next have inherent adaptive plasticity in their timing. Humans evolved to withstand energy crises by decreasing their body size, and evolutionary short-term adaptations to energy crises utilize a plasticity that modifies the timing of transition from infancy into childhood, culminating in short stature in times of energy crisis. Transition to juvenility is part of a strategy of conversion from a period of total dependence on the family and tribe for provision and security to self-supply, and a degree of adaptive plasticity is provided and determines body composition. Transition to adolescence entails plasticity in adapting to energy resources, other environmental cues, and the social needs of the maturing adolescent to determine life-span and the period of fecundity and fertility. Fundamental questions are raised by a life-history approach to the unique growth pattern of each child in his given genetic background and current environment. PMID:23908815
Evolutionary Psychology and Intelligence Research
ERIC Educational Resources Information Center
Kanazawa, Satoshi
2010-01-01
This article seeks to unify two subfields of psychology that have hitherto stood separately: evolutionary psychology and intelligence research/differential psychology. I suggest that general intelligence may simultaneously be an evolved adaptation and an individual-difference variable. Tooby and Cosmides's (1990a) notion of random quantitative…
Cryptic eco-evolutionary dynamics.
Kinnison, Michael T; Hairston, Nelson G; Hendry, Andrew P
2015-12-01
Natural systems harbor complex interactions that are fundamental parts of ecology and evolution. These interactions challenge our inclinations and training to seek the simplest explanations of patterns in nature. Not least is the likelihood that some complex processes might be missed when their patterns look similar to predictions for simpler mechanisms. Along these lines, theory and empirical evidence increasingly suggest that environmental, ecological, phenotypic, and genetic processes can be tightly intertwined, resulting in complex and sometimes surprising eco-evolutionary dynamics. The goal of this review is to temper inclinations to unquestioningly seek the simplest explanations in ecology and evolution, by recognizing that some eco-evolutionary outcomes may appear very similar to purely ecological, purely evolutionary, or even null expectations, and thus be cryptic. We provide theoretical and empirical evidence for observational biases and mechanisms that might operate among the various links in eco-evolutionary feedbacks to produce cryptic patterns. Recognition that cryptic dynamics can be associated with outcomes like stability, resilience, recovery, or coexistence in a dynamically changing world provides added impetus for finding ways to study them. PMID:26619300