Science.gov

Sample records for evolutionary computing methods

  1. Evolutionary Computing Methods for Spectral Retrieval

    NASA Technical Reports Server (NTRS)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seungwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Giovanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
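
    To make the retrieval loop concrete, here is a minimal sketch in Python: a genetic algorithm adjusts model parameters to minimize the dissimilarity between an observed and a synthetic spectrum. The two-band Gaussian forward model, parameter names, and GA settings below are illustrative stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(1.0, 5.0, 200)          # toy wavelength grid (microns)

def synthetic_spectrum(params):
    """Hypothetical forward model: two gas absorption depths plus a continuum slope."""
    depth1, depth2, slope = params
    continuum = 1.0 + slope * (wavelengths - 3.0)
    band1 = depth1 * np.exp(-((wavelengths - 2.0) / 0.10) ** 2)
    band2 = depth2 * np.exp(-((wavelengths - 4.3) / 0.15) ** 2)
    return continuum - band1 - band2

observed = synthetic_spectrum([0.3, 0.5, 0.02]) + rng.normal(0, 0.005, wavelengths.size)

def fitness(params):
    """Dissimilarity between observed and synthetic spectra (lower is better)."""
    return np.sum((observed - synthetic_spectrum(params)) ** 2)

# Plain generational GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform([0, 0, -0.1], [1, 1, 0.1], size=(60, 3))
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    children = []
    for _ in range(len(pop)):
        i, j = rng.integers(0, len(pop), 2)        # tournament of two
        a = pop[i] if scores[i] < scores[j] else pop[j]
        i, j = rng.integers(0, len(pop), 2)
        b = pop[i] if scores[i] < scores[j] else pop[j]
        w = rng.uniform(size=3)                    # blend crossover
        child = w * a + (1 - w) * b
        child += rng.normal(0, 0.01, 3)            # Gaussian mutation
        children.append(child)
    pop = np.array(children)

best = min(pop, key=fitness)
print("retrieved parameters:", best)
```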

  2. Evolutionary Computing

    SciTech Connect

    Patton, Robert M; Cui, Xiaohui; Jiao, Yu; Potok, Thomas E

    2008-01-01

    The rate at which information overwhelms humans significantly exceeds the rate at which humans have learned to process, analyze, and leverage it. To overcome this challenge, new methods of computing must be formulated, and scientists and engineers have looked to nature for inspiration in developing these new methods. Consequently, evolutionary computing has emerged as a new paradigm for computing, and has rapidly demonstrated its ability to solve real-world problems where traditional techniques have failed. This field of work has now become quite broad and encompasses areas ranging from artificial life to neural networks. This chapter focuses specifically on two sub-areas of nature-inspired computing: Evolutionary Algorithms and Swarm Intelligence.

  3. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
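
    The abstract names differential evolution as one of the enabling algorithms. Below is a minimal sketch of the classic DE/rand/1/bin scheme, with a Rosenbrock-style cost standing in for a computational engineering model; the population size, F, and CR values are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(x):
    """Stand-in for an engineering simulator: Rosenbrock-style cost surface."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

dim, NP, F, CR = 5, 40, 0.7, 0.9
pop = rng.uniform(-2, 2, size=(NP, dim))
scores = np.array([cost(x) for x in pop])

for gen in range(300):
    for i in range(NP):
        # DE/rand/1/bin: mutate with a scaled difference of two random vectors...
        a, b, c = pop[rng.choice([k for k in range(NP) if k != i], 3, replace=False)]
        mutant = a + F * (b - c)
        # ...then binomial crossover with the current individual.
        cross = rng.uniform(size=dim) < CR
        cross[rng.integers(dim)] = True
        trial = np.where(cross, mutant, pop[i])
        t_cost = cost(trial)
        if t_cost < scores[i]:                     # greedy replacement
            pop[i], scores[i] = trial, t_cost

print("best cost:", scores.min())
```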

  4. Evolutionary computational methods to predict oral bioavailability QSPRs.

    PubMed

    Bains, William; Gilbert, Richard; Sviridenko, Lilya; Gascon, Jose-Miguel; Scoffin, Robert; Birchall, Kris; Harvey, Inman; Caldwell, John

    2002-01-01

    This review discusses evolutionary and adaptive methods for predicting oral bioavailability (OB) from chemical structure. Genetic Programming (GP), a specific form of evolutionary computing, is compared with some other advanced computational methods for OB prediction. The results show that classifying drugs into 'high' and 'low' OB classes on the basis of their structure alone is solvable, and initial models are already producing output that would be useful for pharmaceutical research. The results also suggest that quantitative prediction of OB will be tractable. Critical aspects of the solution will involve the use of techniques that can: (i) handle problems with a very large number of variables (high dimensionality); (ii) cope with 'noisy' data; and (iii) implement binary choices to sub-classify molecules whose behaviors are qualitatively different. Detailed quantitative predictions will emerge from more refined models that are hybrids derived from mechanistic models of the biology of oral absorption and the power of advanced computing techniques to predict the behavior of the components of those models in silico. PMID:11865672

  5. Optimizing neural networks for river flow forecasting - Evolutionary Computation methods versus the Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jarosław J.

    2011-09-01

    Although neural networks have been widely applied to various hydrological problems, including river flow forecasting, for at least 15 years, they have usually been trained by means of gradient-based algorithms. Recently, nature-inspired Evolutionary Computation algorithms have rapidly developed as optimization methods able to cope not only with non-differentiable functions but also with a great number of local minima. Some of the proposed Evolutionary Computation algorithms have been tested for neural network training, but publications which compare their performance with gradient-based training methods are rare and present contradictory conclusions. The main goal of the present study is to verify the applicability of a number of recently developed Evolutionary Computation optimization methods, mostly from the Differential Evolution family, to multi-layer perceptron neural network training for daily rainfall-runoff forecasting. In the present paper eight Evolutionary Computation methods, namely the first version of Differential Evolution (DE), Distributed DE with Explorative-Exploitative Population Families, Self-Adaptive DE, DE with Global and Local Neighbors, Grouping DE, JADE, Comprehensive Learning Particle Swarm Optimization and Efficient Population Utilization Strategy Particle Swarm Optimization, are tested against the Levenberg-Marquardt algorithm - probably the most efficient among gradient-based methods in terms of speed and success rate. The Annapolis River catchment was selected as the study area due to its specific climatic conditions, characterized by significant seasonal changes in runoff, rapid floods, dry summers, severe winters with snowfall, snow melting, frequent freeze and thaw, and the presence of river ice - conditions which make flow forecasting more troublesome. The overall performance of the Levenberg-Marquardt algorithm and the DE with Global and Local Neighbors method for neural networks training turns out to be superior to other
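
    A runnable miniature of the comparison, assuming SciPy is available: the same tiny multi-layer perceptron is fitted once with Levenberg-Marquardt (scipy.optimize.least_squares) and once with SciPy's stock Differential Evolution. The two-input toy dataset stands in for the rainfall-runoff series; the network size and bounds are arbitrary.

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

rng = np.random.default_rng(2)

# Toy stand-in for rainfall-runoff data: 2 inputs -> 1 output.
X = rng.uniform(0, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.02, 200)

H = 6                                    # hidden units
n_w = 2 * H + H + H + 1                  # weights + biases of a 2-H-1 perceptron

def mlp(w, X):
    W1 = w[:2 * H].reshape(2, H)
    b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H]
    b2 = w[4 * H]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def residuals(w):
    return mlp(w, X) - y

# Gradient-based reference: Levenberg-Marquardt on the residual vector.
lm = least_squares(residuals, x0=rng.normal(0, 0.5, n_w), method="lm")

# Evolutionary alternative: SciPy's Differential Evolution on the summed error.
de = differential_evolution(lambda w: np.sum(residuals(w) ** 2),
                            bounds=[(-5, 5)] * n_w, maxiter=100, seed=2,
                            polish=False)

print("LM SSE:", np.sum(lm.fun ** 2))
print("DE SSE:", de.fun)
```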

  6. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly, in order to reliably achieve the desired and preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspects by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as the Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  7. Algorithmic Mechanism Design of Evolutionary Computation.

    PubMed

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly, in order to reliably achieve the desired and preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspects by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as the Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  8. Statistical Methods for Evolutionary Trees

    PubMed Central

    Edwards, A. W. F.

    2009-01-01

    In 1963 and 1964, L. L. Cavalli-Sforza and A. W. F. Edwards introduced novel methods for computing evolutionary trees from genetical data, initially for human populations from blood-group gene frequencies. The most important development was their introduction of statistical methods of estimation applied to stochastic models of evolution. PMID:19797062

  9. Evolutionary computation method for pattern recognition of cis-acting sites.

    PubMed

    Howard, Daniel; Benson, Karl

    2003-11-01

    This paper develops an evolutionary method that learns inductively to recognize the makeup and the position of very short consensus sequences, cis-acting sites, which are a typical feature of promoters in genomes. The method combines Finite State Automata (FSA) and Genetic Programming (GP) to discover candidate promoter sequences in primary sequence data. An experiment measures the success of the method for promoter prediction in the human genome. This class of method can take large base-pair jumps, which may enable it to process very long genomic sequences to discover gene-specific cis-acting sites and genes which are regulated together. PMID:14642656
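
    For orientation only, here is a toy of the recognition side: a hand-written finite-state matcher scanning for a TATA-box-like consensus with at most one mismatch. In the paper the automaton and its large jumps are evolved by GP rather than hand-coded; the consensus and test sequence below are invented.

```python
# Minimal FSA-style scan for a short consensus site with tolerance for
# one mismatch. State while matching = (offset into consensus, mismatches).
CONSENSUS = "TATAAA"   # illustrative TATA-box consensus

def scan(sequence, max_mismatch=1):
    """Slide the matcher along the sequence; report (position, mismatches)."""
    hits = []
    for start in range(len(sequence) - len(CONSENSUS) + 1):
        mismatches = 0
        for offset, base in enumerate(CONSENSUS):   # advance the automaton
            if sequence[start + offset] != base:
                mismatches += 1
                if mismatches > max_mismatch:       # reject state: abort early
                    break
        else:
            hits.append((start, mismatches))
    return hits

seq = "GGCTATAAAAGGCGTACATAAAGC"
print(scan(seq))   # candidate cis-acting sites as (position, mismatches)
```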

  10. EVOLUTIONARY COMPUTING PROJECT

    SciTech Connect

    C. BARRETT; C. REIDYS

    2000-09-01

    This report summarizes LDRD-funded mathematical research related to computer simulation, inspired in part by combinatorial analysis of sequence-to-structure relationships of bio-molecules. Computer simulations calculate the interactions among many individual, local entities, thereby generating global dynamics. The objective of this project was to establish a mathematical basis for a comprehensive theory of computer simulations. This mathematical theory is intended to rigorously underwrite very large complex simulations, including simulation of bio- and socio-technical systems. We believe excellent progress has been made. Abstraction of three main ingredients of simulation forms the mathematical setting, called Sequential Dynamical Systems (SDS): (1) functions realized as data-local procedures represent entity state transformations, (2) a graph that expresses locality of the functions and which represents the dependencies among entities, and (3) an ordering, or schedule, according to which the entities are evaluated, e.g., updated. The research spans algebraic foundations, formal dynamical systems, computer simulation, and theoretical computer science. The theoretical approach is also deeply related to theoretical issues in parallel compilation. Numerous publications were produced, follow-on projects have been identified and are being developed programmatically, and a new area in computational algebra, SDS, was produced.
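
    The three SDS ingredients can be made concrete in a few lines. The example below is invented for illustration: Boolean states on a 4-cycle, a NOR local function, and three update schedules, showing that the schedule changes the global dynamics even when the local functions and the graph are identical.

```python
# Toy Sequential Dynamical System: Boolean states on a 4-cycle,
# local rule = NOR over the closed neighborhood, applied sequentially.
edges = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # dependency graph

def local_nor(state, v):
    """Entity update: depends only on v and its neighbors (data-local)."""
    return int(not any([state[v]] + [state[u] for u in edges[v]]))

def sds_step(state, schedule):
    """Apply the local functions sequentially in the given order."""
    state = list(state)
    for v in schedule:
        state[v] = local_nor(state, v)
    return tuple(state)

start = (1, 0, 0, 1)
# Same local functions and graph, different schedules -> different global
# dynamics; this dependence on the update order is what SDS theory studies.
for schedule in [(0, 1, 2, 3), (3, 2, 1, 0), (2, 0, 3, 1)]:
    print(schedule, "->", sds_step(start, schedule))
```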

  11. Combined bio-inspired/evolutionary computational methods in cross-layer protocol optimization for wireless ad hoc sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2011-06-01

    Published studies have focused on the application of one bio-inspired or evolutionary computational method to the functions of a single protocol layer in a wireless ad hoc sensor network (WSN). For example, swarm intelligence, in the form of ant colony optimization (ACO), has been repeatedly considered for the routing of data/information among nodes, a network-layer function, while genetic algorithms (GAs) have been used to select transmission frequencies and power levels, physical-layer functions. Similarly, artificial immune systems (AISs) as well as trust models of quantized data reputation have been invoked for detection of network intrusions that cause anomalies in data and information; these act on the application and presentation layers. Most recently, a self-organizing scheduling scheme inspired by frog-calling behavior for reliable data transmission in wireless sensor networks, termed anti-phase synchronization, has been applied to realize collision-free transmissions between neighboring nodes, a function of the MAC layer. In a novel departure from previous work, the cross-layer approach to WSN protocol design suggests applying more than one evolutionary computational method to the functions of the appropriate layers to improve the QoS performance of the cross-layer design beyond that of one method applied to a single layer's functions. A baseline WSN protocol design, embedding GAs, anti-phase synchronization, ACO, and a trust model based on quantized data reputation at the physical, MAC, network, and application layers, respectively, is constructed. Simulation results demonstrate that the synergies among the bio-inspired/evolutionary methods of the proposed baseline design improve the overall QoS performance of networks over that of a single computational method.

  12. Computational Physics and Evolutionary Dynamics

    NASA Astrophysics Data System (ADS)

    Fontana, Walter

    2000-03-01

    One aspect of computational physics deals with the characterization of statistical regularities in materials. Computational physics meets biology when these materials can evolve. RNA molecules are a case in point. The folding of RNA sequences into secondary structures (shapes) inspires a simple biophysically grounded genotype-phenotype map that can be explored computationally and in the laboratory. We have identified some statistical regularities of this map and begin to understand their evolutionary consequences. (1) ``typical shapes'': Only a small subset of shapes realized by the RNA folding map is typical, in the sense of containing shapes that are realized significantly more often than others. Consequence: evolutionary histories mostly involve typical shapes, and thus exhibit generic properties. (2) ``neutral networks'': Sequences folding into the same shape are mutationally connected into a network that reaches across sequence space. Consequence: Evolutionary transitions between shapes reflect the fraction of boundary shared by the corresponding neutral networks in sequence space. The notion of a (dis)continuous transition can be made rigorous. (3) ``shape space covering'': Given a random sequence, a modest number of mutations suffices to reach a sequence realizing any typical shape. Consequence: The effective search space for evolutionary optimization is greatly reduced, and adaptive success is less dependent on initial conditions. (4) ``plasticity mirrors variability'': The repertoire of low energy shapes of a sequence is an indicator of how much and in which ways its energetically optimal shape can be altered by a single point mutation. Consequence: (i) Thermodynamic shape stability and mutational robustness are intimately linked. (ii) When natural selection favors the increase of stability, extreme mutational robustness -- to the point of an evolutionary dead-end -- is produced as a side effect. (iii) The hallmark of robust shapes is modularity.
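
    Property (2), neutral networks, suggests a simple computational experiment: random point mutations are accepted only when the minimum-free-energy structure is unchanged. The sketch below assumes the ViennaRNA Python bindings (the RNA module) are installed; the starting sequence and step count are arbitrary.

```python
import random
import RNA   # ViennaRNA Python bindings (assumed installed)

random.seed(0)
BASES = "ACGU"

def neutral_walk(seq, steps=200):
    """Walk a neutral network: accept point mutations only when the
    minimum-free-energy secondary structure (the shape) is unchanged."""
    target = RNA.fold(seq)[0]
    accepted = 0
    for _ in range(steps):
        pos = random.randrange(len(seq))
        base = random.choice(BASES.replace(seq[pos], ""))
        mutant = seq[:pos] + base + seq[pos + 1:]
        if RNA.fold(mutant)[0] == target:   # same shape: a neutral neighbor
            seq, accepted = mutant, accepted + 1
    return seq, accepted, target

end, n_accepted, shape = neutral_walk("GGGGAAAACCCCAUAUGGGGAAAACCCC")
print(f"shape {shape}: {n_accepted} neutral steps accepted; final sequence {end}")
```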

  13. Scalable computing for evolutionary genomics.

    PubMed

    Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert

    2012-01-01

    Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster, and pipeline, in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages of interest to evolutionary biology are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project
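
    The "poor man's parallelization" pattern the chapter describes, running whole programs as independent processes, can be imitated on one machine with Python's standard library. The command lines below are echo placeholders rather than real MAFFT invocations; a job scheduler or a set of BioNode VMs applies the same pattern at cluster scale.

```python
import subprocess
from multiprocessing import Pool

# Whole-program parallelism: each job is an independent external process.
# Placeholder commands; replace with real alignment/phylogeny invocations.
jobs = [
    ["echo", "mafft --auto family1.fasta"],
    ["echo", "mafft --auto family2.fasta"],
    ["echo", "mafft --auto family3.fasta"],
]

def run(cmd):
    """Launch one job and capture its exit status; failures don't kill the batch."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return cmd[-1], result.returncode

if __name__ == "__main__":
    with Pool(processes=3) as pool:           # one worker per concurrent job
        for name, code in pool.imap_unordered(run, jobs):
            print(f"finished: {name} (exit {code})")
```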

  14. Hybrid pattern recognition method using evolutionary computing techniques applied to the exploitation of hyperspectral imagery and medical spectral data

    NASA Astrophysics Data System (ADS)

    Burman, Jerry A.

    1999-12-01

    Hyperspectral image sets are three-dimensional data volumes that are difficult to exploit by manual means because they are composed of multiple bands of image data that are not easily visualized or assessed. GTE Government Systems Corporation has developed a system that utilizes Evolutionary Computing techniques to automatically identify materials in terrain hyperspectral imagery. The system employs sophisticated signature preprocessing and a unique combination of non-parametric search algorithms guided by a model-based cost function to achieve rapid convergence and pattern recognition. The system is scalable and is capable of discriminating and identifying pertinent materials that comprise a specific object of interest in the terrain and estimating the percentage of materials present within a pixel of interest (spectral unmixing). The method has been applied and evaluated against real hyperspectral imagery data from the AVIRIS sensor. In addition, the process has been applied to remotely sensed infrared spectra collected at the microscopic level to assess the amounts of DNA, RNA and protein present in human tissue samples as an aid to the early detection of cancer.
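
    The spectral-unmixing step can be sketched as evolutionary search over abundance fractions under a linear mixing model. Everything below (Gaussian endmembers, noise level, the (1+10) evolution strategy) is an illustrative stand-in for the system's proprietary signature processing and search algorithms.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical endmember library: reflectance of 3 materials over 50 bands.
bands = np.linspace(0, 1, 50)
endmembers = np.stack([np.exp(-((bands - c) / 0.2) ** 2) for c in (0.2, 0.5, 0.8)])

true_fracs = np.array([0.6, 0.1, 0.3])
pixel = true_fracs @ endmembers + rng.normal(0, 0.01, bands.size)

def cost(fracs):
    """Linear mixing residual for abundance fractions on the simplex."""
    return np.sum((pixel - fracs @ endmembers) ** 2)

def project_simplex(v):
    """Clip and renormalize so fractions are nonnegative and sum to one."""
    v = np.clip(v, 1e-9, None)
    return v / v.sum()

# (1+10) evolution strategy over the abundance simplex.
parent = project_simplex(rng.uniform(size=3))
for gen in range(400):
    offspring = [project_simplex(parent + rng.normal(0, 0.05, 3)) for _ in range(10)]
    parent = min(offspring + [parent], key=cost)

print("estimated fractions:", parent.round(3), "true:", true_fracs)
```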

  15. Optimizing a reconfigurable material via evolutionary computation.

    PubMed

    Wilken, Sam; Miskin, Marc Z; Jaeger, Heinrich M

    2015-08-01

    Rapid prototyping by combining evolutionary computation with simulations is becoming a powerful tool for solving complex design problems in materials science. This method of optimization operates in a virtual design space that simulates potential material behaviors and after completion needs to be validated by experiment. However, in principle an evolutionary optimizer can also operate on an actual physical structure or laboratory experiment directly, provided the relevant material parameters can be accessed by the optimizer and information about the material's performance can be updated by direct measurements. Here we provide a proof of concept of such direct, physical optimization by showing how a reconfigurable, highly nonlinear material can be tuned to respond to impact. We report on an entirely computer controlled laboratory experiment in which a 6×6 grid of electromagnets creates a magnetic field pattern that tunes the local rigidity of a concentrated suspension of ferrofluid and iron filings. A genetic algorithm is implemented and tasked to find field patterns that minimize the force transmitted through the suspension. Searching within a space of roughly 10^10 possible configurations, after testing only 1500 independent trials the algorithm identifies an optimized configuration of layered rigid and compliant regions. PMID:26382399
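
    A sketch of the optimization loop, with a stub in place of the physical measurement: the genome is the 36-bit on/off pattern of the 6×6 magnet grid, and roughly 1500 evaluations are spent, matching the scale reported in the abstract. The stub objective and all GA settings are invented; in the real setup the fitness value comes from force sensors, not a formula.

```python
import numpy as np

rng = np.random.default_rng(4)

def measure_transmitted_force(pattern):
    """Stub for the physical measurement; in the experiment this value comes
    from the instrumented impact test, not from a formula."""
    grid = pattern.reshape(6, 6)
    # Toy proxy: rigidity under the impact site transmits force; layered
    # (alternating) rigid/compliant rows reduce it.
    return grid[2:4, 2:4].sum() - 0.2 * np.abs(np.diff(grid, axis=0)).sum()

pop = rng.integers(0, 2, size=(30, 36))            # 36-bit magnet on/off patterns

for gen in range(50):                               # 50 x 30 = 1500 evaluations
    scores = np.array([measure_transmitted_force(p) for p in pop])
    survivors = pop[np.argsort(scores)[:10]]        # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = survivors[rng.integers(0, 10, 2)]
        cut = rng.integers(1, 36)                   # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.uniform(size=36) < 0.03          # bit-flip mutation
        child[flip] ^= 1
        children.append(child)
    pop = np.array(children)

best = pop[np.argmin([measure_transmitted_force(p) for p in pop])]
print("best pattern:\n", best.reshape(6, 6))
```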

  16. Optimizing a reconfigurable material via evolutionary computation

    NASA Astrophysics Data System (ADS)

    Wilken, Sam; Miskin, Marc Z.; Jaeger, Heinrich M.

    2015-08-01

    Rapid prototyping by combining evolutionary computation with simulations is becoming a powerful tool for solving complex design problems in materials science. This method of optimization operates in a virtual design space that simulates potential material behaviors and after completion needs to be validated by experiment. However, in principle an evolutionary optimizer can also operate on an actual physical structure or laboratory experiment directly, provided the relevant material parameters can be accessed by the optimizer and information about the material's performance can be updated by direct measurements. Here we provide a proof of concept of such direct, physical optimization by showing how a reconfigurable, highly nonlinear material can be tuned to respond to impact. We report on an entirely computer controlled laboratory experiment in which a 6×6 grid of electromagnets creates a magnetic field pattern that tunes the local rigidity of a concentrated suspension of ferrofluid and iron filings. A genetic algorithm is implemented and tasked to find field patterns that minimize the force transmitted through the suspension. Searching within a space of roughly 10^10 possible configurations, after testing only 1500 independent trials the algorithm identifies an optimized configuration of layered rigid and compliant regions.

  17. From computers to cultivation: reconceptualizing evolutionary psychology

    PubMed Central

    Barrett, Louise; Pollet, Thomas V.; Stulp, Gert

    2014-01-01

    Does evolutionary theorizing have a role in psychology? This is a more contentious issue than one might imagine, given that, as evolved creatures, the answer must surely be yes. The contested nature of evolutionary psychology lies not in our status as evolved beings, but in the extent to which evolutionary ideas add value to studies of human behavior, and the rigor with which these ideas are tested. This, in turn, is linked to the framework in which particular evolutionary ideas are situated. While the framing of the current research topic places the brain-as-computer metaphor in opposition to evolutionary psychology, the most prominent school of thought in this field (born out of cognitive psychology, and often known as the Santa Barbara school) is entirely wedded to the computational theory of mind as an explanatory framework. Its unique aspect is to argue that the mind consists of a large number of functionally specialized (i.e., domain-specific) computational mechanisms, or modules (the massive modularity hypothesis). Far from offering an alternative to, or an improvement on, the current perspective, we argue that evolutionary psychology is a mainstream computational theory, and that its arguments for domain-specificity often rest on shaky premises. We then go on to suggest that the various forms of e-cognition (i.e., embodied, embedded, enactive) represent a true alternative to standard computational approaches, with an emphasis on “cognitive integration” or the “extended mind hypothesis” in particular. We feel this offers the most promise for human psychology because it incorporates the social and historical processes that are crucial to human “mind-making” within an evolutionarily informed framework. In addition to linking to other research areas in psychology, this approach is more likely to form productive links to other disciplines within the social sciences, not least by encouraging a healthy pluralism in approach. PMID:25161633

  18. From computers to cultivation: reconceptualizing evolutionary psychology.

    PubMed

    Barrett, Louise; Pollet, Thomas V; Stulp, Gert

    2014-01-01

    Does evolutionary theorizing have a role in psychology? This is a more contentious issue than one might imagine, given that, as evolved creatures, the answer must surely be yes. The contested nature of evolutionary psychology lies not in our status as evolved beings, but in the extent to which evolutionary ideas add value to studies of human behavior, and the rigor with which these ideas are tested. This, in turn, is linked to the framework in which particular evolutionary ideas are situated. While the framing of the current research topic places the brain-as-computer metaphor in opposition to evolutionary psychology, the most prominent school of thought in this field (born out of cognitive psychology, and often known as the Santa Barbara school) is entirely wedded to the computational theory of mind as an explanatory framework. Its unique aspect is to argue that the mind consists of a large number of functionally specialized (i.e., domain-specific) computational mechanisms, or modules (the massive modularity hypothesis). Far from offering an alternative to, or an improvement on, the current perspective, we argue that evolutionary psychology is a mainstream computational theory, and that its arguments for domain-specificity often rest on shaky premises. We then go on to suggest that the various forms of e-cognition (i.e., embodied, embedded, enactive) represent a true alternative to standard computational approaches, with an emphasis on "cognitive integration" or the "extended mind hypothesis" in particular. We feel this offers the most promise for human psychology because it incorporates the social and historical processes that are crucial to human "mind-making" within an evolutionarily informed framework. In addition to linking to other research areas in psychology, this approach is more likely to form productive links to other disciplines within the social sciences, not least by encouraging a healthy pluralism in approach. PMID:25161633

  19. Integrated evolutionary computation neural network quality controller for automated systems

    SciTech Connect

    Patro, S.; Kolarik, W.J.

    1999-06-01

    With increasing competition in the global market, more and more stringent quality standards and specifications are being demanded at lower costs. Manufacturing applications of computing power are becoming more common. The application of neural networks to identification and control of dynamic processes has been discussed. The limitations of using neural networks for control purposes have been pointed out, and a different technique, evolutionary computation, has been discussed. The results of identifying and controlling an unstable, dynamic process using evolutionary computation methods have been presented. A framework for an integrated system, using both neural networks and evolutionary computation, has been proposed to identify the process and then control the product quality, in a dynamic, multivariable system, in real time.

  20. Parallel evolutionary computation in bioinformatics applications.

    PubMed

    Pinho, Jorge; Sobral, João Luis; Rocha, Miguel

    2013-05-01

    A large number of optimization problems within the field of Bioinformatics require methods able to handle their inherent complexity (e.g. NP-hard problems) and also demand increased computational efforts. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java-based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of their efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation for several architectures, making use of Aspect-Oriented Programming. The pluggable nature of parallelism-related modules allows the user to easily configure the environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization. PMID:23127284

  1. A Bright Future for Evolutionary Methods in Drug Design.

    PubMed

    Le, Tu C; Winkler, David A

    2015-08-01

    Most medicinal chemists understand that chemical space is extremely large, essentially infinite. Although high-throughput experimental methods allow exploration of drug-like space more rapidly, they are still insufficient to fully exploit the opportunities that such large chemical space offers. Evolutionary methods can synergistically blend automated synthesis and characterization methods with computational design to identify promising regions of chemical space more efficiently. We describe how evolutionary methods are implemented, and provide examples of published drug development research in which these methods have generated molecules with increased efficacy. We anticipate that evolutionary methods will play an important role in future drug discovery. PMID:26059362

  2. A New Multiplex-PCR for Urinary Tract Pathogen Detection Using Primer Design Based on an Evolutionary Computation Method.

    PubMed

    García, Liliana Torcoroma; Cristancho, Laura Maritza; Vera, Erika Patricia; Begambre, Oscar

    2015-10-28

    This work describes a new strategy for the optimal design of Multiplex-PCR primer sequences. The process is based on the Particle Swarm Optimization-Simplex algorithm (Mult-PSOS). Diverging from previous solutions centered on heuristic tools, the Mult-PSOS is self-configured because it does not require the definition of the algorithm's initial search parameters. The successful performance of this method was validated in vitro using Multiplex-PCR assays. For this validation, seven gene sequences of the most prevalent bacteria implicated in urinary tract infections were taken as DNA targets. The in vitro tests confirmed the good performance of the Mult-PSOS, with respect to infectious disease diagnosis, in the rapid and efficient selection of the optimal oligonucleotide sequences for Multiplex-PCRs. The predicted sequences allowed the adequate amplification of all amplicons in a single step (with the correct amount of DNA template and primers), significantly reducing the need for trial-and-error experiments. In addition, owing to its independence from the initial selection of the heuristic constants, the Mult-PSOS can be employed by users who are not experts in computational techniques or in primer design problems. PMID:26059514

  3. Quality-of-service sensitivity to bio-inspired/evolutionary computational methods for intrusion detection in wireless ad hoc multimedia sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2012-06-01

    In the author's previous work, a cross-layer protocol approach to wireless sensor network (WSN) intrusion detection and identification is created with multiple bio-inspired/evolutionary computational methods applied to the functions of the protocol layers, a single method to each layer, to improve the intrusion-detection performance of the protocol over that of one method applied to only a single layer's functions. The WSN cross-layer protocol design embeds GAs, anti-phase synchronization, ACO, and a trust model based on quantized data reputation at the physical, MAC, network, and application layers, respectively. The construct neglects to assess the net effect of the combined bio-inspired methods on the quality-of-service (QoS) performance for "normal" data streams, that is, streams without intrusions. Analytic expressions of throughput, delay, and jitter, coupled with simulation results for WSNs free of intrusion attacks, are the basis for sensitivity analyses of QoS metrics for normal traffic to the bio-inspired methods.

  4. Evolutionary Computation Applied to the Tuning of MEMS Gyroscopes

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Fink, Wolfgang; Ferguson, Michael I.; Peay, Chris; Oks, Boris; Terrile, Richard; Yee, Karl

    2005-01-01

    We propose a tuning method for MEMS gyroscopes based on evolutionary computation to efficiently increase the sensitivity of MEMS gyroscopes through tuning and, furthermore, to find the optimally tuned configuration for this state of increased sensitivity. The tuning method was tested for the second generation JPL/Boeing Post-resonator MEMS gyroscope using the measurement of the frequency response of the MEMS device in open-loop operation.

  5. Using Evolutionary Computation on GPS Position Correction

    PubMed Central

    2014-01-01

    More and more devices are equipped with the Global Positioning System (GPS). However, handheld devices with consumer-grade GPS receivers usually have low positioning accuracy. A position correction algorithm is therefore useful in this case. In this paper, we propose an evolutionary computation based technique to generate a correction function from two GPS receivers and a known reference location. By locating one GPS receiver at the known location and combining its longitude and latitude readings with the exact position information, the proposed technique is capable of evolving a correction function. The proposed technique can be implemented and executed on handheld devices without hardware reconfiguration. Experiments are conducted to demonstrate the performance of the proposed technique. Positioning error could be significantly reduced from the order of 10 m to the order of 1 m. PMID:24578657
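
    A minimal sketch of the idea under stated assumptions: fixes logged at a surveyed point are used to evolve a correction function (here a simple affine map, refined by a (1+7) evolution strategy) that is then applied to fresh fixes. Coordinates, bias, and noise are simulated; the paper's genome, operators, and correction form may differ.

```python
import numpy as np

rng = np.random.default_rng(5)

# Fixes logged by a receiver sitting on a surveyed reference point.
# Coordinates, bias, and noise below are illustrative, not from the paper.
reference = np.array([24.7866, 120.9975])
raw = reference + np.array([1.2e-4, -0.8e-4]) + rng.normal(0, 2e-5, (50, 2))
center = raw.mean(axis=0)

def correct(theta, fix):
    """Candidate correction function: offset plus a small affine term."""
    a = theta.reshape(2, 3)
    return fix + a[:, 0] + a[:, 1:] @ (fix - center)

def mean_sq_error(theta):
    return np.mean([np.sum((correct(theta, f) - reference) ** 2) for f in raw])

# (1+7) evolution strategy, starting from the identity correction (theta = 0).
theta, step = np.zeros(6), 1e-5
for gen in range(300):
    trials = [theta + rng.normal(0, step, 6) for _ in range(7)]
    best = min(trials + [theta], key=mean_sq_error)
    step *= 1.1 if best is not theta else 0.95     # simple step-size adaptation
    theta = best

test = reference + np.array([1.2e-4, -0.8e-4]) + rng.normal(0, 2e-5, 2)
print("error before:", np.linalg.norm(test - reference))
print("error after: ", np.linalg.norm(correct(theta, test) - reference))
```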

  6. Airlines Network Optimization using Evolutionary Computation

    NASA Astrophysics Data System (ADS)

    Inoue, Hiroki; Kato, Yasuhiko; Sakagami, Tomoya

    In recent years, various networks have come to exist in our surroundings. Not only the internet and airline routes can be thought of as networks: protein interactions are also networks. An “economic network design problem” can be discussed by assuming that a vertex is an economic player and that a link represents some connection between economic players. In this paper, the airline network is taken up as an example of an “economic network design problem”, and the airline network in which the profit of the entire airline industry is maximized is identified. The airline network is modeled based on the connections model proposed by Jackson and Wolinsky, and the utility function of the network is defined. In addition, an optimization simulation using evolutionary computation is shown for a domestic airline in Japan.
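
    A toy version of the setup, as a sketch: networks are bit strings over the possible links, welfare is the summed Jackson-Wolinsky connections-model utility (benefit delta^distance minus link cost c per endpoint), and a plain GA searches for the welfare-maximizing network. The values of delta, c, and the six-player size are illustrative.

```python
import numpy as np
from itertools import combinations
from collections import deque

rng = np.random.default_rng(6)
N, delta, c = 6, 0.8, 0.6          # players, decay, link cost (illustrative)
pairs = list(combinations(range(N), 2))

def distances(bits):
    """BFS shortest-path distances in the network encoded by the bit string."""
    adj = [[] for _ in range(N)]
    for bit, (i, j) in zip(bits, pairs):
        if bit:
            adj[i].append(j); adj[j].append(i)
    dist = np.full((N, N), np.inf)
    for s in range(N):
        dist[s, s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[s, v] == np.inf:
                    dist[s, v] = dist[s, u] + 1
                    q.append(v)
    return dist

def welfare(bits):
    """Sum of Jackson-Wolinsky connections-model utilities over all players."""
    d = distances(bits)
    benefit = np.where(np.isfinite(d) & (d > 0), delta ** d, 0).sum()
    return benefit - 2 * c * sum(bits)        # each link costs c to both ends

pop = rng.integers(0, 2, size=(40, len(pairs)))
for gen in range(200):
    fit = np.array([welfare(row) for row in pop])
    elite = pop[np.argsort(fit)[-20:]]
    children = []
    for _ in range(40):
        a, b = elite[rng.integers(0, 20, 2)]
        mask = rng.integers(0, 2, len(pairs)).astype(bool)    # uniform crossover
        child = np.where(mask, a, b)
        child ^= (rng.uniform(size=len(pairs)) < 0.05).astype(child.dtype)
        children.append(child)
    pop = np.array(children)

best = max(pop, key=welfare)
print("links in best network:", [p for bit, p in zip(best, pairs) if bit])
```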

  7. Development of X-TOOLSS: Preliminary Design of Space Systems Using Evolutionary Computation

    NASA Technical Reports Server (NTRS)

    Schnell, Andrew R.; Hull, Patrick V.; Turner, Mike L.; Dozier, Gerry; Alverson, Lauren; Garrett, Aaron; Reneau, Jarred

    2008-01-01

    Evolutionary computational (EC) techniques such as genetic algorithms (GA) have been identified as promising methods to explore the design space of mechanical and electrical systems at the earliest stages of design. In this paper the authors summarize their research in the use of evolutionary computation to develop preliminary designs for various space systems. An evolutionary computational solver developed over the course of the research, X-TOOLSS (Exploration Toolset for the Optimization of Launch and Space Systems) is discussed. With the success of early, low-fidelity example problems, an outline of work involving more computationally complex models is discussed.

  8. Evolutionary Cell Computing: From Protocells to Self-Organized Computing

    NASA Technical Reports Server (NTRS)

    Colombano, Silvano; New, Michael H.; Pohorille, Andrew; Scargle, Jeffrey; Stassinopoulos, Dimitris; Pearson, Mark; Warren, James

    2000-01-01

    On the path from inanimate to animate matter, a key step was the self-organization of molecules into protocells - the earliest ancestors of contemporary cells. Studies of the properties of protocells and the mechanisms by which they maintained themselves and reproduced are an important part of astrobiology. These studies also have the potential to greatly impact research in nanotechnology and computer science. Previous studies of protocells have focussed on self-replication. In these systems, Darwinian evolution occurs through a series of small alterations to functional molecules whose identities are stored. Protocells, however, may have been incapable of such storage. We hypothesize that under such conditions, the replication of functions and their interrelationships, rather than the precise identities of the functional molecules, is sufficient for survival and evolution. This process is called non-genomic evolution. Recent breakthroughs in experimental protein chemistry have opened the gates for experimental tests of non-genomic evolution. On the basis of these achievements, we have developed a stochastic model for examining the evolutionary potential of non-genomic systems. In this model, the formation and destruction (hydrolysis) of bonds joining amino acids in proteins occur through catalyzed, albeit possibly inefficient, pathways. Each protein can act as a substrate for polymerization or hydrolysis, or as a catalyst of these chemical reactions. When a protein is hydrolyzed to form two new proteins, or two proteins are joined into a single protein, the catalytic abilities of the product proteins are related to the catalytic abilities of the reactants. We will demonstrate that the catalytic capabilities of such a system can increase. Its evolutionary potential is dependent upon the competition between the formation of bond-forming and bond-cutting catalysts. The degree to which hydrolysis preferentially affects bonds in less efficient, and therefore less well

  9. An Evolutionary Programming Based Tabu Search Method for Unit Commitment Problem with Cooling-Banking Constraints

    NASA Astrophysics Data System (ADS)

    Christober, C.; Rajan, Asir

    2011-01-01

    This paper presents a new approach to solving the short-term unit commitment problem using an evolutionary programming based tabu search method with cooling and banking constraints. Numerical results are shown comparing the cost solutions and computation time obtained by using the evolutionary programming method with those of other conventional methods like dynamic programming and Lagrangian relaxation.

  10. Bi-directional evolutionary level set method for topology optimization

    NASA Astrophysics Data System (ADS)

    Zhu, Benliang; Zhang, Xianmin; Fatikow, Sergej; Wang, Nianfeng

    2015-03-01

    A bi-directional evolutionary level set method for solving topology optimization problems is presented in this article. The proposed method has three main advantages over the standard level set method. First, new holes can be automatically generated in the design domain during the optimization process. Second, the dependency of the obtained optimized configurations upon the initial configurations is eliminated. Optimized configurations can be obtained even being started from a minimum possible initial guess. Third, the method can be easily implemented and is computationally more efficient. The validity of the proposed method is tested on the mean compliance minimization problem and the compliant mechanisms topology optimization problem.

  11. Creative Conceptual Design Based on Evolutionary DNA Computing Technique

    NASA Astrophysics Data System (ADS)

    Liu, Xiyu; Liu, Hong; Zheng, Yangyang

    Creative conceptual design is an important area in computer-aided innovation. Typical design methodology includes exploration and optimization by evolutionary techniques such as EC and swarm intelligence. Although there are many proposed algorithms and applications for creative design by these techniques, the computing models are implemented mostly on the traditional von Neumann architecture. On the other hand, the possibility of using DNA as a computing technique has aroused wide interest in recent years, owing to its huge built-in parallelism and its ability to solve NP-complete problems. This new computing technique is performed by biological operations on DNA molecules rather than chips. The purpose of this paper is to propose a simulated evolutionary DNA computing model and to integrate DNA computing with creative conceptual design. The proposed technique can potentially be applied to large-scale, highly parallel design problems.

  12. Topological evolutionary computing in the optimal design of 2D and 3D structures

    NASA Astrophysics Data System (ADS)

    Burczynski, T.; Poteralski, A.; Szczepanik, M.

    2007-10-01

    An application of evolutionary algorithms and the finite-element method to the topology optimization of 2D structures (plane stress, bending plates, and shells) and 3D structures is described. The basis of the topological evolutionary optimization is the direct control of the material density distribution (or thickness for 2D structures) by the evolutionary algorithm. The structures are optimized for stress, mass, and compliance criteria. The numerical examples demonstrate that this method is an effective technique for solving problems in computer-aided optimal design.

  13. Protein 3D Structure Computed from Evolutionary Sequence Variation

    PubMed Central

    Sheridan, Robert; Hopf, Thomas A.; Pagnani, Andrea; Zecchina, Riccardo; Sander, Chris

    2011-01-01

    The evolutionary trajectory of a protein through sequence space is constrained by its function. Collections of sequence homologs record the outcomes of millions of evolutionary experiments in which the protein evolves according to these constraints. Deciphering the evolutionary record held in these sequences and exploiting it for predictive and engineering purposes presents a formidable challenge. The potential benefit of solving this challenge is amplified by the advent of inexpensive high-throughput genomic sequencing. In this paper we ask whether we can infer evolutionary constraints from a set of sequence homologs of a protein. The challenge is to distinguish true co-evolution couplings from the noisy set of observed correlations. We address this challenge using a maximum entropy model of the protein sequence, constrained by the statistics of the multiple sequence alignment, to infer residue pair couplings. Surprisingly, we find that the strength of these inferred couplings is an excellent predictor of residue-residue proximity in folded structures. Indeed, the top-scoring residue couplings are sufficiently accurate and well-distributed to define the 3D protein fold with remarkable accuracy. We quantify this observation by computing, from sequence alone, all-atom 3D structures of fifteen test proteins from different fold classes, ranging in size from 50 to 260 residues, including a G-protein coupled receptor. These blinded inferences are de novo, i.e., they do not use homology modeling or sequence-similar fragments from known structures. The co-evolution signals provide sufficient information to determine accurate 3D protein structure to 2.7–4.8 Å Cα-RMSD error relative to the observed structure, over at least two-thirds of the protein (method called EVfold, details at http://EVfold.org). This discovery provides insight into essential interactions constraining protein evolution and will facilitate a comprehensive survey of the universe of protein
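
    To make the idea of co-variation scoring concrete, the sketch below computes plain mutual information between alignment columns of a six-sequence toy MSA. Note this is only a proxy: EVfold's contribution is precisely the global maximum-entropy model that separates direct couplings from the transitive correlations that MI conflates.

```python
import numpy as np
from collections import Counter

# Toy multiple sequence alignment; real use reads thousands of homologs.
msa = [
    "ACDEFG",
    "ACDDFG",
    "TCKEFG",
    "TCKDFG",
    "ACDEFG",
    "TCKDFG",
]
L = len(msa[0])

def column(i):
    return [seq[i] for seq in msa]

def mutual_information(i, j):
    """Co-variation score between alignment columns i and j."""
    n = len(msa)
    pi, pj = Counter(column(i)), Counter(column(j))
    pij = Counter(zip(column(i), column(j)))
    return sum((cnt / n) * np.log((cnt / n) / (pi[a] / n * pj[b] / n))
               for (a, b), cnt in pij.items())

scores = [(mutual_information(i, j), i, j) for i in range(L) for j in range(i + 1, L)]
for mi, i, j in sorted(scores, reverse=True)[:3]:
    print(f"columns {i}-{j}: MI = {mi:.3f}  (candidate contact)")
```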

  14. Studying Collective Human Decision Making and Creativity with Evolutionary Computation.

    PubMed

    Sayama, Hiroki; Dionne, Shelley D

    2015-01-01

    We report a summary of our interdisciplinary research project "Evolutionary Perspective on Collective Decision Making" that was conducted through close collaboration between computational, organizational, and social scientists at Binghamton University. We redefined collective human decision making and creativity as evolution of ecologies of ideas, where populations of ideas evolve via continual applications of evolutionary operators such as reproduction, recombination, mutation, selection, and migration of ideas, each conducted by participating humans. Based on this evolutionary perspective, we generated hypotheses about collective human decision making, using agent-based computer simulations. The hypotheses were then tested through several experiments with real human subjects. Throughout this project, we utilized evolutionary computation (EC) in non-traditional ways: (1) as a theoretical framework for reinterpreting the dynamics of idea generation and selection, (2) as a computational simulation model of collective human decision-making processes, and (3) as a research tool for collecting high-resolution experimental data on actual collaborative design and decision making from human subjects. We believe our work demonstrates untapped potential of EC for interdisciplinary research involving human and social dynamics. PMID:26280078

  15. Computationally mapping sequence space to understand evolutionary protein engineering.

    PubMed

    Armstrong, Kathryn A; Tidor, Bruce

    2008-01-01

    Evolutionary protein engineering has been dramatically successful, producing a wide variety of new proteins with altered stability, binding affinity, and enzymatic activity. However, the success of such procedures is often unreliable, and the impact of the choice of protein, engineering goal, and evolutionary procedure is not well understood. We have created a framework for understanding aspects of the protein engineering process by computationally mapping regions of feasible sequence space for three small proteins using structure-based design protocols. We then tested the ability of different evolutionary search strategies to explore these sequence spaces. The results point to a non-intuitive relationship between the error-prone PCR mutation rate and the number of rounds of replication. The evolutionary relationships among feasible sequences reveal hub-like sequences that serve as particularly fruitful starting sequences for evolutionary search. Moreover, genetic recombination procedures were examined, and tradeoffs relating sequence diversity and search efficiency were identified. This framework allows us to consider the impact of protein structure on the allowed sequence space and therefore on the challenges that each protein presents to error-prone PCR and genetic recombination procedures. PMID:18020358

  16. An evolutionary method to achieve stable superpixel tracking

    NASA Astrophysics Data System (ADS)

    Xi, Wenxing; Tang, Xinyi

    2014-11-01

    Object tracking is a difficult and actively studied problem in computer vision. We deal with large objects, which are challenging in many respects, such as lighting, size, posture, disturbance, and occlusion. The superpixel tracking method has been proposed to deal with this problem. Unlike many other approaches, it is robust in all the mentioned respects to some extent. It is very flexible in dealing with non-rigid objects, much as mean-shift tracking of a color histogram is, but it can go further, since it takes advantage of segmented local color histograms. Here we first introduce the adaptive superpixel tracking algorithm, which comprises two parts, modeling and confidence mapping, using the color features of superpixels. We model them by clustering, as the "bag of words" method does, and build the cluster confidence. The model is adaptive since it learns only from the latest tracked frames, which can accumulate errors and lead to drift easily. So we propose a refined model, which applies the ideas of the Kalman filter to this problem by integrating the current model and the new model into an evolutionary one, to better adapt to object variation and disturbance in subsequent frames and thus achieve more stable tracking. The evolutionary model is obtained by re-clustering the cluster centers of the two models to produce new cluster centers and new cluster confidences. We allocate different weights to them: if the current model gets more weight, the evolutionary model will be more stable; otherwise it will be more adaptive. Finally we give some experimental comparisons between the evolutionary model and the adaptive one. In most cases, when the scene of the object is stable, namely when there is no large sudden light or color change, the evolutionary model outperforms the adaptive one. The reason is that the adaptive one easily learns from other objects. But when the scene suffers a large sudden change, the evolutionary model can't quickly adapt

  17. Gene expression: The missing link in evolutionary computation

    SciTech Connect

    Kargupta, H.

    1997-09-01

    This paper points out that the traditional perspective of evolutionary computation may not provide the complete picture of evolutionary search. This paper focuses on gene expression -- transformations of representation (DNA->RNA->Protein) from the perspective of relation construction. It decomposes the complex process of gene expression into several steps, namely (1) expression control of DNA base pairs, (2) alphabet transformations during transcription and translation, and (3) folding of the proteins from sequence representation to Euclidean space. Each of these steps is investigated on grounds of relation construction and search efficiency. At the end these pieces of the puzzle are put together to develop a possibly crude, cartoon-like computational description of gene expression.

  18. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    PubMed Central

    2014-01-01

    Background: To improve the tedious task of reconstructing gene networks through testing experimentally the possible interactions between genes, it has become a trend to adopt the automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by the evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt the mechanism of cloud computing as a promising solution: most popular is the method of the MapReduce programming model, a fault-tolerant framework to implement parallel algorithms for inferring large gene networks. Results: This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and that the computation time can be largely reduced. Conclusions: Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel
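
    The map/reduce decomposition of fitness evaluation can be imitated locally with a process pool, as sketched below. This is a stand-in for the Hadoop MapReduce deployment, with a dummy objective in place of the gene-network simulator.

```python
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(7)

def fitness(params):
    """Map step: score one candidate parameter set of the network model.
    Dummy objective; the paper's version simulates the network dynamics."""
    return float(np.sum((params - 0.25) ** 2))

def evaluate_population(pop):
    """Mimic MapReduce locally: map fitness over candidates in worker
    processes, then reduce to the best (candidate, score) pair."""
    with Pool() as pool:
        scores = pool.map(fitness, pop)             # parallel map
    best = min(range(len(pop)), key=scores.__getitem__)
    return pop[best], scores[best]                  # reduce

if __name__ == "__main__":
    population = [rng.uniform(0, 1, 10) for _ in range(64)]
    champ, score = evaluate_population(population)
    print("best score this generation:", score)
```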

  19. Predicting land cover using GIS, Bayesian and evolutionary algorithm methods.

    PubMed

    Aitkenhead, M J; Aalders, I H

    2009-01-01

    Modelling land cover change from existing land cover maps is a vital requirement for anyone wishing to understand how the landscape may change in the future. In order to test any land cover change model, existing data must be used. However, often it is not known which data should be applied to the problem, or whether relationships exist within and between complex datasets. Here we have developed and tested a model that applied evolutionary processes to Bayesian networks. The model was developed and tested on a dataset containing land cover information and environmental data, in order to show that decisions about which datasets should be used could be made automatically. Bayesian networks are amenable to evolutionary methods as they can be easily described using a binary string to which crossover and mutation operations can be applied. The method, developed to allow comparison with standard Bayesian network development software, was proved capable of carrying out a rapid and effective search of the space of possible networks in order to find an optimal or near-optimal solution for the selection of datasets that have causal links with one another. Comparison of land cover mapping in the North-East of Scotland was made with a commercial Bayesian software package, with the evolutionary method being shown to provide greater flexibility in its ability to adapt to incorporate/utilise available evidence/knowledge and develop effective and accurate network structures, at the cost of requiring additional computer programming skills. The dataset used to develop the models included GIS-based data taken from the Land Cover for Scotland 1988 (LCS88), Land Capability for Forestry (LCF), Land Capability for Agriculture (LCA), the soil map of Scotland and additional climatic variables. PMID:18079039
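
    The abstract's key point, that a Bayesian network is easy to evolve because its structure reduces to a binary string under crossover and mutation, can be sketched directly. Below, candidate edges are restricted to a fixed variable ordering to guarantee acyclicity, and a BIC-style penalized likelihood scores each string; both simplifications are ours, not the paper's.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(8)

# Synthetic binary dataset over 4 variables with a known chain v0 -> v1 -> v2.
n = 500
v0 = rng.integers(0, 2, n)
v1 = (v0 ^ (rng.uniform(size=n) < 0.1)).astype(int)
v2 = (v1 ^ (rng.uniform(size=n) < 0.1)).astype(int)
v3 = rng.integers(0, 2, n)                       # independent variable
data = np.column_stack([v0, v1, v2, v3])

# Candidate edges i -> j restricted to i < j: the fixed variable ordering
# keeps every bit string acyclic, so crossover and mutation stay closed.
pairs = list(combinations(range(4), 2))

def log_score(bits):
    """BIC-style penalized log-likelihood of the encoded network."""
    total = 0.0
    for j in range(4):
        parents = [i for bit, (i, k) in zip(bits, pairs) if bit and k == j]
        keys = data[:, parents] @ (2 ** np.arange(len(parents))) if parents else np.zeros(n, int)
        for key in np.unique(keys):              # each parent configuration
            rows = data[keys == key, j]
            for v in (0, 1):
                cnt = np.sum(rows == v)
                if cnt:
                    total += cnt * np.log(cnt / rows.size)
        total -= 0.5 * np.log(n) * 2 ** len(parents)   # complexity penalty
    return total

pop = rng.integers(0, 2, size=(30, len(pairs)))
for gen in range(60):
    fit = np.array([log_score(bits) for bits in pop])
    elite = pop[np.argsort(fit)[-10:]]           # truncation selection
    children = []
    for _ in range(30):
        x, y = elite[rng.integers(0, 10, 2)]
        cut = rng.integers(1, len(pairs))        # one-point crossover
        child = np.concatenate([x[:cut], y[cut:]])
        child ^= (rng.uniform(size=len(pairs)) < 0.1).astype(child.dtype)
        children.append(child)
    pop = np.array(children)

best = max(pop, key=log_score)
print("recovered edges:", [p for bit, p in zip(best, pairs) if bit])
```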

  20. Generative Representations for Computer-Automated Evolutionary Design

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2006-01-01

    With the increasing computational power of computers, software design systems are progressing from being tools for architects and designers to express their ideas to tools capable of creating designs under human guidance. One of the main limitations for these computer-automated design systems is the representation with which they encode designs. If the representation cannot encode a certain design, then the design system cannot produce it. To be able to produce new types of designs, and not just optimize pre-defined parameterizations, evolutionary design systems must use generative representations. Generative representations are assembly procedures, or algorithms, for constructing a design thereby allowing for truly novel design solutions to be encoded. In addition, by enabling modularity, regularity and hierarchy, the level of sophistication that can be evolved is increased. We demonstrate the advantages of generative representations on two different design domains: the evolution of spacecraft antennas and the evolution of 3D objects.
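    A toy illustration, under loose assumptions, of what distinguishes a generative representation from a direct parameterization: the genome below is a tiny assembly procedure whose `repeat` operator yields regularity without encoding each segment separately. The command vocabulary (`forward`, `turn`, `repeat`) is invented for this sketch.

    ```python
    """Sketch of a generative representation: the genome is a small program
    that is executed ('developed') to construct the design."""
    import math

    genome = [("repeat", 6, [("forward", 1.0), ("turn", 60.0)])]  # a hexagon

    def develop(program, state=None, points=None):
        """Execute the assembly procedure and return the constructed outline."""
        if state is None:
            state, points = [0.0, 0.0, 0.0], [(0.0, 0.0)]   # x, y, heading
        for op in program:
            if op[0] == "forward":
                state[0] += op[1] * math.cos(math.radians(state[2]))
                state[1] += op[1] * math.sin(math.radians(state[2]))
                points.append((state[0], state[1]))
            elif op[0] == "turn":
                state[2] += op[1]
            elif op[0] == "repeat":                 # repetition => regularity
                for _ in range(op[1]):
                    develop(op[2], state, points)
        return points

    print(develop(genome))   # six segments tracing a regular hexagon
    ```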

  1. Tuning of MEMS Devices using Evolutionary Computation and Open-loop Frequency Response

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Fink, Wolfgang; Ferguson, Michael I.; Peay, Chris; Oks, Boris; Terrile, Richard; Yee, Karl

    2005-01-01

    We propose a tuning method for MEMS gyroscopes based on evolutionary computation that has the capacity to efficiently increase the sensitivity of MEMS gyroscopes through tuning and, furthermore, to find the optimally tuned configuration for this state of increased sensitivity. The tuning method was tested for the second generation JPL/Boeing Post-resonator MEMS gyroscope using the measurement of the frequency response of the MEMS device in open-loop operation.

  2. Supporting Air-Conditioning Controller Design Using Evolutionary Computation

    NASA Astrophysics Data System (ADS)

    Kojima, Kazuyuki; Watanuki, Keiichi

    In recent years, as part of the remarkable development of electronic techniques, electronic control has been applied to various systems. Many sensors and actuators have been implemented in those systems, and energy efficiency and performance have been greatly improved. However, these systems have become complicated, and much time is required to develop their controllers. In this paper, a method for the automatic design of controllers for such systems is described. In order to automate the design of an electronic controller, an evolutionary hardware approach is applied. First, the framework for applying the genetic algorithm to the automation of controller design is described; in particular, the coding of the chromosome is shown in detail. Then, the construction of a fitness function is described, using an air conditioner as an example, and the controller of the air conditioner is developed automatically using our proposed framework. Finally, an evolutionary simulation is performed to confirm the framework.

  3. Evolutionary Computation for the Identification of Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2009-01-01

    Over the past several years the Center for Evolutionary Computation and Automated Design at the Jet Propulsion Laboratory has developed a technique based on Evolutionary Computational Methods (ECM) that allows for the automated optimization of complex computationally modeled systems. An important application of this technique is for the identification of emergent behaviors in autonomous systems. Mobility platforms such as rovers or airborne vehicles are now being designed with autonomous mission controllers that can find trajectories over a solution space that is larger than can reasonably be tested. It is critical to identify control behaviors that are not predicted and can have surprising results (both good and bad). These emergent behaviors need to be identified, characterized and either incorporated into or isolated from the acceptable range of control characteristics. We use cluster analysis of automatically retrieved solutions to identify isolated populations of solutions with divergent behaviors.
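    A minimal sketch of the cluster-analysis step, assuming evolved solutions are summarized as numeric behavior descriptors: a plain k-means pass groups the retrieved solutions, and unusually small, isolated clusters are flagged as candidate emergent behaviors. The descriptors and the size threshold here are synthetic.

    ```python
    """Sketch: cluster solution behaviors and flag small outlying clusters."""
    import random

    def kmeans(points, k, iters=25):
        centers = random.sample(points, k)
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for p in points:
                j = min(range(k),
                        key=lambda c: sum((a - b) ** 2
                                          for a, b in zip(p, centers[c])))
                groups[j].append(p)
            for j, g in enumerate(groups):
                if g:   # recompute each center as the mean of its group
                    centers[j] = tuple(sum(col) / len(g) for col in zip(*g))
        return groups

    # synthetic behavior descriptors: one big nominal cluster plus a few outliers
    solutions = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(95)]
    solutions += [(random.gauss(8, 0.5), random.gauss(8, 0.5)) for _ in range(5)]

    for group in kmeans(solutions, k=3):
        tag = "EMERGENT?" if len(group) < 10 else "nominal"
        print(len(group), "solutions ->", tag)
    ```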

  4. Computational complexity of ecological and evolutionary spatial dynamics

    PubMed Central

    Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A.

    2015-01-01

    There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP). PMID:26644569
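    For context, the one scenario in this area that does have a simple closed form is the well-mixed (non-spatial) Moran process, where a single mutant of relative fitness r in a resident population of size N fixes with probability rho = (1 - 1/r) / (1 - 1/r^N); the paper's point is that spatially structured variants of the question are unlikely to admit such formulas (assuming P is not equal to NP).

    ```python
    """Classical fixation probability of a single mutant in a well-mixed
    Moran process; the spatially structured cases studied in the paper
    have no comparably simple formula."""

    def fixation_probability(r, n):
        if r == 1.0:                 # neutral mutant: 1/N by symmetry
            return 1.0 / n
        return (1.0 - 1.0 / r) / (1.0 - 1.0 / r ** n)

    for r in (0.9, 1.0, 1.1, 2.0):
        print(f"r={r}: rho={fixation_probability(r, 100):.4f}")
    ```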

  5. Computational complexity of ecological and evolutionary spatial dynamics.

    PubMed

    Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A

    2015-12-22

    There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP). PMID:26644569

  6. Evolutionary adaptive eye tracking for low-cost human computer interaction applications

    NASA Astrophysics Data System (ADS)

    Shen, Yan; Shin, Hak Chul; Sung, Won Jun; Khim, Sarang; Kim, Honglak; Rhee, Phill Kyu

    2013-01-01

    We present an evolutionary adaptive eye-tracking framework aimed at low-cost human-computer interaction. The main focus is to guarantee eye-tracking performance without resorting to high-cost devices or strongly controlled situations. The performance optimization of eye tracking is formulated as the dynamic control problem of deciding on an eye-tracking algorithm structure and its associated thresholds/parameters, where the dynamic control space is denoted by genotype and phenotype spaces. The evolutionary algorithm is responsible for exploring the genotype control space, and the reinforcement learning algorithm organizes the evolved genotype into a reactive phenotype. The evolutionary algorithm encodes an eye-tracking scheme as a genetic code based on image variation analysis. Then, the reinforcement learning algorithm defines internal states in a phenotype control space limited by the perceived genetic code and carries out interactive adaptations. The proposed method can achieve optimal performance by balancing the difficulty of running the evolutionary algorithm in real time against the drawback of the huge search space of the reinforcement learning algorithm. Extensive experiments were carried out using webcam image sequences and yielded very encouraging results. The framework can be readily applied to other low-cost vision-based human-computer interactions to address their intrinsic brittleness in unstable operational environments.

  7. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    PubMed Central

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape describes the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply enough accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established by considering this paradigm principle. In feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts potential perceptual knowledge of humans. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that our proposed model and method can enhance IEC search significantly. PMID:25879050

  8. Crowd Computing as a Cooperation Problem: An Evolutionary Approach

    NASA Astrophysics Data System (ADS)

    Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel

    2013-05-01

    Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has mostly been considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work in an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain, not very restrictive, conditions the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point at which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.

  9. Exploring Evolutionary Patterns in Genetic Sequence: A Computer Exercise

    ERIC Educational Resources Information Center

    Shumate, Alice M.; Windsor, Aaron J.

    2010-01-01

    The increase in publications presenting molecular evolutionary analyses and the availability of comparative sequence data through resources such as NCBI's GenBank underscore the necessity of providing undergraduates with hands-on sequence analysis skills in an evolutionary context. This need is particularly acute given that students have been…

  10. Speeding Up Ecological and Evolutionary Computations in R; Essentials of High Performance Computing for Biologists

    PubMed Central

    Visser, Marco D.; McMahon, Sean M.; Merow, Cory; Dixon, Philip M.; Record, Sydne; Jongejans, Eelke

    2015-01-01

    Computation has become a critical component of research in biology. A risk has emerged that computational and programming challenges may limit research scope, depth, and quality. We review various solutions to common computational efficiency problems in ecological and evolutionary research. Our review pulls together material that is currently scattered across many sources and emphasizes those techniques that are especially effective for typical ecological and environmental problems. We demonstrate how straightforward it can be to write efficient code and implement techniques such as profiling or parallel computing. We supply a newly developed R package (aprof) that helps to identify computational bottlenecks in R code and determine whether optimization can be effective. Our review is complemented by a practical set of examples and detailed Supporting Information material (S1–S3 Texts) that demonstrate large improvements in computational speed (ranging from 10.5 times to 14,000 times faster). By improving computational efficiency, biologists can feasibly solve more complex tasks, ask more ambitious questions, and include more sophisticated analyses in their research. PMID:25811842

  11. An Evolutionary Computation Approach to Examine Functional Brain Plasticity.

    PubMed

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A; Hillary, Frank G

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited to the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback of this approach is that much information is lost by averaging heterogeneous voxels, and therefore a functional relationship between an ROI-pair that evolves at a spatial scale much finer than the ROIs remains undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength

  12. An Evolutionary Computation Approach to Examine Functional Brain Plasticity

    PubMed Central

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A.; Hillary, Frank G.

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited to the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback of this approach is that much information is lost by averaging heterogeneous voxels, and therefore a functional relationship between an ROI-pair that evolves at a spatial scale much finer than the ROIs remains undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC-based procedure is able to detect functional plasticity where a traditional averaging-based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength

  13. De-Hazing of Multi-Spectral Images with Evolutionary Computing

    NASA Astrophysics Data System (ADS)

    von Allmen, P.; Lee, S.; Diner, D. J.; Martonchik, J.; Davis, A. B.

    2009-12-01

    We developed an algorithm that allows for removing haze from a digital picture by numerically subtracting the contribution of optical scattering by aerosols. The scene is modeled by defining a reflectance function for each pixel, which describes the angular dependence of light scattering at the surface, and by describing the scattering from aerosols with a set of models of varying complexity. An optimization algorithm that mixes downhill methods with evolutionary computing approaches was used to fit the observed image to the model of the scene. The contribution of the aerosol scattering is then removed to obtain a de-hazed image. We will present results for multispectral images taken by NASA’s Multi-angle Imaging SpectroRadiometer and we will discuss the numerical efficiency of the algorithm implemented on a multi-node quadcore cluster computer.

  14. Proposal of Evolutionary Simplex Method for Global Optimization Problem

    NASA Astrophysics Data System (ADS)

    Shimizu, Yoshiaki

    To support agile and rational decision-making under diversified customer demand, the role of optimization engineering has become increasingly important. From this point of view, this paper proposes a new evolutionary method serving as an optimization technique within the paradigm of optimization engineering. The developed method shows promise for globally solving the varied and complicated problems that appear in real-world applications. It evolved from the conventional Nelder and Mead simplex method by borrowing ideas from recent meta-heuristics such as PSO. After presenting an algorithm that handles linear inequality constraints effectively, we validate the effectiveness of the proposed method through comparisons with other methods on several benchmark problems.
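    For reference, the conventional Nelder-Mead simplex iteration that the proposal evolves from is compact enough to sketch in full; the paper's evolutionary additions (PSO-style randomization and the constraint-handling algorithm) are not reproduced here.

    ```python
    """Compact Nelder-Mead simplex method (reflect/expand/contract/shrink)."""

    def nelder_mead(f, x0, step=0.5, iters=200,
                    alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
        n = len(x0)
        simplex = [list(x0)] + [[x0[j] + (step if j == i else 0)
                                 for j in range(n)] for i in range(n)]
        for _ in range(iters):
            simplex.sort(key=f)
            best, worst = simplex[0], simplex[-1]
            centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
            refl = [centroid[j] + alpha * (centroid[j] - worst[j]) for j in range(n)]
            if f(refl) < f(best):                  # try expanding further
                exp = [centroid[j] + gamma * (refl[j] - centroid[j]) for j in range(n)]
                simplex[-1] = exp if f(exp) < f(refl) else refl
            elif f(refl) < f(simplex[-2]):         # accept the reflection
                simplex[-1] = refl
            else:                                  # contract toward centroid
                cont = [centroid[j] + rho * (worst[j] - centroid[j]) for j in range(n)]
                if f(cont) < f(worst):
                    simplex[-1] = cont
                else:                              # shrink toward the best point
                    simplex = [best] + [[best[j] + sigma * (p[j] - best[j])
                                         for j in range(n)] for p in simplex[1:]]
        return min(simplex, key=f)

    print(nelder_mead(lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2, [0.0, 0.0]))
    ```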

  15. Computational Methods for Crashworthiness

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Carden, Huey D. (Compiler)

    1993-01-01

    Presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Crashworthiness held at Langley Research Center on 2-3 Sep. 1992 are included. The presentations addressed activities in the area of impact dynamics. Workshop attendees represented NASA, the Army and Air Force, the Lawrence Livermore and Sandia National Laboratories, the aircraft and automotive industries, and academia. The workshop objectives were to assess the state-of-technology in the numerical simulation of crash and to provide guidelines for future research.

  16. Toward an alternative evolutionary theory of religion: looking past computational evolutionary psychology to a wider field of possibilities.

    PubMed

    Barrett, Nathaniel F

    2010-01-01

    Cognitive science of the last half-century has been dominated by the computational theory of mind and its picture of thought as information processing. Taking this picture for granted, the most prominent evolutionary theories of religion of the last fifteen years have sought to understand human religiosity as the product or by-product of universal information processing mechanisms that were adaptive in our ancestral environment. The rigidity of such explanations is at odds with the highly context-sensitive nature of historical studies of religion, and thus contributes to the apparent tug-of-war between scientific and humanistic perspectives. This essay argues that this antagonism stems in part from a deep flaw of computational theory, namely its notion of information as pre-given and context-free. In contrast, non-computational theories that picture mind as an adaptive, interactive process in which information is jointly constructed by organism and environment offer an alternative approach to an evolutionary understanding of human religiosity, one that is compatible with historical studies and amenable to a wide range of inquiries, including some limited kinds of theological inquiry. PMID:20879191

  17. A novel fitness evaluation method for evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Ji-feng; Tang, Ke-zong

    2013-03-01

    Fitness evaluation is a crucial task in evolutionary algorithms because it affects both the convergence speed and the quality of the final solution, yet these algorithms may require huge computational power for solving nonlinear programming problems. This paper proposes a novel fitness evaluation approach that embeds similarity-based learning in a classical differential evolution (SDE) to evaluate all new individuals. Each individual consists of three elements: a parameter vector (v), a fitness value (f), and a reliability value (r). The fitness f is estimated by the proposed evaluation approach, and only when the reliability r is below a threshold is f calculated using the true fitness function. Moreover, applying an error compensation system to the proposed algorithm further enhances its performance, bringing the estimated fitness much closer to the true fitness value for each new child. Simulation results over a comprehensive set of benchmark functions show that the convergence rate of the proposed algorithm is much faster than that of the compared algorithms.
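    A sketch of the evaluation policy described above, under assumed details: a new individual inherits the fitness of its nearest archived neighbor when a similarity-derived reliability is high enough, and the expensive true function is called (and archived) otherwise. The distance-based reliability and threshold are illustrative rather than the paper's exact scheme, and the surrounding DE loop is omitted.

    ```python
    """Sketch of similarity-based fitness estimation with a reliability gate."""
    import math, random

    def true_fitness(x):                 # the expensive objective (stand-in)
        return sum(v * v for v in x)

    archive = []                         # (vector, true fitness) pairs
    R_THRESHOLD = 0.8
    calls = 0

    def evaluate(x):
        global calls
        if archive:
            nearest, f_near = min(archive, key=lambda a: math.dist(x, a[0]))
            reliability = 1.0 / (1.0 + math.dist(x, nearest))  # close => reliable
            if reliability >= R_THRESHOLD:
                return f_near                                   # cheap estimate
        calls += 1                       # fall back to the true function
        f = true_fitness(x)
        archive.append((list(x), f))
        return f

    pop = [[random.uniform(-2, 2) for _ in range(5)] for _ in range(200)]
    fits = [evaluate(p) for p in pop]
    print(f"{calls} true evaluations for {len(pop)} individuals")
    ```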

  18. Bi-Directional Evolutionary Topology Optimization Using Element Replaceable Method

    NASA Astrophysics Data System (ADS)

    Zhu, J. H.; Zhang, W. H.; Qiu, K. P.

    2007-06-01

    In the present paper, design problems of maximizing the structural stiffness or natural frequency are considered subject to a material volume constraint. A new element replaceable method (ERPM) is proposed for evolutionary topology optimization of structures. Compared with existing versions of evolutionary structural optimization methods, the contributions are twofold. On the one hand, a new automatic element deletion/growth procedure is established. The deletion of a finite element means that a solid element is replaced with an orthotropic cellular microstructure (OCM) element. The growth of an element means that an OCM element is replaced with a solid element of full material. In fact, both operations are interchangeable, depending upon the value of the element's sensitivity with respect to the objective function. The OCM design strategy is beneficial in preventing artificial modes in dynamic problems. Besides, the iteration validity is greatly improved with the introduction of a check position (CP) technique. On the other hand, a new checkerboard control algorithm is proposed to work together with the above procedure. After the identification of local checkerboards and detailed structures over the entire design domain, the algorithm fills or deletes elements depending upon a prescribed threshold of sensitivity values. Numerical results show that the ERPM is efficient and that a clear and valuable material pattern can be achieved for both static and dynamic problems.

  19. Computational Evolutionary Methodology for Knowledge Discovery and Forecasting in Epidemiology and Medicine

    SciTech Connect

    Rao, Dhananjai M.; Chernyakhovsky, Alexander; Rao, Victoria

    2008-05-08

    Humanity is facing an increasing number of highly virulent and communicable diseases such as avian influenza. Researchers believe that avian influenza has potential to evolve into one of the deadliest pandemics. Combating these diseases requires in-depth knowledge of their epidemiology. An effective methodology for discovering epidemiological knowledge is to utilize a descriptive, evolutionary, ecological model and use bio-simulations to study and analyze it. These types of bio-simulations fall under the category of computational evolutionary methods because the individual entities participating in the simulation are permitted to evolve in a natural manner by reacting to changes in the simulated ecosystem. This work describes the application of the aforementioned methodology to discover epidemiological knowledge about avian influenza using a novel eco-modeling and bio-simulation environment called SEARUMS. The mathematical principles underlying SEARUMS, its design, and the procedure for using SEARUMS are discussed. The bio-simulations and multi-faceted case studies conducted using SEARUMS elucidate its ability to pinpoint timelines, epicenters, and socio-economic impacts of avian influenza. This knowledge is invaluable for proactive deployment of countermeasures in order to minimize negative socioeconomic impacts, combat the disease, and avert a pandemic.

  20. Computational Evolutionary Methodology for Knowledge Discovery and Forecasting in Epidemiology and Medicine

    NASA Astrophysics Data System (ADS)

    Rao, Dhananjai M.; Chernyakhovsky, Alexander; Rao, Victoria

    2008-05-01

    Humanity is facing an increasing number of highly virulent and communicable diseases such as avian influenza. Researchers believe that avian influenza has potential to evolve into one of the deadliest pandemics. Combating these diseases requires in-depth knowledge of their epidemiology. An effective methodology for discovering epidemiological knowledge is to utilize a descriptive, evolutionary, ecological model and use bio-simulations to study and analyze it. These types of bio-simulations fall under the category of computational evolutionary methods because the individual entities participating in the simulation are permitted to evolve in a natural manner by reacting to changes in the simulated ecosystem. This work describes the application of the aforementioned methodology to discover epidemiological knowledge about avian influenza using a novel eco-modeling and bio-simulation environment called SEARUMS. The mathematical principles underlying SEARUMS, its design, and the procedure for using SEARUMS are discussed. The bio-simulations and multi-faceted case studies conducted using SEARUMS elucidate its ability to pinpoint timelines, epicenters, and socio-economic impacts of avian influenza. This knowledge is invaluable for proactive deployment of countermeasures in order to minimize negative socioeconomic impacts, combat the disease, and avert a pandemic.

  1. Tuning of MEMS Gyroscope using Evolutionary Algorithm and "Switched Drive-Angle" Method

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Ferguson, Michael I.; Breuer, Luke; Peay, Chris; Oks, Boris; Cheng, Yen; Kim, Dennis; MacDonald, Eric; Foor, David; Terrile, Rich; Yee, Karl

    2006-01-01

    We propose a tuning method for Micro-Electro-Mechanical Systems (MEMS) gyroscopes based on evolutionary computation that has the capacity to efficiently increase the sensitivity of MEMS gyroscopes through tuning and, furthermore, to find the optimally tuned configuration for this state of increased sensitivity. We present the results of an experiment to determine the speed and efficiency of an evolutionary algorithm applied to electrostatic tuning of MEMS micro gyros. The MEMS gyro used in this experiment is a pyrex post resonator gyro (PRG) in a closed-loop control system. A measure of the quality of tuning is given by the difference in resonant frequencies, or frequency split, for the two orthogonal rocking axes. The current implementation of the closed-loop platform is able to measure and attain a relative stability in the sub-millihertz range, leading to a reduction of the frequency split to less than 100 mHz.
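    A sketch of the tuning loop's structure, with a made-up surrogate for the device: an evolutionary search over bias voltages that minimizes the frequency split between the two rocking modes. On hardware, `split_of` would wrap a measurement of the closed-loop frequency response rather than an analytic function; the voltage ranges and mutation scale are assumptions.

    ```python
    """Sketch: evolutionary minimization of a gyro's frequency split."""
    import random

    def split_of(v):                 # surrogate for the measured split (Hz)
        return abs(5.0 - 0.8 * v[0] + 0.3 * v[1])

    pop = [[random.uniform(0, 20), random.uniform(0, 20)] for _ in range(16)]
    for _ in range(60):
        pop.sort(key=split_of)       # keep the best half, mutate to refill
        pop = pop[:8] + [[g + random.gauss(0, 0.5)
                          for g in random.choice(pop[:8])] for _ in range(8)]
    best = min(pop, key=split_of)
    print(f"best split: {split_of(best) * 1000:.1f} mHz at voltages {best}")
    ```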

  2. Supervised and unsupervised discretization methods for evolutionary algorithms

    SciTech Connect

    Cantu-Paz, E

    2001-01-24

    This paper introduces simple model-building evolutionary algorithms (EAs) that operate on continuous domains. The algorithms are based on supervised and unsupervised discretization methods that have been used as preprocessing steps in machine learning. The basic idea is to discretize the continuous variables and use the discretization as a simple model of the solutions under consideration. The model is then used to generate new solutions directly, instead of using the usual operators based on sexual recombination and mutation. The algorithms presented here have fewer parameters than traditional and other model-building EAs, and those that use multivariate models are expected to scale better with the dimensionality of the problem than existing EAs.
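    The core idea (discretize, use the discretization as the model, sample new solutions from the model directly) can be sketched with a univariate equal-width-bin histogram; the paper's actual discretization methods and any multivariate modeling are not reproduced here.

    ```python
    """Sketch of a model-building EA: fit per-dimension histograms to the
    better half of the population and sample offspring from them."""
    import random

    DIM, POP, BINS, LO, HI = 4, 40, 10, -5.0, 5.0
    width = (HI - LO) / BINS

    def f(x):                                    # objective to minimize
        return sum(v * v for v in x)

    pop = [[random.uniform(LO, HI) for _ in range(DIM)] for _ in range(POP)]
    for _ in range(40):
        pop.sort(key=f)
        elite = pop[:POP // 2]
        # model: per-dimension histogram of bin frequencies among the elite
        model = []
        for d in range(DIM):
            counts = [0] * BINS
            for x in elite:
                counts[min(int((x[d] - LO) / width), BINS - 1)] += 1
            model.append(counts)
        # sample a fresh population directly from the model
        pop = []
        for _ in range(POP):
            x = []
            for d in range(DIM):
                b = random.choices(range(BINS), weights=model[d])[0]
                x.append(LO + (b + random.random()) * width)
            pop.append(x)
        pop[0] = elite[0]                        # elitism: keep the best found

    print("best:", min(map(f, pop)))
    ```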

  3. Scalable Evolutionary Computation for Efficient Information Extraction from Remote Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Almutairi, L. M.; Shetty, S.; Momm, H. G.

    2014-11-01

    Evolutionary computation, in the form of genetic programming, is used to aid the information extraction process from high-resolution satellite imagery in a semi-automatic fashion. Distributing and parallelizing the task of evaluating all candidate solutions during the evolutionary process can significantly reduce the inherent computational cost of evolving solutions that are composed of multichannel large images. In this study, we present the design and implementation of a system that leverages cloud-computing technology to expedite supervised solution development in a centralized evolutionary framework. The system uses the MapReduce programming model to implement a distributed version of the existing framework in a cloud-computing platform. The proposed system has two major subsystems: (i) data preparation, the generation of random spectral indices; and (ii) distributed processing, the distributed implementation of genetic programming, which is used to spectrally distinguish the features of interest from the remaining image background in the cloud-computing environment in order to improve scalability. The proposed system reduces response time by leveraging the vast computational and storage resources of a cloud-computing environment. The results demonstrate that distributing the candidate solutions reduces the execution time by 91.58%. These findings indicate that such technology could be applied to more complex problems that involve a larger population size and number of generations.
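    The distribution pattern is essentially a parallel map over candidate solutions followed by a selection step; the sketch below uses Python's multiprocessing as a local stand-in for the Hadoop MapReduce cluster, with a cheap function in place of scoring a genetic program over multichannel imagery.

    ```python
    """Sketch: MapReduce-style fitness evaluation, emulated locally."""
    from multiprocessing import Pool
    import random

    def fitness(candidate):
        # stand-in for evaluating a GP expression over a large image
        return sum((g - 0.5) ** 2 for g in candidate)

    def main():
        population = [[random.random() for _ in range(16)] for _ in range(1000)]
        with Pool() as pool:                       # "map": parallel evaluation
            scores = pool.map(fitness, population)
        ranked = sorted(zip(scores, population))   # "reduce": select survivors
        survivors = [ind for _, ind in ranked[:100]]
        print("best score:", ranked[0][0], "| survivors:", len(survivors))

    if __name__ == "__main__":
        main()
    ```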

  4. Learning Evolution and the Nature of Science Using Evolutionary Computing and Artificial Life

    ERIC Educational Resources Information Center

    Pennock, Robert T.

    2007-01-01

    Because evolution in natural systems happens so slowly, it is difficult to design inquiry-based labs where students can experiment and observe evolution in the way they can when studying other phenomena. New research in evolutionary computation and artificial life provides a solution to this problem. This paper describes a new A-Life software…

  5. Evolutionary computing for the design search and optimization of space vehicle power subsystems

    NASA Technical Reports Server (NTRS)

    Kordon, M.; Klimeck, G.; Hanks, D.

    2004-01-01

    Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment.

  6. On computational methods for crashworthiness

    NASA Technical Reports Server (NTRS)

    Belytschko, T.

    1992-01-01

    The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computation methodologies. The latter includes more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.

  7. Support measures to estimate the reliability of evolutionary events predicted by reconciliation methods.

    PubMed

    Nguyen, Thi-Hau; Ranwez, Vincent; Berry, Vincent; Scornavacca, Celine

    2013-01-01

    The genome content of extant species is derived from that of ancestral genomes, distorted by evolutionary events such as gene duplications, transfers and losses. Reconciliation methods aim at recovering such events and at localizing them in the species history, by comparing gene family trees to species trees. These methods play an important role in studying genome evolution as well as in inferring orthology relationships. A major issue with reconciliation methods is that the reliability of predicted evolutionary events may be questioned for various reasons: Firstly, there may be multiple equally optimal reconciliations for a given species tree-gene tree pair. Secondly, reconciliation methods can be misled by inaccurate gene or species trees. Thirdly, predicted events may fluctuate with method parameters such as the cost or rate of elementary events. For all of these reasons, confidence values for predicted evolutionary events are sorely needed. It was recently suggested that the frequency of each event in the set of all optimal reconciliations could be used as a support measure. We put this proposition to the test here and also consider a variant where the support measure is obtained by additionally accounting for suboptimal reconciliations. Experiments on simulated data show the relevance of event supports computed by both methods, while resorting to suboptimal sampling was shown to be more effective. Unfortunately, we also show that, unlike the majority-rule consensus tree for phylogenies, there is no guarantee that a single reconciliation can contain all events having above 50% support. In this paper, we detail how to rely on the reconciliation graph to efficiently identify the median reconciliation. Such median reconciliation can be found in polynomial time within the potentially exponential set of most parsimonious reconciliations. PMID:24124449

  8. Support Measures to Estimate the Reliability of Evolutionary Events Predicted by Reconciliation Methods

    PubMed Central

    Nguyen, Thi-Hau; Ranwez, Vincent; Berry, Vincent; Scornavacca, Celine

    2013-01-01

    The genome content of extant species is derived from that of ancestral genomes, distorted by evolutionary events such as gene duplications, transfers and losses. Reconciliation methods aim at recovering such events and at localizing them in the species history, by comparing gene family trees to species trees. These methods play an important role in studying genome evolution as well as in inferring orthology relationships. A major issue with reconciliation methods is that the reliability of predicted evolutionary events may be questioned for various reasons: Firstly, there may be multiple equally optimal reconciliations for a given species tree–gene tree pair. Secondly, reconciliation methods can be misled by inaccurate gene or species trees. Thirdly, predicted events may fluctuate with method parameters such as the cost or rate of elementary events. For all of these reasons, confidence values for predicted evolutionary events are sorely needed. It was recently suggested that the frequency of each event in the set of all optimal reconciliations could be used as a support measure. We put this proposition to the test here and also consider a variant where the support measure is obtained by additionally accounting for suboptimal reconciliations. Experiments on simulated data show the relevance of event supports computed by both methods, while resorting to suboptimal sampling was shown to be more effective. Unfortunately, we also show that, unlike the majority-rule consensus tree for phylogenies, there is no guarantee that a single reconciliation can contain all events having above 50% support. In this paper, we detail how to rely on the reconciliation graph to efficiently identify the median reconciliation. Such median reconciliation can be found in polynomial time within the potentially exponential set of most parsimonious reconciliations. PMID:24124449

  9. Non-Evolutionary Algorithms for Scheduling Dependent Tasks in Distributed Heterogeneous Computing Environments

    SciTech Connect

    Wayne F. Boyer; Gurdeep S. Hura

    2005-09-01

    The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be NP-hard. In a DHC system, task execution time depends on the machine to which a task is assigned, and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of interdependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. Randomized task ordering is effectively a topological sort where the outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, requires less memory, and has fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
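    The RS idea as described, randomized topological ordering plus a heuristic order-to-schedule mapping, can be sketched directly; the DAG, runtimes, and the earliest-finish mapping heuristic below are illustrative toy choices.

    ```python
    """Sketch of random scheduling: random topological orders, each mapped
    greedily to machines by earliest finish time; keep the best makespan."""
    import random

    tasks = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}  # task -> predecessors
    runtime = {t: {m: random.randint(1, 9) for m in range(3)} for t in tasks}

    def random_topological_order(dag):
        order, done = [], set()
        ready = [t for t, pre in dag.items() if not pre]
        while ready:
            t = random.choice(ready)             # the randomization lives here
            order.append(t); done.add(t); ready.remove(t)
            ready += [u for u, pre in dag.items()
                      if u not in done and u not in ready
                      and all(p in done for p in pre)]
        return order

    def schedule(order):
        machine_free, finish = [0, 0, 0], {}
        for t in order:
            earliest = max([finish[p] for p in tasks[t]], default=0)
            # place the task on the machine giving the earliest finish time
            m = min(range(3),
                    key=lambda m: max(machine_free[m], earliest) + runtime[t][m])
            start = max(machine_free[m], earliest)
            finish[t] = start + runtime[t][m]
            machine_free[m] = finish[t]
        return max(finish.values())              # makespan (Cmax)

    best = min(schedule(random_topological_order(tasks)) for _ in range(200))
    print("best makespan found:", best)
    ```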

  10. Evolutionary method for finding communities in bipartite networks

    NASA Astrophysics Data System (ADS)

    Zhan, Weihua; Zhang, Zhongzhi; Guan, Jihong; Zhou, Shuigeng

    2011-06-01

    An important step in unveiling the relation between network structure and dynamics defined on networks is to detect communities, and numerous methods have been developed separately to identify community structure in different classes of networks, such as unipartite networks, bipartite networks, and directed networks. Here, we show that the finding of communities in such networks can be unified in a general framework—detection of community structure in bipartite networks. Moreover, we propose an evolutionary method for efficiently identifying communities in bipartite networks. To this end, we show that both unipartite and directed networks can be represented as bipartite networks, and their modularity is completely consistent with that for bipartite networks, the detection of modular structure on which can be reformulated as modularity maximization. To optimize the bipartite modularity, we develop a modified adaptive genetic algorithm (MAGA), which is shown to be especially efficient for community structure detection. The high efficiency of the MAGA is based on the following three improvements we make. First, we introduce a different measure for the informativeness of a locus instead of the standard deviation, which can exactly determine which loci mutate. This measure is the bias between the distribution of a locus over the current population and the uniform distribution of the locus, i.e., the Kullback-Leibler divergence between them. Second, we develop a reassignment technique for differentiating the informative state a locus has attained from the random state in the initial phase. Third, we present a modified mutation rule which by incorporating related operations can guarantee the convergence of the MAGA to the global optimum and can speed up the convergence process. Experimental results show that the MAGA outperforms existing methods in terms of modularity for both bipartite and unipartite networks.
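    The locus-informativeness measure is stated explicitly above (the Kullback-Leibler divergence between a locus's distribution over the population and the uniform distribution), so it can be written down directly; the example labels below are synthetic.

    ```python
    """KL divergence of one locus's empirical distribution from uniform;
    loci with low divergence (still near-random) are candidates to mutate."""
    import math
    from collections import Counter

    def locus_informativeness(values, n_states):
        """KL(P || uniform) for one locus across the population."""
        total = len(values)
        kl = 0.0
        for state, c in Counter(values).items():
            p = c / total
            kl += p * math.log(p * n_states)     # log(p / (1/n_states))
        return kl

    population_locus = [0, 0, 1, 0, 2, 0, 0, 1]  # community labels at one locus
    print(locus_informativeness(population_locus, n_states=3))
    ```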

  11. A surrogate assisted evolutionary optimization method with application to the transonic airfoil design

    NASA Astrophysics Data System (ADS)

    Shahrokhi, Ava; Jahangirian, Alireza

    2010-06-01

    A multi-layer perceptron neural network (NN) method is used for efficient estimation of the expensive objective functions in the evolutionary optimization with the genetic algorithm (GA). The estimation capability of the NN is improved by dynamic retraining using the data from successive generations. In addition, the normal distribution of the training data variables is used to determine well-trained parts of the design space for the NN approximation. The efficiency of the method is demonstrated by two transonic airfoil design problems considering inviscid and viscous flow solvers. Results are compared with those of the simple GA and an alternative surrogate method. The total number of flow solver calls is reduced by about 40% using this fitness approximation technique, which in turn reduces the total computational time without influencing the convergence rate of the optimization algorithm. The accuracy of the NN estimation is considerably improved using the normal distribution approach compared with the alternative method.
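    A sketch of the surrogate loop under assumed proportions: part of each generation is scored by the true (expensive) function, the network is retrained on all exact data so far, and the remainder is scored by the network. scikit-learn's MLPRegressor stands in for the paper's multi-layer perceptron, and a cheap analytic function replaces the flow solver.

    ```python
    """Sketch of NN-surrogate-assisted evolution with dynamic retraining."""
    import random
    from sklearn.neural_network import MLPRegressor

    def cfd(x):                         # stand-in for the expensive flow solver
        return sum(v * v for v in x)

    X_exact, y_exact = [], []
    pop = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(40)]
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)

    for gen in range(10):
        exact = pop[:10]                        # a subset gets true evaluations
        X_exact += exact
        y_exact += [cfd(x) for x in exact]
        net.fit(X_exact, y_exact)               # dynamic retraining each generation
        scores = y_exact[-10:] + list(net.predict(pop[10:]))
        ranked = [p for _, p in sorted(zip(scores, pop))]
        parents = ranked[:20]                   # select, then mutate to refill
        pop = parents + [[g + random.gauss(0, 0.1)
                          for g in random.choice(parents)] for _ in range(20)]

    print("flow-solver calls:", len(y_exact))
    ```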

  12. Using evolutionary computations to understand the design and evolution of gene and cell regulatory networks

    PubMed Central

    Spirov, Alexander; Holloway, David

    2013-01-01

    This paper surveys modeling approaches for studying the evolution of gene regulatory networks (GRNs). Modeling of the design or ‘wiring’ of GRNs has become increasingly common in developmental and medical biology, as a means of quantifying gene-gene interactions, the response to perturbations, and the overall dynamic motifs of networks. Drawing from developments in GRN ‘design’ modeling, a number of groups are now using simulations to study how GRNs evolve, both for comparative genomics and to uncover general principles of evolutionary processes. Such work can generally be termed evolution in silico. Complementary to these biologically-focused approaches, a now well-established field of computer science is Evolutionary Computations (EC), in which highly efficient optimization techniques are inspired from evolutionary principles. In surveying biological simulation approaches, we discuss the considerations that must be taken with respect to: a) the precision and completeness of the data (e.g. are the simulations for very close matches to anatomical data, or are they for more general exploration of evolutionary principles); b) the level of detail to model (we proceed from ‘coarse-grained’ evolution of simple gene-gene interactions to ‘fine-grained’ evolution at the DNA sequence level); c) to what degree is it important to include the genome’s cellular context; and d) the efficiency of computation. With respect to the latter, we argue that developments in computer science EC offer the means to perform more complete simulation searches, and will lead to more comprehensive biological predictions. PMID:23726941

  13. Peptide design by artificial neural networks and computer-based evolutionary search

    PubMed Central

    Schneider, Gisbert; Schrödl, Wieland; Wallukat, Gerd; Müller, Johannes; Nissen, Eberhard; Rönspeck, Wolfgang; Wrede, Paul; Kunze, Rudolf

    1998-01-01

    A technique for systematic peptide variation by a combination of rational and evolutionary approaches is presented. The design scheme consists of five consecutive steps: (i) identification of a “seed peptide” with a desired activity, (ii) generation of variants selected from a physicochemical space around the seed peptide, (iii) synthesis and testing of this biased library, (iv) modeling of a quantitative sequence-activity relationship by an artificial neural network, and (v) de novo design by a computer-based evolutionary search in sequence space using the trained neural network as the fitness function. This strategy was successfully applied to the identification of novel peptides that fully prevent the positive chronotropic effect of anti-β1-adrenoreceptor autoantibodies from the serum of patients with dilated cardiomyopathy. The seed peptide, comprising 10 residues, was derived by epitope mapping from an extracellular loop of human β1-adrenoreceptor. A set of 90 peptides was synthesized and tested to provide training data for neural network development. De novo design revealed peptides with desired activities that do not match the seed peptide sequence. These results demonstrate that computer-based evolutionary searches can generate novel peptides with substantial biological activity. PMID:9770460

  14. Design of a dynamic model of genes with multiple autonomous regulatory modules by evolutionary computations.

    PubMed

    Spirov, Alexander V; Holloway, David M

    2010-05-01

    A new approach to design a dynamic model of genes with multiple autonomous regulatory modules by evolutionary computations is proposed. The approach is based on Genetic Algorithms (GA), with new crossover operators especially designed for these purposes. The new operators use local homology between parental strings to preserve building blocks found by the algorithm. The approach exploits the subbasin-portal architecture of the fitness functions suitable for this kind of evolutionary modeling. This architecture is significant for Royal Road class fitness functions. Two real-life Systems Biology problems with such fitness functions are implemented here: evolution of the bacterial promoter rrnPl and of the enhancer of the Drosophila even-skipped gene. The effectiveness of the approach compared to standard GA is demonstrated on several benchmark and real-life tasks. PMID:20930945

  15. Design of a dynamic model of genes with multiple autonomous regulatory modules by evolutionary computations

    PubMed Central

    Spirov, Alexander V.; Holloway, David M.

    2010-01-01

    A new approach to design a dynamic model of genes with multiple autonomous regulatory modules by evolutionary computations is proposed. The approach is based on Genetic Algorithms (GA), with new crossover operators especially designed for these purposes. The new operators use local homology between parental strings to preserve building blocks found by the algorithm. The approach exploits the subbasin-portal architecture of the fitness functions suitable for this kind of evolutionary modeling. This architecture is significant for Royal Road class fitness functions. Two real-life Systems Biology problems with such fitness functions are implemented here: evolution of the bacterial promoter rrnPl and of the enhancer of the Drosophila even-skipped gene. The effectiveness of the approach compared to standard GA is demonstrated on several benchmark and real-life tasks. PMID:20930945

  16. Recombination in viruses: mechanisms, methods of study, and evolutionary consequences.

    PubMed

    Pérez-Losada, Marcos; Arenas, Miguel; Galán, Juan Carlos; Palero, Ferran; González-Candelas, Fernando

    2015-03-01

    Recombination is a pervasive process generating diversity in most viruses. It joins variants that arise independently within the same molecule, creating new opportunities for viruses to overcome selective pressures and to adapt to new environments and hosts. Consequently, the analysis of viral recombination attracts the interest of clinicians, epidemiologists, molecular biologists and evolutionary biologists. In this review we present an overview of three major areas related to viral recombination: (i) the molecular mechanisms that underlie recombination in model viruses, including DNA-viruses (Herpesvirus) and RNA-viruses (Human Influenza Virus and Human Immunodeficiency Virus), (ii) the analytical procedures to detect recombination in viral sequences and to determine the recombination breakpoints, along with the conceptual and methodological tools currently used and a brief overview of the impact of new sequencing technologies on the detection of recombination, and (iii) the major areas in the evolutionary analysis of viral populations on which recombination has an impact. These include the evaluation of selective pressures acting on viral populations, the application of evolutionary reconstructions in the characterization of centralized genes for vaccine design, and the evaluation of linkage disequilibrium and population structure. PMID:25541518

  17. Computational Methods For Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1988-01-01

    Selected methods of computation for simulation of mechanical behavior of fiber/matrix composite materials described in report. For each method, report describes significance of behavior to be simulated, procedure for simulation, and representative results. Following applications discussed: effects of progressive degradation of interply layers on responses of composite structures, dynamic responses of notched and unnotched specimens, interlaminar fracture toughness, progressive fracture, thermal distortions of sandwich composite structure, and metal-matrix composite structures for use at high temperatures. Methods demonstrate effectiveness of computational simulation as applied to complex composite structures in general and aerospace-propulsion structural components in particular.

  18. Computational Methods Development at Ames

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Smith, Charles A. (Technical Monitor)

    1998-01-01

    This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate-fidelity computational analysis and design capabilities. Current thrusts of the Ames research include: 1) methods to enhance and accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and flow physics studies. The presentation gives historical precedents for the above research and speculates on its future course.

  19. Computational Methods in Drug Discovery

    PubMed Central

    Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens

    2014-01-01

    Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based methods. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses the theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information for predicting activity depending on its similarity/dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand databases, homology modeling, ligand fingerprint methods, etc., necessary for the successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign, are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed with successful examples from the literature. PMID:24381236

  20. Computational Modeling Method for Superalloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Noebe, Ronald D.; Gayda, John

    1997-01-01

    Computer modeling based on theoretical quantum techniques has been largely inefficient due to limitations of the methods or the computational demands associated with such calculations, perpetuating the notion that little help can be expected from computer simulations for the atomistic design of new materials. In a major effort to overcome these limitations and to provide a tool for efficiently assisting in the development of new alloys, we developed the BFS method for alloys. Together with experimental results from previous and current research that validate its use for large-scale simulations, it provides the ideal grounds for developing a computationally economical and physically sound procedure for supplementing experimental work at great savings of cost and time.

  1. Evolutionary computing for the design search and optimization of space vehicle power subsystems

    NASA Technical Reports Server (NTRS)

    Kordon, Mark; Klimeck, Gerhard; Hanks, David; Hua, Hook

    2004-01-01

    Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment. Our preliminary results demonstrate that this approach has the potential to improve the space system trade study process by allowing engineers to statistically weight subsystem goals of mass, cost, and performance, then automatically size power elements based on anticipated performance of the subsystem rather than on worst-case estimates.

  2. An exploration of computer-simulated evolution and small group discussion on pre-service science teachers' perceptions of evolutionary concepts

    NASA Astrophysics Data System (ADS)

    MacDonald, Ronald Douglas

    The primary goal of this study was to explore how the use of a computer simulation of basic evolutionary processes, in combination with small-group discussions, affected Intermediate/Senior pre-service science teachers' perspectives of basic evolutionary concepts. Qualitative and quantitative methods were used in a case study approach with 19 pre-service Intermediate/Senior science teachers at an Ontario university. Several sub-goals were explored. The first sub-goal was to assess Intermediate/Senior pre-service science teachers' current conceptions of evolution. The results indicated that approximately two-thirds of the participants had a poor understanding of basic evolutionary concepts, with only 2 of the 19 participants demonstrating a strong comprehension. These results were found to be very similar to comparable samples of subjects from other research. The second sub-goal was to explore the relationships among Intermediate/Senior pre-service science teachers' understanding of contemporary evolutionary concepts, their perspectives of the nature of science, and their intentions to teach evolutionary concepts in the classroom. Participants' knowledge of evolutionary concepts was found to be associated strongly with their intentions to teach evolution by natural selection (r = .42). However, knowledge of evolutionary concepts was not found to be associated with any particular science epistemology perspective. The third sub-goal was to analyze and to interpret the small-group discussions as members interacted with the simulation. The simulation was found to be highly engaging and a very effective method of encouraging participants to speculate, question, discuss and learn about important evolutionary concepts. Analyses of the discussions revealed that the simulation evoked a wide array of correct conceptions as well as misconceptions. The fourth sub-goal was to assess the extent to which creating a lesson plan on the topic of natural selection could affect

  3. Hardware platforms for MEMS gyroscope tuning based on evolutionary computation using open-loop and closed-loop frequency response

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Ferguson, Michael I.; Fink, Wolfgang; Oks, Boris; Peay, Chris; Terrile, Richard; Cheng, Yen; Kim, Dennis; MacDonald, Eric; Foor, David

    2005-01-01

    We propose a tuning method for MEMS gyroscopes, based on evolutionary computation, that efficiently increases the sensitivity of MEMS gyroscopes. The method was tested on the second-generation JPL/Boeing post-resonator MEMS gyroscope using measurements of the frequency response of the MEMS device in open-loop operation. We also report on the development of a hardware platform for integrated tuning and closed-loop operation of MEMS gyroscopes. The control of this device is implemented through a digital design on a Field Programmable Gate Array (FPGA). The hardware platform transitions easily to an embedded solution that allows for miniaturization of the system to a single chip.

  4. Cepstral methods in computational vision

    NASA Astrophysics Data System (ADS)

    Bandari, Esfandiar; Little, James J.

    1993-05-01

    Many computational vision routines can be regarded as recognition and retrieval of echoes in space or time. Cepstral analysis is a powerful nonlinear adaptive signal processing methodology widely used in many areas such as: echo retrieval and removal, speech processing and phoneme chunking, radar and sonar processing, seismology, medicine, image deblurring and restoration, and signal recovery. The aims of this paper are: (1) to provide a brief mathematical and historical review of cepstral techniques; (2) to introduce computational and performance improvements to the power and differential cepstrum for use in the detection of echoes, and to provide a comparison between these methods and traditional cepstral techniques; (3) to apply the cepstrum to visual tasks such as motion analysis and trinocular vision; and (4) to draw a brief comparison between the cepstrum and other matching techniques. The computational and performance improvements introduced in this paper can be applied in other areas that frequently utilize the cepstrum.
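    As a brief illustration of the echo-retrieval use of the cepstrum mentioned above, the sketch below recovers a synthetic echo delay from the power cepstrum. The signal, delay, and echo amplitude are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy signal with an echo: x[n] = s[n] + 0.6 * s[n - 80].
n, delay, alpha = 1024, 80, 0.6
s = rng.standard_normal(n)
x = s.copy()
x[delay:] += alpha * s[:-delay]

# Power cepstrum: inverse FFT of the log power spectrum.
# The additive echo produces a spectral ripple of period 1/delay,
# which appears as a peak at quefrency == delay.
spectrum = np.abs(np.fft.rfft(x)) ** 2
cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))

peak = np.argmax(cepstrum[10 : n // 2]) + 10  # skip low quefrencies
print(f"estimated echo delay: {peak} samples")  # ~80
```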

  5. Optimal Navier-Stokes Design of Compressor Impellers Using Evolutionary Computation

    NASA Astrophysics Data System (ADS)

    Benini, Ernesto

    2003-09-01

    In the design of modern centrifugal compressor impellers, it is fundamental to account for three-dimensional effects and to use an optimization strategy that helps the designer achieve the required objectives in the presence of constraints. In this paper, a fully three-dimensional optimization method is described that combines a CFD code and an evolutionary algorithm. The design scenario contemplated here involves the maximization of impeller peak efficiency with constraints on the impeller pressure ratio and operating range. The method is used to improve the performance of a baseline impeller of known characteristics. An optimal solution is proposed and compared to the original configuration.

  6. Hybrid evolutionary computing model for mobile agents of wireless Internet multimedia

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2001-03-01

    The ecosystem is used as an evolutionary paradigm of natural laws for distributed information retrieval via mobile agents, allowing computational load to be shifted to server nodes of wireless networks while reducing the traffic on communication links. Based on the Food Web model, a set of computational rules of natural balance forms the outer stage that controls the evolution of mobile agents providing multimedia services with a wireless Internet protocol (WIP). The evolutionary model shows how mobile agents should behave with the WIP; in particular, how mobile agents can cooperate, compete and learn from each other, based on an underlying competition for radio network resources to establish the wireless connections that support the quality of service (QoS) of user requests. Mobile agents are also allowed to clone themselves, propagate and communicate with other agents. A two-layer model is proposed for agent evolution: the outer layer is based on the law of natural balancing, while the inner layer is based on a discrete version of a Kohonen self-organizing feature map (SOFM) that distributes network resources to meet QoS requirements. The former is embedded in the higher OSI layers of the WIP, while the latter is used in the resource management procedures of Layers 2 and 3 of the protocol. Algorithms for the distributed computation of mobile agent evolutionary behavior are developed by adding a learning state to the agent evolution state diagram. When an agent is in an indeterminate state, it can communicate with other agents, and computing models can be replicated from other agents. The agent then transitions to the mutating state to wait for a new information-retrieval goal. When a wireless terminal or station lacks a network resource, an agent in the suspending state can change its policy to submit to the environment before it transitions to the searching state. The agents learn from agent state information entered into an external database. In the cloning process, two

  7. Deep Space Network Scheduling Using Evolutionary Computational Methods

    NASA Technical Reports Server (NTRS)

    Guillaume, Alexandre; Lee, Seugnwon; Wang, Yeou-Fang; Terrile, Richard J.

    2007-01-01

    The paper presents the specific approach taken to formulate the problem in terms of gene encoding, fitness function, and genetic operations. The genome is encoded such that a subset of the scheduling constraints is automatically satisfied. Several fitness functions are formulated to emphasize different aspects of the scheduling problem. The optimal solutions of the different fitness functions demonstrate the trade-offs of the scheduling problem and provide insight into a conflict resolution process.
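    A toy sketch of the encoding idea, under assumed data: the genome is a priority ordering of track requests, and a greedy decoder guarantees that capacity and non-overlap constraints hold for every genome, so part of the constraint set is satisfied automatically, as in the paper. The missions, durations, and single-antenna horizon are hypothetical.

```python
import random

# Hypothetical toy version of a DSN-style scheduling genome.
REQUESTS = {"m1": 3, "m2": 5, "m3": 2, "m4": 4}  # mission -> hours requested
HORIZON = 10                                      # available antenna hours

def decode(genome):
    """Greedy decoder: a genome is a priority ordering of requests.
    Non-overlap and capacity constraints hold by construction, so every
    genome maps to a feasible schedule."""
    schedule, used = [], 0
    for mission in genome:
        if used + REQUESTS[mission] <= HORIZON:
            schedule.append((mission, used, used + REQUESTS[mission]))
            used += REQUESTS[mission]
    return schedule

def fitness(genome):
    # One of several possible fitness functions: total scheduled hours.
    return sum(end - start for _, start, end in decode(genome))

best = max((random.sample(list(REQUESTS), len(REQUESTS)) for _ in range(50)),
           key=fitness)
print(decode(best), fitness(best))
```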

  8. An evolutionary method for synthesizing technological planning and architectural advance

    NASA Astrophysics Data System (ADS)

    Cole, Bjorn Forstrom

    the appropriate technological antecedents are accounted for in developing the projection. The third chapter of the thesis compiles a series of observations and philosophical considerations into a series of research questions. Some research questions are then answered with further thought, observation, and reading, leading to conjectures on the problem. The remainder require some form of experimentation, and so are used to formulate hypotheses. Falsifiability conditions are then generated from those hypotheses and used to guide the development of the experiments to be performed, in this case computer experiments on various conditions of use of a genetic algorithm. The fourth chapter of the thesis walks through the formulation of a method to attack the problem of strategically choosing an architecture. This method is designed to find the optimum architecture under multiple conditions, which is required for the ability to play the "what if" games typically undertaken in strategic situations. The chapter walks through a graph-based representation of architecture, provides the rationale for choosing a given technology forecasting technique, and lays out the implementation of the optimization algorithm, named Sindri, within a commercial analysis code, Pacelab. The fifth chapter of the thesis then tests the Sindri code. The first test applied is a series of standardized combinatorial spaces, which are meant to be analogous to test problems traditionally posed to optimizers (e.g., Rosenbrock's valley function). The results from this test assess the value of various operators used to transform the architecture graph in the course of conducting a genetic search. Finally, the method is employed on a test case involving the transition of a miniature helicopter from glow engine to battery propulsion, and finally to a design where the battery functions as both structure and power source. The final two chapters develop conclusions based on the body of work conducted within this thesis and

  9. Computational methods for stellarator configurations

    NASA Astrophysics Data System (ADS)

    Betancourt, O.

    This project had two main objectives. The first was to continue to develop computational methods for the study of three-dimensional magnetic confinement configurations. The second was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self-consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to studying three-dimensional nonlinear effects in Tokamak configurations, these self-consistently computed island equilibria will be used to study transport effects due to magnetic island formation and due to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge and with D. Anderson and R. Talmage at Wisconsin, as well as through participation in the Sherwood and APS meetings.

  10. 5D parameter estimation of near-field sources using hybrid evolutionary computational techniques.

    PubMed

    Zaman, Fawad; Qureshi, Ijaz Mansoor

    2014-01-01

    A hybrid evolutionary computational technique is developed to jointly estimate the amplitude, frequency, range, and 2D direction of arrival (elevation and azimuth angles) of near-field sources impinging on a centrosymmetric cross array. Specifically, a genetic algorithm is used as a global optimizer, whereas pattern search and interior point algorithms are employed as rapid local search optimizers. For this, a new multiobjective fitness function is constructed, which combines the mean square error and the correlation between the normalized desired and estimated vectors. The performance of the proposed hybrid scheme is compared not only with the individual responses of the genetic algorithm, interior point algorithm, and pattern search, but also with existing traditional techniques. The proposed schemes produced fairly good results in terms of estimation accuracy, convergence rate, and robustness against noise. A large number of Monte-Carlo simulations are carried out to test the validity and reliability of each scheme. PMID:24701156
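    The two-stage global-plus-local structure described above can be sketched as follows, with a crude random-population search standing in for the genetic algorithm and a Nelder-Mead polish standing in for the pattern-search/interior-point refinement. The 2-D MSE objective is an illustrative assumption; the paper's fitness combines MSE with a correlation term over five parameters per source.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the estimation problem: recover parameters theta
# (2-D here for brevity) by minimizing an MSE-style fitness.
true_theta = np.array([1.3, -0.7])

def fitness(theta):
    return np.sum((theta - true_theta) ** 2)

# Stage 1, global: a crude evolutionary search (random population +
# selection) standing in for the genetic algorithm.
rng = np.random.default_rng(1)
pop = rng.uniform(-2, 2, size=(200, 2))
best = min(pop, key=fitness)

# Stage 2, local: a rapid gradient-free polish, standing in for the
# pattern-search / interior-point refinement step.
result = minimize(fitness, best, method="Nelder-Mead")
print(result.x)  # close to true_theta
```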

  11. Development of a multisegment coal mill model using an evolutionary computation technique

    SciTech Connect

    Wei, J.L.; Wang, J.; Wu, Q.H.

    2007-09-15

    This paper presents a multisegment coal mill model that covers the whole milling process from mill startup to shutdown. This multisegment mathematical model is derived through analysis of energy transfer, heat exchange, and mass flow balances. The work presented in the paper focuses on modeling E-type vertical spindle coal mills that are widely used in coal-fired power plants. An evolutionary computation technique is adopted to identify the unknown model parameters from on-site measurement data. The identified parameters are then validated with different sets of online measured data. Validation results indicate that the model is accurate enough to represent the whole process of coal mill dynamics and can be used for prediction of the mill dynamic performance. Therefore, the model can be used for online monitoring, fault detection, and control to improve the efficiency of combustion.

  12. An Evolutionary Examination of Telemedicine: A Health and Computer-Mediated Communication Perspective

    PubMed Central

    Breen, Gerald-Mark; Matusitz, Jonathan

    2009-01-01

    Telemedicine, the use of advanced communication technologies in the healthcare context, has a rich history and a clear evolutionary course. In this paper, the authors identify telemedicine as operationally defined, the services and technologies it comprises, and the direction telemedicine has taken, along with its increased acceptance in the healthcare communities. The authors also describe some of the key pitfalls that researchers and advocates have contended with in advancing telemedicine to its full potential and in enabling practitioners to identify telemedicine’s diverse utilities. A discussion and future directions section is included to provide fresh ideas to health communication and computer-mediated communication scholars wishing to delve into this area and make a difference in enhancing public understanding of this field. PMID:20300559

  13. How quickly do brains catch up with bodies? A comparative method for detecting evolutionary lag.

    PubMed Central

    Deaner, R O; Nunn, C L

    1999-01-01

    A trait may be at odds with theoretical expectation because it is still in the process of responding to a recent selective force. Such a situation can be termed evolutionary lag. Although many cases of evolutionary lag have been suggested, almost all of the arguments have focused on trait fitness. An alternative approach is to examine the prediction that trait expression is a function of the time over which the trait could evolve. Here we present a phylogenetic comparative method for using this 'time' approach and we apply the method to a long-standing lag hypothesis: evolutionary changes in brain size lag behind evolutionary changes in body size. We tested the prediction in primates that brain mass contrast residuals, calculated from a regression of pairwise brain mass contrasts on positive pairwise body mass contrasts, are correlated with the time since the paired species diverged. Contrary to the brain size lag hypothesis, time since divergence was not significantly correlated with brain mass contrast residuals. We found the same result when we accounted for socioecology, used alternative body mass estimates and used male rather than female values. These tests do not support the brain size lag hypothesis. Therefore, body mass need not be viewed as a suspect variable in comparative neuroanatomical studies and relative brain size should not be used to infer recent evolutionary changes in body size. PMID:10331289

  14. Interactive evolutionary computation with minimum fitness evaluation requirement and offline algorithm design.

    PubMed

    Ishibuchi, Hisao; Sudo, Takahiko; Nojima, Yusuke

    2016-01-01

    In interactive evolutionary computation (IEC), each solution is evaluated by a human user. Usually the total number of examined solutions is very small. In some applications such as hearing aid design and music composition, only a single solution can be evaluated at a time by a human user. Moreover, accurate and precise numerical evaluation is difficult. Based on these considerations, we formulated an IEC model with the minimum requirement for the fitness evaluation ability of human users under the following assumptions: they can evaluate only a single solution at a time, they can memorize only the single previous solution they have just evaluated, their evaluation result on the current solution is whether it is better than the previous one or not, and the best solution among the evaluated ones should be identified after a pre-specified number of evaluations. In this paper, we first explain our IEC model in detail. Next we propose a (1+1)ES-style algorithm for our IEC model. Then we propose an offline meta-level approach to automated algorithm design for our IEC model. The main feature of our approach is the use of a different mechanism (e.g., mutation, crossover, random initialization) to generate each solution to be evaluated. Through computational experiments on test problems, our approach is compared with the (1+1)ES-style algorithm where a solution generation mechanism is pre-specified and fixed throughout the execution of the algorithm. PMID:27026888
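    A minimal sketch of the (1+1)ES-style loop under this model: one candidate is shown at a time, the only feedback is a better/worse comparison against the single remembered solution, and the evaluation budget is fixed in advance. The quadratic stand-in for the human's preference is an assumption.

```python
import random

# Hidden "user preference": the user prefers whichever solution is
# closer to 3.7. This oracle is a stand-in for the human evaluation.
def user_prefers(new, old):
    return (new - 3.7) ** 2 < (old - 3.7) ** 2

x, sigma = 0.0, 1.0
for _ in range(30):  # pre-specified evaluation budget
    candidate = x + random.gauss(0.0, sigma)
    if user_prefers(candidate, x):  # single better/worse comparison
        x = candidate               # the winner becomes the new memory
print(f"best solution found: {x:.2f}")
```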

  15. An evolutionary computational theory of prefrontal executive function in decision-making.

    PubMed

    Koechlin, Etienne

    2014-11-01

    The prefrontal cortex subserves executive control and decision-making, that is, the coordination and selection of thoughts and actions in the service of adaptive behaviour. We present here a computational theory describing the evolution of the prefrontal cortex from rodents to humans as gradually adding new inferential Bayesian capabilities for dealing with a computationally intractable decision problem: exploring and learning new behavioural strategies versus exploiting and adjusting previously learned ones through reinforcement learning (RL). We provide a principled account identifying three inferential steps optimizing this arbitration through the emergence of (i) factual reactive inferences in paralimbic prefrontal regions in rodents; (ii) factual proactive inferences in lateral prefrontal regions in primates and (iii) counterfactual reactive and proactive inferences in human frontopolar regions. The theory clarifies the integration of model-free and model-based RL through the notion of strategy creation. The theory also shows that counterfactual inferences in humans give rise to the notion of hypothesis testing, a critical reasoning ability for approximating optimal adaptive processes and presumably endowing humans with a qualitative evolutionary advantage in adaptive behaviour. PMID:25267817

  16. Mean protein evolutionary distance: a method for comparative protein evolution and its application.

    PubMed

    Wise, Michael J

    2013-01-01

    Proteins are under tight evolutionary constraints, so if a protein changes it can only do so in ways that do not compromise its function. In addition, the proteins in an organism evolve at different rates. Leveraging the history of patristic distance methods, a new method for analysing comparative protein evolution, called Mean Protein Evolutionary Distance (MeaPED), measures differential resistance to evolutionary pressure across viral proteomes and is thereby able to point to the proteins' roles. Different species' proteomes can also be compared because the results, consistent across virus subtypes, concisely reflect the very different lifestyles of the viruses. The MeaPED method is here applied to influenza A virus, hepatitis C virus, human immunodeficiency virus (HIV), dengue virus, rotavirus A, polyomavirus BK and measles, which span the positive and negative single-stranded, double-stranded and reverse-transcribing RNA viruses, and double-stranded DNA viruses. From this analysis, host-interaction proteins including hemagglutinin (influenza) and the viroporins agnoprotein (polyomavirus), p7 (hepatitis C) and VPU (HIV) emerge as evolutionary hot-spots. By contrast, RNA-directed RNA polymerase proteins including L (measles), PB1/PB2 (influenza) and VP1 (rotavirus), and internal serine proteases such as NS3 (dengue and hepatitis C virus) emerge as evolutionary cold-spots. The hot-spot influenza hemagglutinin protein is contrasted with the related cold-spot H protein from measles. It is proposed that evolutionary cold-spot proteins can become significant targets for second-line anti-viral therapeutics, in cases where front-line vaccines are not available or have become ineffective due to mutations in the hot-spot, generally more antigenically exposed proteins. The MeaPED package is available from www.pam1.bcs.uwa.edu.au/~michaelw/ftp/src/meaped.tar.gz. PMID:23613826

  17. Exploiting Genomic Knowledge in Optimising Molecular Breeding Programmes: Algorithms from Evolutionary Computing

    PubMed Central

    O'Hagan, Steve; Knowles, Joshua; Kell, Douglas B.

    2012-01-01

    Comparatively few studies have addressed directly the question of quantifying the benefits to be had from using molecular genetic markers in experimental breeding programmes (e.g. for improved crops and livestock), nor the question of which organisms should be mated with each other to best effect. We argue that this requires in silico modelling, an approach for which there is a large literature in the field of evolutionary computation (EC), but which has not really been applied in this way to experimental breeding programmes. EC seeks to optimise measurable outcomes (phenotypic fitnesses) by optimising in silico the mutation, recombination and selection regimes that are used. We review some of the approaches from EC, and compare experimentally, using a biologically relevant in silico landscape, some algorithms that have knowledge of where they are in the (genotypic) search space (G-algorithms) with some (albeit well-tuned ones) that do not (F-algorithms). For the present kinds of landscapes, F- and G-algorithms were broadly comparable in quality and effectiveness, although we recognise that the G-algorithms were not equipped with any ‘prior knowledge’ of epistatic pathway interactions. This use of algorithms based on machine learning has important implications for the optimisation of experimental breeding programmes in the post-genomic era when we shall potentially have access to the full genome sequence of every organism in a breeding population. The non-proprietary code that we have used is made freely available (via Supplementary information). PMID:23185279

  18. Using evolutionary computation to optimize an SVM used in detecting buried objects in FLIR imagery

    NASA Astrophysics Data System (ADS)

    Paino, Alex; Popescu, Mihail; Keller, James M.; Stone, Kevin

    2013-06-01

    In this paper we describe an approach for optimizing the parameters of a Support Vector Machine (SVM) as part of an algorithm used to detect buried objects in forward looking infrared (FLIR) imagery captured by a camera installed on a moving vehicle. The overall algorithm consists of a spot-finding procedure (to look for potential targets) followed by the extraction of several features from the neighborhood of each spot. The features include local binary patterns (LBP) and histograms of oriented gradients (HOG), as these are good at discriminating texture classes. Finally, we project and sum each hit into UTM space along with its confidence value (obtained from the SVM), producing a confidence map for ROC analysis. In this work, we use an Evolutionary Computation Algorithm (ECA) to optimize various parameters involved in the system, such as the combination of features used, parameters of the Canny edge detector, the SVM kernel, and various HOG and LBP parameters. To validate our approach, we compare results obtained from an SVM using parameters obtained through our ECA technique with those previously selected by hand through several iterations of "guess and check".
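    The sketch below illustrates the general ECA idea on a reduced problem: a small genetic search over two SVM hyperparameters (log C, log gamma), scored by cross-validation. It uses synthetic data and tunes only the kernel parameters, not the feature combination or Canny/HOG/LBP parameters optimized in the paper.

```python
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the FLIR feature vectors, which are not public.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def fitness(genome):
    log_c, log_gamma = genome
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma, kernel="rbf")
    return cross_val_score(clf, X, y, cv=3).mean()  # CV accuracy as fitness

pop = [(random.uniform(-2, 3), random.uniform(-4, 0)) for _ in range(12)]
for _ in range(10):  # generations: select the best, mutate, repeat
    pop.sort(key=fitness, reverse=True)
    parents = pop[:4]
    children = [(p[0] + random.gauss(0, 0.3), p[1] + random.gauss(0, 0.3))
                for p in parents for _ in range(2)]
    pop = parents + children
print("best (log10 C, log10 gamma):", max(pop, key=fitness))
```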

  19. Computational methods for stealth design

    SciTech Connect

Cable, V.P.

    1992-08-01

    A review is presented of the utilization of computer models for stealth design toward the ultimate goal of designing and fielding an aircraft that remains undetected at any altitude and any range. Attention is given to the advancements achieved in computational tools and their utilization. Consideration is given to the development of supercomputers for large-scale scientific computing and the development of high-fidelity, 3D, radar-signature-prediction tools for complex shapes with nonmetallic and radar-penetrable materials.

  20. Blending Determinism with Evolutionary Computing: Applications to the Calculation of the Molecular Electronic Structure of Polythiophene.

    PubMed

    Sarkar, Kanchan; Sharma, Rahul; Bhattacharyya, S P

    2010-03-01

    A density-matrix-based soft-computing solution to the quantum mechanical problem of computing the molecular electronic structure of fairly long polythiophene (PT) chains is proposed. The soft-computing solution is based on a "random mutation hill climbing" scheme that is modified by blending it with a deterministic method based on a trial single-particle density matrix [P((0))(R)] for the guessed structural parameters (R), which is allowed to evolve under a unitary transformation generated by the Hamiltonian H(R). The Hamiltonian itself changes as the geometrical parameters (R) defining the polythiophene chain undergo mutation. The scale (λ) of the transformation is optimized by making the energy [E(λ)] stationary with respect to λ. The robustness and performance of variants of the algorithm are analyzed and compared with those of other derivative-free methods. The method is further tested successfully on optimization of the geometry of bipolaron-doped long PT chains. PMID:26613302
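    For reference, a minimal random-mutation hill-climbing loop of the kind the blended scheme builds on, applied to a toy bit-string objective rather than the polythiophene energy model:

```python
import random

# Toy objective standing in for the molecular energy: minimizing it
# means maximizing the number of ones in the bit string.
def energy(bits):
    return -sum(bits)

n = 40
current = [random.randint(0, 1) for _ in range(n)]
for _ in range(2000):
    candidate = current[:]
    candidate[random.randrange(n)] ^= 1       # random single-bit mutation
    if energy(candidate) <= energy(current):  # accept if no worse (hill climb)
        current = candidate
print(energy(current))  # approaches -n
```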

  1. Optimization Methods for Computer Animation.

    ERIC Educational Resources Information Center

    Donkin, John Caldwell

    Emphasizing the importance of economy and efficiency in the production of computer animation, this master's thesis outlines methodologies that can be used to develop animated sequences with the highest quality images for the least expenditure. It is assumed that if computer animators are to be able to fully exploit the available resources, they…

  2. Evolutionary topology optimization using the extended finite element method and isolines

    NASA Astrophysics Data System (ADS)

    Abdi, Meisam; Wildman, Ricky; Ashcroft, Ian

    2014-05-01

    This study presents a new algorithm for structural topological optimization of two-dimensional continuum structures by combining the extended finite element method (X-FEM) with an evolutionary optimization algorithm. Taking advantage of an isoline design approach for boundary representation in a fixed grid domain, X-FEM can be implemented to improve the accuracy of finite element solutions on the boundary during the optimization process. Although this approach does not use any remeshing or moving mesh algorithms, final topologies have smooth and clearly defined boundaries which need no further interpretation. Numerical comparisons of the converged solutions with standard bi-directional evolutionary structural optimization solutions show the efficiency of the proposed method, and comparison with the converged solutions using MSC NASTRAN confirms the high accuracy of this method.

  3. An improved approximate-Bayesian model-choice method for estimating shared evolutionary history

    PubMed Central

    2014-01-01

    Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergence times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate that the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937
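    The Dirichlet-process prior over divergence models can be pictured with a Chinese-restaurant-process sketch: each taxon joins an existing divergence-time cluster with probability proportional to the cluster's size, or opens a new one with probability proportional to a concentration parameter (the value below is an arbitrary assumption).

```python
import random

def crp_partition(n_taxa, alpha=1.5):
    """Sample a partition of taxa into shared divergence-time clusters."""
    clusters = []  # each cluster = taxa sharing one divergence event
    for taxon in range(n_taxa):
        weights = [len(c) for c in clusters] + [alpha]
        choice = random.choices(range(len(weights)), weights=weights)[0]
        if choice == len(clusters):
            clusters.append([taxon])        # new divergence event
        else:
            clusters[choice].append(taxon)  # join an existing event
    return clusters

# Repeated sampling shows the prior spreads mass over intermediate
# numbers of divergence events instead of strongly disfavoring them.
print(crp_partition(8))
```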

  4. Computational methods for probability of instability calculations

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Burnside, O. H.

    1990-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of a dynamic system that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria, based upon the roots of the characteristic equation or Routh-Hurwitz test functions, are investigated. Computational methods based on system reliability analysis methods and importance sampling concepts are proposed to perform efficient probabilistic analysis. Numerical examples are provided to demonstrate the methods.
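    A plain Monte Carlo stand-in for the probability-of-instability computation (the paper itself uses more efficient reliability-analysis and importance-sampling methods): sample uncertain coefficients of a second-order system and check the sign of the dominant root of the characteristic equation. The parameter distributions are illustrative assumptions.

```python
import numpy as np

# Second-order system m*x'' + c*x' + k*x = 0; characteristic equation
# m*s^2 + c*s + k = 0. Instability <=> a root with positive real part.
rng = np.random.default_rng(0)
n = 100_000
m = rng.normal(1.0, 0.05, n)   # mass
c = rng.normal(0.02, 0.05, n)  # damping (can go negative -> unstable)
k = rng.normal(4.0, 0.2, n)    # stiffness

dominant_root = (-c + np.sqrt(c**2 - 4 * m * k + 0j)) / (2 * m)
p_instability = np.mean(np.real(dominant_root) > 0.0)
print(f"P(instability) ~ {p_instability:.4f}")
```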

  5. Numerical simulation of evolutionary erodible bedforms using the particle finite element method

    NASA Astrophysics Data System (ADS)

    Bravo, Rafael; Becker, Pablo; Ortiz, Pablo

    2016-07-01

    This paper presents a numerical strategy for the simulation of flows with evolutionary erodible boundaries. The fluid equations are fully resolved in 3D, while the sediment transport is modelled using the Exner equation and solved with an explicit Lagrangian procedure based on a fixed 2D mesh. Flow and sediment are coupled in geometry, by deforming the fluid mesh in the vertical direction, and in velocities, with the empirical sediment flux computed using the Meyer-Peter-Müller model. A comparison with real experiments on channels is performed, giving good agreement.

  6. Computational methods for unsteady transonic flows

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Thomas, James L.

    1987-01-01

    Computational methods for unsteady transonic flows are surveyed with emphasis upon applications to aeroelastic analysis and flutter prediction. Computational difficulty is discussed with respect to type of unsteady flow; attached, mixed (attached/separated) and separated. Significant early computations of shock motions, aileron buzz and periodic oscillations are discussed. The maturation of computational methods towards the capability of treating complete vehicles with reasonable computational resources is noted and a survey of recent comparisons with experimental results is compiled. The importance of mixed attached and separated flow modeling for aeroelastic analysis is discussed and recent calculations of periodic aerodynamic oscillations for an 18 percent thick circular arc airfoil are given.

  7. Computational methods for unsteady transonic flows

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Thomas, J. L.

    1987-01-01

    Computational methods for unsteady transonic flows are surveyed with emphasis on prediction. Computational difficulty is discussed with respect to type of unsteady flow; attached, mixed (attached/separated) and separated. Significant early computations of shock motions, aileron buzz and periodic oscillations are discussed. The maturation of computational methods towards the capability of treating complete vehicles with reasonable computational resources is noted and a survey of recent comparisons with experimental results is compiled. The importance of mixed attached and separated flow modeling for aeroelastic analysis is discussed, and recent calculations of periodic aerodynamic oscillations for an 18 percent thick circular arc airfoil are given.

  8. Multiprocessor computer overset grid method and apparatus

    DOEpatents

    Barnette, Daniel W.; Ober, Curtis C.

    2003-01-01

    A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

  9. Computational Methods in Nanostructure Design

    NASA Astrophysics Data System (ADS)

    Bellesia, Giovanni; Lampoudi, Sotiria; Shea, Joan-Emma

    Self-assembling peptides can serve as building blocks for novel biomaterials. Replica exchange molecular dynamics simulations are a powerful means to probe the conformational space of these peptides. We discuss the theoretical foundations of this enhanced sampling method and its use in biomolecular simulations. We then apply this method to determine the monomeric conformations of the Alzheimer amyloid-β(12-28) peptide that can serve as initiation sites for aggregation.
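    The core of the replica exchange scheme is the Metropolis swap criterion between neighboring temperatures. A minimal sketch, with an assumed toy energy ladder:

```python
import math
import random

# Replicas at temperatures T_i and T_j with energies E_i and E_j swap
# configurations with probability min(1, exp[(1/kT_i - 1/kT_j)(E_i - E_j)]).
def maybe_swap(E_i, E_j, T_i, T_j, k_B=1.0):
    delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_i - E_j)
    return random.random() < min(1.0, math.exp(delta))

temps = [300.0, 350.0, 410.0, 480.0]            # assumed temperature ladder
energies = [random.uniform(-5, 5) for _ in temps]  # toy replica energies
for i in range(len(temps) - 1):                 # attempt neighbor swaps
    if maybe_swap(energies[i], energies[i + 1], temps[i], temps[i + 1]):
        energies[i], energies[i + 1] = energies[i + 1], energies[i]
```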

  10. Toward a method for tracking virus evolutionary trajectory applied to the pandemic H1N1 2009 influenza virus.

    PubMed

    Squires, R Burke; Pickett, Brett E; Das, Sajal; Scheuermann, Richard H

    2014-12-01

    In 2009 a novel pandemic H1N1 influenza virus (H1N1pdm09) emerged as the first official influenza pandemic of the 21st century. Early genomic sequence analysis pointed to the swine origin of the virus. Here we report a novel computational approach to determine the evolutionary trajectory of viral sequences that uses data-driven estimations of nucleotide substitution rates to track the gradual accumulation of observed sequence alterations over time. Phylogenetic analysis and multiple sequence alignments show that sequences belonging to the resulting evolutionary trajectory of the H1N1pdm09 lineage exhibit a gradual accumulation of sequence variations and tight temporal correlations in the topological structure of the phylogenetic trees. These results suggest that our evolutionary trajectory analysis (ETA) can more effectively pinpoint the evolutionary history of viruses, including the host and geographical location traversed by each segment, when compared against either BLAST or traditional phylogenetic analysis alone. PMID:25064525

  11. Combinatorial protein design strategies using computational methods.

    PubMed

    Kono, Hidetoshi; Wang, Wei; Saven, Jeffery G

    2007-01-01

    Computational methods continue to facilitate efforts in protein design. Most of this work has focused on searching sequence space to identify one or a few sequences compatible with a given structure and functionality. Probabilistic computational methods provide information regarding the range of amino acid variability permitted by desired functional and structural constraints. Such methods may be used to guide the construction of both individual sequences and combinatorial libraries of proteins. PMID:17041256

  12. Computational Methods to Model Persistence.

    PubMed

    Vandervelde, Alexandra; Loris, Remy; Danckaert, Jan; Gelens, Lendert

    2016-01-01

    Bacterial persister cells are dormant cells, tolerant to multiple antibiotics, that are involved in several chronic infections. Toxin-antitoxin modules play a significant role in the generation of such persister cells. Toxin-antitoxin modules are small genetic elements, omnipresent in the genomes of bacteria, which code for an intracellular toxin and its neutralizing antitoxin. In the past decade, mathematical modeling has become an important tool to study the regulation of toxin-antitoxin modules and their relation to the emergence of persister cells. Here, we provide an overview of several numerical methods to simulate toxin-antitoxin modules. We cover both deterministic modeling using ordinary differential equations and stochastic modeling using stochastic differential equations and the Gillespie method. Several characteristics of toxin-antitoxin modules such as protein production and degradation, negative autoregulation through DNA binding, toxin-antitoxin complex formation and conditional cooperativity are gradually integrated in these models. Finally, by including growth rate modulation, we link toxin-antitoxin module expression to the generation of persister cells. PMID:26468111
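    A minimal Gillespie-style stochastic simulation of a toy toxin-antitoxin pair, of the kind the review describes; the four reactions and their rate constants are illustrative assumptions rather than a fitted module.

```python
import math
import random

# Reactions: production and first-order degradation of toxin T and antitoxin A.
k_prod_T, k_prod_A, k_deg = 2.0, 3.0, 0.1
T, A, t = 0, 0, 0.0
while t < 100.0:
    rates = [k_prod_T, k_prod_A, k_deg * T, k_deg * A]
    total = sum(rates)
    t += -math.log(1.0 - random.random()) / total  # exponential waiting time
    r = random.uniform(0, total)                   # pick the next reaction
    if r < rates[0]:
        T += 1                                     # toxin production
    elif r < rates[0] + rates[1]:
        A += 1                                     # antitoxin production
    elif r < rates[0] + rates[1] + rates[2]:
        T -= 1                                     # toxin degradation
    else:
        A -= 1                                     # antitoxin degradation
print(T, A)  # copy numbers near their steady-state means
```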

  13. Computational Methods for Ideal Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Kercher, Andrew D.

    Numerical schemes for ideal magnetohydrodynamics (MHD) are widely used for modeling space weather and astrophysical flows. They are designed to resolve the different waves that propagate through a magnetohydrodynamic fluid, namely, the fast, Alfven, slow, and entropy waves. Numerical schemes for ideal magnetohydrodynamics that are based on the standard finite volume (FV) discretization exhibit pseudo-convergence, in which non-regular waves vanish only after heavy grid refinement. A method is described for obtaining solutions for coplanar and near-coplanar cases that consist of only regular waves, independent of grid refinement. The method, referred to as Compound Wave Modification (CWM), involves removing the flux associated with non-regular structures and can be used for simulations in two and three dimensions because it does not require explicitly tracking an Alfven wave. For a near-coplanar case, and for grids with 2^13 points or fewer, we find root-mean-square errors (RMSEs) that are as much as 6 times smaller. For the coplanar case, in which non-regular structures exist at all levels of grid refinement for standard FV schemes, the RMSE is as much as 25 times smaller. A multidimensional ideal MHD code has been implemented for simulations on graphics processing units (GPUs). Performance measurements were conducted for both the NVIDIA GeForce GTX Titan and the Intel Xeon E5645 processor. The GPU is shown to perform one to two orders of magnitude faster than a single CPU core, and two to three times faster than the CPU running in parallel with OpenMP. Performance comparisons are made for two methods of storing data on the GPU. The first approach stores data as an Array of Structures (AoS), e.g., a point-coordinate array of size 3 x n is iterated over. The second approach stores data as a Structure of Arrays (SoA), e.g., three separate arrays of size n are iterated over simultaneously. For an AoS, memory accesses do not coalesce, reducing memory efficiency

  14. A computational kinematics and evolutionary approach to model molecular flexibility for bionanotechnology

    NASA Astrophysics Data System (ADS)

    Brintaki, Athina N.

    Modeling molecular structures is critical for understanding the principles that govern the behavior of molecules and for facilitating the exploration of potential pharmaceutical drugs and nanoscale designs. Biological molecules are flexible bodies that can adopt many different shapes (or conformations) until they reach a stable molecular state that is usually described by the minimum internal energy. A major challenge in modeling flexible molecules is the exponential explosion in computational complexity as the molecular size increases and many degrees of freedom are considered to represent the molecules' flexibility. This research work proposes a novel generic computational geometric approach, called enhanced BioGeoFilter (g.eBGF), that geometrically interprets inter-atomic interactions to impose geometric constraints during molecular conformational search, reducing the time for identifying chemically-feasible conformations. Two new methods, called Kinematics-Based Differential Evolution (kDE) and Biological Differential Evolution (BioDE), are also introduced to direct the molecular conformational search towards low-energy (stable) conformations. The proposed kDE method kinematically describes a molecule's deformation mechanism while using differential evolution to minimize the intra-molecular energy. The proposed BioDE, on the other hand, utilizes our g.eBGF data structure as a surrogate approximation model to reduce the number of exact evaluations and to speed up the molecular conformational search. This research work will be extremely useful in enabling the modeling of flexible molecules and in facilitating the exploration of nanoscale designs through the virtual assembly of molecules. It can also be used in areas such as molecular docking, protein folding, and nanoscale computer-aided design, where a rapid collision detection scheme for highly deformable objects is essential.
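    For reference, a minimal DE/rand/1/bin differential-evolution loop of the general kind kDE builds on, applied to a sphere objective standing in for the intra-molecular energy; the population size and DE constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, pop_size, F, CR = 6, 30, 0.8, 0.9
pop = rng.uniform(-5, 5, size=(pop_size, dim))
energy = lambda x: np.sum(x ** 2)  # toy stand-in for the energy model

for _ in range(200):
    for i in range(pop_size):
        a, b, c = pop[rng.choice(pop_size, size=3, replace=False)]
        mutant = a + F * (b - c)             # differential mutation
        cross = rng.random(dim) < CR         # binomial crossover mask
        trial = np.where(cross, mutant, pop[i])
        if energy(trial) <= energy(pop[i]):  # greedy selection
            pop[i] = trial
print(min(energy(x) for x in pop))  # near 0
```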

  15. Computational analysis of fitness landscapes and evolutionary networks from in vitro evolution experiments.

    PubMed

    Xulvi-Brunet, Ramon; Campbell, Gregory W; Rajamani, Sudha; Jiménez, José I; Chen, Irene A

    2016-08-15

    In vitro selection experiments in biochemistry allow for the discovery of novel molecules capable of specific desired biochemical functions. However, this is not the only benefit we can obtain from such selection experiments. Since selection from a random library yields an unprecedented, and sometimes comprehensive, view of how a particular biochemical function is distributed across sequence space, selection experiments also provide data for creating and analyzing molecular fitness landscapes, which directly map function (phenotypes) to sequence information (genotypes). Given the importance of understanding the relationship between sequence and functional activity, reliable methods to build and analyze fitness landscapes are needed. Here, we present some statistical methods to extract this information from pools of RNA molecules. We also provide new computational tools to construct and study molecular fitness landscapes. PMID:27211010

  16. Transonic wing analysis using advanced computational methods

    NASA Technical Reports Server (NTRS)

    Henne, P. A.; Hicks, R. M.

    1978-01-01

    This paper discusses the application of three-dimensional computational transonic flow methods to several different types of transport wing designs. The purpose of these applications is to evaluate the basic accuracy and limitations associated with such numerical methods. The use of such computational methods for practical engineering problems can only be justified after favorable evaluations are completed. The paper summarizes a study of both the small-disturbance and the full potential technique for computing three-dimensional transonic flows. Computed three-dimensional results are compared to both experimental measurements and theoretical results. Comparisons are made not only of pressure distributions but also of lift and drag forces. Transonic drag rise characteristics are compared. Three-dimensional pressure distributions and aerodynamic forces, computed from the full potential solution, compare reasonably well with experimental results for a wide range of configurations and flow conditions.

  17. Spectral Methods for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Streett, C. L.; Hussaini, M. Y.

    1994-01-01

    As a tool for large-scale computations in fluid dynamics, spectral methods were prophesied in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably. The motivation for the use of spectral

  18. Computational Chemistry Using Modern Electronic Structure Methods

    ERIC Educational Resources Information Center

    Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert

    2007-01-01

    Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now be carried out readily, even for large molecules.

  19. Computational methods for global/local analysis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.

  20. Detection and characterization of flaws in CFRP by SQUID gradiometer using evolutionary computation

    NASA Astrophysics Data System (ADS)

    Kojima, F.; Kawai, R.; Kasai, N.; Hatsukade, H.

    2002-05-01

    This paper is concerned with the quantitative nondestructive evaluation of carbon fiber reinforced plastic (CFRP) using a low-temperature superconductor (LTS) SQUID gradiometer. The evaluation is implemented by applying alternating-current injection to a CFRP sample with a defect. A 3D finite element code is developed for analyzing the SQUID-based nondestructive evaluation system. Using evolutionary programming, an efficient inverse scheme is proposed for recovering flaws in CFRP. The proposed algorithm is tested on the detection and characterization of flaws in CFRP samples.

  1. Evolutionary Analysis of Dengue Serotype 2 Viruses Using Phylogenetic and Bayesian Methods from New Delhi, India

    PubMed Central

    Afreen, Nazia; Naqvi, Irshad H.; Broor, Shobha; Ahmed, Anwar; Kazim, Syed Naqui; Dohare, Ravins; Kumar, Manoj; Parveen, Shama

    2016-01-01

    Dengue fever is the most important arboviral disease in the tropical and sub-tropical countries of the world. Delhi, the metropolitan capital state of India, has reported many dengue outbreaks, with the last outbreak occurring in 2013. We have recently reported a predominance of dengue virus serotype 2 during 2011–2014 in Delhi. In the present study, we report the molecular characterization and evolutionary analysis of dengue serotype 2 viruses detected in 2011–2014 in Delhi. Envelope genes of 42 DENV-2 strains were sequenced in the study. All DENV-2 strains grouped within the Cosmopolitan genotype and further clustered into three lineages: Lineage I, II and III. Lineage III replaced Lineage I during the dengue fever outbreak of 2013. Further, a novel mutation, Thr404Ile, was detected in the stem region of the envelope protein of a single DENV-2 strain in 2014. The nucleotide substitution rate and the time to the most recent common ancestor were determined by molecular clock analysis using Bayesian methods. A change in the effective population size of Indian DENV-2 viruses was investigated through a Bayesian skyline plot. The study will be a vital road map for investigating the epidemiology and evolutionary patterns of dengue viruses in India. PMID:26977703

  2. A Segmentation-Based Method to Extract Structural and Evolutionary Features for Protein Fold Recognition.

    PubMed

    Dehzangi, Abdollah; Paliwal, Kuldip; Lyons, James; Sharma, Alok; Sattar, Abdul

    2014-01-01

    Protein fold recognition (PFR) is considered an important step towards the protein structure prediction problem. Despite all the efforts made so far, finding an accurate and fast computational approach to solve PFR remains a challenging problem for bioinformatics and computational biology. In this study, we propose a segmentation-based feature extraction technique to provide the local evolutionary information embedded in the position specific scoring matrix (PSSM) and the structural information embedded in the secondary structure of proteins predicted using SPINE-X. We also employ the concept of occurrence features to extract global discriminatory information from the PSSM and SPINE-X. By applying a support vector machine (SVM) to our extracted features, we enhance protein fold prediction accuracy by 7.4 percent over the best results reported in the literature. We also report 73.8 percent prediction accuracy for a data set consisting of proteins with less than 25 percent sequence similarity and 80.7 percent prediction accuracy for a data set with proteins belonging to 110 folds with less than 40 percent sequence similarity. We also investigate the relation between the number of folds and the number of features used, and show that the number of features should be increased to obtain better protein fold prediction results when the number of folds is relatively large. PMID:26356019

  3. Improving hospital bed occupancy and resource utilization through queuing modeling and evolutionary computation.

    PubMed

    Belciug, Smaranda; Gorunescu, Florin

    2015-02-01

    Scarce healthcare resources require carefully made policies ensuring optimal bed allocation, quality healthcare service, and adequate financial support. This paper proposes a complex analysis of resource allocation in a hospital department by integrating in the same framework a queuing system, a compartmental model, and an evolutionary-based optimization. The queuing system shapes the flow of patients through the hospital, the compartmental model offers a feasible structure of the hospital department in accordance with the queuing characteristics, and the evolutionary paradigm provides the means to optimize bed-occupancy management and resource utilization using a genetic algorithm approach. The paper also presents a "what-if" analysis providing a flexible tool to explore the effects on the outcomes of the queuing system and resource utilization of systematic changes in the input parameters. The methodology is illustrated using a simulation based on real data collected from the geriatric department of a hospital in London, UK. In addition, the paper explores the possibility of adapting the methodology to different medical departments (surgery, stroke, and mental illness). Moreover, the paper also focuses on the practical use of the model from the healthcare point of view, by presenting a simulated application. PMID:25433363

  4. Computational Methods for Rough Classification and Discovery.

    ERIC Educational Resources Information Center

    Bell, D. A.; Guan, J. W.

    1998-01-01

    Rough set theory is a new mathematical tool to deal with vagueness and uncertainty. Computational methods are presented for using rough sets to identify classes in datasets, finding dependencies in relations, and discovering rules which are hidden in databases. The methods are illustrated with a running example from a database of car test results.…
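    A small sketch of the core rough-set constructs on an assumed toy dataset: objects are grouped into indiscernibility classes by their attribute values, and a target class is bracketed by its lower (certain) and upper (possible) approximations.

```python
from collections import defaultdict

objects = {  # object -> attribute values, e.g. simplified car test results
    "car1": ("high", "heavy"), "car2": ("high", "heavy"),
    "car3": ("low", "light"),  "car4": ("low", "heavy"),
}
target = {"car1", "car3"}  # e.g. cars that passed the test

classes = defaultdict(set)
for obj, attrs in objects.items():
    classes[attrs].add(obj)  # indiscernibility classes

# Lower approximation: classes entirely inside the target set.
lower = set().union(*(c for c in classes.values() if c <= target))
# Upper approximation: classes that overlap the target set at all.
upper = set().union(*(c for c in classes.values() if c & target))

print("lower:", lower)  # certainly in the class: {'car3'}
print("upper:", upper)  # possibly in the class: {'car1', 'car2', 'car3'}
```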

  5. Updated Panel-Method Computer Program

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1995-01-01

    Panel code PMARC_12 (Panel Method Ames Research Center, version 12) computes potential-flow fields around complex three-dimensional bodies such as complete aircraft models. Contains several advanced features, including internal mathematical modeling of flow, a time-stepping wake model for simulating either steady or unsteady motions, Trefftz-plane computation of induced drag, computation of off-body and on-body streamlines, and computation of boundary-layer parameters by use of a two-dimensional integral boundary-layer method along surface streamlines. Investigators interested in visual representations of phenomena may want to consider obtaining program GVS (ARC-13361), General Visualization System. GVS is a Silicon Graphics IRIS program created to support the scientific-visualization needs of PMARC_12. GVS is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input.

  6. Computing discharge using the index velocity method

    USGS Publications Warehouse

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression
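    A schematic of the two-rating computation described above, with assumed rating coefficients and channel geometry; in practice both ratings are developed by regression against field measurements.

```python
def mean_velocity(v_index):
    """Index velocity rating: mean channel velocity from ADVM index velocity."""
    a, b = 0.05, 1.10            # assumed regression coefficients
    return a + b * v_index       # ft/s

def area(stage):
    """Stage-area rating from the surveyed standard cross section."""
    width, invert = 120.0, 1.5   # assumed channel geometry
    return width * max(stage - invert, 0.0)  # square feet

def discharge(v_index, stage):
    return mean_velocity(v_index) * area(stage)  # Q = V * A, cfs

print(discharge(v_index=2.3, stage=4.8))
```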

  7. Computational stoning method for surface defect detection

    NASA Astrophysics Data System (ADS)

    Ma, Ninshu; Zhu, Xinhai

    2013-12-01

    Surface defects on the outer panels of automotive bodies must be controlled in order to improve surface quality. The detection and quantitative evaluation of surface defects are difficult because their deflection is very small. One method used in factories to detect surface defects is stoning, in which a stone block is moved over the surface of a stamped panel. The authors developed a computational stoning method to detect surface-low defects, based on a geometric contact algorithm between a stone block and a stamped panel. If the surface is convex, the stone block always contacts the convex surface of the stamped panel and the contact gap between them is zero. If there is a surface low, the stone block does not contact the surface and the contact gap can be computed by the contact algorithm. A convex surface defect can also be detected by applying the computational stoning method to the back surface of the stamped panel. By performing stoning computations from both the normal surface and the back surface, not only the depth of surface-low defects but also the height of convex surface defects can be detected, and defects can be detected along multiple directions. Surface defects on the handle emboss of outer panels were accurately detected using the computational stoning method and compared with the real shape, with very good accuracy.
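    One way to picture the contact computation is in one dimension: a rigid stone resting on a surface profile touches only the profile's upper convex hull, so the hull-to-profile distance gives the contact gap and hence the surface-low depth. The profile below, with a shallow assumed dent, is illustrative.

```python
import numpy as np

# Panel profile: gentle slope plus a shallow dent (a "surface low").
x = np.linspace(0.0, 100.0, 201)                       # mm along the panel
z = 0.001 * x - 0.004 * np.exp(-((x - 50) / 8) ** 2)   # mm height

def upper_hull(x, z):
    """Andrew's monotone chain, upper hull only (points sorted by x)."""
    hull = []
    for p in zip(x, z):
        while len(hull) >= 2 and (
            (hull[-1][0] - hull[-2][0]) * (p[1] - hull[-2][1])
            - (hull[-1][1] - hull[-2][1]) * (p[0] - hull[-2][0])
        ) >= 0:
            hull.pop()  # drop points that lie below the chord
        hull.append(p)
    return np.array(hull)

hull = upper_hull(x, z)
gap = np.interp(x, hull[:, 0], hull[:, 1]) - z  # stone-to-panel contact gap
print(f"max surface-low depth: {gap.max() * 1000:.1f} um "
      f"at x = {x[gap.argmax()]:.0f} mm")
```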

  9. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
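
    A toy version of this fixed-interval, scalable-work benchmark is sketched below; the task (refining a midpoint-rule estimate of pi at ever-finer resolution) and the interval length are illustrative stand-ins for the patent's scalable task set.

        import time

        def benchmark(interval_s=1.0):
            """Run ever-finer work units until the time allotment expires;
            the rating is the degree of resolution reached, not the runtime
            of a fixed-size problem."""
            deadline = time.monotonic() + interval_s
            n, pi_est = 1, 0.0
            while time.monotonic() < deadline:
                # One "task": midpoint-rule estimate of pi with n rectangles.
                h = 1.0 / n
                pi_est = sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(n)) * h
                n *= 2                        # the next task doubles the resolution
            return n // 2, pi_est             # resolution completed in the interval

        resolution, estimate = benchmark()
        print("resolution:", resolution, "pi ~", estimate)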

  10. Allosaurus, crocodiles, and birds: evolutionary clues from spiral computed tomography of an endocast.

    PubMed

    Rogers, S W

    1999-10-15

    Because the brain does not usually leave direct evidence of its existence in the fossil record, our view of this structure in extinct species has relied upon inferences drawn from comparisons between parts of the skeleton that do fossilize or with modern-day relatives that survived extinction. However, soft-tissue structure preservation may indeed occasionally occur, particularly in the endocranial space. By applying modern imaging and analysis methods to such natural cranial "endocasts," we can now learn more than ever thought possible about the brains of extinct species. I will discuss one such example in which spiral computed tomography (CT) scanning analysis has been successfully applied to reveal preserved internal structures of a naturally occurring endocranial cast of Allosaurus fragilis, the dominant carnivorous dinosaur of the late Jurassic period. The ability to directly examine the neuroanatomy of an extinct dinosaur, whose modern-day relatives are birds and crocodiles, has exciting implications for our understanding of Allosaurus' behavior, its adaptive responses to its environment, and its eventual extinction. PMID:10597341

  11. Computational Analysis of the Predicted Evolutionary Conservation of Human Phosphorylation Sites

    PubMed Central

    Trost, Brett; Kusalik, Anthony; Napper, Scott

    2016-01-01

    Protein kinase-mediated phosphorylation is among the most important post-translational modifications. However, few phosphorylation sites have been experimentally identified for most species, making it difficult to determine the degree to which phosphorylation sites are conserved. The goal of this study was to use computational methods to characterize the conservation of human phosphorylation sites in a wide variety of eukaryotes. Using experimentally-determined human sites as input, homologous phosphorylation sites were predicted in all 432 eukaryotes for which complete proteomes were available. For each pair of species, we calculated phosphorylation site conservation as the number of phosphorylation sites found in both species divided by the number found in at least one of the two species. A clustering of the species based on this conservation measure was concordant with phylogenies based on traditional genomic measures. For a subset of the 432 species, phosphorylation site conservation was compared to conservation of both protein kinases and proteins in general. Protein kinases exhibited the highest degree of conservation, while general proteins were less conserved and phosphorylation sites were least conserved. Although preliminary, these data tentatively suggest that variation in phosphorylation sites may play a larger role in explaining phenotypic differences among organisms than differences in the complements of protein kinases or general proteins. PMID:27046079
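
    The pairwise conservation measure described above is the Jaccard index over the two species' predicted site sets; a minimal sketch follows, with toy site identifiers standing in for real predictions.

        def site_conservation(sites_a, sites_b):
            """Sites found in both species divided by sites found in at
            least one of the two species (the Jaccard index)."""
            a, b = set(sites_a), set(sites_b)
            union = a | b
            return len(a & b) / len(union) if union else 0.0

        # Toy data: phosphosites keyed by (orthologous protein, position).
        human = {("TP53", 15), ("TP53", 392), ("CDK1", 161)}
        mouse = {("TP53", 15), ("CDK1", 161), ("CDK1", 14)}
        print(site_conservation(human, mouse))  # 0.5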

  12. Comparison of Evolutionary (Genetic) Algorithm and Adjoint Methods for Multi-Objective Viscous Airfoil Optimizations

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function-evaluation code, the steady-state explicit part of the code ARC2D. The parameterization of the design space is a common B-spline approach for the airfoil surface, which, together with a common gridding approach, restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy, and design consistency.

  13. Kerf modelling in abrasive waterjet milling using evolutionary computation and ANOVA techniques

    NASA Astrophysics Data System (ADS)

    Alberdi, A.; Rivero, A.; Carrascal, A.; Lamikiz, A.

    2012-04-01

    Many researchers have demonstrated the capability of Abrasive Waterjet (AWJ) technology for precision milling operations. However, the concurrence of several input parameters, along with the stochastic nature of this technology, leads to complex process control and requires work focused on process modelling. This research work introduces a model to predict the kerf shape in AWJ slot milling of Aluminium 7075-T651 in terms of four important process parameters: the pressure, the abrasive flow rate, the stand-off distance, and the traverse feed rate. A hybrid evolutionary approach was employed for kerf shape modelling. This technique characterizes the profile through two parameters: the maximum cutting depth and the full width at half maximum. In addition, based on ANOVA and regression techniques, these two parameters were also modelled as functions of the process parameters. Combining both models yielded an adequate strategy to predict the kerf shape under different machining conditions.
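
    As an illustration of reconstructing a kerf from the two fitted shape parameters, the sketch below assumes a Gaussian-like trough, a common idealization that is not necessarily the paper's profile model; the regression coefficients linking process parameters to depth and FWHM are likewise invented.

        import numpy as np

        def kerf_profile(x, depth_max, fwhm):
            """Idealized kerf cross-section: a Gaussian trough fully
            determined by maximum depth and full width at half maximum."""
            sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
            return -depth_max * np.exp(-0.5 * (x / sigma) ** 2)

        # Hypothetical regression outputs linking process parameters to the
        # two shape parameters (stand-ins for the ANOVA-based models).
        def depth_from_process(pressure_MPa, feed_mm_s):
            return 0.004 * pressure_MPa - 0.0006 * feed_mm_s       # mm

        def fwhm_from_process(standoff_mm, abrasive_g_s):
            return 0.8 + 0.05 * standoff_mm + 0.02 * abrasive_g_s  # mm

        x = np.linspace(-2.0, 2.0, 401)   # mm across the slot
        z = kerf_profile(x, depth_from_process(300, 5), fwhm_from_process(3, 4))
        print("predicted max depth (mm):", -z.min())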

  14. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  15. Semiempirical methods for computing turbulent flows

    NASA Technical Reports Server (NTRS)

    Belov, I. A.; Ginzburg, I. P.

    1986-01-01

    Two semiempirical theories which provide a basis for determining the turbulent friction and heat exchange near a wall are presented: (1) the Prandtl-Karman theory, and (2) the theory utilizing an equation for the energy of turbulent pulsations. A comparison is made between exact numerical methods and approximate integral methods for computing the turbulent boundary layers in the presence of pressure, blowing, or suction gradients. Using the turbulent flow around a plate as an example, it is shown that, when computing turbulent flows with external turbulence, it is preferable to construct a turbulence model based on the equation for energy of turbulent pulsations.

  16. An Efficient Method for Computing All Reducts

    NASA Astrophysics Data System (ADS)

    Bao, Yongguang; Du, Xiaoyong; Deng, Mingrong; Ishii, Naohiro

    In the process of data mining a decision table using Rough Sets methodology, the main computational effort is associated with the determination of reducts. Computing all reducts is a combinatorial NP-hard problem. Therefore, the only way to achieve faster execution is to provide an algorithm, with a better constant factor, which may solve this problem in reasonable time for real-life data sets. The purpose of this presentation is to propose two new efficient algorithms for computing reducts in information systems. The proposed algorithms are based on properties of reducts and the relation between a reduct and the discernibility matrix. Experiments measuring execution time have been conducted on several real-world domains. The results show that the proposed algorithms improve execution time when compared with other methods. In real applications, the two proposed algorithms can be combined.
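
    The discernibility matrix on which such algorithms rest is simple to compute directly; the sketch below builds it for a toy decision table and is not the authors' optimized algorithm.

        def discernibility_matrix(objects, attributes, decision):
            """For each pair of objects with different decision values,
            record the set of condition attributes on which they differ."""
            matrix = {}
            for i in range(len(objects)):
                for j in range(i + 1, len(objects)):
                    if decision[i] == decision[j]:
                        continue
                    differing = {a for a in attributes
                                 if objects[i][a] != objects[j][a]}
                    matrix[(i, j)] = differing
            return matrix

        # Toy decision table: condition attributes a, b; decision d.
        objs = [{"a": 1, "b": 0}, {"a": 1, "b": 1}, {"a": 0, "b": 1}]
        dec = [0, 1, 1]
        for pair, attrs in discernibility_matrix(objs, ["a", "b"], dec).items():
            print(pair, attrs)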

  17. Evolutionary methods for multidisciplinary optimization applied to the design of UAV systems

    NASA Astrophysics Data System (ADS)

    Gonzalez, L. F.; Periaux, J.; Damp, L.; Srinivas, K.

    2007-10-01

    The implementation and use of a framework in which engineering optimization problems can be analysed are described. In the first part, the foundations of the framework and the hierarchical asynchronous parallel multi-objective evolutionary algorithms (HAPMOEAs) are presented. These are based upon evolution strategies and incorporate the concepts of multi-objective optimization, hierarchical topology, asynchronous evaluation of candidate solutions, and parallel computing. The methodology is presented first and the potential of HAPMOEAs for solving multi-criteria optimization problems is demonstrated on test case problems of increasing difficulty. In the second part of the article several recent applications of multi-objective and multidisciplinary optimization (MO) are described. These illustrate the capabilities of the framework and methodology for the design of UAV and UCAV systems. The application presented deals with a two-objective (drag and weight) UAV wing plan-form optimization. The basic concepts are refined and more sophisticated software and design tools with low- and high-fidelity CFD and FEA models are introduced. Various features described in the text are used to meet the challenge in optimization presented by these test cases.

  18. Computational methods for inlet airframe integration

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.

    1988-01-01

    Fundamental equations encountered in computational fluid dynamics (CFD), and analyses used for internal flow are introduced. Irrotational flow; Euler equations; boundary layers; parabolized Navier-Stokes equations; and time averaged Navier-Stokes equations are treated. Assumptions made and solution methods are outlined, with examples. The overall status of CFD in propulsion is indicated.

  19. Efficient Methods to Compute Genomic Predictions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...

  20. Applying Human Computation Methods to Information Science

    ERIC Educational Resources Information Center

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  1. Computational Methods for Structural Mechanics and Dynamics

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.

  2. Borg: an auto-adaptive many-objective evolutionary computing framework.

    PubMed

    Hadka, David; Reed, Patrick

    2013-01-01

    This study introduces the Borg multi-objective evolutionary algorithm (MOEA) for many-objective, multimodal optimization. The Borg MOEA combines ε-dominance, a measure of convergence speed named ε-progress, randomized restarts, and auto-adaptive multioperator recombination into a unified optimization framework. A comparative study on 33 instances of 18 test problems from the DTLZ, WFG, and CEC 2009 test suites demonstrates that Borg meets or exceeds six state-of-the-art MOEAs on the majority of the tested problems. The performance for each test problem is evaluated using a 1,000-point Latin hypercube sampling of each algorithm's feasible parameterization space. The statistical performance of every sampled MOEA parameterization is evaluated using 50 replicate random seed trials. The Borg MOEA is not a single algorithm; instead it represents a class of algorithms whose operators are adaptively selected based on the problem. The adaptive discovery of key operators is of particular importance for benchmarking how variation operators enhance search for complex many-objective problems. PMID:22385134
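
    Of the listed components, ε-dominance is the easiest to show in isolation. The sketch below implements the usual ε-box dominance test for minimization as one plausible reading; it is not code from the Borg MOEA.

        import math

        def eps_box(objectives, eps):
            """Map an objective vector to its epsilon-box index."""
            return tuple(math.floor(f / e) for f, e in zip(objectives, eps))

        def eps_dominates(a, b, eps):
            """True if a's epsilon-box weakly dominates b's box in every
            objective and strictly improves in at least one (minimization)."""
            box_a, box_b = eps_box(a, eps), eps_box(b, eps)
            return (all(x <= y for x, y in zip(box_a, box_b))
                    and any(x < y for x, y in zip(box_a, box_b)))

        eps = (0.1, 0.1)
        print(eps_dominates((0.12, 0.33), (0.27, 0.41), eps))  # True
        print(eps_dominates((0.12, 0.33), (0.15, 0.38), eps))  # False: same boxes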

  3. An Exploratory Framework for Combining CFD Analysis and Evolutionary Optimization into a Single Integrated Computational Environment

    SciTech Connect

    McCorkle, Douglas S.; Bryden, Kenneth M.

    2011-01-01

    Several recent reports and workshops have identified integrated computational engineering as an emerging technology with the potential to transform engineering design. The goal is to integrate geometric models, analyses, simulations, optimization and decision-making tools, and all other aspects of the engineering process into a shared, interactive computer-generated environment that facilitates multidisciplinary and collaborative engineering. While integrated computational engineering environments can be constructed from scratch with high-level programming languages, the complexity of these proposed environments makes this type of approach prohibitively slow and expensive. Rather, a high-level software framework is needed to provide the user with the capability to construct an application in an intuitive manner using existing models and engineering tools with minimal programming. In this paper, we present an exploratory open source software framework that can be used to integrate the geometric models, computational fluid dynamics (CFD), and optimization tools needed for shape optimization of complex systems. This framework is demonstrated using the multiphase flow analysis of a complete coal transport system for an 800 MW pulverized coal power station. The framework uses engineering objects and three-dimensional visualization to enable the user to interactively design and optimize the performance of the coal transport system.

  4. Shifted power method for computing tensor eigenvalues.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-07-01

    Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed-point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
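
    For a symmetric order-3 tensor the iteration is compact in numpy. The sketch below follows the update x <- normalize(A x^(m-1) + alpha*x); the test tensor and shift are chosen only for illustration.

        import numpy as np

        def ss_hopm(A, alpha=1.0, iters=200, seed=0):
            """Shifted symmetric higher-order power method, order m = 3.
            Iterates x <- normalize(A x^(m-1) + alpha * x); for alpha large
            enough the iteration converges to a tensor eigenpair."""
            rng = np.random.default_rng(seed)
            x = rng.standard_normal(A.shape[0])
            x /= np.linalg.norm(x)
            for _ in range(iters):
                Axx = np.einsum("ijk,j,k->i", A, x, x)   # A x^(m-1)
                x_new = Axx + alpha * x
                x = x_new / np.linalg.norm(x_new)
            lam = np.einsum("ijk,i,j,k->", A, x, x, x)   # eigenvalue lambda
            return lam, x

        # Symmetrize a random tensor so A is symmetric in all indices.
        rng = np.random.default_rng(1)
        T = rng.standard_normal((4, 4, 4))
        A = sum(np.transpose(T, p) for p in
                [(0,1,2), (0,2,1), (1,0,2), (1,2,0), (2,0,1), (2,1,0)]) / 6.0
        lam, x = ss_hopm(A, alpha=2.0)
        print("lambda ~", lam)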

  5. Shifted power method for computing tensor eigenpairs.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-10-01

    Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed-point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.

  6. Computational Thermochemistry and Benchmarking of Reliable Methods

    SciTech Connect

    Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

    2006-06-20

    During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Møller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats of formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.

  7. Computational methods for industrial radiation measurement applications

    SciTech Connect

    Gardner, R.P.; Guo, P.; Ao, Q.

    1996-12-31

    Computational methods have been used with considerable success to complement radiation measurements in solving a wide range of industrial problems. The almost exponential growth of computer capability and applications in the last few years leads to a "black box" mentality for radiation measurement applications. If a black box is defined as any radiation measurement device that is capable of measuring the parameters of interest when a wide range of operating and sample conditions may occur, then the development of computational methods for industrial radiation measurement applications should now be focused on the black box approach and the deduction of properties of interest from the response with acceptable accuracy and reasonable efficiency. Nowadays, increasingly better understanding of radiation physical processes, more accurate and complete fundamental physical data, and more advanced modeling and software/hardware techniques have made it possible to make giant strides in that direction with new ideas implemented with computer software. The Center for Engineering Applications of Radioisotopes (CEAR) at North Carolina State University has been working on a variety of projects in the area of radiation analyzers and gauges for accomplishing this for quite some time, and they are discussed here with emphasis on current accomplishments.

  8. Co-evolutionary Models for Reconstructing Ancestral Genomic Sequences: Computational Issues and Biological Examples

    NASA Astrophysics Data System (ADS)

    Tuller, Tamir; Birin, Hadas; Kupiec, Martin; Ruppin, Eytan

    The inference of ancestral genomes is a fundamental problem in molecular evolution. Due to the statistical nature of this problem, the most likely or the most parsimonious ancestral genomes usually include considerable error rates. In general, these errors cannot be abolished by utilizing more exhaustive computational approaches, by using longer genomic sequences, or by analyzing more taxa. In recent studies we showed that co-evolution is an important force that can be used for significantly improving the inference of ancestral genome content.

  9. Computational methods for ideal compressible flow

    NASA Technical Reports Server (NTRS)

    Vanleer, B.

    1983-01-01

    Conservative dissipative difference schemes for computing one-dimensional flow are introduced, and the recognition and representation of flow discontinuities are discussed. Multidimensional methods are outlined. Second-order finite volume schemes are introduced. Conversion of difference schemes for a single linear convection equation into schemes for the hyperbolic system of the nonlinear conservation laws of ideal compressible flow is explained. Approximate Riemann solvers are presented. Monotone initial-value interpolation, as well as limiters, switches, and artificial dissipation, are considered.

  10. Computational methods for vortex dominated compressible flows

    NASA Technical Reports Server (NTRS)

    Murman, Earll M.

    1987-01-01

    The principal objectives were to: understand the mechanisms by which Euler equation computations model leading edge vortex flows; understand the vortical and shock wave structures that may exist for different wing shapes, angles of incidence, and Mach numbers; and compare calculations with experiments in order to ascertain the limitations and advantages of Euler equation models. The initial approach utilized the cell centered finite volume Jameson scheme. The final calculation utilized a cell vertex finite volume method on an unstructured grid. Both methods used Runge-Kutta four stage schemes for integrating the equations. The principal findings are briefly summarized.

  11. Analytic Method for Computing Instrument Pointing Jitter

    NASA Technical Reports Server (NTRS)

    Bayard, David

    2003-01-01

    A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor-output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
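
    One standard state-space route to an exact rms value, in the spirit of the approach described (though not the Tech Brief's specific jitter definition), is to solve a continuous-time Lyapunov equation for the steady-state covariance and read off the output variance; the system matrices below are invented for illustration.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        # Hypothetical stable pointing-loop model: dx = F x dt + G dw,
        # jitter output y = C x (e.g., line-of-sight angle).
        F = np.array([[0.0, 1.0],
                      [-4.0, -0.8]])        # stiffness/damping, invented
        G = np.array([[0.0], [1.0]])        # noise input channel
        q = 1e-4                            # white-noise intensity
        C = np.array([[1.0, 0.0]])

        # Steady-state covariance P solves F P + P F^T + G q G^T = 0.
        P = solve_continuous_lyapunov(F, -G @ (q * G.T))
        rms_jitter = np.sqrt((C @ P @ C.T).item())
        print("rms pointing jitter:", rms_jitter)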

  12. Probabilistic Computational Methods in Structural Failure Analysis

    NASA Astrophysics Data System (ADS)

    Krejsa, Martin; Kralik, Juraj

    2015-12-01

    Probabilistic methods are used in engineering where a computational model contains random variables. Each random variable in the probabilistic calculations carries uncertainties. Typical sources of uncertainties are properties of the material and production and/or assembly inaccuracies in the geometry or the environment where the structure is to be located. The paper focuses on methods for calculating failure probabilities in structural failure and reliability analysis, with special attention to the newly developed probabilistic method Direct Optimized Probabilistic Calculation (DOProC), which is highly efficient in terms of calculation time and solution accuracy. The novelty of the proposed method lies in an optimized numerical integration that does not require any simulation technique. The algorithm has been implemented in software applications and has been used several times in probabilistic tasks and probabilistic reliability assessments.

  13. Computer optimization techniques for NASA Langley's CSI evolutionary model's real-time control system

    NASA Technical Reports Server (NTRS)

    Elliott, Kenny B.; Ugoletti, Roberto; Sulla, Jeff

    1992-01-01

    The evolution and optimization of a real-time digital control system is presented. The control system is part of a testbed used to perform focused technology research on the interactions of spacecraft platform and instrument controllers with the flexible-body dynamics of the platform and platform appendages. The control system consists of Computer Automated Measurement and Control (CAMAC) standard data acquisition equipment interfaced to a workstation computer. The goal of this work is to optimize the control system's performance to support controls research using controllers with up to 50 states and frame rates above 200 Hz. The original system could support a 16-state controller operating at a rate of 150 Hz. By using simple yet effective software improvements, Input/Output (I/O) latencies and contention problems are reduced or eliminated in the control system. The final configuration can support a 16-state controller operating at 475 Hz. Effectively the control system's performance was increased by a factor of 3.

  14. Numerical methods for problems in computational aeroacoustics

    NASA Astrophysics Data System (ADS)

    Mead, Jodi Lorraine

    1998-12-01

    A goal of computational aeroacoustics is the accurate calculation of noise from a jet in the far field. This work concerns the numerical aspects of accurately calculating acoustic waves over large distances and long times. More specifically, the stability, efficiency, accuracy, dispersion, and dissipation of spatial discretizations, time-stepping schemes, and absorbing boundaries for the direct solution of wave propagation problems are determined. Efficient finite difference methods developed by Tam and Webb, which minimize dispersion and dissipation, are commonly used for the spatial and temporal discretization. Alternatively, high-order pseudospectral methods can be made more efficient by using the grid transformation introduced by Kosloff and Tal-Ezer. Work in this dissertation confirms that the grid transformation introduced by Kosloff and Tal-Ezer is not spectrally accurate because, in the limit, the grid transformation forces zero derivatives at the boundaries. If a small number of grid points are used, it is shown that approximations with the Chebyshev pseudospectral method with the Kosloff and Tal-Ezer grid transformation are as accurate as with the Chebyshev pseudospectral method. This result is based on the analysis of the phase and amplitude errors of these methods, and their use for the solution of a benchmark problem in computational aeroacoustics. For the grid-transformed Chebyshev method with a small number of grid points it is, however, more appropriate to compare its accuracy with that of high-order finite difference methods. This comparison, at an accuracy of 10^-3 for a benchmark problem in computational aeroacoustics, is performed for the grid-transformed Chebyshev method and the fourth-order finite difference method of Tam. Solutions with the finite difference method are as accurate as, and the finite difference method is more efficient than, the Chebyshev pseudospectral method with the grid transformation. The efficiency of the Chebyshev
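
    The Kosloff and Tal-Ezer transformation discussed above maps the Chebyshev points through an arcsine stretching, trading spectral accuracy for a larger minimum grid spacing; a short sketch follows, with the stretching parameter chosen arbitrarily.

        import numpy as np

        def chebyshev_points(n):
            """Standard Chebyshev-Gauss-Lobatto points on [-1, 1]."""
            return np.cos(np.pi * np.arange(n + 1) / n)

        def kosloff_tal_ezer(n, alpha=0.99):
            """Arcsine-stretched grid x = arcsin(alpha * xi) / arcsin(alpha):
            as alpha -> 1 the points approach uniform spacing, easing the
            O(1/n^2) minimum spacing of the Chebyshev grid."""
            xi = chebyshev_points(n)
            return np.arcsin(alpha * xi) / np.arcsin(alpha)

        n = 16
        print("min Chebyshev spacing:  ", np.min(np.abs(np.diff(chebyshev_points(n)))))
        print("min transformed spacing:", np.min(np.abs(np.diff(kosloff_tal_ezer(n)))))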

  15. Delamination detection using methods of computational intelligence

    NASA Astrophysics Data System (ADS)

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

    A reliable delamination prediction scheme is indispensable in order to prevent potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and hence such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e., changes in natural frequencies) are considered as inputs to identify the existence, location, and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying tool for analysis turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets. K-means clustering is effectively employed to reduce the size of the datasets. ANN is also used via inverse modeling to determine the position, size, and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.
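
    To make the surrogate idea concrete, the sketch below trains a small regularized network to map frequency shifts to a delamination size on synthetic data; the data generator, the network size, and the use of an L2 penalty as a stand-in for Bayesian regularization are all assumptions, not the authors' setup.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Synthetic stand-in for FE results: delamination size (cm) ->
        # relative shifts of the first four natural frequencies.
        def freq_shifts(size_cm):
            base = np.array([0.010, 0.018, 0.025, 0.031])
            return base * size_cm + 0.001 * rng.standard_normal(4)

        sizes = rng.uniform(0.5, 5.0, 300)
        X = np.array([freq_shifts(s) for s in sizes])
        y = sizes

        # L2 penalty (alpha) as a simple stand-in for Bayesian regularization.
        model = MLPRegressor(hidden_layer_sizes=(16, 16), alpha=1e-3,
                             max_iter=5000, random_state=0).fit(X, y)

        # Inverse use: measured shifts -> estimated delamination size.
        measured = freq_shifts(2.7)
        print("estimated size (cm):", model.predict(measured.reshape(1, -1))[0])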

  16. Computational Statistical Methods for Social Network Models

    PubMed Central

    Hunter, David R.; Krivitsky, Pavel N.; Schweinberger, Michael

    2013-01-01

    We review the broad range of recent statistical work in social network models, with emphasis on computational aspects of these methods. Particular focus is applied to exponential-family random graph models (ERGM) and latent variable models for data on complete networks observed at a single time point, though we also briefly review many methods for incompletely observed networks and networks observed at multiple time points. Although we mention far more modeling techniques than we can possibly cover in depth, we provide numerous citations to current literature. We illustrate several of the methods on a small, well-known network dataset, Sampson’s monks, providing code where possible so that these analyses may be duplicated. PMID:23828720

  17. Review of Computational Stirling Analysis Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.

    2004-01-01

    Nuclear thermal to electric power conversion carries the promise of longer duration missions and higher scientific data transmission rates back to Earth for both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines can be developed if the losses inherent in current designs could be better understood. However, these engines are difficult to instrument, and so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted, and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of Stirling loss understanding may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra HI-FI technique, is presented in detail.

  18. Computational Fluid Dynamics-Based Design Optimization Method for Archimedes Screw Blood Pumps.

    PubMed

    Yu, Hai; Janiga, Gábor; Thévenin, Dominique

    2016-04-01

    An optimization method suitable for improving the performance of Archimedes screw axial rotary blood pumps is described in the present article. In order to achieve a more robust design and to save computational resources, this method combines the advantages of the established pump design theory with modern computer-aided, computational fluid dynamics (CFD)-based design optimization (CFD-O) relying on evolutionary algorithms and computational fluid dynamics. The main purposes of this project are to: (i) integrate pump design theory within the already existing CFD-based optimization; (ii) demonstrate that the resulting procedure is suitable for optimizing an Archimedes screw blood pump in terms of efficiency. Results obtained in this study demonstrate that the developed tool is able to meet both objectives. Finally, the resulting level of hemolysis can be numerically assessed for the optimal design, as hemolysis is an issue of overwhelming importance for blood pumps. PMID:26526039

  19. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted the PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.
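
    The PSO component can be illustrated on its own with the canonical velocity and position updates; the sketch below minimizes a toy function and makes no attempt to reproduce the snake-model coupling.

        import numpy as np

        def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Canonical particle swarm: each particle is pulled toward its
            personal best and the swarm's global best."""
            rng = np.random.default_rng(seed)
            x = rng.uniform(-5, 5, (n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
            g = pbest[np.argmin(pbest_val)]
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = x + v
                vals = np.apply_along_axis(f, 1, x)
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                g = pbest[np.argmin(pbest_val)]
            return g, f(g)

        best, val = pso(lambda p: np.sum(p ** 2), dim=3)
        print(best, val)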

  20. Evolutionary Signatures amongst Disease Genes Permit Novel Methods for Gene Prioritization and Construction of Informative Gene-Based Networks

    PubMed Central

    Priedigkeit, Nolan; Wolfe, Nicholas; Clark, Nathan L.

    2015-01-01

    Genes involved in the same function tend to have similar evolutionary histories, in that their rates of evolution covary over time. This coevolutionary signature, termed Evolutionary Rate Covariation (ERC), is calculated using only gene sequences from a set of closely related species and has demonstrated potential as a computational tool for inferring functional relationships between genes. To further define applications of ERC, we first established that roughly 55% of genetic diseases possess an ERC signature between their contributing genes. At a false discovery rate of 5% we report 40 such diseases including cancers, developmental disorders, and mitochondrial diseases. Given these coevolutionary signatures between disease genes, we then assessed ERC's ability to prioritize known disease genes out of a list of unrelated candidates. We found that in the presence of an ERC signature, the true disease gene is effectively prioritized to the top 6% of candidates on average. We then apply this strategy to a melanoma-associated region on chromosome 1 and identify MCL1 as a potential causative gene. Furthermore, to gain global insight into disease mechanisms, we used ERC to predict molecular connections between 310 nominally distinct diseases. The resulting “disease map” network associates several diseases with related pathogenic mechanisms and unveils many novel relationships between clinically distinct diseases, such as between Hirschsprung's disease and melanoma. Taken together, these results demonstrate the utility of molecular evolution as a gene discovery platform and show that evolutionary signatures can be used to build informative gene-based networks. PMID:25679399
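
    At its core, ERC is a correlation of relative evolutionary rates across matched branches for two genes; a bare-bones sketch over toy branch rates follows, omitting the normalizations used in the published method.

        import numpy as np

        def erc(rates_gene1, rates_gene2):
            """Correlation of branch-specific relative rates for two genes
            over the same set of branches of the species tree."""
            r1, r2 = np.asarray(rates_gene1), np.asarray(rates_gene2)
            r1 = r1 / r1.mean()       # relative rates
            r2 = r2 / r2.mean()
            return np.corrcoef(r1, r2)[0, 1]

        # Toy branch rates for two genes across six branches.
        gene_a = [0.8, 1.1, 1.6, 0.5, 1.3, 0.9]
        gene_b = [0.7, 1.2, 1.5, 0.6, 1.2, 1.0]
        print("ERC ~", erc(gene_a, gene_b))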

  1. Computational methods for optical molecular imaging

    PubMed Central

    Chen, Duan; Wei, Guo-Wei; Cong, Wen-Xiang; Wang, Ge

    2010-01-01

    A new computational technique, the matched interface and boundary (MIB) method, is presented to model photon propagation in biological tissue for optical molecular imaging. Optical properties differ significantly among the organs of small animals, resulting in discontinuous coefficients in the diffusion equation model. The complex organ shapes of small animals induce singularities in the geometric model as well. The MIB method is designed as a dimension-splitting approach that decomposes a multidimensional interface problem into one-dimensional ones. The methodology simplifies the topological relation near an interface and is able to handle discontinuous coefficients and complex interfaces with geometric singularities. In the present MIB method, both the interface jump condition and the photon flux jump conditions are rigorously enforced at the interface location by using only the lowest-order jump conditions. The solution near the interface is smoothly extended across the interface so that central finite difference schemes can be employed without loss of accuracy. A wide range of numerical experiments are carried out to validate the proposed MIB method. Second-order convergence is maintained in all benchmark problems. Fourth-order convergence is also demonstrated for some three-dimensional problems. The robustness of the proposed method with respect to the variable strength of the linear term of the diffusion equation is also examined. The performance of the present approach is compared with that of the standard finite element method. The numerical study indicates that the proposed method is a potentially efficient and robust approach for optical molecular imaging. PMID:20485461

  2. Computational electromagnetic methods for transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Gomez, Luis J.

    Transcranial magnetic stimulation (TMS) is a noninvasive technique used both as a research tool for cognitive neuroscience and as an FDA-approved treatment for depression. During TMS, coils positioned near the scalp generate electric fields and activate targeted brain regions. In this thesis, several computational electromagnetics methods that improve the analysis, design, and uncertainty quantification of TMS systems were developed. Analysis: A new fast direct technique for solving the large and sparse linear systems of equations (LSEs) arising from the finite difference (FD) discretization of Maxwell's quasi-static equations was developed. Following a factorization step, the solver permits computation of TMS fields inside realistic brain models in seconds, allowing for patient-specific real-time usage during TMS. The solver is an alternative to iterative methods for solving FD LSEs, which often require run-times of minutes. A new integral equation (IE) method for analyzing TMS fields was developed. The human head is highly heterogeneous and characterized by high relative permittivities (~10^7). IE techniques for analyzing electromagnetic interactions with such media suffer from high-contrast and low-frequency breakdowns. A novel high-permittivity and low-frequency stable internally combined volume-surface IE method was developed. The method not only applies to the analysis of high-permittivity objects, but is also the first IE tool that is stable when analyzing highly inhomogeneous negative-permittivity plasmas. Design: TMS applications call for electric fields to be sharply focused on regions that lie deep inside the brain. Unfortunately, fields generated by present-day Figure-8 coils stimulate relatively large regions near the brain surface. An optimization method for designing single-feed TMS coil-arrays capable of producing more localized and deeper stimulation was developed. Results show that the coil-arrays stimulate 2.4 cm into the head while stimulating 3

  3. Evolutionary stability on graphs

    PubMed Central

    Ohtsuki, Hisashi; Nowak, Martin A.

    2008-01-01

    Evolutionary stability is a fundamental concept in evolutionary game theory. A strategy is called an evolutionarily stable strategy (ESS), if its monomorphic population rejects the invasion of any other mutant strategy. Recent studies have revealed that population structure can considerably affect evolutionary dynamics. Here we derive the conditions of evolutionary stability for games on graphs. We obtain analytical conditions for regular graphs of degree k > 2. Those theoretical predictions are compared with computer simulations for random regular graphs and for lattices. We study three different update rules: birth-death (BD), death-birth (DB), and imitation (IM) updating. Evolutionary stability on sparse graphs does not imply evolutionary stability in a well-mixed population, nor vice versa. We provide a geometrical interpretation of the ESS condition on graphs. PMID:18295801
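
    A small simulation in the spirit of the comparisons mentioned, death-birth updating on a random regular graph, is sketched below; the payoff matrix, selection strength, and graph size are arbitrary illustrative choices.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        k, n = 4, 100
        G = nx.random_regular_graph(k, n, seed=0)

        # 2x2 game payoff matrix: strategies 0 (resident) and 1 (mutant).
        payoff = np.array([[3.0, 1.0],
                           [2.0, 2.5]])
        strategy = np.zeros(n, dtype=int)
        strategy[rng.choice(n, 5, replace=False)] = 1    # seed a few mutants

        def fitness(i, delta=0.1):
            """Baseline fitness plus delta times average payoff vs neighbors."""
            nbrs = list(G[i])
            avg = np.mean([payoff[strategy[i], strategy[j]] for j in nbrs])
            return 1.0 + delta * avg

        for _ in range(20000):                 # death-birth (DB) updating
            dead = rng.integers(n)             # a random individual dies
            nbrs = list(G[dead])
            w = np.array([fitness(j) for j in nbrs])
            strategy[dead] = strategy[nbrs[rng.choice(len(nbrs), p=w / w.sum())]]

        print("mutant fraction:", strategy.mean())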

  4. Computational predictive methods for fracture and fatigue

    NASA Astrophysics Data System (ADS)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-09-01

    The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specification MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000-hour design service life, and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage-tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage-tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.

  5. Computational predictive methods for fracture and fatigue

    NASA Technical Reports Server (NTRS)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-01-01

    The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specification MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000-hour design service life, and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage-tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage-tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.

  6. Computational Methods for Domain Partitioning of Protein Structures

    NASA Astrophysics Data System (ADS)

    Veretnik, Stella; Shindyalov, Ilya

    Analysis of protein structures typically begins with decomposition of the structure into more basic units, called "structural domains". The underlying goal is to reduce a complex protein structure to a set of simpler yet structurally meaningful units, each of which can be analyzed independently. Structural semi-independence of domains is their hallmark: domains often have compact structure and can fold or function independently. Domains can undergo so-called "domain shuffling" when they reappear in different combinations in different proteins, thus implementing different biological functions (Doolittle, 1995). Proteins can then be conceived as being built of such basic blocks: some, especially small proteins, consist usually of just one domain, while other proteins possess a more complex architecture containing multiple domains. Therefore, the methods for partitioning a structure into domains are of critical importance: their outcome defines the set of basic units upon which structural classifications are built and evolutionary analysis is performed. This is especially true nowadays in the era of structural genomics. Today there are many methods that decompose the structure into domains: some of them are manual (i.e., based on human judgment), others are semiautomatic, and still others are completely automatic (based on algorithms implemented as software). Overall there is a high level of consistency and robustness in the process of partitioning a structure into domains (for ~80% of proteins), at least for structures where domain location is obvious. The picture is less bright when we consider proteins with more complex architectures: neither human experts nor computational methods can reach consistent partitioning in many such cases. This is a rather accurate reflection of biological phenomena in general, since domains are formed by different mechanisms; hence it is nearly impossible to come up with a set of well-defined rules that captures all of the observed cases.

  7. Computational simulation methods for composite fracture mechanics

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1988-01-01

    Structural integrity, durability, and damage tolerance of advanced composites are assessed by studying damage initiation at various scales (micro, macro, and global) and accumulation and growth leading to global failure, quantitatively and qualitatively. In addition, various fracture toughness parameters associated with a typical damage and its growth must be determined. Computational structural analysis codes to aid the composite design engineer in performing these tasks were developed. CODSTRAN (COmposite Durability STRuctural ANalysis) is used to qualitatively and quantitatively assess the progressive damage occurring in composite structures due to mechanical and environmental loads. Next, methods are covered that are currently being developed and used at Lewis to predict interlaminar fracture toughness and related parameters of fiber composites given a prescribed damage. The general purpose finite element code MSC/NASTRAN was used to simulate the interlaminar fracture and the associated individual as well as mixed-mode strain energy release rates in fiber composites.

  8. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1992-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

  9. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1991-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

  10. Modules and methods for all photonic computing

    DOEpatents

    Schultz, David R.; Ma, Chao Hung

    2001-01-01

    A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the signal through the second optical/electro-optical element with light from the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.

  11. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    Title 47 (Telecommunication), Part 80, Stations in the Maritime Services; Standards for Computing Public Coast Station VHF Coverage. § 80.771 Method of computing coverage. Compute the +17 dBu contour as follows: (a) Determine the effective...

  12. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Title 47 (Telecommunication), Part 80, Stations in the Maritime Services; Standards for Computing Public Coast Station VHF Coverage. § 80.771 Method of computing coverage. Compute the +17 dBu contour as follows: (a) Determine the effective...

  13. LS³: A Method for Improving Phylogenomic Inferences When Evolutionary Rates Are Heterogeneous among Taxa

    PubMed Central

    Rivera-Rivera, Carlos J.; Montoya-Burgos, Juan I.

    2016-01-01

    Phylogenetic inference artifacts can occur when sequence evolution deviates from assumptions made by the models used to analyze them. The combination of strong model assumption violations and highly heterogeneous lineage evolutionary rates can become problematic in phylogenetic inference, and lead to the well-described long-branch attraction (LBA) artifact. Here, we define an objective criterion for assessing lineage evolutionary rate heterogeneity among predefined lineages: the result of a likelihood ratio test between a model in which the lineages evolve at the same rate (homogeneous model) and a model in which different lineage rates are allowed (heterogeneous model). We implement this criterion in the algorithm Locus Specific Sequence Subsampling (LS³), aimed at reducing the effects of LBA in multi-gene datasets. For each gene, LS³ sequentially removes the fastest-evolving taxon of the ingroup and tests for lineage rate homogeneity until all lineages have uniform evolutionary rates. The sequences excluded from the homogeneously evolving taxon subset are flagged as potentially problematic. The software implementation provides the user with the possibility to remove the flagged sequences for generating a new concatenated alignment. We tested LS³ with simulations and two real datasets containing LBA artifacts: a nucleotide dataset regarding the position of Glires within mammals and an amino-acid dataset concerning the position of nematodes within bilaterians. The initially incorrect phylogenies were corrected in all cases upon removing data flagged by LS³. PMID:26912812
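
    The LS³ loop is compact once per-gene log-likelihoods of the homogeneous and heterogeneous rate models are available; the sketch below assumes hypothetical helper callables for model fitting and rate estimation, since the likelihood machinery lives in external phylogenetics software.

        from scipy.stats import chi2

        def lrt_pvalue(lnL_homogeneous, lnL_heterogeneous, extra_params):
            """Likelihood ratio test between equal-rate and free-rate models."""
            stat = 2.0 * (lnL_heterogeneous - lnL_homogeneous)
            return chi2.sf(stat, df=extra_params)

        def ls3(gene, ingroup, fit_models, fastest_taxon, alpha=0.05):
            """Sequentially drop the fastest-evolving ingroup taxon until the
            homogeneous-rate model is no longer rejected.

            fit_models(gene, taxa)    -> (lnL_hom, lnL_het, extra_params)
            fastest_taxon(gene, taxa) -> taxon with the highest rate
            Both helpers are hypothetical wrappers around external software.
            """
            kept = list(ingroup)
            while len(kept) > 2:
                lnl_hom, lnl_het, df = fit_models(gene, kept)
                if lrt_pvalue(lnl_hom, lnl_het, df) >= alpha:
                    break                      # rates are homogeneous enough
                kept.remove(fastest_taxon(gene, kept))
            flagged = sorted(set(ingroup) - set(kept))
            return kept, flagged               # flagged sequences may cause LBA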

  15. Computational Evaluation of the Traceback Method

    ERIC Educational Resources Information Center

    Kol, Sheli; Nir, Bracha; Wintner, Shuly

    2014-01-01

    Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…

  16. Computational Analyses of an Evolutionary Arms Race between Mammalian Immunity Mediated by Immunoglobulin A and Its Subversion by Bacterial Pathogens

    PubMed Central

    Pinheiro, Ana; Woof, Jenny M.; Abi-Rached, Laurent; Parham, Peter; Esteves, Pedro J.

    2013-01-01

    IgA is the predominant immunoglobulin isotype in mucosal tissues and external secretions, playing important roles both in defense against pathogens and in maintenance of commensal microbiota. Considering the complexity of its interactions with the surrounding environment, IgA is a likely target for diversifying or positive selection. To investigate this possibility, the action of natural selection on IgA was examined in depth with six different methods: CODEML from the PAML package and the SLAC, FEL, REL, MEME and FUBAR methods implemented in the Datamonkey webserver. In considering just primate IgA, these analyses show that diversifying selection targeted five positions of the Cα1 and Cα2 domains of IgA. Extending the analysis to include other mammals identified 18 positively selected sites: ten in Cα1, five in Cα2 and three in Cα3. All but one of these positions display variation in polarity and charge. Their structural locations suggest they indirectly influence the conformation of sites on IgA that are critical for interaction with host IgA receptors and also with proteins produced by mucosal pathogens that prevent their elimination by IgA-mediated effector mechanisms. Demonstrating the plasticity of IgA in the evolution of different groups of mammals, only two of the eighteen selected positions in all mammals are included in the five selected positions in primates. That IgA residues subject to positive selection impact sites targeted both by host receptors and subversive pathogen ligands highlights the evolutionary arms race playing out between mammals and pathogens, and further emphasizes the importance of IgA in protection against mucosal pathogens. PMID:24019941

  18. Object Orientated Methods in Computational Fluid Dynamics.

    NASA Astrophysics Data System (ADS)

    Tabor, Gavin; Weller, Henry; Jasak, Hrvoje; Fureby, Christer

    1997-11-01

    We outline the aims of the FOAM code, a Finite Volume Computational Fluid Dynamics code written in C++, and discuss the use of Object Orientated Programming (OOP) methods to achieve these aims. The intention when writing this code was to make it as easy as possible to alter the modelling: this was achieved by making the top level syntax of the code as close as possible to conventional mathematical notation for tensors and partial differential equations. Object orientation enables us to define classes for both types of objects, and the operator overloading possible in C++ allows normal symbols to be used for the basic operations. The introduction of features such as automatic dimension checking of equations helps to enforce correct coding of models. We also discuss the use of OOP techniques such as data encapsulation and code reuse. As examples of the flexibility of this approach, we discuss the implementation of turbulence modelling using RAS and LES. The code is used to simulate turbulent flow for a number of test cases, including fully developed channel flow and flow around obstacles. We also demonstrate the use of the code for solving structures calculations and magnetohydrodynamics.
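    The design idea (operator overloading so that solver code reads like tensor/PDE notation, with dimension checking built into the field type) can be illustrated in a few lines. The sketch below is a Python analogue written for this record; FOAM itself is C++, and the Field class and ddt helper here are invented stand-ins, not its API.

    ```python
    import numpy as np

    class Field:
        """Field values carrying physical dimensions, e.g. {"m": 1, "s": -1}."""
        def __init__(self, values, dims):
            self.values = np.asarray(values, dtype=float)
            self.dims = dict(dims)

        def _check(self, other):
            if self.dims != other.dims:       # automatic dimension checking
                raise ValueError(f"dimension mismatch: {self.dims} vs {other.dims}")

        def __add__(self, other):
            self._check(other)
            return Field(self.values + other.values, self.dims)

        def __sub__(self, other):
            self._check(other)
            return Field(self.values - other.values, self.dims)

    def ddt(u_new, u_old, dt):
        """Discrete time derivative; the result's dimensions gain s^-1."""
        d = u_new - u_old                     # dimension check happens here
        dims = dict(d.dims)
        dims["s"] = dims.get("s", 0) - 1
        return Field(d.values / dt, dims)

    u_old = Field([1.0, 2.0], {"m": 1, "s": -1})
    u_new = Field([1.5, 2.5], {"m": 1, "s": -1})
    print(ddt(u_new, u_old, 0.1).values)      # tendency with dimensions m s^-2
    # adding Field([1.0, 1.0], {"m": 1}) to u_old would raise a dimension error
    ```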

  19. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING

    EPA Science Inventory

    The overall goal of the EPA-ORD NERL research program on Computational Toxicology (CompTox) is to provide the Agency with the tools of modern chemistry, biology, and computing to improve quantitative risk assessments and reduce uncertainties in the source-to-adverse outcome conti...

  20. Methods for Improving the User-Computer Interface. Technical Report.

    ERIC Educational Resources Information Center

    McCann, Patrick H.

    This summary of methods for improving the user-computer interface is based on a review of the pertinent literature. Requirements of the personal computer user are identified and contrasted with computer designer perspectives towards the user. The user's psychological needs are described, so that the design of the user-computer interface may be…

  1. An integrative method for testing form-function linkages and reconstructed evolutionary pathways of masticatory specialization.

    PubMed

    Tseng, Z Jack; Flynn, John J

    2015-06-01

    Morphology serves as a ubiquitous proxy in macroevolutionary studies to identify potential adaptive processes and patterns. Inferences of functional significance of phenotypes or their evolution are overwhelmingly based on data from living taxa. Yet, correspondence between form and function has been tested in only a few model species, and those linkages are highly complex. The lack of explicit methodologies to integrate form and function analyses within a deep-time and phylogenetic context weakens inferences of adaptive morphological evolution, by invoking but not testing form-function linkages. Here, we provide a novel approach to test mechanical properties at reconstructed ancestral nodes/taxa and the strength and direction of evolutionary pathways in feeding biomechanics, in a case study of carnivorous mammals. Using biomechanical profile comparisons that provide functional signals for the separation of feeding morphologies, we demonstrate, using experimental optimization criteria on estimation of strength and direction of functional changes on a phylogeny, that convergence in mechanical properties and degree of evolutionary optimization can be decoupled. This integrative approach is broadly applicable to other clades, by using quantitative data and model-based tests to evaluate interpretations of function from morphology and functional explanations for observed macroevolutionary pathways. PMID:25994295

  2. Cancer Biomarkers from Genome-Scale DNA Methylation: Comparison of Evolutionary and Semantic Analysis Methods

    PubMed Central

    Valavanis, Ioannis; Pilalis, Eleftherios; Georgiadis, Panagiotis; Kyrtopoulos, Soterios; Chatziioannou, Aristotelis

    2015-01-01

    DNA methylation profiling exploits microarray technologies, thus yielding a wealth of high-volume data. Here, an intelligent framework is applied, encompassing epidemiological genome-scale DNA methylation data produced from the Illumina’s Infinium Human Methylation 450K Bead Chip platform, in an effort to correlate interesting methylation patterns with cancer predisposition and, in particular, breast cancer and B-cell lymphoma. Feature selection and classification are employed in order to select, from an initial set of ~480,000 methylation measurements at CpG sites, predictive cancer epigenetic biomarkers and assess their classification power for discriminating healthy versus cancer related classes. Feature selection exploits evolutionary algorithms or a graph-theoretic methodology which makes use of the semantics information included in the Gene Ontology (GO) tree. The selected features, corresponding to methylation of CpG sites, attained moderate-to-high classification accuracies when imported to a series of classifiers evaluated by resampling or blindfold validation. The semantics-driven selection revealed sets of CpG sites performing similarly with evolutionary selection in the classification tasks. However, gene enrichment and pathway analysis showed that it additionally provides more descriptive sets of GO terms and KEGG pathways regarding the cancer phenotypes studied here. Results support the expediency of this methodology regarding its application in epidemiological studies.
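    As a concrete illustration of the evolutionary branch of the feature selection, here is a minimal sketch on synthetic data; the population size, mutation rate, and classifier are arbitrary choices for illustration, not the parameters used in the study.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 200))       # 200 synthetic "CpG beta values", 80 subjects
    y = (X[:, 3] + X[:, 17] - X[:, 42] > 0).astype(int)   # 3 truly informative sites

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        clf = LogisticRegression(max_iter=1000)
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    pop = rng.random((30, X.shape[1])) < 0.05             # sparse random subsets
    for generation in range(20):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-10:]]           # truncation selection
        children = parents[rng.integers(0, 10, 30)].copy()
        children ^= rng.random(children.shape) < 0.01     # bit-flip mutation
        pop = children

    best = pop[np.argmax([fitness(m) for m in pop])]
    print("selected CpG indices:", np.flatnonzero(best))  # ideally recovers 3, 17, 42
    ```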

  4. Computational methods in sequence and structure prediction

    NASA Astrophysics Data System (ADS)

    Lang, Caiyi

    This dissertation is organized into two parts. In the first part, we will discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks, as follows: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we will investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our result shows there exists a putative cis-regulatory element "AC(C/G)TAC(C)" in the upstream regions of these enzyme genes. We propose this cis-regulatory element to be responsible for the genetic regulation of these three enzymes, and this element might also be the binding site for the MYB class transcription factor PAP1. (b) We will investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing the microarray data we obtained from AtGLR1.1 deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes we have collected in antiAtGLR1.1 lines. By (a) scanning the occurrence of several groups of known abscisic acid (ABA) related cis-regulatory elements in the upstream regions of the 876 Arabidopsis genes; and (b) exhaustively scanning all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimate of the enrichment level of each of the cis-regulatory element candidates. We finally conclude that one specific cis-regulatory element group, the "ABRE" elements, is statistically highly enriched within the 876-gene group as compared to its occurrence within the genome. (c) We will introduce a new general-purpose algorithm, called "fuzzy REDUCE1", which we have developed recently for automated cis-regulatory element identification. In the second part, we will discuss our newly devised protein design framework. With this framework we have developed
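    The motif-scanning and enrichment step in (b) above can be sketched compactly. The example below uses random sequences, a simplified regex form of the element (dropping its optional trailing C), and a uniform-background binomial model; all three are assumptions made for illustration.

    ```python
    import random
    import re
    from scipy.stats import binom

    random.seed(1)
    upstream = ["".join(random.choice("ACGT") for _ in range(500)) for _ in range(100)]
    motif = re.compile("AC[CG]TAC")              # degenerate element as a regex

    hits = sum(1 for seq in upstream if motif.search(seq))
    p_window = 2 * 0.25 ** 6                     # one degenerate position doubles the odds
    p_seq = 1.0 - (1.0 - p_window) ** (500 - 6 + 1)
    p_value = binom.sf(hits - 1, 100, p_seq)     # P(X >= hits) under the background
    print(hits, round(p_seq, 3), p_value)        # a small p-value would signal enrichment
    ```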

  5. Reliability of cephalometric analysis using manual and interactive computer methods.

    PubMed

    Davis, D N; Mackay, F

    1991-05-01

    This study compares the results of cephalometric analyses using manual and interactive computer graphics methods. Results are statistically in favour of the interactive computer system. This study provides a basis for ongoing research into alternative methods of cephalometric analyses, such as digitization and automatic landmark identification using sophisticated computer vision systems. PMID:1911687

  6. Evolutionary enhancement of the SLIM-MAUD method of estimating human error rates

    SciTech Connect

    Zamanali, J.H.; Hubbard, F.R.; Mosleh, A.; Waller, M.A.

    1992-01-01

    The methodology described in this paper assigns plant-specific dynamic human error rates (HERs) for individual plant examinations based on procedural difficulty, on configuration features, and on the time available to perform the action. This methodology is an evolutionary improvement of the success likelihood index methodology (SLIM-MAUD) for use in systemic scenarios. It is based on the assumption that the HER in a particular situation depends on the combined effects of a comprehensive set of performance-shaping factors (PSFs) that influence the operator's ability to perform the action successfully. The PSFs relate the details of the systemic scenario in which the action must be performed to the operator's psychological and cognitive condition.
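    For readers unfamiliar with the underlying calculation, the sketch below shows the usual success-likelihood-index form: a weighted sum of PSF ratings, calibrated to log-HEP anchor tasks. The PSF names, weights, ratings, and anchor values are invented for illustration and are not taken from this paper.

    ```python
    import numpy as np

    def sli(weights, ratings):
        w = np.asarray(weights, dtype=float)
        w /= w.sum()                                   # normalize the PSF weights
        return float(w @ np.asarray(ratings, dtype=float))

    # calibrate log10(HEP) = a*SLI + b from two anchor tasks with known HEPs
    sli_anchors = np.array([0.2, 0.9])
    hep_anchors = np.array([1e-1, 1e-4])
    a, b = np.polyfit(sli_anchors, np.log10(hep_anchors), 1)

    task = {"procedural difficulty": (0.5, 0.3),       # (weight, rating in [0, 1])
            "configuration features": (0.2, 0.6),
            "time available": (0.3, 0.8)}
    s = sli([w for w, _ in task.values()], [r for _, r in task.values()])
    print("HER estimate:", 10 ** (a * s + b))
    ```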

  7. Computational structural mechanics methods research using an evolving framework

    NASA Technical Reports Server (NTRS)

    Knight, N. F., Jr.; Lotts, C. G.; Gillian, R. E.

    1990-01-01

    Advanced structural analysis and computational methods that exploit high-performance computers are being developed in a computational structural mechanics research activity sponsored by the NASA Langley Research Center. These new methods are developed in an evolving framework and applied to representative complex structural analysis problems from the aerospace industry. An overview of the methods development environment is presented, and methods research areas are described. Selected application studies are also summarized.

  8. The causal pie model: an epidemiological method applied to evolutionary biology and ecology.

    PubMed

    Wensink, Maarten; Westendorp, Rudi G J; Baudisch, Annette

    2014-05-01

    A general concept for thinking about causality facilitates swift comprehension of results, and the vocabulary that belongs to the concept is instrumental in cross-disciplinary communication. The causal pie model has fulfilled this role in epidemiology and could be of similar value in evolutionary biology and ecology. In the causal pie model, outcomes result from sufficient causes. Each sufficient cause is made up of a "causal pie" of "component causes". Several different causal pies may exist for the same outcome. If and only if all component causes of a sufficient cause are present, that is, a causal pie is complete, does the outcome occur. The effect of a component cause hence depends on the presence of the other component causes that constitute some causal pie. Because all component causes are equally and fully causative for the outcome, the sum of causes for some outcome exceeds 100%. The causal pie model provides a way of thinking that maps into a number of recurrent themes in evolutionary biology and ecology: It charts when component causes have an effect and are subject to natural selection, and how component causes affect selection on other component causes; which partitions of outcomes with respect to causes are feasible and useful; and how to view the composition of a(n apparently homogeneous) population. The diversity of specific results that is directly understood from the causal pie model is a test for both the validity and the applicability of the model. The causal pie model provides a common language in which results across disciplines can be communicated and serves as a template along which future causal analyses can be made. PMID:24963386
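    The model's logic is simple enough to state executably: an outcome occurs if and only if every component cause of at least one sufficient cause is present. The pies and component causes below are invented examples.

    ```python
    # each set is one sufficient cause ("pie") made up of component causes
    sufficient_causes = [{"exposure", "susceptible_genotype"},
                         {"exposure", "old_age", "comorbidity"}]

    def outcome(present):
        # the outcome occurs iff some pie is complete, i.e. a subset of what is present
        return any(pie <= present for pie in sufficient_causes)

    print(outcome({"exposure", "susceptible_genotype"}))   # True: first pie complete
    print(outcome({"exposure", "old_age"}))                # False: no pie complete
    ```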

  9. The evolutionary relationships and age of Homo naledi: An assessment using dated Bayesian phylogenetic methods.

    PubMed

    Dembo, Mana; Radovčić, Davorka; Garvin, Heather M; Laird, Myra F; Schroeder, Lauren; Scott, Jill E; Brophy, Juliet; Ackermann, Rebecca R; Musiba, Charles M; de Ruiter, Darryl J; Mooers, Arne Ø; Collard, Mark

    2016-08-01

    Homo naledi is a recently discovered species of fossil hominin from South Africa. A considerable amount is already known about H. naledi but some important questions remain unanswered. Here we report a study that addressed two of them: "Where does H. naledi fit in the hominin evolutionary tree?" and "How old is it?" We used a large supermatrix of craniodental characters for both early and late hominin species and Bayesian phylogenetic techniques to carry out three analyses. First, we performed a dated Bayesian analysis to generate estimates of the evolutionary relationships of fossil hominins including H. naledi. Then we employed Bayes factor tests to compare the strength of support for hypotheses about the relationships of H. naledi suggested by the best-estimate trees. Lastly, we carried out a resampling analysis to assess the accuracy of the age estimate for H. naledi yielded by the dated Bayesian analysis. The analyses strongly supported the hypothesis that H. naledi forms a clade with the other Homo species and Australopithecus sediba. The analyses were more ambiguous regarding the position of H. naledi within the (Homo, Au. sediba) clade. A number of hypotheses were rejected, but several others were not. Based on the available craniodental data, Homo antecessor, Asian Homo erectus, Homo habilis, Homo floresiensis, Homo sapiens, and Au. sediba could all be the sister taxon of H. naledi. According to the dated Bayesian analysis, the most likely age for H. naledi is 912 ka. This age estimate was supported by the resampling analysis. Our findings have a number of implications. Most notably, they support the assignment of the new specimens to Homo, cast doubt on the claim that H. naledi is simply a variant of H. erectus, and suggest H. naledi is younger than has been previously proposed. PMID:27457542

  10. Method of performing computational aeroelastic analyses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A. (Inventor)

    2011-01-01

    Computational aeroelastic analyses typically use a mathematical model for the structural modes of a flexible structure and a nonlinear aerodynamic model that can generate a plurality of unsteady aerodynamic responses based on the structural modes for conditions defining an aerodynamic condition of the flexible structure. In the present invention, a linear state-space model is generated using a single execution of the nonlinear aerodynamic model for all of the structural modes where a family of orthogonal functions is used as the inputs. Then, static and dynamic aeroelastic solutions are generated using computational interaction between the mathematical model and the linear state-space model for a plurality of periodic points in time.

  11. Wing analysis using a transonic potential flow computational method

    NASA Technical Reports Server (NTRS)

    Henne, P. A.; Hicks, R. M.

    1978-01-01

    The ability of the method to compute wing transonic performance was determined by comparing computed results with both experimental data and results computed by other theoretical procedures. Both pressure distributions and aerodynamic forces were evaluated. Comparisons indicated that the method is a significant improvement in transonic wing analysis capability. In particular, the computational method generally calculated the correct development of three-dimensional pressure distributions from subcritical to transonic conditions. Complicated, multiple shocked flows observed experimentally were reproduced computationally. The ability to identify the effects of design modifications was demonstrated both in terms of pressure distributions and shock drag characteristics.

  12. A method of billing third generation computer users

    NASA Technical Reports Server (NTRS)

    Anderson, P. N.; Hyter, D. R.

    1973-01-01

    A method is presented for charging users for the processing of their applications on third-generation digital computer systems. For background purposes, problems and goals in billing on third-generation systems are discussed. Detailed formulas are derived based on expected utilization and computer component cost. These formulas are then applied to a specific computer system (UNIVAC 1108). The method, although possessing some weaknesses, is presented as a definite improvement over the use of second-generation billing methods.
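    As a toy rendition of the kind of component-based charging formula the report derives (its actual UNIVAC 1108 formulas are not reproduced here), one might price each component so that expected revenue at the expected utilization recovers its hourly cost; all rates below are invented.

    ```python
    component_cost_per_hour = {"cpu": 120.0, "io_channels": 30.0, "core_memory": 45.0}
    expected_utilization = {"cpu": 0.7, "io_channels": 0.5, "core_memory": 0.8}

    def job_charge(usage_hours):
        # each component's rate is its cost divided by its expected utilization
        return sum(component_cost_per_hour[c] / expected_utilization[c] * hours
                   for c, hours in usage_hours.items())

    print(job_charge({"cpu": 0.25, "io_channels": 0.10, "core_memory": 0.25}))
    ```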

  13. MOEPGA: A novel method to detect protein complexes in yeast protein-protein interaction networks based on MultiObjective Evolutionary Programming Genetic Algorithm.

    PubMed

    Cao, Buwen; Luo, Jiawei; Liang, Cheng; Wang, Shulin; Song, Dan

    2015-10-01

    The identification of protein complexes in protein-protein interaction (PPI) networks has greatly advanced our understanding of biological organisms. Existing computational methods to detect protein complexes are usually based on specific network topological properties of PPI networks. However, due to the inherent complexity of the network structures, the identification of protein complexes may not be fully addressed by using a single network topological property. In this study, we propose a novel MultiObjective Evolutionary Programming Genetic Algorithm (MOEPGA) which integrates multiple network topological features to detect biologically meaningful protein complexes. Our approach first systematically analyzes the multiobjective problem in terms of identifying protein complexes from PPI networks, and then constructs the objective function of the iterative algorithm based on three common topological properties of protein complexes from the benchmark dataset. Finally, we describe our algorithm, which mainly consists of three steps: population initialization, subgraph mutation, and subgraph selection. To show the utility of our method, we compared MOEPGA with several state-of-the-art algorithms on two yeast PPI datasets. The experimental results demonstrate that the proposed method can not only find more protein complexes but also achieve higher accuracy in terms of F-score. Moreover, our approach can cover a certain number of proteins in the input PPI network in terms of the normalized clustering score. Taken together, our method can serve as a powerful framework to detect protein complexes in yeast PPI networks, thereby facilitating the identification of the underlying biological functions. PMID:26298638
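    The multiobjective flavor of the approach can be sketched by scoring candidate subgraphs on several topological objectives and keeping the non-dominated set. The toy graph, the two objectives, and the random candidate generation below are stand-ins for illustration, not the MOEPGA operators.

    ```python
    import itertools
    import random

    random.seed(0)
    edges = {(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5), (1, 3)}

    def objectives(sub):
        # two toy objectives for a candidate complex: internal density and size
        n = len(sub)
        m = sum((a, b) in edges or (b, a) in edges
                for a, b in itertools.combinations(sorted(sub), 2))
        density = 2.0 * m / (n * (n - 1)) if n > 1 else 0.0
        return (density, n)

    def dominates(p, q):
        return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

    candidates = {frozenset(random.sample(range(6), random.randint(2, 4)))
                  for _ in range(40)}
    scored = [(objectives(c), c) for c in candidates]
    pareto = [set(c) for s, c in scored if not any(dominates(t, s) for t, _ in scored)]
    print(pareto)                # the non-dominated candidate complexes
    ```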

  14. Integrative tracking methods elucidate the evolutionary dynamics of a migratory divide

    PubMed Central

    Alvarado, Allison H; Fuller, Trevon L; Smith, Thomas B

    2014-01-01

    Migratory divides, the boundary between adjacent bird populations that migrate in different directions, are of considerable interest to evolutionary biologists because of their alleged role in speciation of migratory birds. However, the small size of many passerines has traditionally limited the tools available to track populations and as a result, restricted our ability to study how reproductive isolation might occur across a divide. Here, we integrate multiple approaches by using genetic, geolocator, and morphological data to investigate a migratory divide in hermit thrushes (Catharus guttatus). First, high genetic divergence between migratory groups indicates the divide is a region of secondary contact between historically isolated populations. Second, despite low sample sizes, geolocators reveal dramatic differences in overwintering locations and migratory distance of individuals from either side of the divide. Third, a diagnostic genetic marker that proved useful for tracking a key population suggests a likely intermediate nonbreeding location of birds from the hybrid zone. This finding, combined with lower return rates from this region, is consistent with comparatively lower fitness of hybrids, which is possibly due to this intermediate migration pattern. We discuss our results in the context of reproductive isolating mechanisms associated with migration patterns that have long been hypothesized to promote divergence across migratory divides. PMID:25535561

  15. Soft computing methods for geoidal height transformation

    NASA Astrophysics Data System (ADS)

    Akyilmaz, O.; Özlüdemir, M. T.; Ayan, T.; Çelik, R. N.

    2009-07-01

    Soft computing techniques, such as fuzzy logic and artificial neural network (ANN) approaches, have enabled researchers to create precise models for use in many scientific and engineering applications. Applications that can be employed in geodetic studies include the estimation of earth rotation parameters and the determination of mean sea level changes. Another important field of geodesy in which these computing techniques can be applied is geoidal height transformation. We report here our use of a conventional polynomial model, the Adaptive Network-based Fuzzy (or in some publications, Adaptive Neuro-Fuzzy) Inference System (ANFIS), an ANN and a modified ANN approach to approximate geoid heights. These approximation models have been tested on a number of test points. The results obtained through the transformation processes from ellipsoidal heights into local levelling heights have also been compared.
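    A minimal sketch of the ANN-style approximation follows, with synthetic coordinates and geoid heights standing in for real GPS/levelling data; the network size and test area are arbitrary, and the paper's ANFIS configuration is not reproduced. The fitted undulation N would then convert an ellipsoidal height h to a levelling height via H = h - N.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(42)
    lat_lon = rng.uniform([40.8, 28.9], [41.2, 29.3], size=(300, 2))  # test-area points
    geoid_n = (36.0 + 2.1 * (lat_lon[:, 0] - 41.0)                    # synthetic N values
               - 1.4 * (lat_lon[:, 1] - 29.0)
               + 0.05 * rng.normal(size=300))

    model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    model.fit(lat_lon[:250], geoid_n[:250])           # train on 250 control points
    pred = model.predict(lat_lon[250:])               # predict at 50 test points
    print("RMSE (m):", np.sqrt(np.mean((pred - geoid_n[250:]) ** 2)))
    ```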

  16. Computational Methods to Predict Protein Interaction Partners

    NASA Astrophysics Data System (ADS)

    Valencia, Alfonso; Pazos, Florencio

    In the new paradigm for studying biological phenomena represented by Systems Biology, cellular components are not considered in isolation but as forming complex networks of relationships. Protein interaction networks are among the first objects studied from this new point of view. Deciphering the interactome (the whole network of interactions for a given proteome) has been shown to be a very complex task. Computational techniques for detecting protein interactions have become standard tools for dealing with this problem, helping and complementing their experimental counterparts. Most of these techniques use genomic or sequence features intuitively related with protein interactions and are based on "first principles" in the sense that they do not involve training with examples. There are also other computational techniques that use other sources of information (i.e. structural information or even experimental data) or are based on training with examples.

  17. Soft Computing Methods in Design of Superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1996-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter Ka, generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low Ka values.

  18. Computational methods for physical mapping of chromosomes

    SciTech Connect

    Torney, D.C.; Schenk, K.R.; Whittaker, C.C. (Los Alamos National Lab., NM); White, S.W.

    1990-01-01

    A standard technique for mapping a chromosome is to randomly select pieces, to use restriction enzymes to cut these pieces into fragments, and then to use the fragments for estimating the probability of overlap of these pieces. Typically, the order of the fragments within a piece is not determined, and the observed fragment data from each pair of pieces must be permuted N1 × N2 ways to evaluate the probability of overlap, N1 and N2 being the observed number of fragments in the two selected pieces. We will describe computational approaches used to substantially reduce the computational complexity of the calculation of overlap probability from fragment data. Presently, about 10⁻⁴ CPU seconds on one processor of an IBM 3090 is required for calculation of overlap probability from the fragment data of two randomly selected pieces, with an average of ten fragments per piece. A parallel version has been written using IBM clustered FORTRAN. Parallel measurements for 1, 6, and 12 processors will be presented. This approach has proven promising in the mapping of chromosome 16 at Los Alamos National Laboratory. We will also describe other computational challenges presented by physical mapping.

  19. Computational Methods for Analyzing Health News Coverage

    ERIC Educational Resources Information Center

    McFarlane, Delano J.

    2011-01-01

    Researchers that investigate the media's coverage of health have historically relied on keyword searches to retrieve relevant health news coverage, and manual content analysis methods to categorize and score health news text. These methods are problematic. Manual content analysis methods are labor intensive, time consuming, and inherently…

  20. Integral Deferred Correction methods for scientific computing

    NASA Astrophysics Data System (ADS)

    Morton, Maureen Marilla

    Since high order numerical methods frequently can attain accurate solutions more efficiently than low order methods, we develop and analyze new high order numerical integrators for the time discretization of ordinary and partial differential equations. Our novel methods address some of the issues surrounding high order numerical time integration, such as the difficulty of many popular methods' construction and handling the effects of disparate behaviors produced by different terms in the equations to be solved. We are motivated by the simplicity of how Deferred Correction (DC) methods achieve high order accuracy [72, 27]. DC methods are numerical time integrators that, rather than calculating tedious coefficients for order conditions, instead construct high order accurate solutions by iteratively improving a low order preliminary numerical solution. With each iteration, an error equation is solved, the error decreases, and the order of accuracy increases. Later, DC methods were adjusted to include an integral formulation of the residual, which stabilizes the method. These Spectral Deferred Correction (SDC) methods [25] motivated Integral Deferred Corrections (IDC) methods. Typically, SDC methods are limited to increasing the order of accuracy by one with each iteration due to smoothness properties imposed by the grid spacing. However, under mild assumptions, explicit IDC methods allow for any explicit rth order Runge-Kutta (RK) method to be used within each iteration, and then an order of accuracy increase of r is attained after each iteration [18]. We extend these results to the construction of implicit IDC methods that use implicit RK methods, and we prove analogous results for order of convergence. One means of solving equations with disparate parts is by semi-implicit integrators, handling a "fast" part implicitly and a "slow" part explicitly. We incorporate additive RK (ARK) integrators into the iterations of IDC methods in order to construct new arbitrary order
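    The prediction-correction structure is concrete enough to sketch. Below is a minimal explicit deferred-correction step on uniform nodes, assuming a forward Euler predictor and corrector so that each sweep raises the order by one; it illustrates the shape of the scheme, not the dissertation's IDC constructions.

    ```python
    import numpy as np

    def integration_matrix(nodes):
        # S[m, j] = integral of the j-th Lagrange basis polynomial over [t_m, t_m+1]
        M = len(nodes) - 1
        S = np.zeros((M, M + 1))
        for j in range(M + 1):
            card = np.zeros(M + 1)
            card[j] = 1.0
            coeffs = np.polyfit(nodes, card, M)   # interpolates the cardinal basis
            anti = np.polyint(coeffs)
            for m in range(M):
                S[m, j] = np.polyval(anti, nodes[m + 1]) - np.polyval(anti, nodes[m])
        return S

    def idc_step(f, t0, y0, dt, M=3, K=3):
        t = t0 + dt * np.arange(M + 1) / M
        h = dt / M
        S = integration_matrix(t)
        eta = np.empty(M + 1)                     # provisional first-order solution
        eta[0] = y0
        for m in range(M):
            eta[m + 1] = eta[m] + h * f(t[m], eta[m])
        for _ in range(K):                        # each sweep raises the order by one
            F = np.array([f(tm, em) for tm, em in zip(t, eta)])
            new = np.empty_like(eta)
            new[0] = y0
            for m in range(M):
                # Euler step on the error equation plus integral of the residual
                new[m + 1] = new[m] + h * (f(t[m], new[m]) - F[m]) + S[m] @ F
            eta = new
        return eta[-1]

    f = lambda t, y: -y                           # test problem y' = -y, y(0) = 1
    y, t = 1.0, 0.0
    for _ in range(10):
        y = idc_step(f, t, y, 0.1)
        t += 0.1
    print(y, np.exp(-1.0))                        # high-order agreement
    ```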

  1. Investigating preferences for color-shape combinations with gaze driven optimization method based on evolutionary algorithms

    PubMed Central

    Holmes, Tim; Zanker, Johannes M.

    2013-01-01

    Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioral measures that directly reflect subjective choice. To determine individual preferences for simple composition rules we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been demonstrated as a tool to identify aesthetic preferences (Holmes and Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combination of color and shape which have been promoted in the Bauhaus arts school. We used the same three shapes (square, circle, triangle) used by Kandinsky (1923), with the three color palette from the original experiment (A), an extended seven color palette (B), and eight different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with eight stimuli of different shapes, colors and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in such an unbiased experimental design and with an extended set of possible combinations. We tested six participants extensively on the different conditions and found consistent preferences for color-shape combinations for individuals, but little evidence at the group level for clear color/shape preference consistent with Kandinsky's claims, apart from some weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of color and shapes, but also that these associations are robust within a single individual. These individual differences go some way toward challenging the claims of the universal preference for color/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the

  2. Analysis of a Rapid Evolutionary Radiation Using Ultraconserved Elements: Evidence for a Bias in Some Multispecies Coalescent Methods.

    PubMed

    Meiklejohn, Kelly A; Faircloth, Brant C; Glenn, Travis C; Kimball, Rebecca T; Braun, Edward L

    2016-07-01

    Rapid evolutionary radiations are expected to require large amounts of sequence data to resolve. To resolve these types of relationships many systematists believe that it will be necessary to collect data by next-generation sequencing (NGS) and use multispecies coalescent ("species tree") methods. Ultraconserved element (UCE) sequence capture is becoming a popular method to leverage the high throughput of NGS to address problems in vertebrate phylogenetics. Here we examine the performance of UCE data for gallopheasants (true pheasants and allies), a clade that underwent a rapid radiation 10-15 Ma. Relationships among gallopheasant genera have been difficult to establish. We used this rapid radiation to assess the performance of species tree methods, using ∼600 kilobases of DNA sequence data from ∼1500 UCEs. We also integrated information from traditional markers (nuclear intron data from 15 loci and three mitochondrial gene regions). Species tree methods exhibited troubling behavior. Two methods [Maximum Pseudolikelihood for Estimating Species Trees (MP-EST) and Accurate Species TRee ALgorithm (ASTRAL)] appeared to perform optimally when the set of input gene trees was limited to the most variable UCEs, though ASTRAL appeared to be more robust than MP-EST to input trees generated using less variable UCEs. In contrast, the rooted triplet consensus method implemented in Triplec performed better when the largest set of input gene trees was used. We also found that all three species tree methods exhibited a surprising degree of dependence on the program used to estimate input gene trees, suggesting that the details of likelihood calculations (e.g., numerical optimization) are important for loci with limited phylogenetic information. As an alternative to summary species tree methods we explored the performance of SuperMatrix Rooted Triple - Maximum Likelihood (SMRT-ML), a concatenation method that is consistent even when gene trees exhibit topological differences

  3. Computational Methods for Jet Noise Simulation

    NASA Technical Reports Server (NTRS)

    Goodrich, John W. (Technical Monitor); Hagstrom, Thomas

    2003-01-01

    The purpose of our project is to develop, analyze, and test novel numerical technologies central to the long term goal of direct simulations of subsonic jet noise. Our current focus is on two issues: accurate, near-field domain truncations and high-order, single-step discretizations of the governing equations. The Direct Numerical Simulation (DNS) of jet noise poses a number of extreme challenges to computational technique. In particular, the problem involves multiple temporal and spatial scales as well as flow instabilities and is posed on an unbounded spatial domain. Moreover, the basic phenomenon of interest, the radiation of acoustic waves to the far field, involves only a minuscule fraction of the total energy. The best current simulations of jet noise are at low Reynolds number. It is likely that an increase of one to two orders of magnitude will be necessary to reach a regime where the separation between the energy-containing and dissipation scales is sufficient to make the radiated noise essentially independent of the Reynolds number. Such an increase in resolution cannot be obtained in the near future solely through increases in computing power. Therefore, new numerical methodologies of maximal efficiency and accuracy are required.

  4. Computation of Transonic Flows Using Potential Methods

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    1997-01-01

    The proposed paper will describe the state of the art associated with numerical solution of the full or exact velocity potential equation for solving transonic, external-aerodynamic flows. The presentation will begin with a review of the literature emphasizing research activities of the past decade. Next, the various forms of the full or exact velocity potential equation, the equation's corresponding mathematical characteristics, and the derivation assumptions will be presented and described in detail. Impact of the derivation assumptions on simulation accuracy, especially with respect to shock wave capture, will be presented and discussed relative to the more complete Euler or Navier-Stokes formulations. The technical presentation will continue with a description of recently developed full potential numerical approach characteristics. This description will include governing equation nondimensionalization, physical-to-computational-domain mapping procedures, a limited description of grid generation requirements, the spatial discretization scheme, numerical implementation of boundary conditions, and the iteration scheme. The next portion of the presentation will present and discuss numerical results for several two- and three-dimensional aerodynamic applications. Included in the results section will be a discussion and demonstration of a typical grid refinement analysis for determining spatial convergence of the numerical solution and level of solution accuracy. Computer timings for a variety of full potential applications will be compared and contrasted with similar results for the Euler equation formulation. Finally, the presentation will end with concluding remarks and recommendations for future work.

  5. Discontinuous Galerkin Methods: Theory, Computation and Applications

    SciTech Connect

    Cockburn, B.; Karniadakis, G. E.; Shu, C-W

    2000-12-31

    This volume contains a survey article for Discontinuous Galerkin Methods (DGM) by the editors as well as 16 papers by invited speakers and 32 papers by contributed speakers of the First International Symposium on Discontinuous Galerkin Methods. It covers theory, applications, and implementation aspects of DGM.

  6. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  7. Three parallel computation methods for structural vibration analysis

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf; Bostic, Susan; Patrick, Merrell; Mahajan, Umesh; Ma, Shing

    1988-01-01

    The Lanczos (1950), multisectioning, and subspace iteration sequential methods for vibration analysis, used here as the bases for three parallel algorithms, are shown on three example problems to maintain reasonable accuracy in the computation of vibration frequencies. Significant computation time reductions are obtained as the number of processors increases. An analysis is made of the performance of each method, in order to characterize relative strengths and weaknesses as well as to identify those parameters that most strongly affect computation efficiency.

  8. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.
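    The second-kind formulation invites a generic sketch: collocating u(x) - ∫ K(x,s) u(s) ds = f(x) with a piecewise-linear spline basis and checking against a manufactured solution. The kernel below is a toy choice, not the ICT forward model.

    ```python
    import numpy as np

    n = 41
    x = np.linspace(0.0, 1.0, n)
    K = lambda xi, s: 0.5 * np.exp(-np.abs(xi - s))          # toy smooth kernel

    s_grid = np.linspace(0.0, 1.0, 801)                      # quadrature grid
    ds = s_grid[1] - s_grid[0]
    w = np.full(s_grid.size, ds)
    w[[0, -1]] = 0.5 * ds                                    # trapezoid weights
    quad = lambda vals: float(vals @ w)

    def hat(j, s):
        # piecewise-linear spline (hat) basis function centred at node j
        h = x[1] - x[0]
        return np.clip(1.0 - np.abs(s - x[j]) / h, 0.0, None)

    # collocate at the nodes: u(x_i) - int K(x_i, s) u(s) ds = f(x_i)
    A = np.eye(n)
    for i in range(n):
        for j in range(n):
            A[i, j] -= quad(K(x[i], s_grid) * hat(j, s_grid))

    u_true = np.cos(np.pi * x)                               # manufactured solution
    f = u_true - np.array([quad(K(xi, s_grid) * np.interp(s_grid, x, u_true))
                           for xi in x])
    u = np.linalg.solve(A, f)
    print("max nodal error:", np.abs(u - u_true).max())
    ```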

  9. Classical versus Computer Algebra Methods in Elementary Geometry

    ERIC Educational Resources Information Center

    Pech, Pavel

    2005-01-01

    Computer algebra methods based on results of commutative algebra, like Groebner bases of ideals and elimination of variables, make it possible to solve complex, elementary and non-elementary problems of geometry which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…
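    A compact instance of the approach: a conclusion holds when its polynomial reduces to zero modulo a Groebner basis of the hypothesis polynomials. The sketch below verifies in sympy that the midpoint of a right triangle's hypotenuse is equidistant from two of its vertices (the third case is symmetric).

    ```python
    from sympy import symbols, groebner, expand

    u, v, x1, x2 = symbols("u v x1 x2")
    # right angle at A = (0, 0), B = (u, 0), C = (0, v); M = (x1, x2) is the
    # midpoint of the hypotenuse BC, encoded by the two hypothesis polynomials
    hypotheses = [2 * x1 - u, 2 * x2 - v]
    G = groebner(hypotheses, x1, x2, u, v, order="lex")

    dist2_MA = x1**2 + x2**2                     # squared distance M to A
    dist2_MB = (x1 - u)**2 + x2**2               # squared distance M to B
    conclusion = expand(dist2_MA - dist2_MB)
    print(G.reduce(conclusion))                  # remainder 0 => theorem proved
    ```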

  10. Overview of computational structural methods for modern military aircraft

    NASA Technical Reports Server (NTRS)

    Kudva, J. N.

    1992-01-01

    Computational structural methods are essential for designing modern military aircraft. This briefing deals with computational structural methods (CSM) currently used. First a brief summary of modern day aircraft structural design procedures is presented. Following this, several ongoing CSM related projects at Northrop are discussed. Finally, shortcomings in this area, future requirements, and summary remarks are given.

  11. 12 CFR 227.25 - Unfair balance computation method.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    § 227.25 Unfair balance computation method (12 CFR, Banks and Banking; Unfair Practices Rule, 2010). (a) General rule. Except as provided in... under 12 CFR 226.12 or 12 CFR 226.13; or (2) Adjustments to finance charges as a result of the return of...

  12. Method and device for computed tomography

    SciTech Connect

    Lux, P.W.; Op De Beek, J.C.A.; Van Leiden, H.F.

    1983-09-06

    A computed tomography device in which the detectors are asymmetrically arranged with respect to the connecting line between the X-ray source, the center of rotation of the source, and the detectors is disclosed. The detector device produces an incomplete profile of measuring values, which is supplemented with "zeros" during processing in order to form the number of measuring values of a complete profile. In order to avoid artefacts produced by the abrupt transitions between measuring values and "zeros", a number of measuring values adjoining these transitions are projected around the center of rotation and multiplied by a factor, so that from the zeros a smoothly increasing series of adapted measuring values is obtained.
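    A toy rendition of the completion idea, under loose assumptions about the geometry: instead of jumping abruptly to zeros, missing samples adjoining the truncation edge are filled with values mirrored about the rotation centre and scaled by a taper. The sizes and weights below are invented.

    ```python
    import numpy as np

    full = np.exp(-np.linspace(-3.0, 3.0, 65) ** 2)   # what a full profile would be
    centre = 32                                       # index of the rotation centre
    measured = full.copy()
    measured[:20] = 0.0                               # asymmetric detector: left side missing

    completed = measured.copy()
    taper = np.linspace(1.0, 0.0, 8)                  # factor 1 at the transition, fading out
    for k, factor in enumerate(taper):
        i = 19 - k                                    # missing samples next to the edge
        completed[i] = factor * measured[2 * centre - i]   # opposite ray, mirrored
    print(completed[10:22].round(3))                  # smooth rise instead of a step
    ```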

  13. COMSAC: Computational Methods for Stability and Control. Part 1

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    Work on stability and control included the following reports: Introductory Remarks; Introduction to Computational Methods for Stability and Control (COMSAC); Stability & Control Challenges for COMSAC: A NASA Langley Perspective; Emerging CFD Capabilities and Outlook: A NASA Langley Perspective; The Role for Computational Fluid Dynamics for Stability and Control: Is it Time?; Northrop Grumman Perspective on COMSAC; Boeing Integrated Defense Systems Perspective on COMSAC; Computational Methods in Stability and Control: WPAFB Perspective; Perspective: Raytheon Aircraft Company; A Greybeard's View of the State of Aerodynamic Prediction; Computational Methods for Stability and Control: A Perspective; Boeing TacAir Stability and Control Issues for Computational Fluid Dynamics; NAVAIR S&C Issues for CFD; An S&C Perspective on CFD; Issues, Challenges & Payoffs: A Boeing User's Perspective on CFD for S&C; and Stability and Control in Computational Simulations for Conceptual and Preliminary Design: the Past, Today, and Future?

  14. Computational modeling of Repeat1 region of INI1/hSNF5: An evolutionary link with ubiquitin.

    PubMed

    Bhutoria, Savita; Kalpana, Ganjam V; Acharya, Seetharama A

    2016-09-01

    The structure of a protein can be very informative of its function. However, determining protein structures experimentally can often be very challenging. Computational methods have been used successfully in modeling structures with sufficient accuracy. Here we have used computational tools to predict the structure of an evolutionarily conserved and functionally significant domain of Integrase interactor (INI)1/hSNF5 protein. INI1 is a component of the chromatin remodeling SWI/SNF complex, a tumor suppressor and is involved in many protein-protein interactions. It belongs to SNF5 family of proteins that contain two conserved repeat (Rpt) domains. Rpt1 domain of INI1 binds to HIV-1 Integrase, and acts as a dominant negative mutant to inhibit viral replication. Rpt1 domain also interacts with oncogene c-MYC and modulates its transcriptional activity. We carried out an ab initio modeling of a segment of INI1 protein containing the Rpt1 domain. The structural model suggested the presence of a compact and well defined ββαα topology as core structure in the Rpt1 domain of INI1. This topology in Rpt1 was similar to PFU domain of Phospholipase A2 Activating Protein, PLAA. Interestingly, PFU domain shares similarity with Ubiquitin and has ubiquitin binding activity. Because of the structural similarity between Rpt1 domain of INI1 and PFU domain of PLAA, we propose that Rpt1 domain of INI1 may participate in ubiquitin recognition or binding with ubiquitin or ubiquitin related proteins. This modeling study may shed light on the mode of interactions of Rpt1 domain of INI1 and is likely to facilitate future functional studies of INI1. PMID:27261671

  15. 3D computational mechanics elucidate the evolutionary implications of orbit position and size diversity of early amphibians.

    PubMed

    Marcé-Nogué, Jordi; Fortuny, Josep; De Esteban-Trivigno, Soledad; Sánchez, Montserrat; Gil, Lluís; Galobart, Àngel

    2015-01-01

    For the first time in vertebrate palaeontology, the potential of joining Finite Element Analysis (FEA) and Parametrical Analysis (PA) is used to shed new light on two different cranial parameters from the orbits to evaluate their biomechanical role and evolutionary patterns. The early tetrapod group of Stereospondyls, one of the largest groups of Temnospondyls, is used as a case study because the position and size of its orbits vary hugely among the members of this group. An adult skull of Edingerella madagascariensis was analysed using two different cases of boundary and loading conditions in order to quantify stress and deformation response under a bilateral bite and during skull raising. Firstly, the variation of the original geometry of its orbits was introduced in the models, producing new FEA results that allow the exploration of the ecomorphology, feeding strategy and evolutionary patterns of these top predators. Secondly, the quantitative results were analysed in order to check whether the orbit size and position were correlated with different stress patterns. These results revealed that in most of the cases the stress distribution is not affected by changes in the size and position of the orbit. This finding supports the high mechanical plasticity of this group during the Triassic period. The absence of mechanical constraints regarding the orbit probably promoted the ecomorphological diversity acknowledged for this group, as well as its ecological niche differentiation in the terrestrial Triassic ecosystems in clades such as lydekkerinids, trematosaurs, capitosaurs or metoposaurs. PMID:26107295

  17. Computational method for analysis of polyethylene biodegradation

    NASA Astrophysics Data System (ADS)

    Watanabe, Masaji; Kawai, Fusako; Shibata, Masaru; Yokoyama, Shigeo; Sudate, Yasuhiro

    2003-12-01

    In a previous study concerning the biodegradation of polyethylene, we proposed a mathematical model based on two primary factors: the direct consumption or absorption of small molecules and the successive weight loss of large molecules due to β-oxidation. Our model is an initial value problem consisting of a differential equation whose independent variable is time. Its unknown variable represents the total weight of all the polyethylene molecules that belong to a molecular-weight class specified by a parameter. In this paper, we describe a numerical technique to introduce experimental results into analysis of our model. We first establish its mathematical foundation in order to guarantee its validity, by showing that the initial value problem associated with the differential equation has a unique solution. Our computational technique is based on a linear system of differential equations derived from the original problem. We introduce some numerical results to illustrate our technique as a practical application of the linear approximation. In particular, we show how to solve the inverse problem to determine the consumption rate and the β-oxidation rate numerically, and illustrate our numerical technique by analyzing the GPC patterns of polyethylene wax obtained before and after 5 weeks cultivation of a fungus, Aspergillus sp. AK-3. A numerical simulation based on these degradation rates confirms that the primary factors of the polyethylene biodegradation posed in modeling are indeed appropriate.
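    The class-structured model can be written schematically as a linear ODE system: the smallest classes are consumed directly, and β-oxidation shifts weight from each class to the one below it. The rate constants and class grid below are invented, not the fitted GPC values.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    n = 30                        # molecular-weight classes, small -> large
    alpha = 0.8                   # direct-consumption rate for absorbable classes
    beta = 0.3                    # beta-oxidation (chain-shortening) rate

    def rhs(t, w):
        dw = np.zeros_like(w)
        dw[:3] -= alpha * w[:3]   # the smallest molecules are absorbed directly
        dw -= beta * w            # every class loses weight to beta-oxidation...
        dw[:-1] += beta * w[1:]   # ...which shifts weight one class down
        return dw

    w0 = np.exp(-0.5 * ((np.arange(n) - 18) / 5.0) ** 2)   # GPC-like initial profile
    sol = solve_ivp(rhs, (0.0, 5.0), w0, t_eval=[0.0, 2.5, 5.0])
    print("total weight over time:", sol.y.sum(axis=0))
    ```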

  18. A Novel College Network Resource Management Method using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Lin, Chen

    At present, the informatization of colleges mainly involves the construction of campus networks and management information systems, and many problems arise during this process. Cloud computing is a development of distributed processing, parallel processing, and grid computing in which data are stored in the cloud and software and services are placed in the cloud, built on top of various standards and protocols, and accessible through all kinds of equipment. This article introduces cloud computing and its functions, analyzes the existing problems of college network resource management, and applies cloud computing technology and methods to the construction of a college information sharing platform.

  19. Transonic Flow Computations Using Nonlinear Potential Methods

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    This presentation describes the state of transonic flow simulation using nonlinear potential methods for external aerodynamic applications. The presentation begins with a review of the various potential equation forms (with emphasis on the full potential equation) and includes a discussion of pertinent mathematical characteristics and all derivation assumptions. Impact of the derivation assumptions on simulation accuracy, especially with respect to shock wave capture, is discussed. Key characteristics of all numerical algorithm types used for solving nonlinear potential equations, including steady, unsteady, space marching, and design methods, are described. Both spatial discretization and iteration scheme characteristics are examined. Numerical results for various aerodynamic applications are included throughout the presentation to highlight key discussion points. The presentation ends with concluding remarks and recommendations for future work. Overall, nonlinear potential solvers are efficient, highly developed and routinely used in the aerodynamic design environment for cruise conditions.

  20. Eco-Evo PVAs: Incorporating Eco-Evolutionary Processes into Population Viability Models

    EPA Science Inventory

    We synthesize how advances in computational methods and population genomics can be combined within an Ecological-Evolutionary (Eco-Evo) PVA model. Eco-Evo PVA models are powerful new tools for understanding the influence of evolutionary processes on plant and animal population pe...

  1. Method for transferring data from an unsecured computer to a secured computer

    DOEpatents

    Nilsen, Curt A.

    1997-01-01

    A method is described for transferring data from an unsecured computer to a secured computer. The method includes transmitting the data and then receiving the data. Next, the data is retransmitted and rereceived. Then, it is determined if errors were introduced when the data was transmitted by the unsecured computer or received by the secured computer. Similarly, it is determined if errors were introduced when the data was retransmitted by the unsecured computer or rereceived by the secured computer. A warning signal is emitted from a warning device coupled to the secured computer if (i) an error was introduced when the data was transmitted or received, and (ii) an error was introduced when the data was retransmitted or rereceived.
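
    A minimal sketch of the transmit/retransmit check described in the abstract, under the assumption (ours, not the patent's) that each pass carries a CRC so the secured side can detect introduced errors independently per pass:

```python
import zlib

def corrupted(payload: bytes, expected_crc: int) -> bool:
    """True if the received copy fails its integrity check."""
    return zlib.crc32(payload) != expected_crc

def warn_if_needed(first: bytes, second: bytes, crc: int) -> bool:
    """Warn only when BOTH the transmission and the retransmission
    show an introduced error, mirroring conditions (i) and (ii)."""
    if corrupted(first, crc) and corrupted(second, crc):
        print("WARNING: errors detected in both transfer passes")
        return True
    return False

data = b"telemetry record"
crc = zlib.crc32(data)
warn_if_needed(data, data, crc)                                # clean: silent
warn_if_needed(b"telemetrx record", b"telemetrz record", crc)  # warns
```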

  2. Hybrid evolutionary programming for heavily constrained problems.

    PubMed

    Myung, H; Kim, J H

    1996-01-01

    A hybrid of evolutionary programming (EP) and a deterministic optimization procedure is applied to a series of non-linear and quadratic optimization problems. The hybrid scheme is compared with other existing schemes such as EP alone, two-phase (TP) optimization, and EP with a non-stationary penalty function (NS-EP). The results indicate that the hybrid method can outperform the other methods when addressing heavily constrained optimization problems in terms of computational efficiency and solution accuracy. PMID:8833746
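
    The sketch below illustrates the flavor of such a hybrid on a toy constrained quadratic: an evolutionary-programming loop (Gaussian mutation plus (mu + lambda) selection) works on a statically penalized objective, and a deterministic Nelder-Mead step polishes the incumbent each generation. The test problem and every parameter are illustrative assumptions, not the paper's benchmark suite.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def penalized(x):
    obj = (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2   # quadratic objective
    violation = max(0.0, 1.0 - (x[0] + x[1]))     # constraint: x + y >= 1
    return obj + 100.0 * violation ** 2           # static penalty term

pop = rng.normal(size=(20, 2))                    # initial EP population
for gen in range(50):
    children = pop + rng.normal(scale=0.3, size=pop.shape)  # Gaussian mutation
    merged = np.vstack([pop, children])
    merged = merged[np.argsort([penalized(x) for x in merged])]
    pop = merged[:20]                             # (mu + lambda) selection
    refined = minimize(penalized, pop[0], method="Nelder-Mead")
    pop[0] = refined.x                            # deterministic polish

print("best point:", pop[0], "value:", penalized(pop[0]))
```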

  3. Statistical and Computational Methods for Genetic Diseases: An Overview

    PubMed Central

    Di Taranto, Maria Donata

    2015-01-01

    The identification of the causes of genetic diseases has been carried out by several approaches of increasing complexity. Innovation in genetic methodologies leads to the production of large amounts of data that need the support of statistical and computational methods to be correctly processed. The aim of this paper is to provide an overview of statistical and computational methods, paying particular attention to methods for sequence analysis and complex diseases. PMID:26106440

  4. Coarse-graining methods for computational biology.

    PubMed

    Saunders, Marissa G; Voth, Gregory A

    2013-01-01

    Connecting the molecular world to biology requires understanding how molecular-scale dynamics propagate upward in scale to define the function of biological structures. To address this challenge, multiscale approaches, including coarse-graining methods, become necessary. We discuss here the theoretical underpinnings and history of coarse-graining and summarize the state of the field, organizing key methodologies based on an emerging paradigm for multiscale theory and modeling of biomolecular systems. This framework involves an integrated, iterative approach to couple information from different scales. The primary steps, which coincide with key areas of method development, include developing first-pass coarse-grained models guided by experimental results, performing numerous large-scale coarse-grained simulations, identifying important interactions that drive emergent behaviors, and finally reconnecting to the molecular scale by performing all-atom molecular dynamics simulations guided by the coarse-grained results. The coarse-grained modeling can then be extended and refined, with the entire loop repeated iteratively if necessary. PMID:23451897

  5. Multiscale methods for computational RNA enzymology

    PubMed Central

    Panteva, Maria T.; Dissanayake, Thakshila; Chen, Haoyuan; Radak, Brian K.; Kuechler, Erich R.; Giambaşu, George M.; Lee, Tai-Sung; York, Darrin M.

    2016-01-01

    RNA catalysis is of fundamental importance to biology and yet remains ill-understood due to its complex nature. The multi-dimensional “problem space” of RNA catalysis includes both local and global conformational rearrangements, changes in the ion atmosphere around nucleic acids and metal ion binding, dependence on potentially correlated protonation states of key residues and bond breaking/forming in the chemical steps of the reaction. The goal of this article is to summarize and apply multiscale modeling methods in an effort to target the different parts of the RNA catalysis problem space while also addressing the limitations and pitfalls of these methods. Classical molecular dynamics (MD) simulations, reference interaction site model (RISM) calculations, constant pH molecular dynamics (CpHMD) simulations, Hamiltonian replica exchange molecular dynamics (HREMD) and quantum mechanical/molecular mechanical (QM/MM) simulations will be discussed in the context of the study of RNA backbone cleavage transesterification. This reaction is catalyzed by both RNA and protein enzymes, and here we examine the different mechanistic strategies taken by the hepatitis delta virus ribozyme (HDVr) and RNase A. PMID:25726472

  6. A Method for Fast Computation of FTLE Fields

    NASA Astrophysics Data System (ADS)

    Brunton, Steven; Rowley, Clarence

    2008-11-01

    An efficient method for computing finite time Lyapunov exponent (FTLE) fields is investigated. FTLE fields, which measure the stretching between nearby particles, are important in determining transport mechanisms in unsteady flows. Ridges of the FTLE field are Lagrangian Coherent Structures (LCS) and provide an unsteady analogue of invariant manifolds from dynamical systems theory. FTLE field computations are expensive because of the large number of particle trajectories which must be integrated. However, when computing a time series of fields, it is possible to use the integrated trajectories at a previous time to compute an approximation of the integrated trajectories initialized at a later time, resulting in significant computational savings. This work provides analytic estimates for accumulated error and computation time as well as simulations comparing exact results with the approximate method for a number of interesting flows.
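
    As a concrete (if unaccelerated) baseline for the quantities involved, the sketch below computes an FTLE field for the time-periodic double gyre, a standard test flow. The flow choice, grid, and integration time are illustrative assumptions, and the paper's trajectory-reuse acceleration is only indicated in a comment.

```python
import numpy as np
from scipy.integrate import solve_ivp

A_, eps, om = 0.1, 0.25, 2 * np.pi / 10.0  # double-gyre parameters

def vel(t, xy):
    x, y = xy
    f = eps * np.sin(om * t) * x ** 2 + (1 - 2 * eps * np.sin(om * t)) * x
    dfdx = 2 * eps * np.sin(om * t) * x + (1 - 2 * eps * np.sin(om * t))
    return [-np.pi * A_ * np.sin(np.pi * f) * np.cos(np.pi * y),
            np.pi * A_ * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx]

nx, ny, T = 60, 30, 10.0
xs, ys = np.linspace(0, 2, nx), np.linspace(0, 1, ny)
fmap = np.zeros((ny, nx, 2))
for j, y0 in enumerate(ys):
    for i, x0 in enumerate(xs):
        # One trajectory per grid point: reusing these trajectories across
        # successive release times is the source of the savings discussed above.
        fmap[j, i] = solve_ivp(vel, (0.0, T), [x0, y0], rtol=1e-6).y[:, -1]

dy, dx = np.gradient(fmap, ys, xs, axis=(0, 1))  # flow-map gradients
ftle = np.zeros((ny, nx))
for j in range(ny):
    for i in range(nx):
        F = np.array([[dx[j, i, 0], dy[j, i, 0]],
                      [dx[j, i, 1], dy[j, i, 1]]])
        C = F.T @ F                      # Cauchy-Green deformation tensor
        ftle[j, i] = np.log(np.linalg.eigvalsh(C).max()) / (2 * T)
print("max FTLE:", ftle.max())
```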

  7. Testing and Validation of Computational Methods for Mass Spectrometry.

    PubMed

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-01

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets (http://compms.org/RefData) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods. PMID:26549429

  8. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  9. Computer systems and methods for visualizing data

    DOEpatents

    Stolte, Chris; Hanrahan, Patrick

    2010-07-13

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
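
    A minimal sketch of the idea in the abstract, assuming a toy dataset and a tiny "specification" of our own invention: the first level of the dimension hierarchy maps to plot columns, the second to plot rows, and the data are queried (here, grouped) to populate the plot.

```python
import pandas as pd

data = pd.DataFrame({
    "year":    [2008, 2008, 2009, 2009],   # first hierarchy level
    "quarter": ["Q1", "Q2", "Q1", "Q2"],   # second hierarchy level
    "sales":   [120, 135, 150, 160],       # the measure
})
spec = {"columns": "year", "rows": "quarter", "measure": "sales"}

# Query the dataset per the specification and lay out the retrieved data:
# one hierarchy level per visual component.
plot = data.pivot_table(index=spec["rows"], columns=spec["columns"],
                        values=spec["measure"], aggfunc="sum")
print(plot)
```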

  10. Low-Rank Incremental Methods for Computing Dominant Singular Subspaces

    SciTech Connect

    Baker, Christopher G; Gallivan, Dr. Kyle A; Van Dooren, Dr. Paul

    2012-01-01

    Computing the singular values and vectors of a matrix is a crucial kernel in numerous scientific and industrial applications. As such, numerous methods have been proposed to handle this problem in a computationally efficient way. This paper considers a family of methods for incrementally computing the dominant SVD of a large matrix A. Specifically, we describe a unification of a number of previously disparate methods for approximating the dominant SVD via a single pass through A. We tie the behavior of these methods to that of a class of optimization-based iterative eigensolvers on A'*A. An iterative procedure is proposed which allows the computation of an accurate dominant SVD via multiple passes through A. We present an analysis of the convergence of this iteration, and provide empirical demonstration of the proposed method on both synthetic and benchmark data.
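
    The following sketch implements one representative member of this family in simplified form: a rank-k dominant SVD maintained over a single pass through blocks of columns, with an expand/rotate/truncate update. Block size, rank, and the synthetic low-rank-plus-noise test matrix are illustrative assumptions.

```python
import numpy as np

def incremental_svd(A, k, block=20):
    """Single pass over the columns of A, maintaining a rank-k dominant SVD."""
    U, s, _ = np.linalg.svd(A[:, :block], full_matrices=False)
    U, s = U[:, :k], s[:k]
    for start in range(block, A.shape[1], block):
        B = A[:, start:start + block]
        proj = U.T @ B                     # component of B in current basis
        Q, R = np.linalg.qr(B - U @ proj)  # new directions from the residual
        K = np.block([[np.diag(s), proj],
                      [np.zeros((Q.shape[1], k)), R]])
        Uk, sk, _ = np.linalg.svd(K, full_matrices=False)
        U = np.hstack([U, Q]) @ Uk[:, :k]  # rotate basis, truncate to rank k
        s = sk[:k]
    return U, s

rng = np.random.default_rng(1)
A = (rng.normal(size=(200, 10)) * np.linspace(10, 1, 10)) @ rng.normal(size=(10, 500))
A += 0.01 * rng.normal(size=(200, 500))
U, s = incremental_svd(A, k=10)
print("incremental:", np.round(s[:3], 2))
print("batch SVD:  ", np.round(np.linalg.svd(A, compute_uv=False)[:3], 2))
```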

  11. Developing a multimodal biometric authentication system using soft computing methods.

    PubMed

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision. PMID:25502384
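
    As a toy illustration of the soft-computing fusion stage (not the chapter's actual FLE), the sketch below combines two matcher scores with a trapezoidal fuzzy membership and a min-based fuzzy AND; the breakpoints and threshold are invented for illustration.

```python
def high(score, lo=0.4, hi=0.8):
    """Fuzzy membership of 'score is high': 0 below lo, 1 above hi."""
    return min(1.0, max(0.0, (score - lo) / (hi - lo)))

def accept(voice_score, finger_score, threshold=0.6):
    # Fuzzy AND (min): accept only if BOTH modalities match strongly enough.
    return min(high(voice_score), high(finger_score)) >= threshold

print(accept(0.85, 0.75))  # True: both modalities agree strongly
print(accept(0.85, 0.45))  # False: the fingerprint match is too weak
```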

  12. Review - Computational methods for internal flows with emphasis on turbomachinery

    NASA Technical Reports Server (NTRS)

    Mcnally, W. D.; Sockol, P. M.

    1985-01-01

    Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes. The inviscid methods deal specifically with turbomachinery applications. Viscous methods deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.

  13. Computational methods for internal flows with emphasis on turbomachinery

    NASA Technical Reports Server (NTRS)

    Mcnally, W. D.; Sockol, P. M.

    1981-01-01

    Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes. The inviscid methods deal specifically with turbomachinery applications. Viscous methods deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.

  14. Evolutionary thinking

    PubMed Central

    Hunt, Tam

    2014-01-01

    Evolution as an idea has a lengthy history, even though the idea of evolution is generally associated with Darwin today. Rebecca Stott provides an engaging and thoughtful overview of this history of evolutionary thinking in her 2013 book, Darwin's Ghosts: The Secret History of Evolution. Since Darwin, the debate over evolution—both how it takes place and, in a long war of words with religiously-oriented thinkers, whether it takes place—has been sustained and heated. A growing share of this debate is now devoted to examining how evolutionary thinking affects areas outside of biology. How do our lives change when we recognize that all is in flux? What can we learn about life more generally if we study change instead of stasis? Carter Phipps’ book, Evolutionaries: Unlocking the Spiritual and Cultural Potential of Science's Greatest Idea, delves deep into this relatively new development. Phipps generally takes as a given the validity of the Modern Synthesis of evolutionary biology. His story takes us into, as the subtitle suggests, the spiritual and cultural implications of evolutionary thinking. Can religion and evolution be reconciled? Can evolutionary thinking lead to a new type of spirituality? Is our culture already being changed in ways that we don't realize by evolutionary thinking? These are all important questions, and Phipps’ book is a great introduction to this discussion. Phipps is an author, journalist, and contributor to the emerging “integral” or “evolutionary” cultural movement that combines the insights of Integral Philosophy, evolutionary science, developmental psychology, and the social sciences. He has served as the Executive Editor of EnlightenNext magazine (no longer published) and more recently is the co-founder of the Institute for Cultural Evolution, a public policy think tank addressing the cultural roots of America's political challenges. What follows is an email interview with Phipps. PMID:26478766

  15. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    47 CFR 80.771 (2012 edition): Method of computing coverage. Title 47, Telecommunication; Federal Communications Commission; Safety and Special Radio Services; Stations in the Maritime Services; Standards for Computing Public Coast Station VHF Coverage.

  16. GAP Noise Computation By The CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Chang, Sin-Chung; Wang, Xiao Y.; Jorgenson, Philip C. E.

    2001-01-01

    A typical gap noise problem is considered in this paper using the new space-time conservation element and solution element (CE/SE) method. Implementation of the computation is straightforward. No turbulence model, LES (large eddy simulation) or a preset boundary layer profile is used, yet the computed frequency agrees well with the experimental one.

  17. Computational Performance and Statistical Accuracy of *BEAST and Comparisons with Other Methods.

    PubMed

    Ogilvie, Huw A; Heled, Joseph; Xie, Dong; Drummond, Alexei J

    2016-05-01

    Under the multispecies coalescent model of molecular evolution, gene trees have independent evolutionary histories within a shared species tree. In comparison, supermatrix concatenation methods assume that gene trees share a single common genealogical history, thereby equating gene coalescence with species divergence. The multispecies coalescent is supported by previous studies which found that its predicted distributions fit empirical data, and that concatenation is not a consistent estimator of the species tree. *BEAST, a fully Bayesian implementation of the multispecies coalescent, is popular but computationally intensive, so the increasing size of phylogenetic data sets is both a computational challenge and an opportunity for better systematics. Using simulation studies, we characterize the scaling behavior of *BEAST, and enable quantitative prediction of the impact increasing the number of loci has on both computational performance and statistical accuracy. Follow-up simulations over a wide range of parameters show that the statistical performance of *BEAST relative to concatenation improves both as branch length is reduced and as the number of loci is increased. Finally, using simulations based on estimated parameters from two phylogenomic data sets, we compare the performance of a range of species tree and concatenation methods to show that using *BEAST with tens of loci can be preferable to using concatenation with thousands of loci. Our results provide insight into the practicalities of Bayesian species tree estimation, the number of loci required to obtain a given level of accuracy and the situations in which supermatrix or summary methods will be outperformed by the fully Bayesian multispecies coalescent. PMID:26821913

  19. Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods.

    PubMed

    Benevides, Leandro de Jesus; Carvalho, Daniel Santana de; Andrade, Roberto Fernandes Silva; Bomfim, Gilberto Cafezeiro; Fernandes, Flora Maria de Campos

    2016-07-14

    Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how the various apo E proteins are related across groups of organisms and whether they evolved from a common ancestor. Here, we aimed to perform a phylogenetic study of apo E-carrying organisms. We employed a classical and robust method, Maximum Likelihood (ML), and compared the results with a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from the ML method, as well as from the constructed networks, showed two different groups: one with mammals only (C1) and another with fish (C2), and a single node with the single sequence available for an amphibian. The agreement between the results of the different methods shows that the complex networks approach is effective in phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups. PMID:27419397
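
    A minimal sketch of the complex-network side of such an analysis, assuming toy sequences and an arbitrary identity threshold (the study itself used 32 real apo E sequences from NCBI): sequences above the threshold are linked, and groups emerge as connected components.

```python
import itertools
import networkx as nx

# Toy stand-ins for aligned apo E fragments (illustrative only).
seqs = {
    "mammal_1": "MKVLWAALLV",
    "mammal_2": "MKVLWAALLI",
    "fish_1":   "MRSLVVALAV",
    "fish_2":   "MRSLVVALGV",
}

def identity(a, b):
    """Fraction of matching positions between two aligned sequences."""
    return sum(x == y for x, y in zip(a, b)) / min(len(a), len(b))

G = nx.Graph()
G.add_nodes_from(seqs)
for (na, sa), (nb, sb) in itertools.combinations(seqs.items(), 2):
    if identity(sa, sb) >= 0.8:          # link sufficiently similar sequences
        G.add_edge(na, nb)

for i, comp in enumerate(nx.connected_components(G)):
    print(f"cluster {i}:", sorted(comp))
```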

  20. Job-shop scheduling with a combination of evolutionary and heuristic methods

    NASA Astrophysics Data System (ADS)

    Patkai, Bela; Torvinen, Seppo

    1999-08-01

    Since almost all scheduling problems are NP-hard (they cannot be solved in polynomial time), companies that need a realistic scheduling system face serious limitations of the available methods for finding an optimal schedule, especially if the given environment requires adaptation to dynamic variations. Exact methods do find an optimal schedule, but the size of the problem they can solve is very limited, which rules out the required scalability. The solution presented in this paper is a simple, multi-pass heuristic method that aims to avoid the limitations of other well-known formulations. Even though dispatching rules are fast and provide near-optimal solutions in most cases, they are severely limited in efficiency, especially when the schedule builder must satisfy a significant number of constraints. That is the main motivation for adding a simplified genetic algorithm to the dispatching rules, which, owing to its stochastic nature, is a heuristic as well. The scheduling problem comes from a middle-sized Finnish factory, whose up-to-date manufacturing data were used throughout the investigation for the sake of realistic calculations.
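
    The sketch below captures the spirit of combining a schedule-building heuristic with a simplified genetic algorithm on a toy 3-job, 3-machine instance: each gene selects the dispatching rule (SPT or LPT) used at one scheduling decision, and the GA evolves the rule sequence. The instance, rules, and GA settings are all illustrative assumptions.

```python
import random

# jobs[j] = ordered list of (machine, processing_time) operations for job j
jobs = [[(0, 3), (1, 2), (2, 2)],
        [(0, 2), (2, 1), (1, 4)],
        [(1, 4), (2, 3), (0, 1)]]
n_ops = sum(len(j) for j in jobs)

def makespan(rules):
    """Greedy schedule builder: at each decision, the gene picks the rule."""
    nxt = [0] * len(jobs)         # next unscheduled operation per job
    job_free = [0] * len(jobs)    # earliest start time per job
    mach_free = [0] * 3           # earliest start time per machine
    for step in range(n_ops):
        ready = [j for j in range(len(jobs)) if nxt[j] < len(jobs[j])]
        key = lambda j: jobs[j][nxt[j]][1]
        j = min(ready, key=key) if rules[step] == 0 else max(ready, key=key)
        m, p = jobs[j][nxt[j]]
        start = max(job_free[j], mach_free[m])
        job_free[j] = mach_free[m] = start + p
        nxt[j] += 1
    return max(job_free)

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(n_ops)] for _ in range(30)]
for gen in range(40):
    pop.sort(key=makespan)
    elite, children = pop[:10], []
    for _ in range(20):
        a, b = random.sample(elite, 2)
        cut = random.randrange(n_ops)
        child = a[:cut] + b[cut:]        # one-point crossover
        i = random.randrange(n_ops)
        child[i] = 1 - child[i]          # bit-flip mutation
        children.append(child)
    pop = elite + children
print("best makespan:", makespan(min(pop, key=makespan)))
```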

  1. Using THz Spectroscopy, Evolutionary Network Analysis Methods, and MD Simulation to Map the Evolution of Allosteric Communication Pathways in c-Type Lysozymes

    PubMed Central

    Woods, Kristina N.; Pfeffer, Juergen

    2016-01-01

    It is now widely accepted that protein function is intimately tied with the navigation of energy landscapes. In this framework, a protein sequence is not described by a distinct structure but rather by an ensemble of conformations. And it is through this ensemble that evolution is able to modify a protein’s function by altering its landscape. Hence, the evolution of protein functions involves selective pressures that adjust the sampling of the conformational states. In this work, we focus on elucidating the evolutionary pathway that shaped the function of individual proteins that make up the mammalian c-type lysozyme subfamily. Using both experimental and computational methods, we map out specific intermolecular interactions that direct the sampling of conformational states and, accordingly, also underlie shifts in the landscape that are directly connected with the formation of novel protein functions. By contrasting three representative proteins in the family we identify molecular mechanisms that are associated with the selectivity of enhanced antimicrobial properties and, consequently, divergent protein function. Namely, we link the extent of localized fluctuations involving the loop separating helices A and B with shifts in the equilibrium of the ensemble of conformational states that mediate interdomain coupling and concurrently moderate substrate binding affinity. This work reveals unique insights into the molecular level mechanisms that promote the progression of interactions that connect the immune response to infection with the nutritional properties of lactation, while also providing a deeper understanding of how evolving energy landscapes may define present-day protein function. PMID:26337549

  2. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1972-01-01

    An iterative computer-aided procedure was developed which provides for the identification of boiler transfer functions using frequency response data. The method uses frequency response data to obtain satisfactory transfer functions for both high and low vapor exit quality data.
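
    A minimal sketch of transfer-function identification from frequency-response data, assuming a first-order model G(s) = K/(τs + 1) and synthetic measurements (the report's boiler model and data are not reproduced here):

```python
import numpy as np
from scipy.optimize import least_squares

w = np.logspace(-2, 2, 40)                 # measurement frequencies, rad/s
true_K, true_tau = 2.0, 0.5                # "unknown" plant (hypothetical)
G_meas = true_K / (1j * w * true_tau + 1)  # synthetic frequency response
G_meas = G_meas + 0.01 * np.random.default_rng(0).normal(size=w.size)

def residual(p):
    K, tau = p
    G = K / (1j * w * tau + 1)
    err = G - G_meas
    return np.concatenate([err.real, err.imag])  # stack real and imag parts

fit = least_squares(residual, x0=[1.0, 1.0])
print("estimated K, tau:", fit.x)          # close to (2.0, 0.5)
```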

  3. Platform-independent method for computer aided schematic drawings

    DOEpatents

    Vell, Jeffrey L.; Siganporia, Darius M.; Levy, Arthur J.

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  4. Computer Simulation Methods for Defect Configurations and Nanoscale Structures

    SciTech Connect

    Gao, Fei

    2010-01-01

    This chapter will describe general computer simulation methods, including ab initio calculations, molecular dynamics, and the kinetic Monte Carlo method, and their applications to the calculations of defect configurations in various materials (metals, ceramics and oxides) and the simulations of nanoscale structures due to ion-solid interactions. The multiscale theory, modeling, and simulation techniques (both time scale and space scale) will be emphasized, and comparisons between computer simulation results and experimental observations will be made.

  5. Panel-Method Computer Code For Potential Flow

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.; Dudley, Michael R.; Iguchi, Steven K.

    1992-01-01

    Low-order panel method used to reduce computation time. Panel code PMARC (Panel Method Ames Research Center) numerically simulates the flow field around or through complex three-dimensional bodies such as complete aircraft models or wind tunnels. Based on potential-flow theory. Facilitates addition of new features to code and tailoring of code to specific problems and computer-hardware constraints. Written in standard FORTRAN 77.

  6. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
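
    The computation itself reduces to a per-period sum, as this hedged sketch with invented figures shows:

```python
def future_facility_conditions(maintenance, modernization, backlog):
    """Per the abstract: the sum of the three time-period-specific terms."""
    return maintenance + modernization + backlog

periods = [(120.0, 35.0, 18.0), (130.0, 32.0, 24.0)]  # invented inputs
for t, (m, mod, b) in enumerate(periods):
    print(f"period {t}: projected condition",
          future_facility_conditions(m, mod, b))
```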

  7. Automated Antenna Design with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Linden, Derek; Hornby, Greg; Lohn, Jason; Globus, Al; Krishnakumar, K.

    2006-01-01

    Current methods of designing and optimizing antennas by hand are time and labor intensive, and limit complexity. Evolutionary design techniques can overcome these limitations by searching the design space and automatically finding effective solutions. In recent years, evolutionary algorithms have shown great promise in finding practical solutions in large, poorly understood design spaces. In particular, spacecraft antenna design has proven tractable to evolutionary design techniques. Researchers have been investigating evolutionary antenna design and optimization since the early 1990s, and the field has grown in recent years as computer speed has increased and electromagnetic simulators have improved. Two requirements-compliant antennas, one for ST5 and another for TDRS-C, have been automatically designed by evolutionary algorithms. The ST5 antenna is slated to fly this year, and a TDRS-C phased array element has been fabricated and tested. Such automated evolutionary design is enabled by medium-to-high quality simulators and fast modern computers to evaluate computer-generated designs. Evolutionary algorithms automate cut-and-try engineering, substituting automated search though millions of potential designs for intelligent search by engineers through a much smaller number of designs. For evolutionary design, the engineer chooses the evolutionary technique, parameters and the basic form of the antenna, e.g., single wire for ST5 and crossed-element Yagi for TDRS-C. Evolutionary algorithms then search for optimal configurations in the space defined by the engineer. NASA's Space Technology 5 (ST5) mission will launch three small spacecraft to test innovative concepts and technologies. Advanced evolutionary algorithms were used to automatically design antennas for ST5. The combination of wide beamwidth for a circularly-polarized wave and wide impedance bandwidth made for a challenging antenna design problem. From past experience in designing wire antennas, we chose to

  8. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system. 15 figs.

  9. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1999-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  10. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
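
    To make the multigrid concept concrete, the sketch below runs a V-cycle with weighted-Jacobi smoothing on the 1D Poisson problem -u'' = f, a deliberately simple stand-in for the Euler/Navier-Stokes setting of the Proteus code; grid size and smoothing parameters are illustrative.

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
    """Weighted-Jacobi sweeps for -u'' = f on a uniform grid."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    u = smooth(u, f, h)                  # pre-smoothing
    r = residual(u, f, h)
    rc = r[::2].copy()                   # restrict residual (injection)
    ec = np.zeros_like(rc)
    if rc.size > 3:
        ec = v_cycle(ec, rc, 2 * h)      # recursive coarse-grid correction
    e = np.zeros_like(u)
    e[::2] = ec                          # prolong (linear interpolation)
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)           # post-smoothing

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
f = np.pi ** 2 * np.sin(np.pi * x)       # exact solution: sin(pi x)
u = np.zeros(n)
for cycle in range(10):
    u = v_cycle(u, f, h)
    print(f"cycle {cycle}: residual norm "
          f"{np.linalg.norm(residual(u, f, h)):.2e}")
```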