Science.gov

Sample records for evolutionary computing methods

  1. Evolutionary Computing Methods for Spectral Retrieval

    NASA Technical Reports Server (NTRS)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seungwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Giovanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
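
    The retrieval loop described above reduces to a generic pattern: evolve a population of model-parameter sets until the fitness (the mismatch between observed and synthetic spectra) falls below a user-specified threshold. The sketch below illustrates that pattern in Python with a hypothetical forward model synthetic_spectrum() and invented GA settings; it is not the NASA software, and a real run would call a radiative-transfer model and typically add the simulated-annealing stage mentioned above.

      import random

      # Hypothetical forward model: maps model parameters (e.g., trace-gas
      # concentrations) to a synthetic spectrum. Stand-in for illustration only.
      def synthetic_spectrum(params):
          return [params[0] * w + params[1] for w in range(100)]

      observed = synthetic_spectrum([0.5, 3.0])  # pretend observation

      def fitness(params):
          # Degree of dissimilarity between observed and synthetic spectra
          # (sum of squared residuals); lower is better.
          synth = synthetic_spectrum(params)
          return sum((o - s) ** 2 for o, s in zip(observed, synth))

      def evolve(pop_size=50, generations=200, tol=1e-6):
          pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness)
              if fitness(pop[0]) < tol:          # user-specified accuracy reached
                  break
              parents = pop[: pop_size // 2]
              children = []
              for _ in range(pop_size - len(parents)):
                  a, b = random.sample(parents, 2)
                  children.append([(x + y) / 2 + random.gauss(0, 0.1)
                                   for x, y in zip(a, b)])
              pop = parents + children
          return pop  # a population of candidate solutions, not a single answer

      solutions = evolve()
      print(fitness(solutions[0]), solutions[0])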

  2. Evolutionary Computing

    SciTech Connect

    Patton, Robert M; Cui, Xiaohui; Jiao, Yu; Potok, Thomas E

    2008-01-01

    The rate at which information overwhelms humans is significantly more than the rate at which humans have learned to process, analyze, and leverage this information. To overcome this challenge, new methods of computing must be formulated, and scientists and engineers have looked to nature for inspiration in developing these new methods. Consequently, evolutionary computing has emerged as a new paradigm for computing, and has rapidly demonstrated its ability to solve real-world problems where traditional techniques have failed. This field of work has now become quite broad and encompasses areas ranging from artificial life to neural networks. This chapter focuses specifically on two sub-areas of nature-inspired computing: Evolutionary Algorithms and Swarm Intelligence.

  3. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.

  4. [The history of development of evolutionary methods in St. Petersburg school of computer simulation in biology].

    PubMed

    Menshutkin, V V; Kazanskiĭ, A B; Levchenko, V F

    2010-01-01

    The history of the rise and development of evolutionary methods in the Saint Petersburg school of biological modelling is traced and analyzed. Some pioneering works in the simulation of ecological and evolutionary processes performed in the St. Petersburg school became exemplars for many followers in Russia and abroad. The individual-based approach became the crucial point in the history of the school, as an adequate instrument for constructing models of biological evolution. This approach is natural for simulating the evolution of life-history parameters and adaptive processes in populations and communities. In some cases the simulated evolutionary process was used to solve an inverse problem, i.e., to estimate uncertain life-history parameters of a population. Evolutionary computation is one more application of this approach, used in a great many fields. The problems and prospects of ecological and evolutionary modelling in general are discussed.

  5. Practical advantages of evolutionary computation

    NASA Astrophysics Data System (ADS)

    Fogel, David B.

    1997-10-01

    Evolutionary computation is becoming a common technique for solving difficult, real-world problems in industry, medicine, and defense. This paper reviews some of the practical advantages to using evolutionary algorithms as compared with classic methods of optimization or artificial intelligence. Specific advantages include the flexibility of the procedures, as well as their ability to self-adapt the search for optimum solutions on the fly. As desktop computers increase in speed, the application of evolutionary algorithms will become routine.

  6. Optimizing neural networks for river flow forecasting - Evolutionary Computation methods versus the Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jarosław J.

    2011-09-01

    Although neural networks have been widely applied to various hydrological problems, including river flow forecasting, for at least 15 years, they have usually been trained by means of gradient-based algorithms. Recently, nature-inspired Evolutionary Computation algorithms have rapidly developed as optimization methods able to cope not only with non-differentiable functions but also with a great number of local minima. Some of the proposed Evolutionary Computation algorithms have been tested for neural network training, but publications which compare their performance with gradient-based training methods are rare and present contradictory conclusions. The main goal of the present study is to verify the applicability of a number of recently developed Evolutionary Computation optimization methods, mostly from the Differential Evolution family, to multi-layer perceptron neural network training for daily rainfall-runoff forecasting. In the present paper eight Evolutionary Computation methods, namely the first version of Differential Evolution (DE), Distributed DE with Explorative-Exploitative Population Families, Self-Adaptive DE, DE with Global and Local Neighbors, Grouping DE, JADE, Comprehensive Learning Particle Swarm Optimization and Efficient Population Utilization Strategy Particle Swarm Optimization are tested against the Levenberg-Marquardt algorithm - probably the most efficient in terms of speed and success rate among gradient-based methods. The Annapolis River catchment was selected as the area of this study due to its specific climatic conditions, characterized by significant seasonal changes in runoff, rapid floods, dry summers, severe winters with snowfall, snow melting, frequent freeze and thaw, and presence of river ice - conditions which make flow forecasting more troublesome. The overall performance of the Levenberg-Marquardt algorithm and the DE with Global and Local Neighbors method for neural network training turns out to be superior to other
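
    For readers unfamiliar with the Differential Evolution family named above, the following sketch shows the original DE/rand/1/bin variant training the weights of a small multi-layer perceptron on synthetic data. The network size, the control parameters F and CR, and the toy data set are illustrative defaults, not the paper's experimental settings.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy stand-in for a rainfall-runoff series: inputs X, targets y.
      X = rng.uniform(-1, 1, (200, 3))
      y = np.sin(X.sum(axis=1))

      H = 5                                   # hidden units
      DIM = 3 * H + H + H + 1                 # weights + biases of a 3-H-1 MLP

      def mlp(w, X):
          W1 = w[: 3 * H].reshape(3, H)
          b1 = w[3 * H : 4 * H]
          W2 = w[4 * H : 5 * H]
          b2 = w[-1]
          return np.tanh(X @ W1 + b1) @ W2 + b2

      def mse(w):
          return np.mean((mlp(w, X) - y) ** 2)

      # Classic DE/rand/1/bin, the first DE variant listed in the abstract;
      # F and CR are common defaults, not the paper's settings.
      def de(pop_size=40, gens=300, F=0.5, CR=0.9):
          pop = rng.uniform(-1, 1, (pop_size, DIM))
          cost = np.array([mse(w) for w in pop])
          for _ in range(gens):
              for i in range(pop_size):
                  idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                  a, b, c = pop[idx]
                  mutant = a + F * (b - c)                 # differential mutation
                  cross = rng.random(DIM) < CR             # binomial crossover
                  cross[rng.integers(DIM)] = True
                  trial = np.where(cross, mutant, pop[i])
                  if (t := mse(trial)) < cost[i]:          # greedy selection
                      pop[i], cost[i] = trial, t
          return pop[cost.argmin()], cost.min()

      w_best, err = de()
      print("training MSE:", err)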

  7. Augmented Evolutionary Computation Using Genetic Programming

    NASA Astrophysics Data System (ADS)

    Ae, Tadashi; Kamitani, Motoki

    2006-06-01

    Evolutionary computation is an anticipatory computation for the generation of creative sets, including sets of sequences. Interactive Evolutionary Computation (IEC) is one well-known form of evolutionary computation, but it is not necessarily efficient because it may tire the user. We therefore propose an improved method, Augmented Interactive Evolutionary Computation (AIEC), in which hypothesis/verification is applied to the generative agent instead of the objective element. We describe this type of evolutionary computation as realized by genetic programming.

  8. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777
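
    As a rough illustration of the mechanism-design view (not Pei's formal framework), the sketch below treats each individual as a self-interested agent holding a mixed strategy over two mutation operators and updating its preference according to the payoff (fitness gain) each operator delivers; the strategy mix the population settles into plays the role of an equilibrium. All names and constants here are invented.

      import random

      def sphere(x):
          return sum(v * v for v in x)

      # Two "strategies" (operators) an agent can choose between.
      def small_step(x):
          return [v + random.gauss(0, 0.05) for v in x]

      def big_step(x):
          return [v + random.gauss(0, 0.5) for v in x]

      STRATEGIES = [small_step, big_step]

      def run(pop_size=30, gens=200):
          pop = [[random.uniform(-3, 3) for _ in range(5)] for _ in range(pop_size)]
          prefs = [0.5] * pop_size   # each agent's P(choose small_step)
          for _ in range(gens):
              for i in range(pop_size):
                  k = 0 if random.random() < prefs[i] else 1
                  child = STRATEGIES[k](pop[i])
                  gain = sphere(pop[i]) - sphere(child)
                  if gain > 0:
                      pop[i] = child
                  # Payoff-driven preference update: agents drift toward the
                  # strategy that serves their self-interest (fitness gain).
                  target = 1.0 if (k == 0) == (gain > 0) else 0.0
                  prefs[i] += 0.05 * (target - prefs[i])
          return min(sphere(x) for x in pop), sum(prefs) / pop_size

      best, mean_pref = run()
      print("best fitness:", best, "mean P(small_step):", mean_pref)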

  9. Algorithmic Mechanism Design of Evolutionary Computation.

    PubMed

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm.

  10. Retrieval of Earthshine Spectra Using Evolutionary Computational Methods as Analogs for Extra-Solar Planetary Spectra

    NASA Astrophysics Data System (ADS)

    Terrile, R. J.; Tinetti, G.; Lee, S.; Fink, W.; Huntsberger, T.; von Allmen, P.; Tisdale, E. R.

    2006-05-01

    The spectral information provided by the next generation of extra-solar planet exploration missions will be averaged over the visible disk and the exposure time. Most probably, the interpretation of the observed spectra will not be unique, but families of solutions will provide equally good explanations of the spectral features (degeneracy). Traditional retrieval techniques developed to study the environments of planets in our solar system are inadequate to analyze disk/time-averaged spectra because they assume homogeneous environments and short observational time scales, and search only for solutions belonging to the local domain of the initial conditions. We developed an innovative technique that couples evolutionary computational methods to a 3D model that simulates the spectral response of the rotating planet (Tinetti et al., 2005). We have performed a set of preliminary experiments in retrieving the earthshine spectrum recorded by Woolf et al. (2002): nine weighting parameters were retrieved, corresponding to different surface/cloud types (ocean, forest, grass, ground, tundra, ice, high/medium/low clouds) uniformly distributed over 48 planetary pixels. Two distinct retrieval experiments were run: i) evolution of one large solution population with 1000 individuals and ii) evolution of multiple solution islands with 100 individuals in each island. These two experiments returned over 2700 automatically generated retrievals satisfying the error criterion (fitness) of a 10% least-squares match to the observed spectra. The spectral retrieval procedure with this reduced set of parameters already resulted in a high-quality fit of the earthshine spectrum, in agreement with ground truth. The retrieved solutions were divided into classes of spectral fit using clustering tools, which helped visualize the degeneracy in the set of solutions. We have also repeated the experiment using three non-uniformly distributed cloud types over ground-truth surface types in 22 illuminated pixels
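
    The two population layouts compared above, one large panmictic population versus many smaller islands exchanging migrants, can be sketched as follows. The nine-parameter vectors mirror the weighting parameters of the abstract, but the fitness function is a stand-in for the actual least-squares spectral mismatch, and the island count, sizes, and migration interval are illustrative.

      import random

      N_PARAMS = 9  # surface/cloud weighting parameters, as in the abstract

      def fitness(w):
          # Stand-in for the least-squares mismatch between modelled and
          # observed earthshine spectra; a real run would call the 3D model.
          return sum((wi - 0.3) ** 2 for wi in w)

      def step(island):
          island.sort(key=fitness)
          elite = island[: len(island) // 2]
          children = []
          while len(elite) + len(children) < len(island):
              a, b = random.sample(elite, 2)
              children.append([(x + y) / 2 + random.gauss(0, 0.05)
                               for x, y in zip(a, b)])
          return elite + children

      def island_model(n_islands=10, island_size=100, gens=200, migrate_every=20):
          islands = [[[random.random() for _ in range(N_PARAMS)]
                      for _ in range(island_size)] for _ in range(n_islands)]
          for g in range(gens):
              islands = [step(isl) for isl in islands]
              if g % migrate_every == 0:
                  # Ring migration: each island sends its best individual to
                  # the next island, replacing one member there.
                  for i, isl in enumerate(islands):
                      islands[(i + 1) % n_islands][-1] = min(isl, key=fitness)
          return min((min(isl, key=fitness) for isl in islands), key=fitness)

      best = island_model()
      print(fitness(best))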

  11. Retrieval of Extra-Solar Planetary Spectra Using Evolutionary Computational Methods

    NASA Astrophysics Data System (ADS)

    Terrile, R. J.; Fink, W.; Huntsberger, T.; Lee, S.; Tisdale, E. R.; Tinetti, G.; von Allmen, P.

    2005-12-01

    The spectral information provided by the next generation of extra-solar planet exploration missions will be averaged over the visible disk and the exposure time. Most probably, the interpretation of the observed spectra will not be unique, but families of solutions will provide equally good explanations of the spectral features (degeneracy). Traditional retrieval techniques developed to study the environments of planets in our solar system are inadequate to analyze disk/time-averaged spectra because they assume homogeneous environments and short observational time scales, and search only for solutions belonging to the local domain of the initial conditions. We developed an innovative technique that couples evolutionary computational methods to a 3D model that simulates the spectral response of the rotating planet (Tinetti et al., 2005). We have performed a set of preliminary experiments in retrieving the earthshine spectrum recorded by Woolf et al. (2002): nine weighting parameters were retrieved, corresponding to different surface/cloud types (ocean, forest, grass, ground, tundra, ice, high/medium/low clouds) uniformly distributed over 48 planetary pixels. Two distinct retrieval experiments were run: i) evolution of one large solution population with 1000 individuals and ii) evolution of multiple solution islands with 100 individuals in each island. These two experiments returned over 2700 automatically generated retrievals satisfying the error criterion (fitness) of a 10% least-squares match to the observed spectra. The spectral retrieval procedure with this reduced set of parameters already resulted in a high-quality fit of the earthshine spectrum, in agreement with ground truth. The retrieved solutions were divided into classes of spectral fit using clustering tools, which helped visualize the degeneracy in the set of solutions. As a next step we are repeating the experiment using nine non-uniformly distributed surface/cloud types in 12 planetary pixels (108 retrieved

  12. Statistical methods for evolutionary trees.

    PubMed

    Edwards, A W F

    2009-09-01

    In 1963 and 1964, L. L. Cavalli-Sforza and A. W. F. Edwards introduced novel methods for computing evolutionary trees from genetical data, initially for human populations from blood-group gene frequencies. The most important development was their introduction of statistical methods of estimation applied to stochastic models of evolution.

  13. A constrained evolutionary computation method for detecting controlling regions of cortical networks.

    PubMed

    Tang, Yang; Wang, Zidong; Gao, Huijun; Swift, Stephen; Kurths, Jürgen

    2012-01-01

    Controlling regions in cortical networks, which serve as key nodes to control the dynamics of networks to a desired state, can be detected by minimizing the eigenratio R and the maximum imaginary part σ of an extended connection matrix. Until now, optimal selection of the set of controlling regions is still an open problem, and this paper represents the first attempt to include two measures of controllability into one unified framework. The detection problem of controlling regions in cortical networks is converted into a constrained optimization problem (COP), where the objective function R is minimized and σ is regarded as a constraint. Then, the detection of controlling regions of a weighted and directed complex network (e.g., a cortical network of a cat) is thoroughly investigated. The controlling regions of cortical networks are successfully detected by means of an improved dynamic hybrid framework (IDyHF). Our experiments verify that the proposed IDyHF outperforms two recently developed evolutionary computation methods in the constrained optimization field and some traditional methods in control theory as well as graph theory. Based on the IDyHF, the controlling regions are detected in a microscopic and macroscopic way. Our results unveil the dependence of controlling regions on the number of driver nodes l and the constraint r. The controlling regions are largely selected from the regions with a large in-degree and a small out-degree. When r = +∞, there exists a concave shape of the mean degrees of the driver nodes, i.e., the regions with a large degree are of great importance to the control of the networks when l is small and the regions with a small degree are helpful to control the networks when l increases. When r = 0, the mean degrees of the driver nodes increase as a function of l. We find that controlling σ is becoming more important in controlling a cortical network with increasing l. The methods and results of detecting controlling
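
    A minimal sketch of the constrained fitness evaluation described above, assuming a simplified "extended connection matrix" (the graph Laplacian plus a feedback gain on each pinned driver node); the paper's exact matrix construction and its IDyHF optimizer are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(1)

      N = 20
      A = (rng.random((N, N)) < 0.15) * rng.random((N, N))   # weighted, directed
      np.fill_diagonal(A, 0.0)

      def extended_matrix(drivers, kappa=5.0):
          # Simplified extended connection matrix: the graph Laplacian with an
          # extra feedback gain kappa on each driver (pinned) node's diagonal,
          # following the usual pinning-control construction; the paper's
          # matrix for cortical networks may differ in detail.
          L = np.diag(A.sum(axis=1)) - A
          for d in drivers:
              L[d, d] += kappa
          return L

      def objectives(drivers):
          eig = np.linalg.eigvals(extended_matrix(drivers))
          R = eig.real.max() / (eig.real.min() + 1e-12)   # eigenratio R
          sigma = np.abs(eig.imag).max()                  # max imaginary part
          return R, sigma

      def fitness(drivers, r=0.5):
          # The COP form used in the abstract: minimize R subject to sigma <= r,
          # handled here with a plain penalty instead of the paper's IDyHF.
          R, sigma = objectives(drivers)
          return R + 1000.0 * max(0.0, sigma - r)

      print(fitness([0, 3, 7]))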

  14. EVOLUTIONARY COMPUTING PROJECT

    SciTech Connect

    C. BARRETT; C. REIDYS

    2000-09-01

    This report summarizes LDRD-funded mathematical research related to computer simulation, inspired in part by combinatorial analysis of sequence to structure relationships of bio-molecules. Computer simulations calculate the interactions among many individual, local entities, thereby generating global dynamics. The objective of this project was to establish a mathematical basis for a comprehensive theory of computer simulations. This mathematical theory is intended to rigorously underwrite very large complex simulations, including simulation of bio- and socio-technical systems. We believe excellent progress has been made. Abstraction of three main ingredients of simulation forms the mathematical setting, called Sequential Dynamical Systems (SDS): (1) functions realized as data-local procedures represent entity state transformations, (2) a graph that expresses locality of the functions and which represents the dependencies among entities, and (3) an ordering, or schedule, according to which the entities are evaluated, e.g., updated. The research spans algebraic foundations, formal dynamical systems, computer simulation, and theoretical computer science. The theoretical approach is also deeply related to theoretical issues in parallel compilation. Numerous publications were produced, follow-on projects have been identified and are being developed programmatically, and a new area in computational algebra, SDS, was produced.
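
    The three SDS ingredients enumerated above can be shown in a few lines: local functions, a dependency graph, and a schedule. The NOR rule on a 5-cycle below is a textbook example, not one from the report; note how two different schedules send the same initial state to different states, which is exactly what the ordering ingredient contributes.

      # Minimal Sequential Dynamical System (SDS): local functions, a
      # dependency graph, and an update schedule.

      def nor(values):
          return int(not any(values))

      # Dependency graph: a cycle on 5 vertices (each vertex sees its neighbours).
      neighbours = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}

      def sds_step(state, schedule):
          # Vertices update one at a time, in schedule order; later updates
          # see the results of earlier ones -- the defining feature of an SDS.
          state = list(state)
          for v in schedule:
              state[v] = nor([state[v]] + [state[u] for u in neighbours[v]])
          return tuple(state)

      start = (0, 0, 0, 0, 0)
      print(sds_step(start, [0, 1, 2, 3, 4]))  # one schedule...
      print(sds_step(start, [4, 3, 2, 1, 0]))  # ...reversed: different dynamics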

  15. Scalable computing for evolutionary genomics.

    PubMed

    Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert

    2012-01-01

    Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster, and pipeline, in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages, of interest to evolutionary biology, are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project
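
    The "poor man's parallelization" idea described above, running whole programs in parallel as separate processes, can be emulated locally with a process pool standing in for a cluster job scheduler. The blastp command line and input file names below are illustrative; BioNode itself is a VM image, not a Python library.

      # Run whole external programs as independent processes instead of
      # rewriting them for parallelism; a real BioNode cluster would hand
      # these jobs to a scheduler.
      import subprocess
      from concurrent.futures import ProcessPoolExecutor

      def run_job(fasta_file):
          # Each job is a complete external program run on one input file.
          cmd = ["blastp", "-query", fasta_file, "-db", "nr",
                 "-out", fasta_file + ".blast"]
          return subprocess.run(cmd, capture_output=True).returncode

      inputs = ["seqs_%03d.fasta" % i for i in range(16)]   # hypothetical inputs

      if __name__ == "__main__":
          with ProcessPoolExecutor(max_workers=4) as pool:  # 4 concurrent jobs
              for f, rc in zip(inputs, pool.map(run_job, inputs)):
                  print(f, "exit code", rc)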

  16. Evolutionary Computing for Low-thrust Navigation

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Fink, Wolfgang; vonAllmen, Paul; Petropoulos, Anastassios E.; Russell, Ryan P.; Terrile, Richard J.

    2005-01-01

    The development of new mission concepts requires efficient methodologies to analyze, design and simulate the concepts before implementation. New mission concepts are increasingly considering the use of ion thrusters for fuel-efficient navigation in deep space. This paper presents parallel, evolutionary computing methods to design trajectories of spacecraft propelled by ion thrusters and to assess the trade-off between delivered payload mass and required flight time. The developed methods utilize a distributed computing environment in order to speed up computation, and use evolutionary algorithms to find globally Pareto-optimal solutions. The methods are coupled with two main traditional trajectory design approaches, which are called direct and indirect. In the direct approach, thrust control is discretized in either arc time or arc length, and the resulting discrete thrust vectors are optimized. In the indirect approach, a thrust control problem is transformed into a costate control problem, and the initial values of the costate vector are optimized. The developed methods are applied to two problems: 1) an orbit transfer around the Earth and 2) a transfer between two distant retrograde orbits around Europa, the closest to Jupiter of the icy Galilean moons. The optimal solutions found with the present methods are comparable to other state-of-the-art trajectory optimizers and to analytical approximations for optimal transfers, while the required computational time is several orders of magnitude shorter than other optimizers thanks to an intelligent design of control vector discretization, advanced algorithmic parameterization, and parallel computing.
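
    The direct approach and the payload/flight-time trade-off can be caricatured with a toy multi-objective evolutionary loop over a discretized thrust profile; the simulate() function below is a crude invented stand-in for a trajectory propagator, and the Pareto filtering is far simpler than the optimizers used in the paper.

      import random

      N_ARCS = 12   # thrust control discretized into arcs (direct approach)

      def simulate(thrusts):
          # Toy stand-in for a propagator: more total thrust shortens the
          # flight but burns propellant; uneven profiles waste propellant.
          total = sum(thrusts)
          flight_time = 100.0 / (1.0 + total)                    # minimize
          payload = 1000.0 - 200.0 * sum(t * t for t in thrusts) # maximize
          return flight_time, -payload      # express both as "minimize"

      def dominates(a, b):
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def pareto_front(pop):
          objs = [simulate(p) for p in pop]
          return [p for p, o in zip(pop, objs)
                  if not any(dominates(o2, o) for o2 in objs if o2 != o)]

      def evolve(pop_size=60, gens=100):
          pop = [[random.uniform(0, 1) for _ in range(N_ARCS)] for _ in range(pop_size)]
          for _ in range(gens):
              front = pareto_front(pop)
              children = [[max(0.0, g + random.gauss(0, 0.05))
                           for g in random.choice(front)]
                          for _ in range(pop_size - len(front))]
              pop = front + children
          return pareto_front(pop)

      for sol in evolve()[:5]:
          t, negm = simulate(sol)
          print("flight time %.1f  payload %.0f" % (t, -negm))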

  17. Optimizing a reconfigurable material via evolutionary computation

    NASA Astrophysics Data System (ADS)

    Wilken, Sam; Miskin, Marc Z.; Jaeger, Heinrich M.

    2015-08-01

    Rapid prototyping by combining evolutionary computation with simulations is becoming a powerful tool for solving complex design problems in materials science. This method of optimization operates in a virtual design space that simulates potential material behaviors and after completion needs to be validated by experiment. However, in principle an evolutionary optimizer can also operate on an actual physical structure or laboratory experiment directly, provided the relevant material parameters can be accessed by the optimizer and information about the material's performance can be updated by direct measurements. Here we provide a proof of concept of such direct, physical optimization by showing how a reconfigurable, highly nonlinear material can be tuned to respond to impact. We report on an entirely computer controlled laboratory experiment in which a 6 ×6 grid of electromagnets creates a magnetic field pattern that tunes the local rigidity of a concentrated suspension of ferrofluid and iron filings. A genetic algorithm is implemented and tasked to find field patterns that minimize the force transmitted through the suspension. Searching within a space of roughly 1010 possible configurations, after testing only 1500 independent trials the algorithm identifies an optimized configuration of layered rigid and compliant regions.
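
    In outline, the experiment above is a genetic algorithm whose fitness evaluations are physical measurements. The sketch below substitutes a noisy software stub for the instrumented impact test and treats each of the 36 electromagnets as simply on or off; the real experiment's field patterns and its budget of 1500 trials are only loosely mirrored.

      import random

      GRID = 36   # 6 x 6 electromagnets, each simply on or off here

      def transmitted_force(pattern):
          # Placeholder for the laboratory measurement: in the experiment this
          # number came from an instrumented impact through the suspension.
          return sum(pattern[i] ^ pattern[(i + 6) % GRID] for i in range(GRID)) \
                 + 0.1 * random.random()    # measurement noise

      def ga(pop_size=20, trials=1500, p_mut=1.0 / GRID):
          pop = [[random.randint(0, 1) for _ in range(GRID)] for _ in range(pop_size)]
          evals = 0
          while evals < trials:
              scored = sorted(pop, key=transmitted_force)  # "measure" each pattern
              evals += len(pop)
              keep = scored[: pop_size // 2]               # truncation selection
              pop = keep + [
                  [g ^ (random.random() < p_mut) for g in random.choice(keep)]
                  for _ in range(pop_size - len(keep))
              ]
          return min(pop, key=transmitted_force)

      best = ga()
      print("best field pattern:", best)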

  18. From computers to cultivation: reconceptualizing evolutionary psychology

    PubMed Central

    Barrett, Louise; Pollet, Thomas V.; Stulp, Gert

    2014-01-01

    Does evolutionary theorizing have a role in psychology? This is a more contentious issue than one might imagine, given that, as evolved creatures, the answer must surely be yes. The contested nature of evolutionary psychology lies not in our status as evolved beings, but in the extent to which evolutionary ideas add value to studies of human behavior, and the rigor with which these ideas are tested. This, in turn, is linked to the framework in which particular evolutionary ideas are situated. While the framing of the current research topic places the brain-as-computer metaphor in opposition to evolutionary psychology, the most prominent school of thought in this field (born out of cognitive psychology, and often known as the Santa Barbara school) is entirely wedded to the computational theory of mind as an explanatory framework. Its unique aspect is to argue that the mind consists of a large number of functionally specialized (i.e., domain-specific) computational mechanisms, or modules (the massive modularity hypothesis). Far from offering an alternative to, or an improvement on, the current perspective, we argue that evolutionary psychology is a mainstream computational theory, and that its arguments for domain-specificity often rest on shaky premises. We then go on to suggest that the various forms of e-cognition (i.e., embodied, embedded, enactive) represent a true alternative to standard computational approaches, with an emphasis on “cognitive integration” or the “extended mind hypothesis” in particular. We feel this offers the most promise for human psychology because it incorporates the social and historical processes that are crucial to human “mind-making” within an evolutionarily informed framework. In addition to linking to other research areas in psychology, this approach is more likely to form productive links to other disciplines within the social sciences, not least by encouraging a healthy pluralism in approach. PMID:25161633

  19. From computers to cultivation: reconceptualizing evolutionary psychology.

    PubMed

    Barrett, Louise; Pollet, Thomas V; Stulp, Gert

    2014-01-01

    Does evolutionary theorizing have a role in psychology? This is a more contentious issue than one might imagine, given that, as evolved creatures, the answer must surely be yes. The contested nature of evolutionary psychology lies not in our status as evolved beings, but in the extent to which evolutionary ideas add value to studies of human behavior, and the rigor with which these ideas are tested. This, in turn, is linked to the framework in which particular evolutionary ideas are situated. While the framing of the current research topic places the brain-as-computer metaphor in opposition to evolutionary psychology, the most prominent school of thought in this field (born out of cognitive psychology, and often known as the Santa Barbara school) is entirely wedded to the computational theory of mind as an explanatory framework. Its unique aspect is to argue that the mind consists of a large number of functionally specialized (i.e., domain-specific) computational mechanisms, or modules (the massive modularity hypothesis). Far from offering an alternative to, or an improvement on, the current perspective, we argue that evolutionary psychology is a mainstream computational theory, and that its arguments for domain-specificity often rest on shaky premises. We then go on to suggest that the various forms of e-cognition (i.e., embodied, embedded, enactive) represent a true alternative to standard computational approaches, with an emphasis on "cognitive integration" or the "extended mind hypothesis" in particular. We feel this offers the most promise for human psychology because it incorporates the social and historical processes that are crucial to human "mind-making" within an evolutionarily informed framework. In addition to linking to other research areas in psychology, this approach is more likely to form productive links to other disciplines within the social sciences, not least by encouraging a healthy pluralism in approach.

  20. Integrated evolutionary computation neural network quality controller for automated systems

    SciTech Connect

    Patro, S.; Kolarik, W.J.

    1999-06-01

    With increasing competition in the global market, more and more stringent quality standards and specifications are being demanded at lower costs. Manufacturing applications of computing power are becoming more common. The application of neural networks to identification and control of dynamic processes has been discussed. The limitations of using neural networks for control purposes have been pointed out and a different technique, evolutionary computation, has been discussed. The results of identifying and controlling an unstable, dynamic process using evolutionary computation methods have been presented. A framework for an integrated system, using both neural networks and evolutionary computation, has been proposed to identify the process and then control the product quality, in a dynamic, multivariable system, in real-time.

  21. Evolutionary Games and Computer Simulations

    NASA Astrophysics Data System (ADS)

    Huberman, Bernardo A.; Glance, Natalie S.

    1993-08-01

    The Prisoner's Dilemma has long been considered the paradigm for studying the emergence of cooperation among selfish individuals. Because of its importance, it has been studied through computer experiments as well as in the laboratory and by analytical means. However, there are important differences between the way a system composed of many interacting elements is simulated by a digital machine and the manner in which it behaves when studied in real experiments. In some instances, these disparities can be marked enough so as to cast doubt on the implications of cellular automata-type simulations for the study of cooperation in social systems. In particular, if such a simulation imposes space-time granularity, then its ability to describe the real world may be compromised. Indeed, we show that the results of digital simulations regarding territoriality and cooperation differ greatly when time is discrete as opposed to continuous.
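
    The contrast the authors draw between discrete and continuous time can be made concrete with a one-dimensional spatial Prisoner's Dilemma in which sites imitate their best-scoring neighbour. The payoff values and the imitation rule below are common textbook choices, not those of the paper; the point is only the difference between synchronous sweeps and one-site-at-a-time updates.

      import random

      # Payoffs: temptation, reward, punishment, sucker (standard choices).
      T, R, P, S = 1.9, 1.0, 0.0, 0.0

      def payoff(me, other):
          return [[P, T], [S, R]][me][other]   # 0 = defect, 1 = cooperate

      def score(state, i):
          n = len(state)
          return sum(payoff(state[i], state[j]) for j in ((i - 1) % n, (i + 1) % n))

      def imitate_best(state, i):
          n = len(state)
          best = max(((i - 1) % n, i, (i + 1) % n), key=lambda j: score(state, j))
          return state[best]

      def run(synchronous, sweeps=200, n=100, seed=0):
          random.seed(seed)
          state = [random.randint(0, 1) for _ in range(n)]
          if synchronous:
              for _ in range(sweeps):          # discrete time: simultaneous update
                  state = [imitate_best(state, i) for i in range(n)]
          else:
              for _ in range(sweeps * n):      # continuous-time proxy: one site at a time
                  i = random.randrange(n)
                  state[i] = imitate_best(state, i)
          return sum(state) / n

      print("cooperation, synchronous: ", run(True))
      print("cooperation, asynchronous:", run(False))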

  22. A Bright Future for Evolutionary Methods in Drug Design.

    PubMed

    Le, Tu C; Winkler, David A

    2015-08-01

    Most medicinal chemists understand that chemical space is extremely large, essentially infinite. Although high-throughput experimental methods allow exploration of drug-like space more rapidly, they are still insufficient to fully exploit the opportunities that such large chemical space offers. Evolutionary methods can synergistically blend automated synthesis and characterization methods with computational design to identify promising regions of chemical space more efficiently. We describe how evolutionary methods are implemented, and provide examples of published drug development research in which these methods have generated molecules with increased efficacy. We anticipate that evolutionary methods will play an important role in future drug discovery.

  23. A New Multiplex-PCR for Urinary Tract Pathogen Detection Using Primer Design Based on an Evolutionary Computation Method.

    PubMed

    García, Liliana Torcoroma; Cristancho, Laura Maritza; Vera, Erika Patricia; Begambre, Oscar

    2015-10-01

    This work describes a new strategy for the optimal design of Multiplex-PCR primer sequences. The process is based on the Particle Swarm Optimization-Simplex algorithm (Mult-PSOS). Diverging from previous solutions centered on heuristic tools, the Mult-PSOS is self-configured because it does not require the definition of the algorithm's initial search parameters. The successful performance of this method was validated in vitro using Multiplex-PCR assays. For this validation, seven gene sequences of the most prevalent bacteria implicated in urinary tract infections were taken as DNA targets. The in vitro tests confirmed the good performance of the Mult-PSOS, with respect to infectious disease diagnosis, in the rapid and efficient selection of the optimal oligonucleotide sequences for Multiplex-PCRs. The predicted sequences allowed the adequate amplification of all amplicons in a single step (with the correct amount of DNA template and primers), significantly reducing the need for trial-and-error experiments. In addition, owing to its independence from the initial selection of the heuristic constants, the Mult-PSOS can be employed by users who are not experts in computational techniques or in primer design problems.
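
    Mult-PSOS couples Particle Swarm Optimization with a simplex step and self-configures its search parameters; neither refinement is reproduced in the plain PSO sketch below, which minimizes an invented stand-in for a primer-quality score simply to show the swarm update at the method's core.

      import random

      DIM = 4   # e.g., encoded primer design variables (illustrative)

      def primer_score(x):
          # Stand-in for a primer quality objective (melting-temperature match,
          # dimer penalties, etc.); the real Mult-PSOS objective is not shown.
          return sum((v - 0.5) ** 2 for v in x)

      def pso(n=25, iters=200, w=0.7, c1=1.5, c2=1.5):
          pos = [[random.random() for _ in range(DIM)] for _ in range(n)]
          vel = [[0.0] * DIM for _ in range(n)]
          pbest = [p[:] for p in pos]                 # personal bests
          gbest = min(pos, key=primer_score)[:]       # global best
          for _ in range(iters):
              for i in range(n):
                  for d in range(DIM):
                      vel[i][d] = (w * vel[i][d]
                                   + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                   + c2 * random.random() * (gbest[d] - pos[i][d]))
                      pos[i][d] += vel[i][d]
                  if primer_score(pos[i]) < primer_score(pbest[i]):
                      pbest[i] = pos[i][:]
                      if primer_score(pos[i]) < primer_score(gbest):
                          gbest = pos[i][:]
          return gbest

      print(primer_score(pso()))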

  24. Biomimetic design processes in architecture: morphogenetic and evolutionary computational design.

    PubMed

    Menges, Achim

    2012-03-01

    Design computation has profound impact on architectural design methods. This paper explains how computational design enables the development of biomimetic design processes specific to architecture, and how they need to be significantly different from established biomimetic processes in engineering disciplines. The paper first explains the fundamental difference between computer-aided and computational design in architecture, as the understanding of this distinction is of critical importance for the research presented. Thereafter, the conceptual relation and possible transfer of principles from natural morphogenesis to design computation are introduced and the related developments of generative, feature-based, constraint-based, process-based and feedback-based computational design methods are presented. This morphogenetic design research is then related to exploratory evolutionary computation, followed by the presentation of two case studies focusing on the exemplary development of spatial envelope morphologies and urban block morphologies.

  25. Evolutionary Computation Applied to the Tuning of MEMS Gyroscopes

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Fink, Wolfgang; Ferguson, Michael I.; Peay, Chris; Oks, Boris; Terrile, Richard; Yee, Karl

    2005-01-01

    We propose a tuning method for MEMS gyroscopes based on evolutionary computation to efficiently increase the sensitivity of MEMS gyroscopes through tuning and, furthermore, to find the optimally tuned configuration for this state of increased sensitivity. The tuning method was tested for the second generation JPL/Boeing Post-resonator MEMS gyroscope using the measurement of the frequency response of the MEMS device in open-loop operation.

  26. Using Evolutionary Computation on GPS Position Correction

    PubMed Central

    2014-01-01

    More and more devices are equipped with global positioning system (GPS) receivers. However, handheld devices with consumer-grade GPS receivers usually have low positioning accuracy. A position correction algorithm is therefore useful in this case. In this paper, we propose an evolutionary computation based technique to generate a correction function from two GPS receivers and a known reference location. By locating one GPS receiver at the known location and combining its longitude and latitude readings with the exact positioning information, the proposed technique is capable of evolving a correction function. The proposed technique can be implemented and executed on handheld devices without hardware reconfiguration. Experiments are conducted to demonstrate the performance of the proposed technique. Positioning error could be significantly reduced from the order of 10 m to the order of 1 m. PMID:24578657
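
    The correction scheme described above can be sketched as a tiny (1+1) evolution strategy that evolves a per-axis offset using fixes from the receiver parked on the known point; the paper evolves a richer correction function, and all coordinates and noise levels below are invented.

      import random

      # Known surveyed location of the reference receiver (illustrative values).
      TRUE_LAT, TRUE_LON = 24.7866, 120.9976

      def reference_readings(n=50):
          # Stand-in for raw fixes from the receiver on the known point:
          # truth plus a systematic bias plus noise.
          return [(TRUE_LAT + 3e-5 + random.gauss(0, 1e-5),
                   TRUE_LON - 2e-5 + random.gauss(0, 1e-5)) for _ in range(n)]

      def corrected(fix, coeffs):
          # Correction function: here just an evolved offset per axis.
          return fix[0] + coeffs[0], fix[1] + coeffs[1]

      def error(coeffs, readings):
          return sum((clat - TRUE_LAT) ** 2 + (clon - TRUE_LON) ** 2
                     for clat, clon in (corrected(r, coeffs) for r in readings))

      def evolve(readings, gens=300):
          best = [0.0, 0.0]
          for _ in range(gens):
              child = [c + random.gauss(0, 1e-5) for c in best]  # (1+1)-ES mutation
              if error(child, readings) < error(best, readings):
                  best = child
          return best

      coeffs = evolve(reference_readings())
      print("evolved correction (deg):", coeffs)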

  27. Regulatory RNA design through evolutionary computation and strand displacement.

    PubMed

    Rostain, William; Landrain, Thomas E; Rodrigo, Guillermo; Jaramillo, Alfonso

    2015-01-01

    The discovery and study of a vast number of regulatory RNAs in all kingdoms of life over the past decades has allowed the design of new synthetic RNAs that can regulate gene expression in vivo. Riboregulators, in particular, have been used to activate or repress gene expression. However, to accelerate and scale up the design process, synthetic biologists require computer-assisted design tools, without which riboregulator engineering will remain a case-by-case design process requiring expert attention. Recently, the design of RNA circuits by evolutionary computation and adapting strand displacement techniques from nanotechnology has proven to be suited to the automated generation of DNA sequences implementing regulatory RNA systems in bacteria. Herein, we present our method to carry out such evolutionary design and how to use it to create various types of riboregulators, allowing the systematic de novo design of genetic control systems in synthetic biology.

  28. Embodiment of evolutionary computation in general agents.

    PubMed

    Smith, R E; Bonacina, C; Kearney, P; Merlat, W

    2000-01-01

    Holland's Adaptation in Natural and Artificial Systems largely dealt with how systems, comprised of many self-interested entities, can and should adapt as a whole. This seminal book led to the last 25 years of work in genetic algorithms (GAs) and related forms of evolutionary computation (EC). In recent years, the expansion of the Internet, other telecommunications technologies, and other large-scale networks has led to a world where large numbers of semi-autonomous software entities (i.e., agents) will be interacting in an open, universal system. This development casts the importance of Holland's legacy in a new light. This paper argues that Holland's fundamental arguments, and the years of developments that have followed, have a direct impact on systems of general network agents, regardless of whether they explicitly exploit EC. However, it also argues that the techniques and theories of EC cannot be directly transferred to the world of general (rather than EC-specific) agents without examination of the effects that are embodied in general software agents. This paper introduces a framework for EC interchanges between general-purpose software agents. Preliminary results are shown that illustrate the EC effects of asynchronous actions of agents within this framework. Building on this framework, coevolutionary agents that interact in a simulated producer/consumer economy are introduced. Using these preliminary results as illustrations, areas for future investigation of embodied EC software agents are discussed.

  29. From evolutionary computation to the evolution of things

    NASA Astrophysics Data System (ADS)

    Eiben, Agoston E.; Smith, Jim

    2015-05-01

    Evolution has provided a source of inspiration for algorithm designers since the birth of computers. The resulting field, evolutionary computation, has been successful in solving engineering tasks ranging in outlook from the molecular to the astronomical. Today, the field is entering a new phase as evolutionary algorithms that take place in hardware are developed, opening up new avenues towards autonomous machines that can adapt to their environment. We discuss how evolutionary computation compares with natural evolution and what its benefits are relative to other computing approaches, and we introduce the emerging area of artificial evolution in physical systems.

  30. From evolutionary computation to the evolution of things.

    PubMed

    Eiben, Agoston E; Smith, Jim

    2015-05-28

    Evolution has provided a source of inspiration for algorithm designers since the birth of computers. The resulting field, evolutionary computation, has been successful in solving engineering tasks ranging in outlook from the molecular to the astronomical. Today, the field is entering a new phase as evolutionary algorithms that take place in hardware are developed, opening up new avenues towards autonomous machines that can adapt to their environment. We discuss how evolutionary computation compares with natural evolution and what its benefits are relative to other computing approaches, and we introduce the emerging area of artificial evolution in physical systems.

  31. Development of X-TOOLSS: Preliminary Design of Space Systems Using Evolutionary Computation

    NASA Technical Reports Server (NTRS)

    Schnell, Andrew R.; Hull, Patrick V.; Turner, Mike L.; Dozier, Gerry; Alverson, Lauren; Garrett, Aaron; Reneau, Jarred

    2008-01-01

    Evolutionary computational (EC) techniques such as genetic algorithms (GA) have been identified as promising methods to explore the design space of mechanical and electrical systems at the earliest stages of design. In this paper the authors summarize their research in the use of evolutionary computation to develop preliminary designs for various space systems. An evolutionary computational solver developed over the course of the research, X-TOOLSS (Exploration Toolset for the Optimization of Launch and Space Systems) is discussed. With the success of early, low-fidelity example problems, an outline of work involving more computationally complex models is discussed.

  32. MEGA5: Molecular Evolutionary Genetics Analysis Using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods

    PubMed Central

    Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir

    2011-01-01

    Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier for the use of both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353

  33. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as: ease of implementation, modest requirements on the objective function, a good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in exquisite detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained
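
    REPA's repairing idea, moving an infeasible candidate back into the feasible region rather than merely penalizing it, can be sketched generically as below; the rescaling repair is a simple stand-in, not the REFIND or REPROPT procedures of the thesis, and the objective and constraint are invented.

      import random

      # Objective: minimize a shifted sphere subject to an L1-ball constraint
      # (the unconstrained optimum lies outside, so the constraint is active).
      def f(x):
          return sum((v - 2.0) ** 2 for v in x)

      def feasible(x, r=2.0):
          return sum(abs(v) for v in x) <= r

      def repair(x, r=2.0):
          # Repairing step: an infeasible candidate is moved back into the
          # feasible region (here by simple rescaling onto the L1 ball).
          s = sum(abs(v) for v in x)
          return x if s <= r else [v * r / s for v in x]

      def ea(pop_size=30, gens=200, dim=5):
          pop = [repair([random.uniform(-3, 3) for _ in range(dim)])
                 for _ in range(pop_size)]
          for _ in range(gens):
              pop.sort(key=f)
              elite = pop[: pop_size // 2]
              pop = elite + [repair([v + random.gauss(0, 0.1)
                                     for v in random.choice(elite)])
                             for _ in range(pop_size - len(elite))]
          return min(pop, key=f)

      best = ea()
      print(f(best), feasible(best))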

  34. Computational and evolutionary aspects of language

    NASA Astrophysics Data System (ADS)

    Nowak, Martin A.; Komarova, Natalia L.; Niyogi, Partha

    2002-06-01

    Language is our legacy. It is the main evolutionary contribution of humans, and perhaps the most interesting trait that has emerged in the past 500 million years. Understanding how Darwinian evolution gives rise to human language requires the integration of formal language theory, learning theory and evolutionary dynamics. Formal language theory provides a mathematical description of language and grammar. Learning theory formalizes the task of language acquisition: it can be shown that no procedure can learn an unrestricted set of languages. Universal grammar specifies the restricted set of languages learnable by the human brain. Evolutionary dynamics can be formulated to describe the cultural evolution of language and the biological evolution of universal grammar.

  35. Evolutionary Cell Computing: From Protocells to Self-Organized Computing

    NASA Technical Reports Server (NTRS)

    Colombano, Silvano; New, Michael H.; Pohorille, Andrew; Scargle, Jeffrey; Stassinopoulos, Dimitris; Pearson, Mark; Warren, James

    2000-01-01

    On the path from inanimate to animate matter, a key step was the self-organization of molecules into protocells - the earliest ancestors of contemporary cells. Studies of the properties of protocells and the mechanisms by which they maintained themselves and reproduced are an important part of astrobiology. These studies also have the potential to greatly impact research in nanotechnology and computer science. Previous studies of protocells have focussed on self-replication. In these systems, Darwinian evolution occurs through a series of small alterations to functional molecules whose identities are stored. Protocells, however, may have been incapable of such storage. We hypothesize that under such conditions, the replication of functions and their interrelationships, rather than the precise identities of the functional molecules, is sufficient for survival and evolution. This process is called non-genomic evolution. Recent breakthroughs in experimental protein chemistry have opened the gates for experimental tests of non-genomic evolution. On the basis of these achievements, we have developed a stochastic model for examining the evolutionary potential of non-genomic systems. In this model, the formation and destruction (hydrolysis) of bonds joining amino acids in proteins occur through catalyzed, albeit possibly inefficient, pathways. Each protein can act as a substrate for polymerization or hydrolysis, or as a catalyst of these chemical reactions. When a protein is hydrolyzed to form two new proteins, or two proteins are joined into a single protein, the catalytic abilities of the product proteins are related to the catalytic abilities of the reactants. We will demonstrate that the catalytic capabilities of such a system can increase. Its evolutionary potential is dependent upon the competition between the formation of bond-forming and bond-cutting catalysts. The degree to which hydrolysis preferentially affects bonds in less efficient, and therefore less well
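
    A deliberately crude sketch of the model's event loop follows: a pool of "proteins" subject to joining (bond formation) and cutting (hydrolysis), with hydrolysis biased toward weaker catalysts, echoing the competition discussed at the end of the abstract. The residue alphabet, the toy "ability" measure, and all rates are inventions for illustration, not the paper's stochastic chemistry.

      import random

      AMINO = "ACDEFGHIKLMNPQRSTVWY"

      def ability(p):
          return p.count("H") / len(p)   # toy catalytic ability of a protein

      def step(pool):
          if random.random() < 0.5 and len(pool) >= 2:
              # Bond formation: two molecules join; the product's "ability"
              # derives from the reactants because residues are inherited.
              a, b = random.sample(range(len(pool)), 2)
              joined = pool[a] + pool[b]
              return [p for i, p in enumerate(pool) if i not in (a, b)] + [joined]
          splittable = [i for i, p in enumerate(pool) if len(p) > 1]
          if not splittable:
              return pool
          # Hydrolysis preferentially attacks less efficient catalysts.
          weights = [1.0 - ability(pool[i]) + 0.01 for i in splittable]
          i = random.choices(splittable, weights=weights)[0]
          cut = random.randrange(1, len(pool[i]))
          return pool[:i] + pool[i + 1:] + [pool[i][:cut], pool[i][cut:]]

      pool = ["".join(random.choice(AMINO) for _ in range(5)) for _ in range(50)]
      for _ in range(2000):
          pool = step(pool)
      print(len(pool), "molecules; mean ability %.3f"
            % (sum(ability(p) for p in pool) / len(pool)))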

  36. Bi-directional evolutionary level set method for topology optimization

    NASA Astrophysics Data System (ADS)

    Zhu, Benliang; Zhang, Xianmin; Fatikow, Sergej; Wang, Nianfeng

    2015-03-01

    A bi-directional evolutionary level set method for solving topology optimization problems is presented in this article. The proposed method has three main advantages over the standard level set method. First, new holes can be automatically generated in the design domain during the optimization process. Second, the dependency of the obtained optimized configurations upon the initial configurations is eliminated: optimized configurations can be obtained even when starting from a minimal possible initial guess. Third, the method can be easily implemented and is computationally more efficient. The validity of the proposed method is tested on the mean compliance minimization problem and the compliant mechanisms topology optimization problem.

  37. Protein 3D Structure Computed from Evolutionary Sequence Variation

    PubMed Central

    Sheridan, Robert; Hopf, Thomas A.; Pagnani, Andrea; Zecchina, Riccardo; Sander, Chris

    2011-01-01

    The evolutionary trajectory of a protein through sequence space is constrained by its function. Collections of sequence homologs record the outcomes of millions of evolutionary experiments in which the protein evolves according to these constraints. Deciphering the evolutionary record held in these sequences and exploiting it for predictive and engineering purposes presents a formidable challenge. The potential benefit of solving this challenge is amplified by the advent of inexpensive high-throughput genomic sequencing. In this paper we ask whether we can infer evolutionary constraints from a set of sequence homologs of a protein. The challenge is to distinguish true co-evolution couplings from the noisy set of observed correlations. We address this challenge using a maximum entropy model of the protein sequence, constrained by the statistics of the multiple sequence alignment, to infer residue pair couplings. Surprisingly, we find that the strength of these inferred couplings is an excellent predictor of residue-residue proximity in folded structures. Indeed, the top-scoring residue couplings are sufficiently accurate and well-distributed to define the 3D protein fold with remarkable accuracy. We quantify this observation by computing, from sequence alone, all-atom 3D structures of fifteen test proteins from different fold classes, ranging in size from 50 to 260 residues, including a G-protein coupled receptor. These blinded inferences are de novo, i.e., they do not use homology modeling or sequence-similar fragments from known structures. The co-evolution signals provide sufficient information to determine accurate 3D protein structure to 2.7–4.8 Å Cα-RMSD error relative to the observed structure, over at least two-thirds of the protein (method called EVfold, details at http://EVfold.org). This discovery provides insight into essential interactions constraining protein evolution and will facilitate a comprehensive survey of the universe of protein

  1. Protein 3D structure computed from evolutionary sequence variation.

    PubMed

    Marks, Debora S; Colwell, Lucy J; Sheridan, Robert; Hopf, Thomas A; Pagnani, Andrea; Zecchina, Riccardo; Sander, Chris

    2011-01-01

    The evolutionary trajectory of a protein through sequence space is constrained by its function. Collections of sequence homologs record the outcomes of millions of evolutionary experiments in which the protein evolves according to these constraints. Deciphering the evolutionary record held in these sequences and exploiting it for predictive and engineering purposes presents a formidable challenge. The potential benefit of solving this challenge is amplified by the advent of inexpensive high-throughput genomic sequencing. In this paper we ask whether we can infer evolutionary constraints from a set of sequence homologs of a protein. The challenge is to distinguish true co-evolution couplings from the noisy set of observed correlations. We address this challenge using a maximum entropy model of the protein sequence, constrained by the statistics of the multiple sequence alignment, to infer residue pair couplings. Surprisingly, we find that the strength of these inferred couplings is an excellent predictor of residue-residue proximity in folded structures. Indeed, the top-scoring residue couplings are sufficiently accurate and well-distributed to define the 3D protein fold with remarkable accuracy. We quantify this observation by computing, from sequence alone, all-atom 3D structures of fifteen test proteins from different fold classes, ranging in size from 50 to 260 residues, including a G-protein coupled receptor. These blinded inferences are de novo, i.e., they do not use homology modeling or sequence-similar fragments from known structures. The co-evolution signals provide sufficient information to determine accurate 3D protein structure to 2.7-4.8 Å Cα-RMSD error relative to the observed structure, over at least two-thirds of the protein (method called EVfold, details at http://EVfold.org). This discovery provides insight into essential interactions constraining protein evolution and will facilitate a comprehensive survey of the universe of protein structures
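
    The EVfold pipeline itself solves a global maximum-entropy inverse problem; as a much simpler, runnable stand-in, the sketch below scores co-evolving alignment columns by mutual information, a purely local approximation to the couplings described above. The toy alignment and all names are illustrative assumptions.

        # Hypothetical sketch: mutual information between MSA columns as a crude
        # stand-in for maximum-entropy couplings (EVfold solves a global inverse
        # problem; MI is a purely local approximation).
        import numpy as np
        from collections import Counter

        def column_mi(msa, i, j):
            """Mutual information between alignment columns i and j."""
            pairs = Counter((s[i], s[j]) for s in msa)
            fi = Counter(s[i] for s in msa)
            fj = Counter(s[j] for s in msa)
            n = len(msa)
            mi = 0.0
            for (a, b), c in pairs.items():
                p_ab = c / n
                mi += p_ab * np.log(p_ab / ((fi[a] / n) * (fj[b] / n)))
            return mi

        # Toy alignment: columns 0 and 2 co-vary, suggesting a contact.
        msa = ["AKLV", "AKLV", "GKMV", "GKMV", "AKLI", "GKMI"]
        L = len(msa[0])
        scores = {(i, j): column_mi(msa, i, j)
                  for i in range(L) for j in range(i + 1, L)}
        for (i, j), s in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
            print(f"columns {i}-{j}: MI = {s:.3f}")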

  2. Studying Collective Human Decision Making and Creativity with Evolutionary Computation.

    PubMed

    Sayama, Hiroki; Dionne, Shelley D

    2015-01-01

    We report a summary of our interdisciplinary research project "Evolutionary Perspective on Collective Decision Making" that was conducted through close collaboration between computational, organizational, and social scientists at Binghamton University. We redefined collective human decision making and creativity as the evolution of ecologies of ideas, where populations of ideas evolve via continual applications of evolutionary operators such as reproduction, recombination, mutation, selection, and migration of ideas, each conducted by participating humans. Based on this evolutionary perspective, we generated hypotheses about collective human decision making, using agent-based computer simulations. The hypotheses were then tested through several experiments with real human subjects. Throughout this project, we utilized evolutionary computation (EC) in non-traditional ways: (1) as a theoretical framework for reinterpreting the dynamics of idea generation and selection, (2) as a computational simulation model of collective human decision-making processes, and (3) as a research tool for collecting high-resolution experimental data on actual collaborative design and decision making from human subjects. We believe our work demonstrates the untapped potential of EC for interdisciplinary research involving human and social dynamics.

  3. Evolutionary game theory using agent-based methods.

    PubMed

    Adami, Christoph; Schossau, Jory; Hintze, Arend

    2016-12-01

    Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic settings such as finite populations, non-vanishing mutation rates, stochastic decisions, communication between agents, and spatial interactions, require agent-based methods where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. While highlighting standard mathematical results, we compare those to agent-based methods that can go beyond the limitations of equations and simulate the complexity of heterogeneous populations and an ever-changing set of interactors. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread (for example in the weak selection-strong mutation limit), but that mathematics is crucial to validate the computational simulations. Copyright © 2016 Elsevier B.V. All rights reserved.
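
    A minimal agent-based sketch in the spirit of the methods surveyed above: a Moran process for the Prisoner's Dilemma in a finite, well-mixed population with a non-vanishing mutation rate. The payoff values and parameters are standard illustrative choices, not taken from the paper.

        # Agent-based Moran process for the Prisoner's Dilemma with mutation.
        import random

        N, MU, GENERATIONS = 100, 0.01, 20000
        R, S, T, P = 3.0, 0.0, 5.0, 1.0   # standard PD payoffs: T > R > P > S

        def payoff(strategy, n_coop):
            """Average payoff of an agent against the rest of the population."""
            others_c = n_coop - (1 if strategy == "C" else 0)
            others_d = (N - 1) - others_c
            if strategy == "C":
                return (others_c * R + others_d * S) / (N - 1)
            return (others_c * T + others_d * P) / (N - 1)

        pop = ["C" if random.random() < 0.5 else "D" for _ in range(N)]
        for _ in range(GENERATIONS):
            n_coop = pop.count("C")
            fitness = [payoff(s, n_coop) for s in pop]
            # Birth proportional to fitness, death uniform (Moran update).
            r, acc, parent = random.uniform(0, sum(fitness)), 0.0, 0
            for i, f in enumerate(fitness):
                acc += f
                if acc >= r:
                    parent = i
                    break
            child = pop[parent]
            if random.random() < MU:            # mutation flips the strategy
                child = "D" if child == "C" else "C"
            pop[random.randrange(N)] = child

        print("final fraction of cooperators:", pop.count("C") / N)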

  4. Inference for phylogenies under a hybrid parsimony method: evolutionary-symmetric transversion parsimony.

    PubMed

    Sinsheimer, J S; Lake, J A; Little, R J

    1997-03-01

    A new method is proposed for inferring topology for evolutionary trees. Existing methods have complementary strengths and weaknesses. Maximum and transversion parsimony are powerful methods, but they lack statistical consistency, that is, they do not always infer the correct tree as the sequence length becomes very large. Evolutionary parsimony overcomes this deficiency, but it may lack sufficient power when sequence length is small (less than 1000 aligned nucleotides; Sinsheimer, Lake, and Little, 1996, Biometrics 52, 193-210). Our proposed method, evolutionary-symmetric transversion parsimony, is a hybrid that retains the consistency of evolutionary parsimony, while increasing power by incorporating a modified form of transversion parsimony within a statistical model. The method requires choice of a parameter gamma that represents the prior probability that symmetric transversion parsimony yields consistent results. Properties of the method are assessed for a variety of choices of gamma in a large simulation study. In general, inference under evolutionary-symmetric transversion parsimony has more discriminating power than inference under evolutionary parsimony and is better calibrated than inference under symmetric transversion parsimony. The results are quite robust to the choice of gamma, indicating a value of 0.90 as a reasonable overall choice when the true value of gamma ranges between 0.85 and 1.00. Our method is, like evolutionary parsimony and maximum parsimony, computationally straightforward. The same statistical approach can be applied to combine evolutionary parsimony with other inconsistent methods, such as maximum parsimony, but at the expense of more difficult computations.

  5. Advances in computer simulation of genome evolution: toward more realistic evolutionary genomics analysis by approximate bayesian computation.

    PubMed

    Arenas, Miguel

    2015-04-01

    NGS technologies enable the fast and cheap generation of genomic data. Nevertheless, ancestral genome inference is not so straightforward due to complex evolutionary processes acting on this material, such as inversions, translocations, and other genome rearrangements that, in addition to their implicit complexity, can co-occur and confound ancestral inferences. Recently, models of genome evolution that accommodate such complex genomic events are emerging. This letter explores these novel evolutionary models and proposes their incorporation into robust statistical approaches based on computer simulations, such as approximate Bayesian computation, which may produce a more realistic evolutionary analysis of genomic data. Advantages and pitfalls in using these analytical methods are discussed. Potential applications of these ancestral genomic inferences are also pointed out.
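
    As a hedged illustration of the approximate Bayesian computation (ABC) machinery mentioned above, the sketch below runs a rejection sampler on a deliberately naive toy model, estimating a per-site substitution rate from an observed number of sequence differences; the model and all numbers are assumptions.

        # Minimal ABC rejection sampler on a toy substitution-rate problem.
        import random

        SEQ_LEN, OBSERVED_DIFFS, EPSILON, TRIALS = 1000, 120, 5, 20000

        def simulate_diffs(rate):
            """How many of SEQ_LEN sites differ under a naive mutation model."""
            return sum(1 for _ in range(SEQ_LEN) if random.random() < rate)

        accepted = []
        for _ in range(TRIALS):
            rate = random.uniform(0.0, 0.5)          # flat prior on the rate
            if abs(simulate_diffs(rate) - OBSERVED_DIFFS) <= EPSILON:
                accepted.append(rate)                # keep draws close to the data

        if accepted:
            print(f"accepted {len(accepted)} draws; "
                  f"posterior mean rate = {sum(accepted) / len(accepted):.3f}")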

  6. Mapping an expanding territory: computer simulations in evolutionary biology.

    PubMed

    Huneman, Philippe

    2014-08-01

    The pervasive use of computer simulations in the sciences brings novel epistemological issues, discussed in the philosophy of science literature for about a decade. Evolutionary biology strongly relies on such simulations, and in relation to it there exists a research program (Artificial Life) that mainly studies simulations themselves. This paper addresses the specificity of computer simulations in evolutionary biology, in the context (described in Sect. 1) of a set of questions about their scope as explanations, the nature of validation processes, and the relation between simulations and true experiments or mathematical models. After making distinctions, especially between a weak use where simulations test hypotheses about the world and a strong use where they allow one to explore sets of evolutionary dynamics not necessarily extant in our world, I argue in Sect. 2 that (weak) simulations are likely to represent in virtue of the fact that they instantiate specific features of causal processes that may be isomorphic to features of some causal processes in the world, though the latter are always intertwined with a myriad of different processes and hence unlikely to be directly manipulated and studied. I therefore argue that these simulations are merely able to provide candidate explanations for real patterns. Section 3 ends by placing strong and weak simulations in Levins' triangle, which conceives of simulations as devices trying to fulfil one or two among three incompatible epistemic values (precision, realism, genericity).

  7. Evolutionary dynamics on graphs: Efficient method for weak selection

    NASA Astrophysics Data System (ADS)

    Fu, Feng; Wang, Long; Nowak, Martin A.; Hauert, Christoph

    2009-04-01

    Investigating the evolutionary dynamics of game theoretical interactions in populations where individuals are arranged on a graph can be challenging in terms of computation time. Here, we propose an efficient method to study any type of game on arbitrary graph structures for weak selection. In this limit, evolutionary game dynamics represents a first-order correction to neutral evolution. Spatial correlations can be empirically determined under neutral evolution and provide the basis for formulating the game dynamics as a discrete Markov process by incorporating a detailed description of the microscopic dynamics based on the neutral correlations. This framework is then applied to one of the most intriguing questions in evolutionary biology: the evolution of cooperation. We demonstrate that the degree heterogeneity of a graph impedes cooperation and that the success of tit for tat depends not only on the number of rounds but also on the degree of the graph. Moreover, considering the mutation-selection equilibrium shows that the symmetry of the stationary distribution of states under weak selection is skewed in favor of defectors for larger selection strengths. In particular, degree heterogeneity—a prominent feature of scale-free networks—generally results in a more pronounced increase in the critical benefit-to-cost ratio required for evolution to favor cooperation as compared to regular graphs. This conclusion is corroborated by an analysis of the effects of population structures on the fixation probabilities of strategies in general 2×2 games for different types of graphs. Computer simulations confirm the predictive power of our method and illustrate the improved accuracy as compared to previous studies.
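
    The authors derive their results analytically in the weak-selection limit; the sketch below instead estimates, by brute-force simulation, the fixation probability of a single cooperator on a cycle graph under death-birth updating, to illustrate the quantities involved. The graph, payoffs, and parameters are assumptions.

        # Simulated fixation probability on a cycle under weak selection.
        import random

        N, DELTA, B, C_COST, RUNS = 20, 0.01, 3.0, 1.0, 5000
        neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # cycle

        def game_payoff(i, state):
            """Average donation-game payoff of node i against its neighbors."""
            pay = 0.0
            for j in neighbors[i]:
                pay += (B if state[j] else 0.0) - (C_COST if state[i] else 0.0)
            return pay / len(neighbors[i])

        fixed = 0
        for _ in range(RUNS):
            state = [False] * N
            state[random.randrange(N)] = True      # one initial cooperator
            while 0 < sum(state) < N:
                dead = random.randrange(N)         # uniform death ...
                nbrs = neighbors[dead]
                # ... then birth among neighbors, weighted by 1 + delta * payoff
                w = [1.0 + DELTA * game_payoff(j, state) for j in nbrs]
                state[dead] = state[random.choices(nbrs, weights=w)[0]]
            fixed += sum(state) == N

        print(f"fixation probability = {fixed / RUNS:.4f} "
              f"(neutral baseline 1/N = {1 / N})")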

  8. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    PubMed Central

    2014-01-01

    Background: Reconstructing gene networks by experimentally testing the possible interactions between genes is tedious, so it has become a trend to adopt automated reverse-engineering procedures instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, cloud computing is advocated as a promising solution; most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms that infer large gene networks. Results: This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed; they show that our parallel approach can successfully infer networks with the desired behaviors and that the computation time can be largely reduced. Conclusions: Parallel population-based algorithms can effectively determine network parameters and perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel

  9. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    PubMed

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    Reconstructing gene networks by experimentally testing the possible interactions between genes is tedious, so it has become a trend to adopt automated reverse-engineering procedures instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, cloud computing is advocated as a promising solution; most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms that infer large gene networks. This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed; they show that our parallel approach can successfully infer networks with the desired behaviors and that the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters and perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel computational framework, high
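
    A hedged, single-machine sketch of the hybrid loop described above: GA crossover/mutation combined with a PSO-style velocity pull toward the global best, with fitness evaluations farmed out through multiprocessing.Pool as a stand-in for the Hadoop MapReduce layer. The sphere fitness function is a placeholder, not a gene-network scoring model.

        # Hybrid GA-PSO with parallel ("map"-style) fitness evaluation.
        import random
        from multiprocessing import Pool

        DIM, POP, GENS = 10, 40, 50

        def fitness(x):                      # "map" step: score one candidate
            return sum(v * v for v in x)     # placeholder; lower is better

        def evolve():
            pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
            vel = [[0.0] * DIM for _ in range(POP)]
            best = None
            with Pool() as pool:
                for _ in range(GENS):
                    scores = pool.map(fitness, pop)      # parallel evaluation
                    ranked = sorted(zip(scores, pop), key=lambda t: t[0])
                    if best is None or ranked[0][0] < best[0]:
                        best = (ranked[0][0], ranked[0][1][:])
                    gbest = best[1]
                    elites = [ind for _, ind in ranked[: POP // 2]]
                    new_pop, new_vel = [], []
                    for i in range(POP):
                        a, b = random.sample(elites, 2)
                        cut = random.randrange(DIM)      # GA: one-point crossover
                        child = a[:cut] + b[cut:]
                        v = [0.7 * vel[i][d]             # PSO: pull toward gbest
                             + 1.4 * random.random() * (gbest[d] - child[d])
                             for d in range(DIM)]
                        child = [child[d] + v[d] for d in range(DIM)]
                        if random.random() < 0.1:        # GA: mutation
                            child[random.randrange(DIM)] += random.gauss(0, 0.5)
                        new_pop.append(child)
                        new_vel.append(v)
                    pop, vel = new_pop, new_vel
            return best

        if __name__ == "__main__":
            score, solution = evolve()
            print("best score:", score)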

  10. Multiple von Neumann computers: an evolutionary approach to functional emergence.

    PubMed

    Suzuki, H

    1997-01-01

    A novel system composed of multiple von Neumann computers and an appropriate problem environment is proposed and simulated. Each computer has a memory to store the machine instruction program, and when a program is executed, a series of machine codes in the memory is sequentially decoded, leading to register operations in the central processing unit (CPU). By means of these operations, the computer not only can handle its generally used registers but also can read and write the environmental database. Simulation is driven by genetic algorithms (GAs) performed on the population of program memories. Mutation and crossover create program diversity in the memory, and selection facilitates the reproduction of appropriate programs. Through these evolutionary operations, advantageous combinations of machine codes are created and fixed in the population one by one, and the higher function, which enables the computer to calculate an appropriate number from the environment, finally emerges in the program memory. In the latter half of the article, the performance of GAs on this system is studied. Under different sets of parameters, the evolutionary speed, which is determined by the time until the domination of the final program, is examined and the conditions for faster evolution are clarified. At an intermediate mutation rate and at an intermediate population size, crossover helps create novel advantageous sets of machine codes and evidently accelerates optimization by GAs.

  11. Generative Representations for Computer-Automated Evolutionary Design

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2006-01-01

    With the increasing computational power of computers, software design systems are progressing from being tools for architects and designers to express their ideas to tools capable of creating designs under human guidance. One of the main limitations for these computer-automated design systems is the representation with which they encode designs. If the representation cannot encode a certain design, then the design system cannot produce it. To be able to produce new types of designs, and not just optimize pre-defined parameterizations, evolutionary design systems must use generative representations. Generative representations are assembly procedures, or algorithms, for constructing a design thereby allowing for truly novel design solutions to be encoded. In addition, by enabling modularity, regularity and hierarchy, the level of sophistication that can be evolved is increased. We demonstrate the advantages of generative representations on two different design domains: the evolution of spacecraft antennas and the evolution of 3D objects.

  12. Automating the search of molecular motor templates by evolutionary methods.

    PubMed

    Fernández, Jose D; Vico, Francisco J

    2011-11-01

    Biological molecular motors are nanoscale devices capable of transforming chemical energy into mechanical work, which are being researched in many scientific disciplines. From a computational point of view, the characteristics and dynamics of these motors are studied at multiple time scales, ranging from very detailed and complex molecular dynamics simulations spanning a few microseconds, to extremely simple and coarse-grained theoretical models of their working cycles. However, this research is performed only in the (relatively few) instances known from molecular biology. In this work, results from elastic network analysis and behaviour-finding methods are applied to explore a subset of the configuration space of template molecular structures that are able to transform chemical energy into directed movement, for a fixed instance of working cycle. While using methods based on elastic networks limits the scope of our results, it enables the implementation of computationally lightweight methods, in a way that evolutionary search techniques can be applied to discover novel molecular motor templates. The results show that molecular motion can be attained from a variety of structural configurations, when a functional working cycle is provided. Additionally, these methods enable a new computational way to test hypotheses about molecular motors. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  13. Tuning of MEMS Devices using Evolutionary Computation and Open-loop Frequency Response

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Fink, Wolfgang; Ferguson, Michael I.; Peay, Chris; Oks, Boris; Terrile, Richard; Yee, Karl

    2005-01-01

    We propose a tuning method for MEMS gyroscopes based on evolutionary computation that has the capacity to efficiently increase the sensitivity of MEMS gyroscopes through tuning and, furthermore, to find the optimally tuned configuration for this state of increased sensitivity. The tuning method was tested for the second generation JPL/Boeing Post-resonator MEMS gyroscope using the measurement of the frequency response of the MEMS device in open-loop operation.

  14. Tuning of MEMS Devices using Evolutionary Computation and Open-loop Frequency Response

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Fink, Wolfgang; Ferguson, Michael I.; Peay, Chris; Oks, Boris; Terrile, Richard; Yee, Karl

    2005-01-01

    We propose a tuning method for MEMS gyroscopes based on evolutionary computation that has the capacity to efficiently increase the sensitivity of MEMS gyroscopes through tuning and, furthermore, to find the optimally tuned configuration for this state of increased sensitivity. The tuning method was tested for the second generation JPL/Boeing Post-resonator MEMS gyroscope using the measurement of the frequency response of the MEMS device in open-loop operation.
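
    A speculative toy of the tuning idea above: an evolutionary search adjusts two bias voltages to minimize the split between two simulated resonant-mode frequencies. The quadratic frequency model is invented for illustration and is not the JPL/Boeing device model.

        # Evolutionary tuning of two bias voltages on a made-up gyro model.
        import random

        def freq_split(v1, v2):
            """Hypothetical open-loop measurement: mode-frequency mismatch (Hz)."""
            f1 = 4400.0 + 0.8 * v1 - 0.05 * v1 * v1
            f2 = 4407.0 - 0.6 * v2 + 0.02 * v2 * v2
            return abs(f1 - f2)

        pop = [(random.uniform(0, 60), random.uniform(0, 60)) for _ in range(30)]
        for _ in range(100):
            pop.sort(key=lambda v: freq_split(*v))
            parents = pop[:10]                 # keep the best-tuned voltages
            pop = parents + [(max(0.0, min(60.0, p[0] + random.gauss(0, 2.0))),
                              max(0.0, min(60.0, p[1] + random.gauss(0, 2.0))))
                             for p in (random.choice(parents) for _ in range(20))]

        best = min(pop, key=lambda v: freq_split(*v))
        print(f"bias voltages = {best[0]:.1f} V, {best[1]:.1f} V; "
              f"residual split = {freq_split(*best):.3f} Hz")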

  15. Evolutionary Computation for the Identification of Emergent Behavior in Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Guillaume, Alexandre

    2009-01-01

    Over the past several years the Center for Evolutionary Computation and Automated Design at the Jet Propulsion Laboratory has developed a technique based on Evolutionary Computational Methods (ECM) that allows for the automated optimization of complex computationally modeled systems. An important application of this technique is for the identification of emergent behaviors in autonomous systems. Mobility platforms such as rovers or airborne vehicles are now being designed with autonomous mission controllers that can find trajectories over a solution space that is larger than can reasonably be tested. It is critical to identify control behaviors that are not predicted and can have surprising results (both good and bad). These emergent behaviors need to be identified, characterized and either incorporated into or isolated from the acceptable range of control characteristics. We use cluster analysis of automatically retrieved solutions to identify isolated populations of solutions with divergent behaviors.
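
    A rough illustration of the final step described above: clustering evolved solutions and flagging small, divergent groups as candidate emergent behaviors. The 2-D behavior descriptors and the hand-rolled k-means are assumptions made for the sake of a self-contained example.

        # Cluster analysis of evolved solutions to flag divergent behaviors.
        import math
        import random

        random.seed(7)
        # two dense behavior clusters plus a few divergent solutions
        points = ([(random.gauss(1, 0.1), random.gauss(1, 0.1)) for _ in range(40)]
                  + [(random.gauss(3, 0.1), random.gauss(0.5, 0.1)) for _ in range(40)]
                  + [(random.gauss(6, 0.2), random.gauss(4, 0.2)) for _ in range(4)])

        def kmeans(pts, k, iters=50):
            centers = random.sample(pts, k)
            for _ in range(iters):
                groups = [[] for _ in range(k)]
                for p in pts:
                    i = min(range(k), key=lambda j: math.dist(p, centers[j]))
                    groups[i].append(p)
                centers = [
                    (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                    if g else random.choice(pts)
                    for g in groups
                ]
            return groups

        for i, g in enumerate(kmeans(points, 3)):
            tag = "  <- possible emergent behavior" if len(g) < 10 else ""
            print(f"cluster {i}: {len(g)} solutions{tag}")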

  16. Computational complexity of ecological and evolutionary spatial dynamics.

    PubMed

    Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A

    2015-12-22

    There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP).

  17. Computational complexity of ecological and evolutionary spatial dynamics

    PubMed Central

    Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A.

    2015-01-01

    There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP). PMID:26644569

  18. Evolutionary adaptive eye tracking for low-cost human computer interaction applications

    NASA Astrophysics Data System (ADS)

    Shen, Yan; Shin, Hak Chul; Sung, Won Jun; Khim, Sarang; Kim, Honglak; Rhee, Phill Kyu

    2013-01-01

    We present an evolutionary adaptive eye-tracking framework aiming at low-cost human computer interaction. The main focus is to guarantee eye-tracking performance without using high-cost devices or strongly controlled situations. The performance optimization of eye tracking is formulated as the dynamic control problem of deciding on an eye-tracking algorithm structure and its associated thresholds/parameters, where the dynamic control space is denoted by genotype and phenotype spaces. The evolutionary algorithm is responsible for exploring the genotype control space, and the reinforcement learning algorithm organizes the evolved genotype into a reactive phenotype. The evolutionary algorithm encodes an eye-tracking scheme as a genetic code based on image variation analysis. Then, the reinforcement learning algorithm defines internal states in a phenotype control space limited by the perceived genetic code and carries out interactive adaptations. The proposed method can achieve optimal performance by balancing the difficulty of running the evolutionary algorithm in real time against the huge search space of the reinforcement learning algorithm. Extensive experiments were carried out using webcam image sequences and yielded very encouraging results. The framework can be readily applied to other low-cost vision-based human computer interactions to address their intrinsic brittleness in unstable operational environments.

  19. Kernel method based human model for enhancing interactive evolutionary optimization.

    PubMed

    Pei, Yan; Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape describes the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this principle. In feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of humans. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that our proposed model and method can enhance IEC search significantly.

  20. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    PubMed Central

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape describes the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this principle. In feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of humans. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that our proposed model and method can enhance IEC search significantly. PMID:25879050
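
    A rough sketch of the idea above under a bivalent (like/dislike) assumption: a kernel perceptron learns a pseudo-user's preferences in an RBF feature space and is then used to pre-screen candidates before they reach the human. The synthetic data and the hidden preference rule are assumptions.

        # Kernel perceptron as a surrogate "human model" for IEC pre-screening.
        import numpy as np

        rng = np.random.default_rng(0)

        def rbf(a, b, gamma=0.5):
            return np.exp(-gamma * np.sum((a - b) ** 2))

        # Synthetic IEC history: the pseudo-user likes candidates near (1, 1).
        X = rng.uniform(-2, 2, size=(60, 2))
        y = np.where(np.sum((X - 1.0) ** 2, axis=1) < 1.5, 1, -1)

        alpha = np.zeros(len(X))              # kernel perceptron training
        for _ in range(20):                   # a few passes over the history
            for i in range(len(X)):
                pred = np.sign(sum(alpha[j] * y[j] * rbf(X[j], X[i])
                                   for j in range(len(X))) or 1.0)
                if pred != y[i]:
                    alpha[i] += 1.0

        def predicted_preference(x):
            """Surrogate for the human: > 0 means 'likely liked'."""
            return sum(alpha[j] * y[j] * rbf(X[j], x) for j in range(len(X)))

        # Evolution control: only candidates the model likes reach the human.
        candidates = rng.uniform(-2, 2, size=(10, 2))
        screened = [c for c in candidates if predicted_preference(c) > 0]
        print(f"{len(screened)} of {len(candidates)} candidates passed screening")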

  1. Crowd Computing as a Cooperation Problem: An Evolutionary Approach

    NASA Astrophysics Data System (ADS)

    Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel

    2013-05-01

    Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has mostly been considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on principal-agent theory, in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work in an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain, not very restrictive, conditions the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point at which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory provides no information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.

  2. Application of evolutionary computation on ensemble forecast of quantitative precipitation

    NASA Astrophysics Data System (ADS)

    Dufek, Amanda S.; Augusto, Douglas A.; Dias, Pedro L. S.; Barbosa, Helio J. C.

    2017-09-01

    An evolutionary computation algorithm known as genetic programming (GP) has been explored as an alternative tool for improving the ensemble forecast of 24-h accumulated precipitation. Three GP versions and six ensemble languages were applied to several real-world datasets over southern, southeastern and central Brazil during the rainy period from October to February of 2008-2013. According to the results, the GP algorithms performed better than two traditional statistical techniques, with errors 27-57% lower than the simple ensemble mean and the MASTER super model ensemble system. In addition, the results revealed that GP algorithms outperformed the best individual forecasts, reaching an improvement of 34-42%. On the other hand, the GP algorithms had a similar performance with respect to each other and to Bayesian model averaging, but the former are far more versatile techniques. Although the results for the six ensemble languages are almost indistinguishable, our most complex linear language turned out to be the best overall proposal. Moreover, some meteorological attributes, including the weather patterns over Brazil, seem to play an important role in the prediction of daily rainfall amount.

  3. Solving multi-objective water management problems using evolutionary computation.

    PubMed

    Lewis, A; Randall, M

    2017-09-04

    Water as a resource is becoming increasingly valuable given the changes in global climate. In an agricultural sense, the role of water is vital to ensuring food security. Its management has therefore become a subject of increasing attention, and the development of effective tools to support participative decision-making in water management will be a valuable contribution. In this paper, evolutionary computation techniques and Pareto optimisation are incorporated in a model-based system for water management. An illustrative test case modelling optimal crop selection across dry, average and wet years, based on data from the Murrumbidgee Irrigation Area in Australia, is presented. It is shown that sets of trade-off solutions that provide large net revenues or minimise environmental flow deficits can be produced rapidly, easily and automatically. The system is capable of providing detailed information on optimal solutions to achieve desired outcomes, responding to a variety of factors including climate conditions and economics. Copyright © 2017 Elsevier Ltd. All rights reserved.
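
    A minimal sketch of the Pareto step described above: extracting the nondominated set from candidate crop plans scored on two objectives (maximize net revenue, minimize environmental flow deficit). All candidate numbers are invented for illustration.

        # Extracting the Pareto (nondominated) set of candidate crop plans.
        def dominates(a, b):
            """a dominates b: no worse on both objectives, better on at least one."""
            rev_a, deficit_a = a
            rev_b, deficit_b = b
            return (rev_a >= rev_b and deficit_a <= deficit_b
                    and (rev_a > rev_b or deficit_a < deficit_b))

        # (net revenue in $M, environmental flow deficit in GL) per crop plan
        plans = [(10.0, 5.0), (12.0, 6.0), (9.0, 2.0), (12.0, 4.0), (8.0, 8.0)]
        pareto = [p for p in plans
                  if not any(dominates(q, p) for q in plans if q != p)]
        print("trade-off front:", sorted(pareto))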

  4. Exploring Evolutionary Patterns in Genetic Sequence: A Computer Exercise

    ERIC Educational Resources Information Center

    Shumate, Alice M.; Windsor, Aaron J.

    2010-01-01

    The increase in publications presenting molecular evolutionary analyses and the availability of comparative sequence data through resources such as NCBI's GenBank underscore the necessity of providing undergraduates with hands-on sequence analysis skills in an evolutionary context. This need is particularly acute given that students have been…

  5. Optimization of nonlinear dose- and concentration-response models utilizing evolutionary computation.

    PubMed

    Beam, Andrew L; Motsinger-Reif, Alison A

    2011-01-01

    An essential part of toxicity and chemical screening is assessing the concentration-related effects of a test article. Most often this concentration-response is nonlinear, necessitating sophisticated regression methodologies. The parameters derived from curve fitting are essential in determining a test article's potency (EC(50)) and efficacy (E(max)), and variations in model fit may lead to different conclusions about an article's performance and safety. Previous approaches have leveraged advanced statistical and mathematical techniques to implement nonlinear least squares (NLS) for obtaining the parameters defining such a curve. These approaches, while mathematically rigorous, suffer from sensitivity to initial values and computational intensity, and rely on complex and intricate numerical techniques. However, if there is a known mathematical model that can reliably predict the data, then nonlinear regression may equally be viewed as parameter optimization. In this context, one may utilize proven techniques from machine learning, such as evolutionary algorithms, which are robust, powerful, and require far less computational infrastructure to optimize the defining parameters. In the current study we present a new method that uses such techniques, Evolutionary Algorithm Dose Response Modeling (EADRM), and demonstrate its effectiveness compared to more conventional methods on both real and simulated data.
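
    A hedged sketch of the EADRM idea, fitting a four-parameter logistic (Hill) dose-response curve with an evolutionary algorithm rather than nonlinear least squares; SciPy's differential evolution stands in for the paper's algorithm, and the concentration-response data are synthetic.

        # Fitting a 4-parameter logistic curve by differential evolution.
        import numpy as np
        from scipy.optimize import differential_evolution

        conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])    # concentrations
        resp = np.array([2.0, 5.0, 15.0, 38.0, 72.0, 90.0, 97.0])  # responses (%)

        def four_pl(c, bottom, top, ec50, hill):
            return bottom + (top - bottom) / (1.0 + (ec50 / c) ** hill)

        def sse(params):                  # fitness: sum of squared residuals
            return np.sum((resp - four_pl(conc, *params)) ** 2)

        bounds = [(0, 20), (80, 120), (0.01, 10.0), (0.3, 5.0)]
        result = differential_evolution(sse, bounds, seed=1)
        bottom, top, ec50, hill = result.x
        print(f"EC50 = {ec50:.3f}, Emax = {top:.1f}, Hill slope = {hill:.2f}")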

  6. Computational Methods for Crashworthiness

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Carden, Huey D. (Compiler)

    1993-01-01

    Presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Crashworthiness held at Langley Research Center on 2-3 Sep. 1992 are included. The presentations addressed activities in the area of impact dynamics. Workshop attendees represented NASA, the Army and Air Force, the Lawrence Livermore and Sandia National Laboratories, the aircraft and automotive industries, and academia. The workshop objectives were to assess the state of technology in numerical crash simulation and to provide guidelines for future research.

  7. Speeding up ecological and evolutionary computations in R; essentials of high performance computing for biologists.

    PubMed

    Visser, Marco D; McMahon, Sean M; Merow, Cory; Dixon, Philip M; Record, Sydne; Jongejans, Eelke

    2015-03-01

    Computation has become a critical component of research in biology. A risk has emerged that computational and programming challenges may limit research scope, depth, and quality. We review various solutions to common computational efficiency problems in ecological and evolutionary research. Our review pulls together material that is currently scattered across many sources and emphasizes those techniques that are especially effective for typical ecological and environmental problems. We demonstrate how straightforward it can be to write efficient code and implement techniques such as profiling or parallel computing. We supply a newly developed R package (aprof) that helps to identify computational bottlenecks in R code and determine whether optimization can be effective. Our review is complemented by a practical set of examples and detailed Supporting Information material (S1-S3 Texts) that demonstrate large improvements in computational speed (ranging from 10.5 times to 14,000 times faster). By improving computational efficiency, biologists can feasibly solve more complex tasks, ask more ambitious questions, and include more sophisticated analyses in their research.
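
    The paper's tooling is for R (including the authors' aprof package); as a loose Python analogue of the same two-step workflow, the sketch below first profiles a bottleneck with cProfile and then parallelizes it with multiprocessing. The workload is an arbitrary stand-in computation.

        # Profile first, then parallelize: a Python analogue of the workflow.
        import cProfile
        from multiprocessing import Pool

        def slow_task(n):
            """Deliberately naive loop standing in for a per-species computation."""
            return sum(i ** 0.5 for i in range(n))

        def serial(jobs):
            return [slow_task(n) for n in jobs]

        if __name__ == "__main__":
            jobs = [2_000_000] * 8
            cProfile.run("serial(jobs)", sort="cumulative")  # step 1: find hotspots
            with Pool() as pool:                             # step 2: parallelize
                results = pool.map(slow_task, jobs)
            print("parallel results:", [round(r) for r in results[:2]], "...")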

  8. An Evolutionary Computation Approach to Examine Functional Brain Plasticity.

    PubMed

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A; Hillary, Frank G

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback to this approach is that much information is lost by averaging heterogeneous voxels; therefore, a functional relationship between an ROI-pair that evolves at a spatial scale much finer than the ROIs remains undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength

  9. Autonomous management of distributed information systems using evolutionary computation techniques

    NASA Astrophysics Data System (ADS)

    Oates, Martin J.

    1999-03-01

    can provide reliable and consistent performance. This paper investigates evolutionary computation techniques, comparing results from genetic algorithms, simulated annealing and hillclimbing. Substantial differences in algorithm performance are found across the different fitness criteria. Preliminary conclusions are that a genetic algorithm approach seems superior to hillclimbing or simulated annealing when more realistic (from a quality-of-service viewpoint) objective functions are used. Further, the genetic algorithm approach displays regions of adequate robustness to parameter variation, which is also critical from a maintained quality-of-service viewpoint.

  10. An Evolutionary Computation Approach to Examine Functional Brain Plasticity

    PubMed Central

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A.; Hillary, Frank G.

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback to this approach is that much information is lost by averaging heterogeneous voxels; therefore, a functional relationship between an ROI-pair that evolves at a spatial scale much finer than the ROIs remains undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength

  11. The evolutionary forces maintaining a wild polymorphism of Littorina saxatilis: model selection by computer simulations.

    PubMed

    Pérez-Figueroa, A; Cruz, F; Carvajal-Rodríguez, A; Rolán-Alvarez, E; Caballero, A

    2005-01-01

    Two rocky shore ecotypes of Littorina saxatilis from north-west Spain live at different shore levels and habitats and have developed an incomplete reproductive isolation through size assortative mating. The system is regarded as an example of sympatric ecological speciation. Several experiments have indicated that different evolutionary forces (migration, assortative mating and habitat-dependent selection) play a role in maintaining the polymorphism. However, an assessment of the combined contributions of these forces supporting the observed pattern in the wild is absent. A model selection procedure using computer simulations was used to investigate the contribution of the different evolutionary forces towards the maintenance of the polymorphism. The agreement between alternative models and experimental estimates for a number of parameters was quantified by a least squares method. The results of the analysis show that the fittest evolutionary model for the observed polymorphism is characterized by a high gene flow, intermediate-high reproductive isolation between ecotypes, and a moderate to strong selection against the nonresident ecotypes on each shore level. In addition, a substantial number of additive loci contributing to the selected trait and a narrow hybrid definition with respect to the phenotype are scenarios that better explain the polymorphism, whereas the ecotype fitnesses at the mid-shore, the level of phenotypic plasticity, and environmental effects are not key parameters.

  12. Evolutionary computing for knowledge discovery in medical diagnosis.

    PubMed

    Tan, K C; Yu, Q; Heng, C M; Lee, T H

    2003-02-01

    One of the major challenges in the medical domain is the extraction of comprehensible knowledge from medical diagnosis data. In this paper, a two-phase hybrid evolutionary classification technique is proposed to extract classification rules that can be used in clinical practice for better understanding and prevention of unwanted medical events. In the first phase, a hybrid evolutionary algorithm (EA) is utilized to confine the search space by evolving a pool of good candidate rules; e.g., genetic programming (GP) is applied to evolve nominal attributes for free-structured rules and a genetic algorithm (GA) is used to optimize the numeric attributes for concise classification rules without the need for discretization. These candidate rules are then used in the second phase to optimize the order and number of rules in the evolution for forming accurate and comprehensible rule sets. The proposed evolutionary classifier (EvoC) is validated upon hepatitis and breast cancer datasets obtained from the UCI machine-learning repository. Simulation results show that the evolutionary classifier produces comprehensible rules and good classification accuracy for the medical datasets. Results obtained from t-tests further justify its robustness and invariance to random partition of datasets.

  13. Update-based evolution control: A new fitness approximation method for evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Ma, Haiping; Fei, Minrui; Simon, Dan; Mo, Hongwei

    2015-09-01

    Evolutionary algorithms are robust optimization methods that have been used in many engineering applications. However, real-world fitness evaluations can be computationally expensive, so it may be necessary to estimate the fitness with an approximate model. This article reviews design and analysis of computer experiments (DACE) as an approximation method that combines a global polynomial with a local Gaussian model to estimate continuous fitness functions. The article incorporates DACE in various evolutionary algorithms, to test unconstrained and constrained benchmarks, both with and without fitness function evaluation noise. The article also introduces a new evolution control strategy called update-based control that estimates the fitness of certain individuals of each generation based on the exact fitness values of other individuals during that same generation. The results show that update-based evolution control outperforms other strategies on noise-free, noisy, constrained and unconstrained benchmarks. The results also show that update-based evolution control can compensate for fitness evaluation noise.
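
    A simplified sketch of update-based evolution control as described above: each generation, half the population receives exact (expensive) evaluations and the other half is estimated from those same-generation exact values. A nearest-neighbor estimator stands in for the DACE polynomial-plus-Gaussian surrogate used in the article.

        # GA with update-based evolution control (same-generation estimates).
        import random

        DIM, POP, GENS = 5, 30, 60

        def expensive_fitness(x):          # stand-in for a costly simulation
            return sum(v * v for v in x)

        def dist(a, b):
            return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

        pop = [[random.uniform(-3, 3) for _ in range(DIM)] for _ in range(POP)]
        for _ in range(GENS):
            random.shuffle(pop)
            exact = [(ind, expensive_fitness(ind)) for ind in pop[: POP // 2]]
            estimated = []
            for ind in pop[POP // 2:]:
                # update-based control: estimate from this generation's exact points
                nearest = min(exact, key=lambda e: dist(e[0], ind))
                estimated.append((ind, nearest[1]))
            scored = sorted(exact + estimated, key=lambda e: e[1])
            parents = [ind for ind, _ in scored[: POP // 2]]
            pop = [[p[d] + random.gauss(0, 0.2) for d in range(DIM)]
                   for p in (random.choice(parents) for _ in range(POP))]

        best = min(pop, key=expensive_fitness)
        print("best exact fitness after evolution:", expensive_fitness(best))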

  14. De-Hazing of Multi-Spectral Images with Evolutionary Computing

    NASA Astrophysics Data System (ADS)

    von Allmen, P.; Lee, S.; Diner, D. J.; Martonchik, J.; Davis, A. B.

    2009-12-01

    We developed an algorithm that allows for removing haze from a digital picture by numerically subtracting the contribution of optical scattering by aerosols. The scene is modeled by defining a reflectance function for each pixel, which describes the angular dependence of light scattering at the surface, and by describing the scattering from aerosols with a set of models of varying complexity. An optimization algorithm that mixes downhill methods with evolutionary computing approaches was used to fit the observed image to the model of the scene. The contribution of the aerosol scattering is then removed to obtain a de-hazed image. We will present results for multispectral images taken by NASA’s Multi-angle Imaging SpectroRadiometer and we will discuss the numerical efficiency of the algorithm implemented on a multi-node quadcore cluster computer.
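
    A hedged sketch of the optimization step described above, shrunk to a scalar haze model, observed = clear * t + airlight * (1 - t): a simple (1+1) evolution strategy, standing in for the article's downhill/evolutionary mix, recovers the airlight and transmission parameters against a known reference scene. All pixel values and parameters are synthetic.

        # (1+1)-ES recovering haze parameters from a synthetic calibration scene.
        import random

        random.seed(3)
        clear = [random.uniform(0.0, 1.0) for _ in range(500)]    # reference scene
        TRUE_A, TRUE_T = 0.9, 0.6                                 # hidden params
        observed = [c * TRUE_T + TRUE_A * (1.0 - TRUE_T) for c in clear]

        def residual(params):
            a, t = params
            return sum((c * t + a * (1.0 - t) - o) ** 2
                       for c, o in zip(clear, observed))

        # (1+1)-ES with a crude 1/5-success-rule step-size adaptation
        parent, step = [0.5, 0.5], 0.2
        best = residual(parent)
        for _ in range(2000):
            child = [min(1.0, max(0.0, p + random.gauss(0, step))) for p in parent]
            r = residual(child)
            if r < best:
                parent, best = child, r
                step *= 1.1                # success: widen the search
            else:
                step *= 0.98               # failure: narrow it

        print(f"recovered airlight = {parent[0]:.3f}, "
              f"transmission = {parent[1]:.3f}")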

  15. Directionality theory: a computational study of an entropic principle in evolutionary processes.

    PubMed

    Kowald, Axel; Demetrius, Lloyd

    2005-04-07

    Analytical studies of evolutionary processes based on the demographic parameter entropy (a measure of the uncertainty in the age of the mother of a randomly chosen newborn) show that evolutionary changes in entropy are contingent on environmental constraints and can be characterized in terms of three tenets: (i) a unidirectional increase in entropy for populations subject to bounded growth constraints; (ii) a unidirectional decrease in entropy for large populations subject to unbounded growth constraints; (iii) random, non-directional change in entropy for small populations subject to unbounded growth constraints. This article aims to assess the robustness of these analytical tenets by computer simulation. The results of the computational study are shown to be consistent with the analytical predictions. Computational analysis, together with complementary empirical studies of evolutionary changes in entropy, underscores the universality of the entropic principle as a model of the evolutionary process.
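
    A worked numerical example of the entropy referred to above, H = -sum_i p_i log p_i, where p_i is the probability that the mother of a randomly chosen newborn is of age i (the normalized net-maternity schedule l_i m_i). The life-table numbers are invented for illustration.

        # Demographic entropy from a toy life table.
        import math

        survivorship = [1.0, 0.8, 0.6, 0.3]  # l_i: probability of surviving to age i
        fecundity    = [0.0, 1.2, 1.0, 0.4]  # m_i: expected offspring at age i

        net_maternity = [l * m for l, m in zip(survivorship, fecundity)]
        total = sum(net_maternity)
        p = [v / total for v in net_maternity]   # age distribution of mothers

        H = -sum(pi * math.log(pi) for pi in p if pi > 0)
        print(f"age distribution of mothers: {[round(x, 3) for x in p]}")
        print(f"demographic entropy H = {H:.3f}")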

  16. Determining basic characteristics of stars from evolutionary computations

    NASA Astrophysics Data System (ADS)

    Sichevskij, S. G.

    2017-03-01

    A technique for determining a star's radius from its atmospheric characteristics (effective temperature, surface gravity, and metallicity) is realized based on modern model computations of the stellar internal structure and evolution. The atmospheric characteristics can also be used to find the mass and luminosity of the star. The star's rate of evolution and the initial mass function are taken into account when determining the stellar characteristics, increasing the correctness of the results. Computations of stellar evolution with and without stellar rotation taken into account make it possible to remove ambiguity due to missing data on the star's rotational velocity. The results are checked and uncertainties estimated using stars occupying two heavily populated regions in the Hertzsprung-Russell diagram that have been well studied using various methods: the main sequence and the red giant branch. Good agreement with the observations is achieved; there are almost no systematic deviations of the derived point estimates of the fundamental characteristics. The metallicities of the individual components of eclipsing variable stars are estimated using observational data for such stars displaying lines of both components in their spectra. These metallicities were determined as a function of the stellar masses in a way that eliminates systematic deviations in the derived fundamental characteristics.

  17. Proposal of Evolutionary Simplex Method for Global Optimization Problem

    NASA Astrophysics Data System (ADS)

    Shimizu, Yoshiaki

    To support agile, rational decision making, the role of optimization engineering has attracted increasing attention under diversified customer demand. With this point of view, in this paper we propose a new evolutionary method serving as an optimization technique in the paradigm of optimization engineering. The developed method shows promise for globally solving the varied, complicated problems appearing in real-world applications. It evolves the conventional Nelder-Mead simplex method by borrowing ideas from recent meta-heuristic methods such as particle swarm optimization (PSO). After describing an algorithm for handling linear inequality constraints effectively, we validate the effectiveness of the proposed method through comparison with other methods on several benchmark problems.

  18. Toward an alternative evolutionary theory of religion: looking past computational evolutionary psychology to a wider field of possibilities.

    PubMed

    Barrett, Nathaniel F

    2010-01-01

    Cognitive science of the last half-century has been dominated by the computational theory of mind and its picture of thought as information processing. Taking this picture for granted, the most prominent evolutionary theories of religion of the last fifteen years have sought to understand human religiosity as the product or by-product of universal information processing mechanisms that were adaptive in our ancestral environment. The rigidity of such explanations is at odds with the highly context-sensitive nature of historical studies of religion, and thus contributes to the apparent tug-of-war between scientific and humanistic perspectives. This essay argues that this antagonism stems in part from a deep flaw of computational theory, namely its notion of information as pre-given and context-free. In contrast, non-computational theories that picture mind as an adaptive, interactive process in which information is jointly constructed by organism and environment offer an alternative approach to an evolutionary understanding of human religiosity, one that is compatible with historical studies and amenable to a wide range of inquiries, including some limited kinds of theological inquiry.

  19. Computational Evolutionary Methodology for Knowledge Discovery and Forecasting in Epidemiology and Medicine

    SciTech Connect

    Rao, Dhananjai M.; Chernyakhovsky, Alexander; Rao, Victoria

    2008-05-08

    Humanity is facing an increasing number of highly virulent and communicable diseases such as avian influenza. Researchers believe that avian influenza has potential to evolve into one of the deadliest pandemics. Combating these diseases requires in-depth knowledge of their epidemiology. An effective methodology for discovering epidemiological knowledge is to utilize a descriptive, evolutionary, ecological model and use bio-simulations to study and analyze it. These types of bio-simulations fall under the category of computational evolutionary methods because the individual entities participating in the simulation are permitted to evolve in a natural manner by reacting to changes in the simulated ecosystem. This work describes the application of the aforementioned methodology to discover epidemiological knowledge about avian influenza using a novel eco-modeling and bio-simulation environment called SEARUMS. The mathematical principles underlying SEARUMS, its design, and the procedure for using SEARUMS are discussed. The bio-simulations and multi-faceted case studies conducted using SEARUMS elucidate its ability to pinpoint timelines, epicenters, and socio-economic impacts of avian influenza. This knowledge is invaluable for proactive deployment of countermeasures in order to minimize negative socioeconomic impacts, combat the disease, and avert a pandemic.

  20. Computational Evolutionary Methodology for Knowledge Discovery and Forecasting in Epidemiology and Medicine

    NASA Astrophysics Data System (ADS)

    Rao, Dhananjai M.; Chernyakhovsky, Alexander; Rao, Victoria

    2008-05-01

    Humanity is facing an increasing number of highly virulent and communicable diseases such as avian influenza. Researchers believe that avian influenza has potential to evolve into one of the deadliest pandemics. Combating these diseases requires in-depth knowledge of their epidemiology. An effective methodology for discovering epidemiological knowledge is to utilize a descriptive, evolutionary, ecological model and use bio-simulations to study and analyze it. These types of bio-simulations fall under the category of computational evolutionary methods because the individual entities participating in the simulation are permitted to evolve in a natural manner by reacting to changes in the simulated ecosystem. This work describes the application of the aforementioned methodology to discover epidemiological knowledge about avian influenza using a novel eco-modeling and bio-simulation environment called SEARUMS. The mathematical principles underlying SEARUMS, its design, and the procedure for using SEARUMS are discussed. The bio-simulations and multi-faceted case studies conducted using SEARUMS elucidate its ability to pinpoint timelines, epicenters, and socio-economic impacts of avian influenza. This knowledge is invaluable for proactive deployment of countermeasures in order to minimize negative socioeconomic impacts, combat the disease, and avert a pandemic.

  1. Score-based resampling method for evolutionary algorithms.

    PubMed

    Park, Jonghwan; Jeon, Moongu; Pedrycz, Witold

    2008-10-01

    In this paper, a gene-handling method for evolutionary algorithms (EAs) is proposed. Such algorithms are characterized by a nonanalytic optimization process when dealing with complex systems as multiple behavioral responses occur in the realization of intelligent tasks. In generic EAs which optimize internal parameters of a given system, evaluation and selection are performed at the chromosome level. When a survived chromosome includes noneffective genes, the solution can be trapped in a local optimum during evolution, which causes an increase in the uncertainty of the results and reduces the quality of the overall system. This phenomenon also results in an unbalanced performance of partial behaviors. To alleviate this problem, a score-based resampling method is proposed, where a score function of a gene is introduced as a criterion of handling genes in each allele. The proposed method was empirically evaluated with various test functions, and the results show its effectiveness.
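
    The abstract does not give the score function in closed form; the sketch below is one plausible reading, in which each gene accumulates a running score from the normalized fitness of its host chromosome and the lowest-scoring genes are resampled. The parameter names and the uniform resampling range are assumptions:

        import numpy as np

        def score_based_resample(pop, fitness, scores, lr=0.1, q=0.25):
            """pop: (n, d) array of chromosomes; scores: (n, d) running
            per-gene scores. Low-scoring ('noneffective') genes are resampled."""
            f = np.asarray([fitness(ind) for ind in pop])
            credit = (f - f.min()) / (np.ptp(f) + 1e-12)     # normalize to [0, 1]
            scores = (1 - lr) * scores + lr * credit[:, None]
            mask = scores < np.quantile(scores, q)           # weakest quantile
            pop = np.where(mask, np.random.uniform(-1.0, 1.0, pop.shape), pop)
            return pop, scores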

  2. An efficient non-dominated sorting method for evolutionary algorithms.

    PubMed

    Fang, Hongbing; Wang, Qian; Tu, Yi-Cheng; Horstemeyer, Mark F

    2008-01-01

    We present a new non-dominated sorting algorithm to generate the non-dominated fronts in multi-objective optimization with evolutionary algorithms, particularly the NSGA-II. The non-dominated sorting algorithm used by NSGA-II has a time complexity of O(MN(2)) in generating non-dominated fronts in one generation (iteration) for a population size N and M objective functions. Since generating non-dominated fronts takes the majority of total computational time (excluding the cost of fitness evaluations) of NSGA-II, making this algorithm faster will significantly improve the overall efficiency of NSGA-II and other genetic algorithms using non-dominated sorting. The new non-dominated sorting algorithm proposed in this study reduces the number of redundant comparisons existing in the algorithm of NSGA-II by recording the dominance information among solutions from their first comparisons. By utilizing a new data structure called the dominance tree and the divide-and-conquer mechanism, the new algorithm is faster than NSGA-II for different numbers of objective functions. Although the number of solution comparisons by the proposed algorithm is close to that of NSGA-II when the number of objectives becomes large, the total computational time shows that the proposed algorithm still has better efficiency because of the adoption of the dominance tree structure and the divide-and-conquer mechanism.
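
    For reference, the O(MN(2)) baseline being improved on, NSGA-II's fast non-dominated sort, can be sketched as follows (minimization assumed). This is the comparison baseline, not the authors' dominance-tree algorithm:

        def dominates(a, b):
            """a dominates b: no worse in every objective, better in at least one."""
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        def fast_nondominated_sort(objs):
            """Return fronts as lists of indices into objs (NSGA-II baseline)."""
            n = len(objs)
            dominated_by = [[] for _ in range(n)]  # solutions that i dominates
            count = [0] * n                        # how many solutions dominate i
            fronts = [[]]
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    if dominates(objs[i], objs[j]):
                        dominated_by[i].append(j)
                    elif dominates(objs[j], objs[i]):
                        count[i] += 1
                if count[i] == 0:
                    fronts[0].append(i)
            k = 0
            while fronts[k]:
                nxt = []
                for i in fronts[k]:
                    for j in dominated_by[i]:
                        count[j] -= 1
                        if count[j] == 0:
                            nxt.append(j)
                fronts.append(nxt)
                k += 1
            return fronts[:-1]   # drop the trailing empty front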

  3. Multi-Objective UAV Mission Planning Using Evolutionary Computation

    DTIC Science & Technology

    2008-03-01

    Sensors Applications and Demonstrations Division (AFRL/SNZ), specifically, the Virtual Combat Laboratory (AFRL/SNZW) at Wright-Patterson Air Force Base...Chapman & Hall/CRC Computer and Information Sciences). Chapman & Hall/CRC, 2006. ISBN 1584886439. 8. de Castro, Leandro Nunes and Fernando José Von Zuben

  4. Device-dependent screen optimization using evolutionary computing

    NASA Astrophysics Data System (ADS)

    Bartels, Rudi

    2000-12-01

    Most halftoning algorithms are based on ideal imaging devices that can render perfect square pixels. In real printing environments this is not the case: most imaging devices are a trade-off between the best quality and the highest speed. In this paper a screen is designed for Agfa's newspaper-dedicated computer-to-plate imaging device Polaris.

  5. Tuning of MEMS Gyroscope using Evolutionary Algorithm and "Switched Drive-Angle" Method

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Ferguson, Michael I.; Breuer, Luke; Peay, Chris; Oks, Boris; Cheng, Yen; Kim, Dennis; MacDonald, Eric; Foor, David; Terrile, Rich; Yee, Karl

    2006-01-01

    We propose a tuning method for Micro-Electro-Mechanical Systems (MEMS) gyroscopes based on evolutionary computation that has the capacity to efficiently increase the sensitivity of MEMS gyroscopes through tuning and, furthermore, to find the optimally tuned configuration for this state of increased sensitivity. We present the results of an experiment to determine the speed and efficiency of an evolutionary algorithm applied to electrostatic tuning of MEMS micro gyros. The MEMS gyro used in this experiment is a pyrex post resonator gyro (PRG) in a closed-loop control system. A measure of the quality of tuning is given by the difference in resonant frequencies, or frequency split, for the two orthogonal rocking axes. The current implementation of the closed-loop platform is able to measure and attain a relative stability in the sub-millihertz range, leading to a reduction of the frequency split to less than 100 mHz.
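
    Hardware details aside, the outer loop of such a tuner is a plain evolutionary search over electrode bias voltages that minimizes the measured frequency split. A sketch, where measure_frequency_split is a hypothetical callable wrapping the closed-loop measurement and the voltage bounds are illustrative:

        import random

        def evolve_bias_voltages(measure_frequency_split, n_electrodes=4,
                                 pop_size=20, generations=50,
                                 v_range=(-60.0, 60.0), sigma=2.0):
            """Evolve bias-voltage vectors to minimize the rocking-mode
            frequency split (in Hz) returned by the measurement callable."""
            lo, hi = v_range
            pop = [[random.uniform(lo, hi) for _ in range(n_electrodes)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=measure_frequency_split)
                parents = pop[:pop_size // 2]              # truncation selection
                pop = parents + [
                    [min(hi, max(lo, v + random.gauss(0, sigma))) for v in p]
                    for p in parents]                      # Gaussian mutation
            return min(pop, key=measure_frequency_split)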

  6. Tuning of MEMS Gyroscope using Evolutionary Algorithm and "Switched Drive-Angle" Method

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Ferguson, Michael I.; Breuer, Luke; Peay, Chris; Oks, Boris; Cheng, Yen; Kim, Dennis; MacDonald, Eric; Foor, David; Terrile, Rich; Yee, Karl

    2006-01-01

    We propose a tuning method for Micro-Electro-Mechanical Systems (MEMS) gyroscopes based on evolutionary computation that has the capacity to efficiently increase the sensitivity of MEMS gyroscopes through tuning and, furthermore, to find the optimally tuned configuration for this state of increased sensitivity. We present the results of an experiment to determine the speed and efficiency of an evolutionary algorithm applied to electrostatic tuning of MEMS micro gyros. The MEMS gyro used in this experiment is a pyrex post resonator gyro (PRG) in a closed-loop control system. A measure of the quality of tuning is given by the difference in resonant frequencies, or frequency split, for the two orthogonal rocking axes. The current implementation of the closed-loop platform is able to measure and attain a relative stability in the sub-millihertz range, leading to a reduction of the frequency split to less than 100 mHz.

  7. Computational methods for remote homolog identification.

    PubMed

    Wan, Xiu-Feng; Xu, Dong

    2005-12-01

    As more and more protein sequences become available, homolog identification becomes increasingly important for functional, structural, and evolutionary studies of proteins. Many homologous proteins were separated a very long time ago in their evolutionary history and thus their sequences share low sequence identity. These remote homologs have become a research focus in bioinformatics over the past decade, and some significant advances have been achieved. In this paper, we provide a comprehensive review of computational techniques used in remote homolog identification, including sequence-sequence comparison, sequence-structure comparison, and structure-structure comparison. Other miscellaneous approaches are also summarized. Pointers to the online resources of these methods and their related databases are provided. Comparisons among different methods in terms of their technical approaches, strengths, and limitations follow. Studies on proteins in SARS-CoV are presented as an example of a remote homolog identification application.

  8. Scalable Evolutionary Computation for Efficient Information Extraction from Remote Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Almutairi, L. M.; Shetty, S.; Momm, H. G.

    2014-11-01

    Evolutionary computation, in the form of genetic programming, is used to aid the information extraction process from high-resolution satellite imagery in a semi-automatic fashion. Distributing and parallelizing the task of evaluating all candidate solutions during the evolutionary process can significantly reduce the inherent computational cost of evolving solutions that operate on large multichannel images. In this study, we present the design and implementation of a system that leverages cloud-computing technology to expedite supervised solution development in a centralized evolutionary framework. The system uses the MapReduce programming model to implement a distributed version of the existing framework in a cloud-computing platform. The proposed system has two major subsystems: (i) data preparation, the generation of random spectral indices; and (ii) distributed processing, the distributed implementation of genetic programming, which is used to spectrally distinguish the features of interest from the remaining image background in the cloud-computing environment in order to improve scalability. The proposed system reduces response time by leveraging the vast computational and storage resources of a cloud-computing environment. The results demonstrate that distributing the candidate solutions reduces the execution time by 91.58%. These findings indicate that such technology could be applied to more complex problems that involve a larger population size and number of generations.
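
    The map/reduce split of the evaluation phase can be illustrated with Python's multiprocessing standing in for the cloud deployment; the fitness function here is a placeholder, not the paper's spectral-index scoring:

        from multiprocessing import Pool

        def fitness(candidate):
            """Placeholder: the real system scores a candidate spectral-index
            expression against training pixels."""
            return -sum((x - 0.5) ** 2 for x in candidate)

        def evaluate(candidate):
            return candidate, fitness(candidate)             # map: one candidate

        def distributed_generation(candidates, workers=8, survivors=50):
            with Pool(workers) as pool:
                scored = pool.map(evaluate, candidates)      # parallel map phase
            scored.sort(key=lambda cf: cf[1], reverse=True)  # reduce: keep best
            return [c for c, _ in scored[:survivors]]

    On platforms that spawn rather than fork processes, the call site needs the usual if __name__ == "__main__": guard.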

  9. Evolutionary computing for the design search and optimization of space vehicle power subsystems

    NASA Technical Reports Server (NTRS)

    Kordon, M.; Klimeck, G.; Hanks, D.

    2004-01-01

    Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment.

  10. Supervised and unsupervised discretization methods for evolutionary algorithms

    SciTech Connect

    Cantu-Paz, E

    2001-01-24

    This paper introduces simple model-building evolutionary algorithms (EAs) that operate on continuous domains. The algorithms are based on supervised and unsupervised discretization methods that have been used as preprocessing steps in machine learning. The basic idea is to discretize the continuous variables and use the discretization as a simple model of the solutions under consideration. The model is then used to generate new solutions directly, instead of using the usual operators based on sexual recombination and mutation. The algorithms presented here have fewer parameters than traditional and other model-building EAs. The proposed algorithms that use multivariate models are expected to scale up better with the dimensionality of the problem than existing EAs.
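
    In its simplest univariate form, the idea can be sketched as: select the better half of the population, histogram each variable, and sample replacements from the bin frequencies instead of applying recombination and mutation. The bounds, bin count, and Laplace smoothing below are illustrative assumptions:

        import numpy as np

        def model_building_ea_step(pop, fitness, n_bins=10, frac=0.5,
                                   lo=0.0, hi=1.0):
            """One generation: fit a per-variable discretized model to the
            selected solutions and sample the next population from it."""
            f = np.asarray([fitness(x) for x in pop])
            elite = pop[np.argsort(f)[:int(len(pop) * frac)]]   # minimization
            edges = np.linspace(lo, hi, n_bins + 1)
            new = np.empty_like(pop)
            for d in range(pop.shape[1]):                       # each variable
                counts, _ = np.histogram(elite[:, d], bins=edges)
                probs = (counts + 1) / (counts.sum() + n_bins)  # smoothing
                bins = np.random.choice(n_bins, size=len(pop), p=probs)
                new[:, d] = np.random.uniform(edges[bins], edges[bins + 1])
            return new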

  11. Analysis of a Schnute postulate-based unified growth mode for model selection in evolutionary computations

    PubMed Central

    Bentil, D.E.; Osei, B.M.; Ellingwood, C.D.; Hoffmann, J.P.

    2007-01-01

    In order to evaluate the feasibility of a combined evolutionary algorithm-information theoretic approach to select the best model from a set of candidate invasive species models in ecology, and/or to evolve the most parsimonious model from a suite of competing models by comparing their relative performance, it is prudent to use a unified model that covers a myriad of situations. Using Schnute’s postulates as a starting point, we present a single, unified model for growth that can be successfully utilized for model selection in evolutionary computations. Depending on the parameter settings, the unified equation can describe several growth mechanisms. Such a generalized model mechanism, which encompasses a suite of competing models, can be successfully implemented in evolutionary computational algorithms to evolve the most parsimonious model that best fits ground truth data. We have done exactly this by testing the effectiveness of our reaction-diffusion-advection (RDA) model in an evolutionary computation model selection algorithm. The algorithm was validated (with success) against field data sets of the Zebra mussel invasion of Lake Champlain in the United States. PMID:17197072
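
    The unified equation is not reproduced in the abstract, but the usual starting point, the four-parameter Schnute (1981) model, nests logistic, Gompertz, and von Bertalanffy growth as special cases and might be coded as follows (general case a != 0, b != 0):

        import numpy as np

        def schnute(t, y1, y2, a, b, t1, t2):
            """Schnute growth model: y1, y2 are sizes at reference ages t1, t2;
            particular (a, b) settings recover the classical growth curves."""
            g = (1.0 - np.exp(-a * (t - t1))) / (1.0 - np.exp(-a * (t2 - t1)))
            return (y1 ** b + (y2 ** b - y1 ** b) * g) ** (1.0 / b)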

  12. EVOLUTIONARY SYSTEMATICS OF THE CHIMPANZEE: IMMUNODIFFUSION COMPUTER APPROACH.

    DTIC Science & Technology

    man and gorilla, and shows increasingly more marked divergence from orangutan , gibbons, cercopithecoids, and ceboids. The method for constructing...the gibbon branch from the remaining hominoids, while the next most distant common ancestor separates the orangutan from man, chimpanzee, and gorilla...cercopithecoid-hominoid separation as 30 million years, the chimpanzee-man-gorilla separations were dated at about 6 million years, the orangutan at 14 million years, and the gibbon at about 19 million years. (Author)

  13. Nuclear spatial and spectral features based evolutionary method for meningioma subtypes classification in histopathology.

    PubMed

    Fatima, Kiran; Majeed, Hammad; Irshad, Humayun

    2017-04-05

    Meningioma subtypes classification is a real-world multiclass problem from the realm of neuropathology. The major challenge in solving this problem is the inherent complexity due to high intra-class variability and low inter-class variation in tissue samples. The development of computational methods to assist pathologists in the characterization of these tissue samples would have great diagnostic and prognostic value. In this article, we propose an optimized evolutionary framework for the classification of benign meningioma into four subtypes. This framework investigates the imperative role of RGB color channels in the discrimination of tumor subtypes and computes structural, statistical, and spectral phenotypes. An evolutionary technique, the genetic algorithm, in combination with a support vector machine, is applied to tune classifier parameters and to select the best possible combination of extracted phenotypes, which improved the classification accuracy (94.88%) on the meningioma histology dataset provided by the Institute of Neuropathology, Bielefeld. These statistics show that the computational framework can robustly discriminate four subtypes of benign meningioma and may aid pathologists in the diagnosis and classification of these lesions.
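
    A minimal sketch of such a GA-plus-SVM wrapper, with scikit-learn standing in for the authors' implementation (the encoding, parameter ranges, and operators below are assumptions): each chromosome carries a binary phenotype mask plus log-scaled SVM hyperparameters, and fitness is cross-validated accuracy.

        import random
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        def fitness(chrom, X, y):
            """Chromosome = (feature mask, log10 C, log10 gamma)."""
            mask, log_c, log_g = chrom
            if not mask.any():
                return 0.0
            clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_g)
            return cross_val_score(clf, X[:, mask], y, cv=3).mean()

        def ga_svm(X, y, pop_size=30, generations=40, p_mut=0.1):
            d = X.shape[1]
            pop = [(np.random.rand(d) < 0.5,
                    random.uniform(-2, 3), random.uniform(-4, 1))
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=lambda c: fitness(c, X, y), reverse=True)
                parents = pop[:pop_size // 2]            # truncation selection
                children = []
                for mask, lc, lg in parents:
                    m = mask.copy()
                    flip = np.random.rand(d) < p_mut     # bit-flip mutation
                    m[flip] = ~m[flip]
                    children.append((m, lc + random.gauss(0, 0.3),
                                     lg + random.gauss(0, 0.3)))
                pop = parents + children
            return max(pop, key=lambda c: fitness(c, X, y))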

  14. Computational Methods Development at Ames

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Smith, Charles A. (Technical Monitor)

    1998-01-01

    This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate-fidelity computational analysis/design capabilities. Current thrusts of the Ames research include: 1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and flow physics studies. The presentation gives historical precedents to the above research and speculates on its future course.

  15. Computational Modeling Method for Superalloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Noebe, Ronald D.; Gayda, John

    1997-01-01

    Computer modeling based on theoretical quantum techniques has been largely inefficient due to limitations of the methods or the computing resources required for such calculations, thus perpetuating the notion that little help can be expected from computer simulations for the atomistic design of new materials. In a major effort to overcome these limitations and to provide a tool for efficiently assisting in the development of new alloys, we developed the BFS method for alloys, which, together with experimental results from previous and current research that validate its use for large-scale simulations, provides the ideal grounds for developing a computationally economical and physically sound procedure for supplementing experimental work at great savings of cost and time.

  16. Non-Evolutionary Algorithms for Scheduling Dependent Tasks in Distributed Heterogeneous Computing Environments

    SciTech Connect

    Wayne F. Boyer; Gurdeep S. Hura

    2005-09-01

    The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be NP-hard. In a DHC system, task execution time depends on the machine to which the task is assigned, and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of interdependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. Randomized task ordering is effectively a topological sort where the outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows that the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, and requires less memory and fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
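
    The RS idea is concrete enough to sketch: repeatedly draw a random topological order of the task DAG, map each task greedily to the machine giving the earliest finish, and keep the best schedule found. Communication delays are ignored in this sketch:

        import random

        def random_topological_order(tasks, preds):
            """Random linearization consistent with the precedence constraints."""
            succs = {t: [] for t in tasks}
            indeg = {t: len(preds[t]) for t in tasks}
            for t in tasks:
                for p in preds[t]:
                    succs[p].append(t)
            ready = [t for t in tasks if indeg[t] == 0]
            order = []
            while ready:
                t = ready.pop(random.randrange(len(ready)))  # random ready task
                order.append(t)
                for s in succs[t]:
                    indeg[s] -= 1
                    if indeg[s] == 0:
                        ready.append(s)
            return order

        def random_scheduling(tasks, preds, cost, n_machines, trials=1000):
            """cost[t][m] = runtime of task t on machine m (heterogeneous).
            Returns the best (makespan, assignment) over random orderings."""
            best = (float("inf"), None)
            for _ in range(trials):
                order = random_topological_order(tasks, preds)
                free = [0.0] * n_machines            # machine-available times
                finish, assign = {}, {}
                for t in order:
                    est = max((finish[p] for p in preds[t]), default=0.0)
                    m = min(range(n_machines),
                            key=lambda k: max(free[k], est) + cost[t][k])
                    start = max(free[m], est)
                    finish[t], assign[t] = start + cost[t][m], m
                    free[m] = finish[t]
                makespan = max(finish.values())
                if makespan < best[0]:
                    best = (makespan, assign)
            return best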

  17. Support measures to estimate the reliability of evolutionary events predicted by reconciliation methods.

    PubMed

    Nguyen, Thi-Hau; Ranwez, Vincent; Berry, Vincent; Scornavacca, Celine

    2013-01-01

    The genome content of extant species is derived from that of ancestral genomes, distorted by evolutionary events such as gene duplications, transfers and losses. Reconciliation methods aim at recovering such events and at localizing them in the species history, by comparing gene family trees to species trees. These methods play an important role in studying genome evolution as well as in inferring orthology relationships. A major issue with reconciliation methods is that the reliability of predicted evolutionary events may be questioned for various reasons: Firstly, there may be multiple equally optimal reconciliations for a given species tree-gene tree pair. Secondly, reconciliation methods can be misled by inaccurate gene or species trees. Thirdly, predicted events may fluctuate with method parameters such as the cost or rate of elementary events. For all of these reasons, confidence values for predicted evolutionary events are sorely needed. It was recently suggested that the frequency of each event in the set of all optimal reconciliations could be used as a support measure. We put this proposition to the test here and also consider a variant where the support measure is obtained by additionally accounting for suboptimal reconciliations. Experiments on simulated data show the relevance of event supports computed by both methods, while resorting to suboptimal sampling was shown to be more effective. Unfortunately, we also show that, unlike the majority-rule consensus tree for phylogenies, there is no guarantee that a single reconciliation can contain all events having above 50% support. In this paper, we detail how to rely on the reconciliation graph to efficiently identify the median reconciliation. Such median reconciliation can be found in polynomial time within the potentially exponential set of most parsimonious reconciliations.
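
    Given a sample of reconciliations (optimal only, or optimal plus suboptimal), the support computation itself is simple to sketch. The cost-based weighting below is an illustrative choice, not necessarily the paper's exact scheme:

        import math
        from collections import Counter

        def event_supports(reconciliations, costs, beta=1.0):
            """Each reconciliation is an iterable of hashable events, e.g.
            (event_type, gene_node, species_node) tuples; the support of an
            event is the total normalized weight of the reconciliations that
            contain it. Large beta concentrates the weight on optimal ones."""
            cmin = min(costs)
            weights = [math.exp(-beta * (c - cmin)) for c in costs]
            z = sum(weights)
            support = Counter()
            for rec, w in zip(reconciliations, weights):
                for event in rec:
                    support[event] += w / z
            return support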

  18. Support Measures to Estimate the Reliability of Evolutionary Events Predicted by Reconciliation Methods

    PubMed Central

    Nguyen, Thi-Hau; Ranwez, Vincent; Berry, Vincent; Scornavacca, Celine

    2013-01-01

    The genome content of extant species is derived from that of ancestral genomes, distorted by evolutionary events such as gene duplications, transfers and losses. Reconciliation methods aim at recovering such events and at localizing them in the species history, by comparing gene family trees to species trees. These methods play an important role in studying genome evolution as well as in inferring orthology relationships. A major issue with reconciliation methods is that the reliability of predicted evolutionary events may be questioned for various reasons: Firstly, there may be multiple equally optimal reconciliations for a given species tree–gene tree pair. Secondly, reconciliation methods can be misled by inaccurate gene or species trees. Thirdly, predicted events may fluctuate with method parameters such as the cost or rate of elementary events. For all of these reasons, confidence values for predicted evolutionary events are sorely needed. It was recently suggested that the frequency of each event in the set of all optimal reconciliations could be used as a support measure. We put this proposition to the test here and also consider a variant where the support measure is obtained by additionally accounting for suboptimal reconciliations. Experiments on simulated data show the relevance of event supports computed by both methods, while resorting to suboptimal sampling was shown to be more effective. Unfortunately, we also show that, unlike the majority-rule consensus tree for phylogenies, there is no guarantee that a single reconciliation can contain all events having above 50% support. In this paper, we detail how to rely on the reconciliation graph to efficiently identify the median reconciliation. Such median reconciliation can be found in polynomial time within the potentially exponential set of most parsimonious reconciliations. PMID:24124449

  19. Computational Methods in Drug Discovery

    PubMed Central

    Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens

    2014-01-01

    Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based methods. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information for predicting activity depending on its similarity/dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand data bases, homology modeling, ligand fingerprint methods, etc., necessary for successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed with successful examples from literature. PMID:24381236

  20. Evolutionary method for finding communities in bipartite networks.

    PubMed

    Zhan, Weihua; Zhang, Zhongzhi; Guan, Jihong; Zhou, Shuigeng

    2011-06-01

    An important step in unveiling the relation between network structure and dynamics defined on networks is to detect communities, and numerous methods have been developed separately to identify community structure in different classes of networks, such as unipartite networks, bipartite networks, and directed networks. Here, we show that the finding of communities in such networks can be unified in a general framework: detection of community structure in bipartite networks. Moreover, we propose an evolutionary method for efficiently identifying communities in bipartite networks. To this end, we show that both unipartite and directed networks can be represented as bipartite networks, and that their modularity is completely consistent with that of bipartite networks, so that the detection of modular structure on them can be reformulated as modularity maximization. To optimize the bipartite modularity, we develop a modified adaptive genetic algorithm (MAGA), which is shown to be especially efficient for community structure detection. The high efficiency of the MAGA is based on the following three improvements. First, we introduce a different measure for the informativeness of a locus instead of the standard deviation, which can exactly determine which loci mutate. This measure is the bias between the distribution of a locus over the current population and the uniform distribution of the locus, i.e., the Kullback-Leibler divergence between them. Second, we develop a reassignment technique for differentiating the informative state a locus has attained from the random state in the initial phase. Third, we present a modified mutation rule which, by incorporating related operations, can guarantee the convergence of the MAGA to the global optimum and can speed up the convergence process. Experimental results show that the MAGA outperforms existing methods in terms of modularity for both bipartite and unipartite networks.
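
    The locus-informativeness measure described above, the Kullback-Leibler divergence between a locus's empirical allele distribution and the uniform distribution, can be written down directly (alleles are assumed to be coded as integers 0..n_alleles-1):

        import math
        from collections import Counter

        def locus_informativeness(population, locus, n_alleles):
            """D(p || uniform) for one locus; a value near zero means the
            locus is still effectively random and is a candidate for mutation."""
            counts = Counter(ind[locus] for ind in population)
            n = len(population)
            kl = 0.0
            for allele in range(n_alleles):
                p = counts.get(allele, 0) / n
                if p > 0:
                    kl += p * math.log(p * n_alleles)   # log(p / (1/k))
            return kl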

  1. Using evolutionary computations to understand the design and evolution of gene and cell regulatory networks.

    PubMed

    Spirov, Alexander; Holloway, David

    2013-07-15

    This paper surveys modeling approaches for studying the evolution of gene regulatory networks (GRNs). Modeling of the design or 'wiring' of GRNs has become increasingly common in developmental and medical biology, as a means of quantifying gene-gene interactions, the response to perturbations, and the overall dynamic motifs of networks. Drawing from developments in GRN 'design' modeling, a number of groups are now using simulations to study how GRNs evolve, both for comparative genomics and to uncover general principles of evolutionary processes. Such work can generally be termed evolution in silico. Complementary to these biologically-focused approaches, a now well-established field of computer science is Evolutionary Computations (ECs), in which highly efficient optimization techniques are inspired from evolutionary principles. In surveying biological simulation approaches, we discuss the considerations that must be taken with respect to: (a) the precision and completeness of the data (e.g. are the simulations for very close matches to anatomical data, or are they for more general exploration of evolutionary principles); (b) the level of detail to model (we proceed from 'coarse-grained' evolution of simple gene-gene interactions to 'fine-grained' evolution at the DNA sequence level); (c) to what degree is it important to include the genome's cellular context; and (d) the efficiency of computation. With respect to the latter, we argue that developments in computer science EC offer the means to perform more complete simulation searches, and will lead to more comprehensive biological predictions. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Using evolutionary computations to understand the design and evolution of gene and cell regulatory networks

    PubMed Central

    Spirov, Alexander; Holloway, David

    2013-01-01

    This paper surveys modeling approaches for studying the evolution of gene regulatory networks (GRNs). Modeling of the design or ‘wiring’ of GRNs has become increasingly common in developmental and medical biology, as a means of quantifying gene-gene interactions, the response to perturbations, and the overall dynamic motifs of networks. Drawing from developments in GRN ‘design’ modeling, a number of groups are now using simulations to study how GRNs evolve, both for comparative genomics and to uncover general principles of evolutionary processes. Such work can generally be termed evolution in silico. Complementary to these biologically-focused approaches, a now well-established field of computer science is Evolutionary Computations (EC), in which highly efficient optimization techniques are inspired from evolutionary principles. In surveying biological simulation approaches, we discuss the considerations that must be taken with respect to: a) the precision and completeness of the data (e.g. are the simulations for very close matches to anatomical data, or are they for more general exploration of evolutionary principles); b) the level of detail to model (we proceed from ‘coarse-grained’ evolution of simple gene-gene interactions to ‘fine-grained’ evolution at the DNA sequence level); c) to what degree is it important to include the genome’s cellular context; and d) the efficiency of computation. With respect to the latter, we argue that developments in computer science EC offer the means to perform more complete simulation searches, and will lead to more comprehensive biological predictions. PMID:23726941

  3. Computational methods for Gene Orthology inference

    PubMed Central

    Kristensen, David M.; Wolf, Yuri I.; Mushegian, Arcady R.

    2011-01-01

    Accurate inference of orthologous genes is a pre-requisite for most comparative genomics studies, and is also important for functional annotation of new genomes. Identification of orthologous gene sets typically involves phylogenetic tree analysis, heuristic algorithms based on sequence conservation, synteny analysis, or some combination of these approaches. The most direct tree-based methods typically rely on the comparison of an individual gene tree with a species tree. Once the two trees are accurately constructed, orthologs are straightforwardly identified by the definition of orthology as those homologs that are related by speciation, rather than gene duplication, at their most recent point of origin. Although ideal for the purpose of orthology identification in principle, phylogenetic trees are computationally expensive to construct for large numbers of genes and genomes, and they often contain errors, especially at large evolutionary distances. Moreover, in many organisms, in particular prokaryotes and viruses, evolution does not appear to have followed a simple ‘tree-like’ mode, which makes conventional tree reconciliation inapplicable. Other, heuristic methods identify probable orthologs as the closest homologous pairs or groups of genes in a set of organisms. These approaches are faster and easier to automate than tree-based methods, with efficient implementations provided by graph-theoretical algorithms enabling comparisons of thousands of genomes. Comparisons of these two approaches show that, despite conceptual differences, they produce similar sets of orthologs, especially at short evolutionary distances. Synteny also can aid in identification of orthologs. Often, tree-based, sequence similarity- and synteny-based approaches can be combined into flexible hybrid methods. PMID:21690100

  4. Methods for computing color anaglyphs

    NASA Astrophysics Data System (ADS)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIEL*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.
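
    A minimal sketch of the per-pixel optimization, with two simplifications loudly flagged: the glasses are modeled as diagonal RGB transmission matrices applied directly to (nonlinear) sRGB values, and sRGB primaries stand in for measured display spectra. The paper works from measured spectral distributions instead:

        import numpy as np
        from scipy.optimize import least_squares

        M = np.array([[0.4124, 0.3576, 0.1805],      # sRGB (D65) -> XYZ
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
        WHITE = M @ np.ones(3)                       # display white point

        def rgb_to_lab(rgb):
            """sRGB in [0, 1] -> CIE L*a*b* (D65)."""
            lin = np.where(rgb <= 0.04045, rgb / 12.92,
                           ((rgb + 0.055) / 1.055) ** 2.4)
            t = (M @ lin) / WHITE
            f = np.where(t > (6 / 29) ** 3, np.cbrt(t),
                         t / (3 * (6 / 29) ** 2) + 4 / 29)
            return np.array([116 * f[1] - 16,
                             500 * (f[0] - f[1]),
                             200 * (f[1] - f[2])])

        # Idealized red/cyan filter transmissions (assumed, not measured).
        T_LEFT = np.diag([0.9, 0.05, 0.05])
        T_RIGHT = np.diag([0.05, 0.9, 0.9])

        def anaglyph_pixel(left_rgb, right_rgb):
            """Find the printed pixel whose filtered appearances are closest,
            in L*a*b*, to the corresponding left and right stereo pixels."""
            target = np.concatenate([rgb_to_lab(left_rgb), rgb_to_lab(right_rgb)])

            def residuals(p):
                seen = np.concatenate([rgb_to_lab(T_LEFT @ p),
                                       rgb_to_lab(T_RIGHT @ p)])
                return seen - target

            return least_squares(residuals, x0=0.5 * np.ones(3),
                                 bounds=(0.0, 1.0)).x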

  5. Computational methods in drug discovery

    PubMed Central

    Leelananda, Sumudu P

    2016-01-01

    The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed. PMID:28144341

  6. Computational methods in drug discovery.

    PubMed

    Leelananda, Sumudu P; Lindert, Steffen

    2016-01-01

    The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein-ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  7. Applying an instance selection method to an evolutionary neural classifier design

    NASA Astrophysics Data System (ADS)

    Khritonenko, Dmitrii; Stanovov, Vladimir; Semenkin, Eugene

    2017-02-01

    In this paper the application of an instance selection algorithm to the design of a neural classifier is considered. A number of existing instance selection methods are presented. A new wrapper-method, whose main difference compared to other approaches is an iterative procedure for selecting training subsets from the dataset, is described. The approach is based on using training subsample selection probabilities for every instance. The value of these probabilities depends on the classification success for each measurement. An evolutionary algorithm for the design of a neural classifier is presented, which was used to test the efficiency of the presented approach. The described approach has been implemented and tested on a set of classification problems. The testing has shown that the presented algorithm allows the computational complexity to be decreased and the quality of the obtained classifiers to be increased. Compared to analogues found in scientific literature, it was shown that the presented algorithm is an effective tool for classification problem solving.
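
    The abstract leaves the exact probability update unspecified; the sketch below takes one plausible reading, in which instances misclassified by the current classifier receive higher selection probability for the next training subsample. train_fn is a hypothetical factory returning a fitted classifier with a predict method:

        import numpy as np

        def iterative_instance_selection(X, y, train_fn, rounds=10,
                                         frac=0.3, lr=0.2):
            """Wrapper-style instance selection: maintain per-instance
            sampling probabilities and retrain on weighted subsamples."""
            n = len(y)
            probs = np.full(n, 1.0 / n)
            model = None
            for _ in range(rounds):
                idx = np.random.choice(n, size=int(frac * n),
                                       replace=False, p=probs)
                model = train_fn(X[idx], y[idx])
                wrong = model.predict(X) != y           # classification success
                probs = (1 - lr) * probs + lr * (wrong / max(wrong.sum(), 1))
                probs /= probs.sum()
            return model, probs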

  8. Design of a dynamic model of genes with multiple autonomous regulatory modules by evolutionary computations

    PubMed Central

    Spirov, Alexander V.; Holloway, David M.

    2010-01-01

    A new approach to design a dynamic model of genes with multiple autonomous regulatory modules by evolutionary computations is proposed. The approach is based on Genetic Algorithms (GA), with new crossover operators especially designed for these purposes. The new operators use local homology between parental strings to preserve building blocks found by the algorithm. The approach exploits the subbasin-portal architecture of the fitness functions suitable for this kind of evolutionary modeling. This architecture is significant for Royal Road class fitness functions. Two real-life Systems Biology problems with such fitness functions are implemented here: evolution of the bacterial promoter rrnPl and of the enhancer of the Drosophila even-skipped gene. The effectiveness of the approach compared to standard GA is demonstrated on several benchmark and real-life tasks. PMID:20930945

  9. Computational methods for stellarator configurations

    SciTech Connect

    Betancourt, O.

    1992-01-01

    This project had two main objectives. The first one was to continue to develop computational methods for the study of three dimensional magnetic confinement configurations. The second one was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to studying three dimensional nonlinear effects in Tokamaks configurations, these self consistent computed island equilibria will be used to study transport effects due to magnetic island formation and to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge, D. Anderson and R. Talmage at Wisconsin as well as through participation in the Sherwood and APS meetings.

  10. Recombination in viruses: mechanisms, methods of study, and evolutionary consequences.

    PubMed

    Pérez-Losada, Marcos; Arenas, Miguel; Galán, Juan Carlos; Palero, Ferran; González-Candelas, Fernando

    2015-03-01

    Recombination is a pervasive process generating diversity in most viruses. It joins variants that arise independently within the same molecule, creating new opportunities for viruses to overcome selective pressures and to adapt to new environments and hosts. Consequently, the analysis of viral recombination attracts the interest of clinicians, epidemiologists, molecular biologists and evolutionary biologists. In this review we present an overview of three major areas related to viral recombination: (i) the molecular mechanisms that underlie recombination in model viruses, including DNA-viruses (Herpesvirus) and RNA-viruses (Human Influenza Virus and Human Immunodeficiency Virus), (ii) the analytical procedures to detect recombination in viral sequences and to determine the recombination breakpoints, along with the conceptual and methodological tools currently used and a brief overview of the impact of new sequencing technologies on the detection of recombination, and (iii) the major areas in the evolutionary analysis of viral populations on which recombination has an impact. These include the evaluation of selective pressures acting on viral populations, the application of evolutionary reconstructions in the characterization of centralized genes for vaccine design, and the evaluation of linkage disequilibrium and population structure. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. A computer lab exploring evolutionary aspects of chromatin structure and dynamics for an undergraduate chromatin course*.

    PubMed

    Eirín-López, José M

    2013-01-01

    The study of chromatin constitutes one of the most active research fields in life sciences, being subject to constant revisions that continuously redefine the state of the art in its knowledge. As every other rapidly changing field, chromatin biology requires clear and straightforward educational strategies able to efficiently translate such a vast body of knowledge to the classroom. With this aim, the present work describes a multidisciplinary computer lab designed to introduce undergraduate students to the dynamic nature of chromatin, within the context of the one semester course "Chromatin: Structure, Function and Evolution." This exercise is organized in three parts including (a) molecular evolutionary biology of histone families (using the H1 family as example), (b) histone structure and variation across different animal groups, and (c) effect of histone diversity on nucleosome structure and chromatin dynamics. By using freely available bioinformatic tools that can be run on common computers, the concept of chromatin dynamics is interactively illustrated from a comparative/evolutionary perspective. At the end of this computer lab, students are able to translate the bioinformatic information into a biochemical context in which the relevance of histone primary structure on chromatin dynamics is exposed. During the last 8 years this exercise has proven to be a powerful approach for teaching chromatin structure and dynamics, allowing students a higher degree of independence during the processes of learning and self-assessment.

  12. Geometric methods in quantum computation

    NASA Astrophysics Data System (ADS)

    Zhang, Jun

    Recent advances in the physical sciences and engineering have created great hopes for new computational paradigms and substrates. One such new approach is the quantum computer, which holds the promise of enhanced computational power. Analogous to the way a classical computer is built from electrical circuits containing wires and logic gates, a quantum computer is built from quantum circuits containing quantum wires and elementary quantum gates to transport and manipulate quantum information. Therefore, design of quantum gates and quantum circuits is a prerequisite for any real application of quantum computation. In this dissertation we apply geometric control methods from differential geometry and Lie group representation theory to analyze the properties of quantum gates and to design optimal quantum circuits. Using the Cartan decomposition and the Weyl group, we show that the geometric structure of nonlocal two-qubit gates is a 3-Torus. After further reducing the symmetry, the geometric representation of nonlocal gates is seen to be conveniently visualized as a tetrahedron. Each point in this tetrahedron except on the base corresponds to a different equivalent class of nonlocal gates. This geometric representation is one of the cornerstones for the discussion on quantum computation in this dissertation. We investigate the properties of those two-qubit operations that can generate maximal entanglement. It is an astonishing finding that if we randomly choose a two-qubit operation, the probability that we obtain a perfect entangler is exactly one half. We prove that given a two-body interaction Hamiltonian, it is always possible to explicitly construct a quantum circuit for exact simulation of any arbitrary nonlocal two-qubit gate by turning on the two-body interaction for at most three times, together with at most four local gates. We also provide an analytic approach to construct a universal quantum circuit from any entangling gate supplemented with local gates
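
    The geometric picture described above rests on the standard Cartan (KAK) decomposition of a two-qubit gate into local operations around a nonlocal core, which in the usual notation reads

        U = (A_1 \otimes A_2)\,
            \exp\!\left[\frac{i}{2}\left(c_1\,\sigma_x\otimes\sigma_x
                + c_2\,\sigma_y\otimes\sigma_y
                + c_3\,\sigma_z\otimes\sigma_z\right)\right]
            (B_1 \otimes B_2),

    where A_1, A_2, B_1, B_2 are single-qubit (local) gates and the coefficient vector (c_1, c_2, c_3), reduced by the Weyl-group symmetry, is what ranges over the tetrahedron of nonlocal gate classes mentioned in the abstract.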

  13. Evolutionary computing for the design search and optimization of space vehicle power subsystems

    NASA Technical Reports Server (NTRS)

    Kordon, Mark; Klimeck, Gerhard; Hanks, David; Hua, Hook

    2004-01-01

    Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment. Our preliminary results demonstrate that this approach has the potential to improve the space system trade study process by allowing engineers to statistically weight subsystem goals of mass, cost and performance, then automatically size power elements based on anticipated performance of the subsystem rather than on worst-case estimates.

  14. An exploration of computer-simulated evolution and small group discussion on pre-service science teachers' perceptions of evolutionary concepts

    NASA Astrophysics Data System (ADS)

    MacDonald, Ronald Douglas

    The primary goal of this study was to explore how the use of a computer simulation of basic evolutionary processes, in combination with small-group discussions, affected Intermediate/Senior pre-service science teachers' perspectives of basic evolutionary concepts. Qualitative and quantitative methods were used in a case study approach with 19 pre-service Intermediate/Senior science teachers at an Ontario university. Several sub-goals were explored. The first sub-goal was to assess Intermediate/Senior pre-service science teachers' current conceptions of evolution. The results indicated that approximately two-thirds of the participants had a poor understanding of basic evolutionary concepts, with only 2 of the 19 participants demonstrating a strong comprehension. These results were found to be very similar to comparable samples of subjects from other research. The second sub-goal was to explore the relationships among Intermediate/Senior pre-service science teachers' understanding of contemporary evolutionary concepts, their perspectives of the nature of science, and their intentions to teach evolutionary concepts in the classroom. Participants' knowledge of evolutionary concepts was found to be associated strongly with their intentions to teach evolution by natural selection (r = .42). However, knowledge of evolutionary concepts was not found to be associated with any particular science epistemology perspective. The third sub-goal was to analyze and to interpret the small-group discussions as members interacted with the simulation. The simulation was found to be highly engaging and a very effective method of encouraging participants to speculate, question, discuss and learn about important evolutionary concepts. Analyses of the discussions revealed that the simulation evoked a wide array of correct conceptions as well as misconceptions. The fourth sub-goal was to assess the extent to which creating a lesson plan on the topic of natural selection could affect

  15. Use of Evolutionary Computation for Localizing Surface Emissions from Mars Orbit

    NASA Astrophysics Data System (ADS)

    Allen, Mark; Mischna, M. A.; Lee, S.; Terrile, R.

    2008-09-01

    High-precision targeting of point sources of atmospheric species outgassed from the Martian surface may prove to be a key element in the exploration of locales of potential subsurface geological and/or biological activity. In general, the atmospheric distribution of a signature species will be much more extended than the surface area from which the gas was emitted. In addition, the spatial resolution of orbital instruments is more extended than the point-source zones. With this in mind, we have developed a novel technique for deducing the surface locations of trace gas emission with an uncertainty of a few tens of kilometers using present-day observational capabilities combined with numerical modeling of the global distribution of the tracer species. This approach employs genetic algorithms to indirectly isolate plume source locations from limited data taken by a spacecraft instrument in orbit. We have coupled the Caltech/Cornell/JPL MarsWRF general circulation model (GCM) with an evolutionary computation model (ECM) developed at the JPL Center for Evolutionary Computation and Design (CECAD) to quickly and efficiently determine the plume source characteristics (latitude, longitude and duration) that best reproduce the spacecraft observations.

  16. Hardware platforms for MEMS gyroscope tuning based on evolutionary computation using open-loop and closed-loop frequency response

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Ferguson, Michael I.; Fink, Wolfgang; Oks, Boris; Peay, Chris; Terrile, Richard; Cheng, Yen; Kim, Dennis; MacDonald, Eric; Foor, David

    2005-01-01

    We propose a tuning method for MEMS gyroscopes based on evolutionary computation to efficiently increase the sensitivity of MEMS gyroscopes through tuning. The tuning method was tested for the second generation JPL/Boeing Post-resonator MEMS gyroscope using the measurement of the frequency response of the MEMS device in open-loop operation. We also report on the development of a hardware platform for integrated tuning and closed loop operation of MEMS gyroscopes. The control of this device is implemented through a digital design on a Field Programmable Gate Array (FPGA). The hardware platform easily transitions to an embedded solution that allows for the miniaturization of the system to a single chip.

  17. Hardware platforms for MEMS gyroscope tuning based on evolutionary computation using open-loop and closed-loop frequency response

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Ferguson, Michael I.; Fink, Wolfgang; Oks, Boris; Peay, Chris; Terrile, Richard; Cheng, Yen; Kim, Dennis; MacDonald, Eric; Foor, David

    2005-01-01

    We propose a tuning method for MEMS gyroscopes based on evolutionary computation to efficiently increase the sensitivity of MEMS gyroscopes through tuning. The tuning method was tested for the second generation JPL/Boeing Post-resonator MEMS gyroscope using the measurement of the frequency response of the MEMS device in open-loop operation. We also report on the development of a hardware platform for integrated tuning and closed loop operation of MEMS gyroscopes. The control of this device is implemented through a digital design on a Field Programmable Gate Array (FPGA). The hardware platform easily transitions to an embedded solution that allows for the miniaturization of the system to a single chip.

  18. Computational Methods in Continuum Mechanics

    DTIC Science & Technology

    1993-11-30

    Computational Methods in Continuum Mechanics. By Bolindra N. Borah, N.C. A&T State University. (The remainder of this record is unreadable OCR residue from the scanned report documentation page.)

  19. Deep Space Network Scheduling Using Evolutionary Computational Methods

    NASA Technical Reports Server (NTRS)

    Guillaume, Alexandre; Lee, Seugnwon; Wang, Yeou-Fang; Terrile, Richard J.

    2007-01-01

    The paper presents the specific approach taken to formulate the problem in terms of gene encoding, fitness function, and genetic operations. The genome is encoded such that a subset of the scheduling constraints is automatically satisfied. Several fitness functions are formulated to emphasize different aspects of the scheduling problem. The optimal solutions of the different fitness functions demonstrate the trade-off of the scheduling problem and provide insight into a conflict resolution process.
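
    One common way to make a genome satisfy part of the constraints by construction, consistent in spirit with (though not necessarily identical to) the encoding described, is to evolve request priorities and let a greedy decoder build only feasible schedules. A sketch with illustrative field names:

        import random

        def decode(priority, requests, antennas):
            """Greedy decoder: place requests in priority order; antenna
            overlaps are impossible by construction, so that constraint
            never needs an explicit check. Visibility windows are reduced
            to a single [earliest, latest] interval for brevity."""
            free = {a: 0.0 for a in antennas}      # next free time per antenna
            schedule = {}
            for r in sorted(requests, key=lambda r: priority[r["id"]]):
                a = min(antennas, key=lambda a: free[a])
                start = max(free[a], r["earliest"])
                if start + r["duration"] <= r["latest"]:
                    schedule[r["id"]] = (a, start)
                    free[a] = start + r["duration"]
            return schedule

        def random_genome(requests):
            """Genome = one priority value per track request."""
            return {r["id"]: random.random() for r in requests}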

  20. Explicit Building Block Multiobjective Evolutionary Computation: Methods and Applications

    DTIC Science & Technology

    2005-06-16

    Within the detailed description of the latest algorithm, MOMGA-IIa, innovative enhancements are described in enough detail to allow for reproduction of each...field model fitness function with a neural network. Although it is decided that this not be implemented due to the low quality of solution reproduction...The asexual mutation operator mutates the population member with a standard deviation that is obtained for each component of the variable vector as the

  1. Computational methods for image reconstruction.

    PubMed

    Chung, Julianne; Ruthotto, Lars

    2017-04-01

    Reconstructing images from indirect measurements is a central problem in many applications, including the subject of this special issue, quantitative susceptibility mapping (QSM). The process of image reconstruction typically requires solving an inverse problem that is ill-posed and large-scale and thus challenging to solve. Although the research field of inverse problems is thriving and very active with diverse applications, in this part of the special issue we will focus on recent advances in inverse problems that are specific to deconvolution problems, the class of problems to which QSM belongs. We will describe analytic tools that can be used to investigate underlying ill-posedness and apply them to the QSM reconstruction problem and the related extensively studied image deblurring problem. We will discuss state-of-the-art computational tools and methods for image reconstruction, including regularization approaches and regularization parameter selection methods. We finish by outlining some of the current trends and future challenges. Copyright © 2016 John Wiley & Sons, Ltd.

  2. Towards a Population Dynamics Theory for Evolutionary Computing: Learning from Biological Population Dynamics in Nature

    NASA Astrophysics Data System (ADS)

    Ma, Zhanshan (Sam)

    In evolutionary computing (EC), population size is one of the critical parameters that a researcher has to deal with. Hence, it was no surprise that the pioneers of EC, such as De Jong (1975) and Holland (1975), had already studied population sizing from the very beginning of EC. What is perhaps surprising is that more than three decades later, we still largely depend on experience or an ad hoc trial-and-error approach to set the population size. For example, in a recent monograph, Eiben and Smith (2003) indicated: "In almost all EC applications, the population size is constant and does not change during the evolutionary search." Despite enormous research on this issue in recent years, we still lack a well-accepted theory for population sizing. In this paper, I propose to develop a population dynamics theory for EC with inspiration from the population dynamics theory of biological populations in nature. Essentially, the EC population is considered as a dynamic system over time (generations) and space (search space or fitness landscape), similar to the spatial and temporal dynamics of biological populations in nature. With this conceptual mapping, I propose to 'transplant' the biological population dynamics theory to EC via three steps: (i) experimentally test the feasibility, that is, whether or not emulating natural population dynamics improves EC performance; (ii) comparatively study the underlying mechanisms, that is, why there are improvements, primarily via statistical modeling analysis; (iii) conduct theoretical analysis with theoretical models such as percolation theory and extended evolutionary game theory that are generally applicable to both EC and natural populations. This article is a summary of a series of studies we have performed to achieve the general goal [27][30]-[32]. In the following, I start with an extremely brief introduction on the theory and models of natural population dynamics (Sections 1 & 2). In Sections 4 to 6, I briefly discuss three
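
    A minimal sketch of step (i) is given below: a toy genetic algorithm whose population size follows a logistic, density-dependent growth rule instead of staying constant. The problem (onemax), the carrying capacity, and the growth rate are illustrative assumptions, not parameters from the article.

      # Illustrative only: a GA whose population size follows a logistic,
      # density-dependent rule, emulating natural population dynamics rather
      # than the fixed size used in most EC applications.
      import random

      def fitness(bits):
          return sum(bits)  # onemax toy problem

      def step(pop, carrying_capacity=80, growth=1.4):
          # Logistic update of population size: N' = N + r*N*(1 - N/K)
          n = len(pop)
          n_next = max(4, round(n + growth * n * (1 - n / carrying_capacity)))
          pop = sorted(pop, key=fitness, reverse=True)[:max(2, n // 2)]
          children = []
          while len(pop) + len(children) < n_next:
              p = random.choice(pop)[:]
              i = random.randrange(len(p)); p[i] ^= 1  # bit-flip mutation
              children.append(p)
          return pop + children

      pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(10)]
      for gen in range(40):
          pop = step(pop)
      print("final size:", len(pop), "best fitness:", max(map(fitness, pop)))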

  3. An evolutionary method for synthesizing technological planning and architectural advance

    NASA Astrophysics Data System (ADS)

    Cole, Bjorn Forstrom

    the appropriate technological antecedents are accounted for in developing the projection. The third chapter of the thesis compiles a series of observations and philosophical considerations into a series of research questions. Some research questions are then answered with further thought, observation, and reading, leading to conjectures on the problem. The remainder require some form of experimentation, and so are used to formulate hypotheses. Falsifiability conditions are then generated from those hypotheses, and used to guide the development of experiments to be performed, in this case computational experiments on various conditions of use of a genetic algorithm. The fourth chapter of the thesis walks through the formulation of a method to attack the problem of strategically choosing an architecture. This method is designed to find the optimum architecture under multiple conditions, which is required for the ability to play the "what if" games typically undertaken in strategic situations. The chapter walks through a graph-based representation of architecture, provides the rationale for choosing a given technology forecasting technique, and lays out the implementation of the optimization algorithm, named Sindri, within a commercial analysis code, Pacelab. The fifth chapter of the thesis then tests the Sindri code. The first test applied is a series of standardized combinatorial spaces, which are meant to be analogous to test problems traditionally posed to optimizers (e.g., Rosenbrock's valley function). The results from this test assess the value of various operators used to transform the architecture graph in the course of conducting a genetic search. Finally, this method is employed on a test case involving the transition of a miniature helicopter from glow engine to battery propulsion, and finally to a design where the battery functions as both structure and power source. The final two chapters develop conclusions based on the body of work conducted within this thesis and

  4. An Evolutionary Examination of Telemedicine: A Health and Computer-Mediated Communication Perspective

    PubMed Central

    Breen, Gerald-Mark; Matusitz, Jonathan

    2009-01-01

    Telemedicine, the use of advanced communication technologies in the healthcare context, has a rich history and a clear evolutionary course. In this paper, the authors identify telemedicine as operationally defined, the services and technologies it comprises, the direction telemedicine has taken, along with its increased acceptance in the healthcare communities. The authors also describe some of the key pitfalls warred with by researchers and activists to advance telemedicine to its full potential and lead to an unobstructed team of technicians to identify telemedicine’s diverse utilities. A discussion and future directions section is included to provide fresh ideas to health communication and computer-mediated scholars wishing to delve into this area and make a difference to enhance public understanding of this field. PMID:20300559

  5. An evolutionary examination of telemedicine: a health and computer-mediated communication perspective.

    PubMed

    Breen, Gerald-Mark; Matusitz, Jonathan

    2010-01-01

    Telemedicine, the use of advanced communication technologies in the healthcare context, has a rich history and a clear evolutionary course. In this paper, the authors identify telemedicine as operationally defined, the services and technologies it comprises, the direction telemedicine has taken, along with its increased acceptance in the healthcare communities. The authors also describe some of the key pitfalls warred with by researchers and activists to advance telemedicine to its full potential and lead to an unobstructed team of technicians to identify telemedicine's diverse utilities. A discussion and future directions section is included to provide fresh ideas to health communication and computer-mediated scholars wishing to delve into this area and make a difference to enhance public understanding of this field.

  6. On the challenge of exploring the evolutionary trajectory from phosphotriesterase to arylesterase using computer simulations.

    PubMed

    Bora, Ram Prasad; Mills, Matthew J L; Frushicheva, Maria P; Warshel, Arieh

    2015-02-26

    The ability to design effective enzymes presents a fundamental challenge in biotechnology and also in biochemistry. Unfortunately, most of the progress in this field has been accomplished by bringing the reactants to a reasonable orientation relative to each other, rather than by rational optimization of the polar preorganization of the environment, which is the most important catalytic factor. True computer-based enzyme design would require the ability to evaluate the catalytic power of designed active sites. This work considers the evolution from phosphotriesterase catalysis (with the paraoxon substrate) to arylesterase catalysis (with the 2-naphthylhexanoate (2NH) substrate). Both the original and the evolved enzymes involve two zinc ions and their ligands, making it hard to obtain a reliable quantum mechanical description and then to obtain an effective free energy sampling. Furthermore, the options for the reaction path are quite complicated. To progress in this direction we started with DFT calculations of the energetics of different mechanistic options of cluster models and then used the results to calibrate empirical valence bond (EVB) models and to generate properly sampled free energy surfaces for different mechanisms in the enzyme. Interestingly, it is found that the catalytic effect depends on the Zn-Zn distance, making the mechanistic analysis somewhat complicated. Comparing the activation barriers of paraoxon and the 2NH ester at the beginning and end of the evolutionary path reproduced the observed evolutionary trend. However, although our findings provide an advance in exploring the nature of promiscuous enzymes, they also indicate that modeling the reaction mechanism in the case of enzymes with a binuclear zinc center is far from trivial and presents a challenge for computer-aided enzyme design.

  7. Optimization Methods for Computer Animation.

    ERIC Educational Resources Information Center

    Donkin, John Caldwell

    Emphasizing the importance of economy and efficiency in the production of computer animation, this master's thesis outlines methodologies that can be used to develop animated sequences with the highest quality images for the least expenditure. It is assumed that if computer animators are to be able to fully exploit the available resources, they…

  8. Estimation of the elastic parameters of human liver biomechanical models by means of medical images and evolutionary computation.

    PubMed

    Martínez-Martínez, F; Rupérez, M J; Martín-Guerrero, J D; Monserrat, C; Lago, M A; Pareja, E; Brugger, S; López-Andújar, R

    2013-09-01

    This paper presents a method to computationally estimate the elastic parameters of two biomechanical models proposed for the human liver. The method is aimed at avoiding the invasive measurement of its mechanical response. The chosen models are a second order Mooney-Rivlin model and an Ogden model. A novel error function, the geometric similarity function (GSF), is formulated using similarity coefficients widely applied in the field of medical imaging (Jaccard coefficient and Hausdorff coefficient). This function is used to compare two 3D images. One of them corresponds to a reference deformation carried out over a finite element (FE) mesh of a human liver from a computed tomography image, whilst the other one corresponds to the FE simulation of that deformation in which variations in the values of the model parameters are introduced. Several search strategies, based on GSF as cost function, are developed to accurately find the elastic parameters of the models: two evolutionary algorithms (scatter search and a genetic algorithm) and an iterative local optimization. The results show that GSF is a very appropriate function to estimate the elastic parameters of the biomechanical models since the mean of the relative mean absolute errors committed by the three algorithms is lower than 4%. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
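
    The Jaccard part of GSF is straightforward to sketch on binary voxel masks, as below; the published function also incorporates a Hausdorff term, omitted here for brevity, and the toy sphere masks stand in for segmented liver volumes.

      # Sketch of a geometric similarity cost between two voxelized shapes:
      # only the Jaccard term on binary masks (the published GSF also uses a
      # Hausdorff term, omitted for brevity).
      import numpy as np

      def jaccard(a, b):
          """Jaccard coefficient of two boolean voxel masks of equal shape."""
          inter = np.logical_and(a, b).sum()
          union = np.logical_or(a, b).sum()
          return inter / union if union else 1.0

      def gsf_cost(reference_mask, simulated_mask):
          # Cost to minimize over the elastic parameters: 1 - similarity.
          return 1.0 - jaccard(reference_mask, simulated_mask)

      # Toy 3-D masks: two overlapping spheres on a small grid.
      z, y, x = np.mgrid[0:32, 0:32, 0:32]
      ref = (x - 15)**2 + (y - 15)**2 + (z - 15)**2 < 100
      sim = (x - 17)**2 + (y - 15)**2 + (z - 15)**2 < 100
      print("GSF-style cost:", gsf_cost(ref, sim))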

  9. Computational methods for probability of instability calculations

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Burnside, O. H.

    1990-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of a dynamic system that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the roots of the characteristic equation or Routh-Hurwitz test functions are investigated. Computational methods based on system reliability analysis methods and importance sampling concepts are proposed to perform efficient probabilistic analysis. Numerical examples are provided to demonstrate the methods.
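
    A plain Monte Carlo sketch of the root-based instability criterion is shown below for a single second-order equation m x'' + c x' + k x = 0; the parameter distributions are invented, and the paper's efficient system-reliability and importance-sampling machinery is not reproduced here.

      # Monte Carlo sketch of a probability-of-instability estimate for a
      # single second-order equation m*x'' + c*x' + k*x = 0: unstable when a
      # root of m*s^2 + c*s + k has positive real part (for m, k > 0 this
      # reduces to c < 0). Parameter distributions are illustrative.
      import numpy as np

      rng = np.random.default_rng(42)
      n = 20_000
      m = 1.0
      c = rng.normal(loc=0.05, scale=0.04, size=n)    # damping, may go negative
      k = rng.lognormal(mean=0.0, sigma=0.1, size=n)  # stiffness, positive

      unstable = 0
      for ci, ki in zip(c, k):
          roots = np.roots([m, ci, ki])
          unstable += np.any(roots.real > 0)
      print("P(instability) ~", unstable / n)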

  10. Incremental dental development: methods and applications in hominoid evolutionary studies.

    PubMed

    Smith, Tanya M

    2008-02-01

    This survey of dental microstructure studies reviews recent methods used to quantify developmental variables (daily secretion rate, periodicity of long-period lines, extension rate, formation time) and applications to the study of hominoid evolution. While requisite preparative and analytical methods are time consuming, benefits include more precise identification of tooth crown initiation and completion than conventional radiographic approaches. Furthermore, incremental features facilitate highly accurate estimates of the speed and duration of crown and root formation, stress experienced during development (including birth), and age at death. These approaches have provided insight into fossil hominin and Miocene hominoid life histories, and have also been applied to ontogenetic and taxonomic studies of fossil apes and humans. It is shown here that, due to the rapidly evolving nature of dental microstructure studies, numerous methods have been applied over the past few decades to characterize the rate and duration of dental development. Yet, it is often unclear whether data derived from different methods are comparable or which methods are the most accurate. Areas for future research are identified, including the need for validation and standardization of certain methods, and new methods for integrating nondestructive structural and developmental studies are highlighted.

  11. An evolutionary computational theory of prefrontal executive function in decision-making.

    PubMed

    Koechlin, Etienne

    2014-11-05

    The prefrontal cortex subserves executive control and decision-making, that is, the coordination and selection of thoughts and actions in the service of adaptive behaviour. We present here a computational theory describing the evolution of the prefrontal cortex from rodents to humans as gradually adding new inferential Bayesian capabilities for dealing with a computationally intractable decision problem: exploring and learning new behavioural strategies versus exploiting and adjusting previously learned ones through reinforcement learning (RL). We provide a principled account identifying three inferential steps optimizing this arbitration through the emergence of (i) factual reactive inferences in paralimbic prefrontal regions in rodents; (ii) factual proactive inferences in lateral prefrontal regions in primates, and (iii) counterfactual reactive and proactive inferences in human frontopolar regions. The theory clarifies the integration of model-free and model-based RL through the notion of strategy creation. The theory also shows that counterfactual inferences in humans give rise to the notion of hypothesis testing, a critical reasoning ability for approximating optimal adaptive processes and presumably endowing humans with a qualitative evolutionary advantage in adaptive behaviour.

  12. A sense of life: computational and experimental investigations with models of biochemical and evolutionary processes.

    PubMed

    Mishra, Bud; Daruwala, Raoul-Sam; Zhou, Yi; Ugel, Nadia; Policriti, Alberto; Antoniotti, Marco; Paxia, Salvatore; Rejali, Marc; Rudra, Archisman; Cherepinsky, Vera; Silver, Naomi; Casey, William; Piazza, Carla; Simeoni, Marta; Barbano, Paolo; Spivak, Marina; Feng, Jiawu; Gill, Ofer; Venkatesh, Mysore; Cheng, Fang; Sun, Bing; Ioniata, Iuliana; Anantharaman, Thomas; Hubbard, E Jane Albert; Pnueli, Amir; Harel, David; Chandru, Vijay; Hariharan, Ramesh; Wigler, Michael; Park, Frank; Lin, Shih-Chieh; Lazebnik, Yuri; Winkler, Franz; Cantor, Charles R; Carbone, Alessandra; Gromov, Mikhael

    2003-01-01

    We collaborate in a research program aimed at creating a rigorous framework, experimental infrastructure, and computational environment for understanding, experimenting with, manipulating, and modifying a diverse set of fundamental biological processes at multiple scales and spatio-temporal modes. The novelty of our research is based on an approach that (i) requires coevolution of experimental science and theoretical techniques and (ii) exploits a certain universality in biology guided by a parsimonious model of evolutionary mechanisms operating at the genomic level and manifesting at the proteomic, transcriptomic, phylogenic, and other higher levels. Our current program in "systems biology" endeavors to marry large-scale biological experiments with the tools to ponder and reason about large, complex, and subtle natural systems. To achieve this ambitious goal, ideas and concepts are combined from many different fields: biological experimentation, applied mathematical modeling, computational reasoning schemes, and large-scale numerical and symbolic simulations. From a biological viewpoint, the basic issues are many: (i) understanding common and shared structural motifs among biological processes; (ii) modeling biological noise due to interactions among a small number of key molecules or loss of synchrony; (iii) explaining the robustness of these systems in spite of such noise; and (iv) cataloging multistatic behavior and adaptation exhibited by many biological processes.

  13. How quickly do brains catch up with bodies? A comparative method for detecting evolutionary lag.

    PubMed Central

    Deaner, R O; Nunn, C L

    1999-01-01

    A trait may be at odds with theoretical expectation because it is still in the process of responding to a recent selective force. Such a situation can be termed evolutionary lag. Although many cases of evolutionary lag have been suggested, almost all of the arguments have focused on trait fitness. An alternative approach is to examine the prediction that trait expression is a function of the time over which the trait could evolve. Here we present a phylogenetic comparative method for using this 'time' approach and we apply the method to a long-standing lag hypothesis: evolutionary changes in brain size lag behind evolutionary changes in body size. We tested the prediction in primates that brain mass contrast residuals, calculated from a regression of pairwise brain mass contrasts on positive pairwise body mass contrasts, are correlated with the time since the paired species diverged. Contrary to the brain size lag hypothesis, time since divergence was not significantly correlated with brain mass contrast residuals. We found the same result when we accounted for socioecology, used alternative body mass estimates and used male rather than female values. These tests do not support the brain size lag hypothesis. Therefore, body mass need not be viewed as a suspect variable in comparative neuroanatomical studies and relative brain size should not be used to infer recent evolutionary changes in body size. PMID:10331289
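
    On synthetic data, the test reduces to a few lines, sketched below: regress pairwise brain-mass contrasts on positive body-mass contrasts and correlate the residuals with time since divergence. The data are simulated with no lag built in, so the correlation should be near zero, mirroring the paper's negative result.

      # Numerical sketch of the lag test described above, on synthetic data:
      # regress pairwise brain-mass contrasts on positive body-mass contrasts
      # and correlate the residuals with time since divergence.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      n_pairs = 60
      body_contrast = np.abs(rng.normal(size=n_pairs))           # kept positive
      divergence_my = rng.uniform(1, 30, size=n_pairs)           # Myr, synthetic
      brain_contrast = 0.7 * body_contrast + rng.normal(scale=0.2, size=n_pairs)

      slope, intercept, *_ = stats.linregress(body_contrast, brain_contrast)
      residuals = brain_contrast - (slope * body_contrast + intercept)
      r, p = stats.pearsonr(residuals, divergence_my)
      print(f"lag correlation r={r:.3f}, p={p:.3f}")  # no lag built in, so ~0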

  14. Rodgers' evolutionary concept analysis--a valid method for developing knowledge in nursing science.

    PubMed

    Tofthagen, Randi; Fagerstrøm, Lisbeth M

    2010-12-01

    In nursing science, concept development is a necessary prerequisite for meaningful basic research. Rodgers' evolutionary concept analysis is a method for developing knowledge in nursing science. The purpose of this article is to present Rodgers' evolutionary concept analysis as a valid scientific method. A brief description of the evolutionary process, from data collection to data analysis, with the concepts' context, surrogate and related terms, antecedents, attributes, examples and consequences, is presented. The phases used in evolutionary concept analysis are illustrated with eight actual studies (1999-2009) from nursing research. The strength of the method is that it is systematic, with a focus on clear-cut phases during the analysis process, and that it can contribute to clarifying, describing and explaining concepts central to nursing science by analysing how a chosen concept has been used both within the discipline itself and in other health sciences. While an interdisciplinary perspective which stresses the similarities and dissimilarities of how a concept is used in various disciplines can increase knowledge of a concept, it is important to clarify its specific use within the discipline. Nursing research should focus on the unambiguous use of concepts, for which Rodgers' method is one possible approach. The importance of using quality criteria to determine the inclusion of material should, however, be emphasised in the continued development of the method.

  15. A new method for modeling the behavior of finite population evolutionary algorithms.

    PubMed

    Motoki, Tatsuya

    2010-01-01

    As practitioners we are interested in the likelihood of the population containing a copy of the optimum. The dynamic systems approach, however, does not help us to calculate that quantity. Markov chain analysis can be used in principle to calculate the quantity. However, since the associated transition matrices are enormous even for modest problems, it follows that in practice these calculations are usually computationally infeasible. Therefore, some improvements on this situation are desirable. In this paper, we present a method for modeling the behavior of finite population evolutionary algorithms (EAs), and show that if the population size is greater than 1 and much less than the cardinality of the search space, the resulting exact model requires considerably less memory space for theoretically running the stochastic search process of the original EA than the Nix and Vose-style Markov chain model. We also present some approximate models that use still less memory space than the exact model. Furthermore, based on our models, we examine the selection pressure by fitness-proportionate selection, and observe that on average over all population trajectories, there is no such strong bias toward selecting the higher fitness individuals as the fitness landscape suggests.

  16. A hybrid neural learning algorithm using evolutionary learning and derivative free local search method.

    PubMed

    Ghosh, Ranadhir; Yearwood, John; Ghosh, Moumita; Bagirov, Adil

    2006-06-01

    In this paper we investigate a hybrid model based on the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed-forward artificial neural network, and discuss different variants of such hybrid models. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are best suited for global optimisation problems. Nevertheless, they suffer from longer training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensions are large and time complexity is critical. Hence the idea of a hybrid model can be a suitable option. In this paper we propose different fusion strategies for hybrid models combining the evolutionary strategy with the Discrete Gradient method to obtain an optimal solution much more quickly. Three different fusion strategies are discussed: a linear hybrid model, an iterative hybrid model and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for different fusion hybrid models.
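
    The linear fusion strategy can be sketched as below: a short evolution-strategy run supplies the starting point for a derivative-free local search. Nelder-Mead stands in for the Discrete Gradient method, and the Rastrigin function stands in for an ANN weight-optimization landscape; neither choice is from the paper.

      # Sketch of a linear fusion strategy: a short evolution-strategy run
      # supplies the starting point for a derivative-free local search.
      # Nelder-Mead stands in here for the Discrete Gradient method.
      import numpy as np
      from scipy.optimize import minimize

      def rastrigin(w):  # toy stand-in for an ANN weight-optimization landscape
          return 10 * len(w) + np.sum(w**2 - 10 * np.cos(2 * np.pi * w))

      rng = np.random.default_rng(3)
      pop = rng.uniform(-5, 5, size=(40, 6))
      for _ in range(100):  # (mu, lambda)-style evolution strategy, simplified
          scores = np.apply_along_axis(rastrigin, 1, pop)
          parents = pop[np.argsort(scores)[:10]]
          pop = np.repeat(parents, 4, axis=0) + rng.normal(scale=0.3, size=(40, 6))

      best = pop[np.argmin(np.apply_along_axis(rastrigin, 1, pop))]
      result = minimize(rastrigin, best, method="Nelder-Mead")
      print("EA start:", rastrigin(best), "-> after local search:", result.fun)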

  17. Computational model for analyzing the evolutionary patterns of the neuraminidase gene of influenza A/H1N1.

    PubMed

    Ahn, Insung; Son, Hyeon Seok

    2012-02-01

    In this study, we performed computer simulations to evaluate changes in the selection potentials of codons in influenza A/H1N1 from 1999 to 2009. We artificially generated the sequences by using the transition matrices of positively selected codons over time, and their similarities to the Influenzavirus A genus database were determined by BLAST search. This is the first approach to predicting the evolutionary direction of influenza A virus (H1N1) by simulating codon substitutions over time. We observed that the BLAST results showed high similarities with pandemic influenza A/H1N1 in 2009, suggesting that the classical human-origin influenza A/H1N1 isolated before 2009 might contain some selection potentials of swine-origin viruses. Computer simulations using the time-series codon substitution patterns resulted in dramatic changes in the BLAST results for influenza A/H1N1, suggesting the possibility of developing a method for predicting viral evolution in silico.
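
    A toy version of the simulation step is sketched below: codon states evolve under an assumed one-year transition matrix. The three-codon state space and the probabilities are invented for illustration; the study estimated such matrices from yearly sequence data.

      # Toy Markov-chain sketch of simulating codon substitution over time.
      # The three-codon state space and transition probabilities are invented
      # for illustration.
      import numpy as np

      codons = ["AAA", "AAG", "AGG"]
      P = np.array([[0.90, 0.08, 0.02],   # hypothetical one-year transition
                    [0.05, 0.90, 0.05],   # probabilities between codon states
                    [0.02, 0.08, 0.90]])

      rng = np.random.default_rng(0)
      site = 0                      # start at codon AAA for one positively
      history = [codons[site]]      # selected site
      for year in range(10):
          site = rng.choice(3, p=P[site])
          history.append(codons[site])
      print(" -> ".join(history))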

  18. COMPUTATIONAL METHODS FOR ASYNCHRONOUS BASINS

    PubMed Central

    Dinwoodie, Ian H

    2016-01-01

    For a Boolean network we consider asynchronous updates and define the exclusive asynchronous basin of attraction for any steady state or cyclic attractor. An algorithm based on commutative algebra is presented to compute the exclusive basin. Finally its use for targeting desirable attractors by selective intervention on network nodes is illustrated with two examples, one cell signalling network and one sensor network measuring human mobility. PMID:28154501
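
    The paper's algorithm rests on commutative algebra; the brute-force sketch below instead enumerates the asynchronous state graph of a small invented Boolean network and extracts the exclusive basin of a fixed point (states from which no other fixed point is reachable), ignoring cyclic attractors for brevity.

      # Brute-force sketch (the paper uses commutative algebra instead):
      # enumerate the asynchronous state graph of a toy 3-node Boolean
      # network and find the exclusive basin of a chosen fixed point.
      from itertools import product

      rules = [  # x0' = x1 and x2, x1' = x0, x2' = not x1 (invented network)
          lambda s: s[1] and s[2],
          lambda s: s[0],
          lambda s: not s[1],
      ]

      def successors(s):
          """Asynchronous updates: change one coordinate at a time."""
          out = set()
          for i, f in enumerate(rules):
              v = int(f(s))
              if v != s[i]:
                  out.add(s[:i] + (v,) + s[i + 1:])
          return out

      states = list(product((0, 1), repeat=3))
      fixed = {s for s in states if not successors(s)}

      def reachable(s):
          seen, stack = {s}, [s]
          while stack:
              for t in successors(stack.pop()):
                  if t not in seen:
                      seen.add(t); stack.append(t)
          return seen

      target = sorted(fixed)[0]
      basin = [s for s in states
               if target in reachable(s) and reachable(s) & fixed <= {target}]
      print("fixed points:", fixed)
      print("exclusive basin of", target, ":", basin)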

  19. Computational methods for unsteady transonic flows

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Thomas, J. L.

    1987-01-01

    Computational methods for unsteady transonic flows are surveyed with emphasis on prediction. Computational difficulty is discussed with respect to the type of unsteady flow: attached, mixed (attached/separated), and separated. Significant early computations of shock motions, aileron buzz, and periodic oscillations are discussed. The maturation of computational methods towards the capability of treating complete vehicles with reasonable computational resources is noted, and a survey of recent comparisons with experimental results is compiled. The importance of mixed attached and separated flow modeling for aeroelastic analysis is discussed, and recent calculations of periodic aerodynamic oscillations for an 18 percent thick circular arc airfoil are given.

  20. Exploiting genomic knowledge in optimising molecular breeding programmes: algorithms from evolutionary computing.

    PubMed

    O'Hagan, Steve; Knowles, Joshua; Kell, Douglas B

    2012-01-01

    Comparatively few studies have addressed directly the question of quantifying the benefits to be had from using molecular genetic markers in experimental breeding programmes (e.g. for improved crops and livestock), nor the question of which organisms should be mated with each other to best effect. We argue that this requires in silico modelling, an approach for which there is a large literature in the field of evolutionary computation (EC), but which has not really been applied in this way to experimental breeding programmes. EC seeks to optimise measurable outcomes (phenotypic fitnesses) by optimising in silico the mutation, recombination and selection regimes that are used. We review some of the approaches from EC, and compare experimentally, using a biologically relevant in silico landscape, some algorithms that have knowledge of where they are in the (genotypic) search space (G-algorithms) with some (albeit well-tuned ones) that do not (F-algorithms). For the present kinds of landscapes, F- and G-algorithms were broadly comparable in quality and effectiveness, although we recognise that the G-algorithms were not equipped with any 'prior knowledge' of epistatic pathway interactions. This use of algorithms based on machine learning has important implications for the optimisation of experimental breeding programmes in the post-genomic era when we shall potentially have access to the full genome sequence of every organism in a breeding population. The non-proprietary code that we have used is made freely available (via Supplementary information).

  1. Exploiting Genomic Knowledge in Optimising Molecular Breeding Programmes: Algorithms from Evolutionary Computing

    PubMed Central

    O'Hagan, Steve; Knowles, Joshua; Kell, Douglas B.

    2012-01-01

    Comparatively few studies have addressed directly the question of quantifying the benefits to be had from using molecular genetic markers in experimental breeding programmes (e.g. for improved crops and livestock), nor the question of which organisms should be mated with each other to best effect. We argue that this requires in silico modelling, an approach for which there is a large literature in the field of evolutionary computation (EC), but which has not really been applied in this way to experimental breeding programmes. EC seeks to optimise measurable outcomes (phenotypic fitnesses) by optimising in silico the mutation, recombination and selection regimes that are used. We review some of the approaches from EC, and compare experimentally, using a biologically relevant in silico landscape, some algorithms that have knowledge of where they are in the (genotypic) search space (G-algorithms) with some (albeit well-tuned ones) that do not (F-algorithms). For the present kinds of landscapes, F- and G-algorithms were broadly comparable in quality and effectiveness, although we recognise that the G-algorithms were not equipped with any ‘prior knowledge’ of epistatic pathway interactions. This use of algorithms based on machine learning has important implications for the optimisation of experimental breeding programmes in the post-genomic era when we shall potentially have access to the full genome sequence of every organism in a breeding population. The non-proprietary code that we have used is made freely available (via Supplementary information). PMID:23185279

  2. Multiprocessor computer overset grid method and apparatus

    DOEpatents

    Barnette, Daniel W.; Ober, Curtis C.

    2003-01-01

    A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

  3. Computational Methods for Biomolecular Electrostatics

    PubMed Central

    Dong, Feng; Olsen, Brett; Baker, Nathan A.

    2008-01-01

    An understanding of intermolecular interactions is essential for insight into how cells develop, operate, communicate and control their activities. Such interactions include several components: contributions from linear, angular, and torsional forces in covalent bonds, van der Waals forces, as well as electrostatics. Among the various components of molecular interactions, electrostatics are of special importance because of their long range and their influence on polar or charged molecules, including water, aqueous ions, and amino or nucleic acids, which are some of the primary components of living systems. Electrostatics, therefore, play important roles in determining the structure, motion and function of a wide range of biological molecules. This chapter presents a brief overview of electrostatic interactions in cellular systems with a particular focus on how computational tools can be used to investigate these types of interactions. PMID:17964951

  4. Computational methods in radionuclide dosimetry

    NASA Astrophysics Data System (ADS)

    Bardiès, M.; Myers, M. J.

    1996-10-01

    The various approaches in radionuclide dosimetry depend on the size and spatial relation of the sources and targets considered in conjunction with the emission range of the radionuclide used. We present some of the frequently reported computational techniques on the basis of the source/target size. For whole organs, or for sources or targets bigger than some centimetres, the acknowledged standard was introduced 30 years ago by the MIRD committee and is still being updated. That approach, based on the absorbed fraction concept, is mainly used for radioprotection purposes but has been updated to take into account the dosimetric challenge raised by therapeutic use of vectored radiopharmaceuticals. At this level, the most important computational effort is in the field of photon dosimetry. On the millimetre scale, photons can often be disregarded, and β and/or electron dosimetry is generally reported. Heterogeneities at this level are mainly above the cell level, involving groups of cells or a part of an organ. The dose distribution pattern is often calculated by generalizing a point source dose distribution, but direct calculation by Monte Carlo techniques is also frequently reported because it allows media of inhomogeneous density to be considered. At the cell level, α-particles and electrons (low-range or Auger) are the predominant emissions examined. Heterogeneities in the dose distribution are taken into account, mainly to determine the mean dose at the nucleus. At the DNA level, Auger electrons or α-particles are considered from a microdosimetric point of view. These studies are often connected with radiobiological experiments on radionuclide toxicity.

  5. Computational and theoretical methods for protein folding.

    PubMed

    Compiani, Mario; Capriotti, Emidio

    2013-12-03

    A computational approach is essential whenever the complexity of the process under study is such that direct theoretical or experimental approaches are not viable. This is the case for protein folding, for which a significant amount of data are being collected. This paper reports on the essential role of in silico methods and the unprecedented interplay of computational and theoretical approaches, which is a defining point of the interdisciplinary investigations of the protein folding process. Besides giving an overview of the available computational methods and tools, we argue that computation plays not merely an ancillary role but has a more constructive function in that computational work may precede theory and experiments. More precisely, computation can provide the primary conceptual clues to inspire subsequent theoretical and experimental work even in a case where no preexisting evidence or theoretical frameworks are available. This is cogently manifested in the application of machine learning methods to come to grips with the folding dynamics. These close relationships suggested complementing the review of computational methods within the appropriate theoretical context to provide a self-contained outlook of the basic concepts that have converged into a unified description of folding and have grown in a synergic relationship with their computational counterpart. Finally, the advantages and limitations of current computational methodologies are discussed to show how the smart analysis of large amounts of data and the development of more effective algorithms can improve our understanding of protein folding.

  6. Simulation methods for advanced scientific computing

    SciTech Connect

    Booth, T.E.; Carlson, J.A.; Forster, R.A.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems, they produced a framework generalizing the famous Swendsen-Wang cluster algorithms for Ising and Potts models. For spin-glass-like problems, they demonstrated the effectiveness of an extension of the multicanonical method for the two-dimensional, random bond Ising model. For quantum mechanical systems, they generated a new method to compute the ground-state energy of systems of interacting electrons. They also improved methods to compute excited states when the diffusion quantum Monte Carlo method is used and to compute longer time dynamics when the stationary phase quantum Monte Carlo method is used.

  7. Computational Methods for Ideal Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Kercher, Andrew D.

    Numerical schemes for ideal magnetohydrodynamics (MHD) are widely used for modeling space weather and astrophysical flows. They are designed to resolve the different waves that propagate through a magnetohydrodynamic fluid, namely, the fast, Alfven, slow, and entropy waves. Numerical schemes for ideal magnetohydrodynamics that are based on the standard finite volume (FV) discretization exhibit pseudo-convergence, in which non-regular waves disappear only after heavy grid refinement. A method is described for obtaining solutions for coplanar and near coplanar cases that consist of only regular waves, independent of grid refinement. The method, referred to as Compound Wave Modification (CWM), involves removing the flux associated with non-regular structures and can be used for simulations in two and three dimensions because it does not require explicitly tracking an Alfven wave. For a near coplanar case, and for grids with 2^13 points or less, we find root-mean-square errors (RMSEs) that are as much as 6 times smaller. For the coplanar case, in which non-regular structures will exist at all levels of grid refinement for standard FV schemes, the RMSE is as much as 25 times smaller. A multidimensional ideal MHD code has been implemented for simulations on graphics processing units (GPUs). Performance measurements were conducted for both the NVIDIA GeForce GTX Titan and the Intel Xeon E5645 processor. The GPU is shown to perform one to two orders of magnitude faster than the CPU when using a single core, and two to three times faster than the CPU when run in parallel with OpenMP. Performance comparisons are made for two methods of storing data on the GPU. The first approach stores data as an Array of Structures (AoS), e.g., a point coordinate array of size 3 x n is iterated over. The second approach stores data as a Structure of Arrays (SoA), e.g., three separate arrays of size n are iterated over simultaneously. For an AoS, coalescing does not occur, reducing memory efficiency.
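
    The AoS/SoA contrast can be sketched with NumPy stand-ins, as below: a structured record array interleaves fields in memory (24-byte stride per field access), while separate per-field arrays are unit-stride, which is the layout that permits coalesced GPU reads.

      # Sketch of the two layouts compared in the text, using NumPy stand-ins:
      # an Array of Structures (one record per point) versus a Structure of
      # Arrays (one contiguous array per field). GPU coalescing favors SoA
      # because threads read consecutive addresses of the same field.
      import numpy as np

      n = 1_000_000
      aos = np.zeros(n, dtype=[("x", "f8"), ("y", "f8"), ("z", "f8")])  # AoS
      soa = {f: np.zeros(n) for f in ("x", "y", "z")}                    # SoA

      aos["x"] += 1.0          # strided access: fields interleaved in memory
      soa["x"] += 1.0          # unit-stride access: one contiguous array
      print(aos["x"].strides, soa["x"].strides)  # (24,) vs (8,) byte strides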

  8. Yeast ancestral genome reconstructions: the possibilities of computational methods II.

    PubMed

    Chauve, Cedric; Gavranovic, Haris; Ouangraoua, Aida; Tannier, Eric

    2010-09-01

    Since the availability of assembled eukaryotic genomes, the first one being a budding yeast, many computational methods for the reconstruction of ancestral karyotypes and gene orders have been developed. The difficulty has always been to assess their reliability, since we often miss a good knowledge of the true ancestral genomes to compare their results to, as well as a good knowledge of the evolutionary mechanisms to test them on realistic simulated data. In this study, we propose some measures of reliability of several kinds of methods, and apply them to infer and analyse the architectures of two ancestral yeast genomes, based on the sequence of seven assembled extant ones. The pre-duplication common ancestor of S. cerevisiae and C. glabrata has been inferred manually by Gordon et al. (Plos Genet. 2009). We show why, in this case, a good convergence of the methods is explained by some properties of the data, and why results are reliable. In another study, Jean et al. (J. Comput Biol. 2009) proposed an ancestral architecture of the last common ancestor of S. kluyveri, K. thermotolerans, K. lactis, A. gossypii, and Z. rouxii inferred by a computational method. In this case, we show that the dataset does not seem to contain enough information to infer a reliable architecture, and we construct a higher resolution dataset which gives a good reliability on a new ancestral configuration.

  9. Graphical method for analyzing digital computer efficiency

    NASA Technical Reports Server (NTRS)

    Chan, S. P.; Munoz, R. M.

    1971-01-01

    The analysis method utilizes a graph-theoretic approach for evaluating computation cost and makes a logical distinction between the linear graph of a computation and the linear graph of a program. It applies equally well to other processes which depend on quantitative edge nomenclature and precedence relationships between edges.

  10. A Computer-Assisted Method of Counseling.

    ERIC Educational Resources Information Center

    Parente, Frederick J.; And Others

    1981-01-01

    A computer-assisted method of counseling was applied to cases of stuttering and hypertension. Although both symptom complexes had previously resisted therapy, results indicated that computer-assisted counseling eliminated the stuttering and reduced diastolic blood pressure to normal levels. (Author)

  11. A Computer-Assisted Method of Counseling.

    ERIC Educational Resources Information Center

    Parente, Frederick J.; And Others

    1981-01-01

    A computer-assisted method of counseling was applied to cases of stuttering and hypertension. Although both symptom complexes had previously resisted therapy, results indicated that computer-assisted counseling eliminated the stuttering and reduced diastolic blood pressure to normal levels. (Author)

  12. Unscented Sampling Techniques For Evolutionary Computation With Applications To Astrodynamic Optimization

    DTIC Science & Technology

    2016-09-01

    ...functions are developed and applied to both genetic algorithms and evolution strategies to achieve these goals. The results of this research offer a promising new set of modified... historically difficult to solve using evolutionary algorithms. Subject terms: evolutionary algorithm, evolution strategy, genetic algorithm, parallel...

  13. Spectral Methods for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Streett, C. L.; Hussaini, M. Y.

    1994-01-01

    As a tool for large-scale computations in fluid dynamics, spectral methods were prophesized in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably. The motivation for the use of spectral

  14. Spectral Methods for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Streett, C. L.; Hussaini, M. Y.

    1994-01-01

    As a tool for large-scale computations in fluid dynamics, spectral methods were prophesized in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably. The motivation for the use of spectral

  15. An improved approximate-Bayesian model-choice method for estimating shared evolutionary history

    PubMed Central

    2014-01-01

    Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergence times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate that the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet-process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937
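
    A sketch of the Dirichlet-process prior is given below: divergence-event assignments for taxon pairs are sampled by the Chinese restaurant process, so the number of divergence events is not fixed in advance. The concentration value is an illustrative choice, and this is not the msBayes extension itself.

      # Sketch of a Dirichlet-process prior over divergence models: sample
      # partitions of taxon pairs into shared divergence events via the
      # Chinese restaurant process. The concentration alpha is illustrative.
      import random

      def crp_partition(n_taxa, alpha=1.5, rng=random.Random(0)):
          """Assign each taxon pair to a divergence-event 'table'."""
          tables = []  # tables[k] = number of pairs assigned to event k
          labels = []
          for i in range(n_taxa):
              weights = tables + [alpha]          # existing events vs new event
              k = rng.choices(range(len(weights)), weights=weights)[0]
              if k == len(tables):
                  tables.append(0)
              tables[k] += 1
              labels.append(k)
          return labels

      for _ in range(3):
          print(crp_partition(8))  # e.g. [0, 0, 1, 0, 2, 1, ...]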

  16. Evolutionary, computational, and biochemical studies of the salicylaldehyde dehydrogenases in the naphthalene degradation pathway

    PubMed Central

    Jia, Baolei; Jia, Xiaomeng; Hyun Kim, Kyung; Ji Pu, Zhong; Kang, Myung-Suk; Ok Jeon, Che

    2017-01-01

    Salicylaldehyde (SAL) dehydrogenase (SALD) is responsible for the oxidation of SAL to salicylate using nicotinamide adenine dinucleotide (NAD+) as a cofactor in the naphthalene degradation pathway. We report the use of a protein sequence similarity network to make functional inferences about SALDs. Network and phylogenetic analyses indicated that SALDs and the homologues are present in bacteria and fungi. The key residues in SALDs were analyzed by evolutionary methods and a molecular simulation analysis. The results showed that the catalytic residue is most highly conserved, followed by the residues binding NAD+ and then the residues binding SAL. A molecular simulation analysis demonstrated the binding energies of the amino acids to NAD+ and/or SAL and showed that a conformational change is induced by binding. A SALD from Alteromonas naphthalenivorans (SALDan) that undergoes trimeric oligomerization was characterized enzymatically. The results showed that SALDan could catalyze the oxidation of a variety of aromatic aldehydes. Site-directed mutagenesis of selected residues binding NAD+ and/or SAL affected the enzyme’s catalytic efficiency, but did not eliminate catalysis. Finally, the relationships among the evolution, catalytic mechanism, and functions of SALD are discussed. Taken together, this study provides an expanded understanding of the evolution, functions, and catalytic mechanism of SALD. PMID:28233868

  17. Cloud glaciation temperature estimation from passive remote sensing data with evolutionary computing

    NASA Astrophysics Data System (ADS)

    Carro-Calvo, L.; Hoose, C.; Stengel, M.; Salcedo-Sanz, S.

    2016-11-01

    The phase partitioning between supercooled liquid water and ice in clouds in the temperature range between 0 and -37°C influences their optical properties and the efficiency of precipitation formation. Passive remote sensing observations provide long-term records of the cloud top phase at a high spatial resolution. Based on the assumption of a cumulative Gaussian distribution of the ice cloud fraction as a function of temperature, we quantify the cloud glaciation temperature (CGT) as the 50th percentile of the fitted distribution function and its variance for different cloud top pressure intervals, obtained by applying an evolutionary algorithm (EA). EAs are metaheuristic approaches for optimization, used in difficult problems where standard approaches are either not applicable or show poor performance. In this case, the proposed EA is applied to 4 years of Pathfinder Atmospheres-Extended (PATMOS-x) data, aggregated into boxes of 1° × 1° and vertical layers of 5.5 hPa. The resulting vertical profile of CGT shows a characteristic sickle shape, indicating low CGTs close to homogeneous freezing in the upper troposphere and significantly higher values in the midtroposphere. In winter, a pronounced land-sea contrast is found at midlatitudes, with lower CGTs over land. Among this and previous studies, there is disagreement on the sign of the land-sea difference in CGT, suggesting that it is strongly sensitive to the detected and analyzed cloud types, the time of the day, and the phase retrieval method.
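
    The fitting step can be sketched in a few lines, as below: a cumulative Gaussian is fit to synthetic ice-fraction data and the CGT is read off as the fitted mean (the 50th percentile). A standard least-squares fitter is used here in place of the paper's evolutionary algorithm, and all data are synthetic.

      # Sketch of the fitting step: a cumulative Gaussian for ice cloud
      # fraction versus temperature, with the cloud glaciation temperature
      # taken as the fitted 50th percentile (the mean mu).
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import norm

      def ice_fraction(temp_c, mu, sigma):
          # Ice fraction rises as temperature drops, hence the sign flip.
          return norm.cdf(-(temp_c - mu) / sigma)

      temps = np.linspace(-37, 0, 40)
      true_mu, true_sigma = -20.0, 6.0
      obs = ice_fraction(temps, true_mu, true_sigma)
      obs += np.random.default_rng(1).normal(scale=0.03, size=temps.size)

      (mu_fit, sigma_fit), _ = curve_fit(ice_fraction, temps, obs,
                                         p0=(-15.0, 5.0))
      print(f"CGT ~ {mu_fit:.1f} C, spread sigma ~ {sigma_fit:.1f} K")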

  18. Computational Chemistry Using Modern Electronic Structure Methods

    ERIC Educational Resources Information Center

    Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert

    2007-01-01

    Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now easily be applied even to large molecules.

  19. Computational Chemistry Using Modern Electronic Structure Methods

    ERIC Educational Resources Information Center

    Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert

    2007-01-01

    Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now easily be applied even to large molecules.

  20. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.

    PubMed

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-11-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of such quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMM can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population.
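
    For a Poisson GLMM with a log link the required integral has a closed form, which makes a compact check, sketched below. QGglmm itself is an R package; this is only a numerical illustration of the latent-to-data-scale integration, with invented parameter values.

      # Sketch of the integration the text describes, for a Poisson GLMM with
      # a log link: the data-scale mean is the expectation of exp(l) over the
      # Gaussian distribution of latent values l, checked here against the
      # closed form exp(mu + sigma^2 / 2).
      import numpy as np
      from scipy import integrate, stats

      mu, sigma2 = 1.0, 0.25   # latent-scale mean and variance, illustrative

      def integrand(l):
          return np.exp(l) * stats.norm.pdf(l, loc=mu, scale=np.sqrt(sigma2))

      numeric, _ = integrate.quad(integrand, mu - 10, mu + 10)
      closed_form = np.exp(mu + sigma2 / 2)
      print(numeric, closed_form)  # both ~ 3.08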

  1. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models

    PubMed Central

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-01-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of such quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMM can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. PMID:27591750

  2. Computational methods for global/local analysis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.

  3. Gene selection for microarray cancer classification using a new evolutionary method employing artificial intelligence concepts.

    PubMed

    Dashtban, M; Balafar, Mohammadali

    2017-03-01

    Gene selection is a demanding task for microarray data analysis. The diverse complexity of different cancers makes this issue still challenging. In this study, a novel evolutionary method based on genetic algorithms and artificial intelligence is proposed to identify predictive genes for cancer classification. A filter method was first applied to reduce the dimensionality of the feature space, followed by employing an integer-coded genetic algorithm with dynamic-length genotype, intelligent parameter settings, and modified operators. The algorithmic behaviors, including convergence trends, mutation and crossover rate changes, and running time, were studied, conceptually discussed, and shown to be coherent with literature findings. Two well-known filter methods, Laplacian and Fisher score, were examined considering similarities, the quality of selected genes, and their influences on the evolutionary approach. Several statistical tests concerning the choice of classifier, dataset, and filter method were performed, and they revealed some significant differences between the performance of different classifiers and filter methods over datasets. The proposed method was benchmarked on five popular high-dimensional cancer datasets; for each, the top explored genes were reported. Comparing the experimental results with several state-of-the-art methods revealed that the proposed method outperforms previous methods on the DLBCL dataset. Copyright © 2017 Elsevier Inc. All rights reserved.
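
    The two-stage idea can be sketched as below: a Fisher-score filter ranks genes on synthetic data, and an integer-coded genome then holds indices of a candidate subset drawn from the filtered pool. The genome length is fixed here, whereas the paper's genotype is dynamic-length; data, scoring, and the single mutation operator are all simplified.

      # Sketch of the two-stage idea: a Fisher-score filter ranks genes, then
      # an integer-coded genome holds indices of a candidate gene subset.
      import numpy as np

      rng = np.random.default_rng(5)
      X = rng.normal(size=(60, 500))           # 60 samples x 500 genes, synthetic
      y = rng.integers(0, 2, size=60)          # binary class labels

      def fisher_score(X, y):
          m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
          v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
          return (m0 - m1) ** 2 / (v0 + v1 + 1e-12)

      top = np.argsort(fisher_score(X, y))[::-1][:50]   # filter stage: top 50

      def mutate(genome):
          """Integer-coded mutation: swap one selected gene for a filtered one."""
          child = genome.copy()
          child[rng.integers(len(child))] = rng.choice(top)
          return child

      genome = rng.choice(top, size=8, replace=False)   # fixed length here
      print("initial subset:", genome)
      print("mutated subset:", mutate(genome))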

  4. Hybridization of evolutionary algorithms and local search by means of a clustering method.

    PubMed

    Martínez-Estudillo, Alfonso C; Hervás-Martínez, César; Martínez-Estudillo, Francisco J; García-Pedrajas, Nicolás

    2006-06-01

    This paper presents a hybrid evolutionary algorithm (EA) to solve nonlinear-regression problems. Although EAs have proven their ability to explore large search spaces, they are comparatively inefficient in fine-tuning the solution. This drawback is usually avoided by means of local optimization algorithms that are applied to the individuals of the population. The algorithms that use local optimization procedures are usually called hybrid algorithms. On the other hand, it is well known that the clustering process enables the creation of groups (clusters) of mutually close points that hopefully correspond to relevant regions of attraction. Local-search procedures can then be started once in every such region. This paper proposes the combination of an EA, a clustering process, and a local-search procedure in the evolutionary design of product-unit neural networks. In the methodology presented, only a few individuals are subject to local optimization. Moreover, the local optimization algorithm is only applied at specific stages of the evolutionary process. Our results show a favorable performance when the regression method proposed is compared to other standard methods.
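
    A minimal sketch of the cluster-then-local-search idea (Python, with a stand-in objective instead of the paper's product-unit network regression error; the cluster count and population size are arbitrary):

        import numpy as np
        from scipy.cluster.vq import kmeans2
        from scipy.optimize import minimize

        def objective(x):   # stand-in for the paper's regression error (Rastrigin)
            return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        np.random.seed(0)                    # kmeans2 uses the global NumPy random state
        pop = np.random.uniform(-5, 5, size=(40, 4))   # current evolved population (toy)

        # Group mutually close individuals; each cluster stands for a region of attraction
        centroids, labels = kmeans2(pop, 5, minit="points")

        # Local search is applied only to the best individual of each cluster
        for c in range(5):
            members = pop[labels == c]
            if members.size == 0:
                continue
            best = members[np.argmin([objective(m) for m in members])]
            result = minimize(objective, best, method="Nelder-Mead")
            print(c, round(result.fun, 4))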

  5. Synthesis of porous-acoustic absorbing systems by an evolutionary optimization method

    NASA Astrophysics Data System (ADS)

    Silva, F. I.; Pavanello, R.

    2010-10-01

    Topology optimization is frequently used to design structures and acoustic systems in a large range of engineering applications. In this work, a method is proposed for maximizing the absorbing performance of acoustic panels by using a coupled finite element model and evolutionary strategies. The goal is to find the best distribution of porous material for sound absorbing panels. The absorbing performance of the porous material samples in a Kundt tube is simulated using a coupled porous-acoustic finite element model. The equivalent fluid model is used to represent the foam material. The porous material model is coupled to a wave guide using a modal superposition technique. A sensitivity number indicating the optimum locations for porous material to be removed is derived and used in a numerical hard kill scheme. The sensitivity number is used to form an evolutionary porous material optimization algorithm which is verified through examples.

  6. Merging molecular mechanism and evolution: theory and computation at the interface of biophysics and evolutionary population genetics

    PubMed Central

    Serohijos, Adrian W.R.; Shakhnovich, Eugene I.

    2014-01-01

    The variation among sequences and structures in nature is both determined by physical laws and by evolutionary history. However, these two factors are traditionally investigated by disciplines with different emphasis and philosophy—molecular biophysics on one hand and evolutionary population genetics in another. Here, we review recent theoretical and computational approaches that address the critical need to integrate these two disciplines. We first articulate the elements of these integrated approaches. Then, we survey their contribution to our mechanistic understanding of molecular evolution, the polymorphisms in coding region, the distribution of fitness effects (DFE) of mutations, the observed folding stability of proteins in nature, and the distribution of protein folds in genomes. PMID:24952216

  7. Assessing Computational Methods of Cis-Regulatory Module Prediction

    PubMed Central

    Su, Jing; Teichmann, Sarah A.; Down, Thomas A.

    2010-01-01

    Computational methods attempting to identify instances of cis-regulatory modules (CRMs) in the genome face a challenging problem of searching for potentially interacting transcription factor binding sites while knowledge of the specific interactions involved remains limited. Without a comprehensive comparison of their performance, the reliability and accuracy of these tools remains unclear. Faced with a large number of different tools that address this problem, we summarized and categorized them based on search strategy and input data requirements. Twelve representative methods were chosen and applied to predict CRMs from the Drosophila CRM database REDfly, and across the human ENCODE regions. Our results show that the optimal choice of method varies depending on species and composition of the sequences in question. When discriminating CRMs from non-coding regions, those methods considering evolutionary conservation have a stronger predictive power than methods designed to be run on a single genome. Different CRM representations and search strategies rely on different CRM properties, and different methods can complement one another. For example, some favour homotypical clusters of binding sites, while others perform best on short CRMs. Furthermore, most methods appear to be sensitive to the composition and structure of the genome to which they are applied. We analyze the principal features that distinguish the methods that performed well, identify weaknesses leading to poor performance, and provide a guide for users. We also propose key considerations for the development and evaluation of future CRM-prediction methods. PMID:21152003

  8. Methods of computing Campbell-Hausdorff formula

    NASA Astrophysics Data System (ADS)

    Sogo, Kiyoshi

    2016-11-01

    A new method of computing the Campbell-Hausdorff formula is proposed using quantum moment-cumulant relations, which are given by Weyl-ordering symmetrization of classical moment-cumulant relations. The method enables one to readily use symbolic computation software to compute arbitrary terms in the formula, and explicit expressions up to 6th order are obtained by way of illustration. Further, the symmetry C_odd(A, B) = C_odd(B, A), C_even(A, B) = -C_even(B, A) is found and proved. The operator differential method by Knapp is also examined for comparison.

  9. Updated Panel-Method Computer Program

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1995-01-01

    Panel code PMARC_12 (Panel Method Ames Research Center, version 12) computes potential-flow fields around complex three-dimensional bodies such as complete aircraft models. It contains several advanced features, including internal mathematical modeling of the flow, a time-stepping wake model for simulating either steady or unsteady motions, Trefftz-plane computation of induced drag, computation of off-body and on-body streamlines, and computation of boundary-layer parameters by use of a two-dimensional integral boundary-layer method along surface streamlines. Investigators interested in visual representations of phenomena may want to consider obtaining the program GVS (ARC-13361), General Visualization System, a Silicon Graphics IRIS program created to support the scientific-visualization needs of PMARC_12; GVS is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input.

  10. Computing discharge using the index velocity method

    USGS Publications Warehouse

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression analysis.
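
    The core arithmetic of the method is the product of the two ratings. A toy sketch in Python (the rating coefficients and survey points are hypothetical; real ratings come from the regression and survey procedures described above):

        import numpy as np

        # Hypothetical index-velocity rating: mean channel velocity from the ADVM
        # index velocity (coefficients would come from regression on measurements)
        def mean_velocity(v_index, b0=0.05, b1=0.92):
            return b0 + b1 * v_index

        # Hypothetical stage-area rating from the surveyed standard cross section
        stage_pts = np.array([0.0, 1.0, 2.0, 3.0])     # stage, m
        area_pts = np.array([5.0, 18.0, 34.0, 52.0])   # cross-sectional area, m^2
        def cross_section_area(stage):
            return np.interp(stage, stage_pts, area_pts)

        # Discharge = mean channel velocity x cross-sectional area
        v_index, stage = 0.8, 1.6        # instantaneous ADVM and stage readings
        Q = mean_velocity(v_index) * cross_section_area(stage)
        print(f"Q = {Q:.1f} m^3/s")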

  11. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.

  12. Evolutionary Local Search of Fuzzy Rules through a novel Neuro-Fuzzy encoding method.

    PubMed

    Carrascal, A; Manrique, D; Ríos, J; Rossi, C

    2003-01-01

    This paper proposes a new approach for constructing fuzzy knowledge bases using evolutionary methods. We have designed a genetic algorithm that automatically builds neuro-fuzzy architectures based on a new indirect encoding method. The neuro-fuzzy architecture represents the fuzzy knowledge base that solves a given problem; the search for this architecture takes advantage of a local search procedure that improves the chromosomes at each generation. Experiments conducted both on artificially generated and real world problems confirm the effectiveness of the proposed approach.

  13. EvoluZion: A Computer Simulator for Teaching Genetic and Evolutionary Concepts

    ERIC Educational Resources Information Center

    Zurita, Adolfo R.

    2017-01-01

    EvoluZion is a forward-in-time genetic simulator developed in Java and designed to perform real-time simulations of the evolutionary history of virtual organisms. These model organisms harbour a set of 13 genes that encode an equal number of phenotypic features. These genes change randomly during replication, and mutant genes can have null,…

  14. Efficient discovery of anti-inflammatory small-molecule combinations using evolutionary computing.

    PubMed

    Small, Ben G; McColl, Barry W; Allmendinger, Richard; Pahle, Jürgen; López-Castejón, Gloria; Rothwell, Nancy J; Knowles, Joshua; Mendes, Pedro; Brough, David; Kell, Douglas B

    2011-10-23

    The control of biochemical fluxes is distributed, and to perturb complex intracellular networks effectively it is often necessary to modulate several steps simultaneously. However, the number of possible permutations leads to a combinatorial explosion in the number of experiments that would have to be performed in a complete analysis. We used a multiobjective evolutionary algorithm to optimize reagent combinations from a dynamic chemical library of 33 compounds with established or predicted targets in the regulatory network controlling IL-1β expression. The evolutionary algorithm converged on excellent solutions within 11 generations, during which we studied just 550 combinations out of the potential search space of ~9 billion. The top five reagents with the greatest contribution to combinatorial effects throughout the evolutionary algorithm were then optimized pairwise. A p38 MAPK inhibitor together with either an inhibitor of IκB kinase or a chelator of poorly liganded iron yielded synergistic inhibition of macrophage IL-1β expression. Evolutionary searches provide a powerful and general approach to the discovery of new combinations of pharmacological agents with therapeutic indices potentially greater than those of single drugs.

  15. Toward a method for tracking virus evolutionary trajectory applied to the pandemic H1N1 2009 influenza virus.

    PubMed

    Squires, R Burke; Pickett, Brett E; Das, Sajal; Scheuermann, Richard H

    2014-12-01

    In 2009 a novel pandemic H1N1 influenza virus (H1N1pdm09) emerged as the first official influenza pandemic of the 21st century. Early genomic sequence analysis pointed to the swine origin of the virus. Here we report a novel computational approach to determine the evolutionary trajectory of viral sequences that uses data-driven estimations of nucleotide substitution rates to track the gradual accumulation of observed sequence alterations over time. Phylogenetic analysis and multiple sequence alignments show that sequences belonging to the resulting evolutionary trajectory of the H1N1pdm09 lineage exhibit a gradual accumulation of sequence variations and tight temporal correlations in the topological structure of the phylogenetic trees. These results suggest that our evolutionary trajectory analysis (ETA) can more effectively pinpoint the evolutionary history of viruses, including the host and geographical location traversed by each segment, when compared against either BLAST or traditional phylogenetic analysis alone.

  16. Meshless methods for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Katz, Aaron Jon

    While the generation of meshes has always posed challenges for computational scientists, the problem has become more acute in recent years. Increased computational power has enabled scientists to tackle problems of increasing size and complexity. While algorithms have seen great advances, mesh generation has lagged behind, creating a computational bottleneck. For industry and government looking to impact current and future products with simulation technology, mesh generation imposes great challenges. Many generation procedures often lack automation, requiring many man-hours, which are becoming far more expensive than computer hardware. More automated methods are less reliable for complex geometry with sharp corners, concavity, or otherwise complex features. Most mesh generation methods to date require a great deal of user expertise to obtain accurate simulation results. Since the application of computational methods to real world problems appears to be paced by mesh generation, alleviating this bottleneck potentially impacts an enormous field of problems. Meshless methods applied to computational fluid dynamics are a relatively new area of research designed to help alleviate the burden of mesh generation. Despite their recent inception, there exists no shortage of formulations and algorithms for meshless schemes in the literature. A brief survey of the field reveals varied approaches arising from diverse mathematical backgrounds applied to a wide variety of applications. All meshless schemes attempt to bypass the use of a conventional mesh entirely or in part by discretizing governing partial differential equations on scattered clouds of points. A goal of the present thesis is to develop a meshless scheme for computational fluid dynamics and evaluate its performance compared with conventional methods. The meshless schemes developed in this work compare favorably with conventional finite volume methods in terms of accuracy and efficiency for the Euler and Navier-Stokes equations.

  17. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  18. A computational kinematics and evolutionary approach to model molecular flexibility for bionanotechnology

    NASA Astrophysics Data System (ADS)

    Brintaki, Athina N.

    Modeling molecular structures is critical for understanding the principles that govern the behavior of molecules and for facilitating the exploration of potential pharmaceutical drugs and nanoscale designs. Biological molecules are flexible bodies that can adopt many different shapes (or conformations) until they reach a stable molecular state that is usually described by the minimum internal energy. A major challenge in modeling flexible molecules is the exponential explosion in computational complexity as the molecular size increases and many degrees of freedom are considered to represent the molecules' flexibility. This research work proposes a novel generic computational geometric approach called enhanced BioGeoFilter (g.eBGF) that geometrically interprets inter-atomic interactions to impose geometric constraints during molecular conformational search, reducing the time for identifying chemically feasible conformations. Two new methods called Kinematics-Based Differential Evolution (kDE) and Biological Differential Evolution (BioDE) are also introduced to direct the molecular conformational search towards low-energy (stable) conformations. The proposed kDE method kinematically describes a molecule's deformation mechanism while it uses differential evolution to minimize the intra-molecular energy. On the other hand, the proposed BioDE utilizes our developed g.eBGF data structure as a surrogate approximation model to reduce the number of exact evaluations and to speed up the molecular conformational search. This research work will be extremely useful in enabling the modeling of flexible molecules and in facilitating the exploration of nanoscale designs through the virtual assembly of molecules. Our research work can also be used in areas such as molecular docking, protein folding, and nanoscale computer-aided design, where rapid collision detection schemes for highly deformable objects are essential.

  1. Computational analysis of fitness landscapes and evolutionary networks from in vitro evolution experiments.

    PubMed

    Xulvi-Brunet, Ramon; Campbell, Gregory W; Rajamani, Sudha; Jiménez, José I; Chen, Irene A

    2016-08-15

    In vitro selection experiments in biochemistry allow for the discovery of novel molecules capable of specific desired biochemical functions. However, this is not the only benefit we can obtain from such selection experiments. Since selection from a random library yields an unprecedented, and sometimes comprehensive, view of how a particular biochemical function is distributed across sequence space, selection experiments also provide data for creating and analyzing molecular fitness landscapes, which directly map function (phenotypes) to sequence information (genotypes). Given the importance of understanding the relationship between sequence and functional activity, reliable methods to build and analyze fitness landscapes are needed. Here, we present some statistical methods to extract this information from pools of RNA molecules. We also provide new computational tools to construct and study molecular fitness landscapes.

  2. An Efficient Method for Computing All Reducts

    NASA Astrophysics Data System (ADS)

    Bao, Yongguang; Du, Xiaoyong; Deng, Mingrong; Ishii, Naohiro

    In the process of data mining of a decision table using Rough Sets methodology, the main computational effort is associated with the determination of the reducts. Computing all reducts is a combinatorial NP-hard computational problem. Therefore the only way to achieve its faster execution is by providing an algorithm, with a better constant factor, which may solve this problem in reasonable time for real-life data sets. The purpose of this presentation is to propose two new efficient algorithms to compute reducts in information systems. The proposed algorithms are based on propositions about reducts and the relation between reducts and the discernibility matrix. Experiments measuring execution time have been conducted on several real-world domains. The results show that the proposed algorithms improve execution time when compared with other methods. In real applications, the two proposed algorithms can be combined.
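
    For concreteness, a brute-force Python sketch of the discernibility-matrix relation the algorithms build on (a toy decision table; the paper's algorithms avoid the exhaustive enumeration used here):

        import numpy as np
        from itertools import combinations

        # Toy decision table: rows = objects, columns = condition attributes, d = decision
        X = np.array([[1, 0, 2],
                      [1, 1, 2],
                      [0, 1, 0],
                      [0, 0, 0]])
        d = np.array([0, 0, 1, 1])

        # Discernibility matrix: for each pair of objects with different decisions,
        # the set of attributes on which the two objects differ
        pairs = {}
        for i, j in combinations(range(len(X)), 2):
            if d[i] != d[j]:
                pairs[(i, j)] = set(np.flatnonzero(X[i] != X[j]))

        # An attribute subset discerns all pairs iff it intersects every entry;
        # reducts are the minimal such subsets (found here by brute force)
        def discerns_all(R):
            return all(R & entry for entry in pairs.values())

        hitting = [set(R) for k in range(1, X.shape[1] + 1)
                   for R in combinations(range(X.shape[1]), k) if discerns_all(set(R))]
        reducts = [R for R in hitting if not any(S < R for S in hitting)]
        print(reducts)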

  3. At the crossroads of evolutionary computation and music: self-programming synthesizers, swarm orchestras and the origins of melody.

    PubMed

    Miranda, Eduardo Reck

    2004-01-01

    This paper introduces three approaches to using Evolutionary Computation (EC) in Music (namely, engineering, creative and musicological approaches) and discusses examples of representative systems that have been developed within the last decade, with emphasis on more recent and innovative works. We begin by reviewing engineering applications of EC in Music Technology such as Genetic Algorithms and Cellular Automata sound synthesis, followed by an introduction to applications where EC has been used to generate musical compositions. Next, we introduce ongoing research into EC models to study the origins of music and detail our own research work on modelling the evolution of melody.

  4. Jacobi method for signal subspace computation

    NASA Astrophysics Data System (ADS)

    Paul, Steffen; Goetze, Juergen

    1997-10-01

    The Jacobi method for singular value decomposition is well-suited for parallel architectures. Its application to signal subspace computations is well known. Basically, the subspace spanned by the singular vectors of the large singular values is separated from the subspace spanned by those of the small singular values. The Jacobi algorithm computes the singular values and the corresponding vectors in random order. This requires sorting the result after convergence of the algorithm to select the signal subspace. A modification of the Jacobi method based on a linear objective function merges the sorting into the SVD algorithm at little extra cost; in fact, the complexity of the diagonal processor cells in a triangular array gets only slightly larger. In this paper we present these extensions, in particular the modified algorithm for computing the rotation angles, and give an example of its usefulness for subspace separation.

  5. A Multi Agent System for Flow-Based Intrusion Detection Using Reputation and Evolutionary Computation

    DTIC Science & Technology

    2011-03-01

    effectiveness [116]: 12. Constants, parameters, numbers (e.g., subsidies, taxes, standards); 11. The sizes of buffers and other stabilizing stocks, relative to their flows; 10. The structure of material stocks and flows (such as transport networks, population age structures); 9. The lengths of delays... combination of evolutionary raw material (a highly variable stock of information from which to select possible patterns) and a means for experimentation

  6. Accurate method for computing correlated color temperature.

    PubMed

    Li, Changjun; Cui, Guihua; Melgosa, Manuel; Ruan, Xiukai; Zhang, Yaoju; Ma, Long; Xiao, Kaida; Luo, M Ronnier

    2016-06-27

    For the correlated color temperature (CCT) of a light source to be estimated, a nonlinear optimization problem must be solved. In all previous methods available to compute CCT, the objective function has only been approximated, and their predictions have achieved limited accuracy. For example, different unacceptable CCT values have been predicted for light sources located on the same isotemperature line. In this paper, we propose to compute CCT using the Newton method, which requires the first and second derivatives of the objective function. Following the current recommendation by the International Commission on Illumination (CIE) for the computation of tristimulus values (summations at 1 nm steps from 360 nm to 830 nm), the objective function and its first and second derivatives are explicitly given and used in our computations. Comprehensive tests demonstrate that the proposed method, together with an initial estimation of CCT using Robertson's method [J. Opt. Soc. Am. 58, 1528-1535 (1968)], gives predictions with errors below 0.0012 K for light sources with CCTs ranging from 500 K to 10^6 K.
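
    A sketch of the Newton iteration itself (Python, with central-difference derivatives and a toy smooth locus standing in for the exact CIE-based objective and the Robertson initialization used in the paper):

        import numpy as np

        def newton_minimize(f, t0, h=1.0, tol=1e-6, max_iter=50):
            """Newton's method for 1-D minimization using central-difference
            first and second derivatives of the objective f."""
            t = t0
            for _ in range(max_iter):
                f1 = (f(t + h) - f(t - h)) / (2 * h)
                f2 = (f(t + h) - 2 * f(t) + f(t - h)) / h**2
                step = f1 / f2
                t -= step
                if abs(step) < tol:
                    break
            return t

        # Stand-in objective: squared chromaticity distance to a toy smooth locus.
        # In the paper, f is the exact CIE-based distance and t0 comes from
        # Robertson's method; both are replaced by toy values here.
        locus = lambda T: np.array([0.2 + 100.0 / T, 0.3 + 50.0 / T])
        target = locus(6504.0) + np.array([1e-4, -1e-4])  # test point near the locus
        f = lambda T: np.sum((locus(T) - target) ** 2)
        print(newton_minimize(f, t0=6000.0))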

  7. Efficient Methods to Compute Genomic Predictions

    USDA-ARS?s Scientific Manuscript database

    Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...

  8. Applying Human Computation Methods to Information Science

    ERIC Educational Resources Information Center

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  9. Computational Methods for Structural Mechanics and Dynamics

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.

  10. Optimising operational amplifiers by evolutionary algorithms and gm/Id method

    NASA Astrophysics Data System (ADS)

    Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.

    2016-10-01

    The evolutionary algorithm called the non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step for rounding off their values to be multiples of the integrated circuit fabrication technology. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L sizes support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of the evolutionary algorithm NSGA-II, while the second optimisation stage guarantees robustness of the feasible solutions to PVT variations.

  11. Computational Thermochemistry and Benchmarking of Reliable Methods

    SciTech Connect

    Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

    2006-06-20

    During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats-of-formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.

  12. Shifted power method for computing tensor eigenpairs.

    SciTech Connect

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-10-01

    Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
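
    A compact Python sketch of SS-HOPM for a symmetric order-3 tensor (the shift value is hypothetical and, per the paper, must be chosen large enough to guarantee monotone convergence):

        import numpy as np

        def ss_hopm(A, alpha=2.0, tol=1e-10, max_iter=500, seed=0):
            """Shifted symmetric higher-order power method for a symmetric
            order-3 tensor: iterate x <- normalize(A x x + alpha x)."""
            rng = np.random.default_rng(seed)
            x = rng.normal(size=A.shape[0])
            x /= np.linalg.norm(x)
            for _ in range(max_iter):
                Ax2 = np.einsum("ijk,j,k->i", A, x, x)  # tensor-vector contraction
                x_new = Ax2 + alpha * x                 # positive shift
                x_new /= np.linalg.norm(x_new)
                if np.linalg.norm(x_new - x) < tol:
                    x = x_new
                    break
                x = x_new
            lam = np.einsum("ijk,i,j,k->", A, x, x, x)  # eigenvalue: x . (A x x)
            return lam, x

        # Symmetrize a random tensor to build a test case
        T = np.random.default_rng(1).normal(size=(4, 4, 4))
        A = sum(T.transpose(p) for p in
                [(0,1,2),(0,2,1),(1,0,2),(1,2,0),(2,0,1),(2,1,0)]) / 6
        print(ss_hopm(A))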

  13. Computational Methods for MOF/Polymer Membranes.

    PubMed

    Erucar, Ilknur; Keskin, Seda

    2016-04-01

    Metal-organic framework (MOF)/polymer mixed matrix membranes (MMMs) have received significant interest in the last decade. MOFs are incorporated into polymers to make MMMs that exhibit improved gas permeability and selectivity compared with pure polymer membranes. The fundamental challenge in this area is to choose the appropriate MOF/polymer combinations for a gas separation of interest. Even if a single polymer is considered, there are thousands of MOFs that could potentially be used as fillers in MMMs. As a result, there has been a large demand for computational studies that can accurately predict the gas separation performance of MOF/polymer MMMs prior to experiments. We have developed computational approaches to assess gas separation potentials of MOF/polymer MMMs and used them to identify the most promising MOF/polymer pairs. In this Personal Account, we aim to provide a critical overview of current computational methods for modeling MOF/polymer MMMs. We give our perspective on the background, successes, and failures that led to developments in this area and discuss the opportunities and challenges of using computational methods for MOF/polymer MMMs. © 2016 The Chemical Society of Japan & Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Improving hospital bed occupancy and resource utilization through queuing modeling and evolutionary computation.

    PubMed

    Belciug, Smaranda; Gorunescu, Florin

    2015-02-01

    Scarce healthcare resources require carefully made policies ensuring optimal bed allocation, quality healthcare service, and adequate financial support. This paper proposes a complex analysis of the resource allocation in a hospital department by integrating in the same framework a queuing system, a compartmental model, and an evolutionary-based optimization. The queuing system shapes the flow of patients through the hospital, the compartmental model offers a feasible structure of the hospital department in accordance to the queuing characteristics, and the evolutionary paradigm provides the means to optimize the bed-occupancy management and the resource utilization using a genetic algorithm approach. The paper also focuses on a "What-if analysis" providing a flexible tool to explore the effects on the outcomes of the queuing system and resource utilization through systematic changes in the input parameters. The methodology was illustrated using a simulation based on real data collected from a geriatric department of a hospital from London, UK. In addition, the paper explores the possibility of adapting the methodology to different medical departments (surgery, stroke, and mental illness). Moreover, the paper also focuses on the practical use of the model from the healthcare point of view, by presenting a simulated application.

  15. Fitness distributions in evolutionary computation: motivation and examples in the continuous domain.

    PubMed

    Chellapilla, K; Fogel, D B

    1999-12-01

    Evolutionary algorithms are, fundamentally, stochastic search procedures. Each next population is a probabilistic function of the current population. Various controls are available to adjust the probability mass function that is used to sample the space of candidate solutions at each generation. For example, the step size of a single-parent variation operator can be adjusted with a corresponding effect on the probability of finding improved solutions and the expected improvement that will be obtained. Examining these statistics as a function of the step size leads to a 'fitness distribution', a function that trades off the expected improvement at each iteration for the probability of that improvement. This paper analyzes the effects of adjusting the step size of Gaussian and Cauchy mutations, as well as a mutation that is a convolution of these two distributions. The results indicate that fitness distributions can be effective in identifying suitable parameter settings for these operators. Some comments on the utility of extending this protocol toward the general diagnosis of evolutionary algorithms are also offered.
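
    A sketch of how such a fitness distribution can be estimated empirically (Python; toy sphere objective, with arbitrary sample sizes and step sizes):

        import numpy as np

        def fitness(x):
            return np.sum(x**2)      # toy minimization objective (sphere)

        rng = np.random.default_rng(0)
        parent = np.full(10, 2.0)
        f_parent = fitness(parent)

        for dist in ("gaussian", "cauchy"):
            for sigma in (0.01, 0.1, 1.0, 10.0):
                if dist == "gaussian":
                    steps = rng.normal(0.0, sigma, size=(100_000, 10))
                else:
                    steps = sigma * rng.standard_cauchy(size=(100_000, 10))
                f_off = np.sum((parent + steps) ** 2, axis=1)
                gain = np.maximum(f_parent - f_off, 0.0)   # improvement, else 0
                p_imp = np.mean(f_off < f_parent)          # probability of improvement
                e_imp = gain.mean()                        # expected improvement
                print(dist, sigma, round(p_imp, 3), round(e_imp, 3))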

  16. [Evolutionary myology as a research method in the morphological evolution of human muscles].

    PubMed

    Kaneff, A

    1977-01-01

    The author presents evolutionary myology as a composite research method by which the morphological transformation of human muscles can be demonstrated. This process of muscle transformation is elucidated by three types of investigation: 1. Macroscopic morphological investigation of the variations of a given human muscle. 2. Comparative anatomical investigation of the same muscle. 3. Organogenetic study of the muscle in human embryos and fetuses. The macroscopic morphological investigation of the variations of any human muscle enables its variability to be examined in full range and extent, provided a sufficient number of preparations is investigated. A line of successive muscle variations can be composed from the established variants, arranged one after another, and the frequency of each variation can be expressed as a percentage. The material for the comparative anatomical investigation must be selected according to contemporary zoology; this examination allows the variation line of the human material to be properly ordered, so that the initial form, the transitional forms, and the final form of the transformation process can be recognized, and thus the direction of the transformation process can be understood. The organogenetic investigation must be carried out on human embryos and fetuses of different ages. In this way the muscle and tendon primordia can be observed directly, and at the same time the important facts about the maturity of the primordia and their eventual shifting can be established. The example described refers to the transformation of m. abductor pollicis longus; it reveals how evolutionary myology can be used to demonstrate the morphological evolution of any muscle.

  17. Evolutionary Analysis of Dengue Serotype 2 Viruses Using Phylogenetic and Bayesian Methods from New Delhi, India

    PubMed Central

    Afreen, Nazia; Naqvi, Irshad H.; Broor, Shobha; Ahmed, Anwar; Kazim, Syed Naqui; Dohare, Ravins; Kumar, Manoj; Parveen, Shama

    2016-01-01

    Dengue fever is the most important arboviral disease in the tropical and sub-tropical countries of the world. Delhi, the metropolitan capital state of India, has reported many dengue outbreaks, with the last outbreak occurring in 2013. We have recently reported predominance of dengue virus serotype 2 during 2011–2014 in Delhi. In the present study, we report molecular characterization and evolutionary analysis of dengue serotype 2 viruses which were detected in 2011–2014 in Delhi. Envelope genes of 42 DENV-2 strains were sequenced in the study. All DENV-2 strains grouped within the Cosmopolitan genotype and further clustered into three lineages: Lineage I, II and III. Lineage III replaced lineage I during the dengue fever outbreak of 2013. Further, a novel mutation Thr404Ile was detected in the stem region of the envelope protein of a single DENV-2 strain in 2014. Nucleotide substitution rate and time to the most recent common ancestor were determined by molecular clock analysis using Bayesian methods. A change in effective population size of Indian DENV-2 viruses was investigated through Bayesian skyline plot. The study will be a vital road map for investigation of epidemiology and evolutionary pattern of dengue viruses in India. PMID:26977703

  18. Parallel computer methods for eigenvalue extraction

    NASA Technical Reports Server (NTRS)

    Akl, Fred

    1988-01-01

    A new numerical algorithm for the solution of large-order eigenproblems typically encountered in linear elastic finite element systems is presented. The architecture of parallel processing is used in the algorithm to achieve increased speed and efficiency of calculations. The algorithm is based on the frontal technique for the solution of linear simultaneous equations and the modified subspace eigenanalysis method for the solution of the eigenproblem. The advantages of this new algorithm in parallel computer architecture are discussed.

  1. Allosaurus, crocodiles, and birds: evolutionary clues from spiral computed tomography of an endocast.

    PubMed

    Rogers, S W

    1999-10-15

    Because the brain does not usually leave direct evidence of its existence in the fossil record, our view of this structure in extinct species has relied upon inferences drawn from comparisons between parts of the skeleton that do fossilize or with modern-day relatives that survived extinction. However, soft-tissue structure preservation may indeed occasionally occur, particularly in the endocranial space. By applying modern imaging and analysis methods to such natural cranial "endocasts," we can now learn more than ever thought possible about the brains of extinct species. I will discuss one such example in which spiral computed tomography (CT) scanning analysis has been successfully applied to reveal preserved internal structures of a naturally occurring endocranial cast of Allosaurus fragilis, the dominant carnivorous dinosaur of the late Jurassic period. The ability to directly examine the neuroanatomy of an extinct dinosaur, whose modern-day relatives are birds and crocodiles, has exciting implications about Allosaurus' behavior, its adaptive responses to its environment, and its eventual extinction.

  2. Computational methods for vortex dominated compressible flows

    NASA Technical Reports Server (NTRS)

    Murman, Earll M.

    1987-01-01

    The principal objectives were to: understand the mechanisms by which Euler equation computations model leading edge vortex flows; understand the vortical and shock wave structures that may exist for different wing shapes, angles of incidence, and Mach numbers; and compare calculations with experiments in order to ascertain the limitations and advantages of Euler equation models. The initial approach utilized the cell centered finite volume Jameson scheme. The final calculation utilized a cell vertex finite volume method on an unstructured grid. Both methods used Runge-Kutta four stage schemes for integrating the equations. The principal findings are briefly summarized.

  3. Computations of entropy bounds: Multidimensional geometric methods

    SciTech Connect

    Makaruk, H.E.

    1998-02-01

    In the entropy bounds, the constructive upper bound on the number of bits needed to solve a dichotomy is represented by the quotient of two multidimensional solid volumes. Minimizing this upper bound requires exact calculation of the volumes in this quotient. Three methods for exact computation of the volume of a given nD solid are presented: (1) a general method for calculating any nD volume by slicing it into volumes of decreasing dimension; (2) a method applying an appropriate curvilinear coordinate system, for volumes bounded by symmetrical curvilinear hypersurfaces (spheres, cones, hyperboloids, ellipsoids, cylinders, etc.); and (3) an algorithm for dividing any nD complex into simplices and computing the volumes of the simplices, supplemented by a general formula for the volume of an nD simplex. These mathematical methods enable exact calculation of the volume of complicated multidimensional solids. The methods allow for the calculation of the minimal volume and lead to tighter bounds on the needed number of bits.
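
    The nD simplex volume formula mentioned in (3) is the absolute determinant of the edge vectors divided by n!; a minimal Python sketch:

        import numpy as np
        from math import factorial

        def simplex_volume(vertices):
            """Volume of an n-simplex from its n+1 vertices:
            V = |det(v1 - v0, ..., vn - v0)| / n!"""
            v = np.asarray(vertices, dtype=float)
            edges = v[1:] - v[0]
            return abs(np.linalg.det(edges)) / factorial(len(edges))

        # Unit 3-simplex (tetrahedron) has volume 1/3! = 1/6
        print(simplex_volume([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]))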

  4. Analytic Method for Computing Instrument Pointing Jitter

    NASA Technical Reports Server (NTRS)

    Bayard, David

    2003-01-01

    A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance) is an important physical quantity that impacts the design of the instrument, its actuators, controls, sensory components, and sensor-output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
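
    The abstract does not reproduce the exact expressions, but the flavor of a state-space computation can be sketched for an LTI pointing model driven by white noise, where the steady-state covariance comes from a Lyapunov equation rather than a frequency-domain integral (all model parameters below are hypothetical, and the Sirlin-San Martin-Lucke jitter definition differs from the plain output variance used here):

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        # Toy LTI pointing model: one lightly damped mode driven by white noise
        w0, zeta = 2 * np.pi * 1.0, 0.02           # 1 Hz mode, 2% damping (hypothetical)
        A = np.array([[0.0, 1.0], [-w0**2, -2 * zeta * w0]])
        B = np.array([[0.0], [1.0]])
        q = 1e-4                                   # white-noise intensity (hypothetical)
        C = np.array([[1.0, 0.0]])                 # output = pointing angle

        # Steady-state state covariance P solves A P + P A^T + B q B^T = 0
        P = solve_continuous_lyapunov(A, -B @ B.T * q)
        rms = float(np.sqrt(C @ P @ C.T))          # rms pointing response
        print(rms)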

  5. Multiscale filtering method for derivative computation

    NASA Astrophysics Data System (ADS)

    Li, Bingcheng; Ma, Songde

    1994-09-01

    In this paper, we propose a multiscale filtering method to compute derivatives of any order. As a special case, we consider the computation of second derivatives, and show that the difference of two smoothers with the same kernel but different scales constructs a Laplacian operator and has a zero crossing at a step edge. Selecting a Gaussian function as the smoother, we show that the DoG (difference of Gaussians) is itself a zero-crossing edge extractor and need not approximate the LoG (Laplacian of Gaussian). At the same time, we show that even though the DoG with bandwidth ratio 0.625 (1:1.6) is the optimal approximation to the LoG, it is not optimal for edge detection. Finally, selecting an exponential function as the smoothing kernel, we obtain a Laplacian of exponential (LoE) operator, and it is shown theoretically and experimentally that the LoE has high edge detection performance; furthermore, its computation is efficient and its computational complexity is independent of the filter kernel bandwidths.
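
    A sketch of the DoG-as-edge-detector idea (Python/SciPy; the kernel scales are arbitrary, and zero crossings are detected along rows only for brevity):

        import numpy as np
        from scipy import ndimage

        def dog_zero_crossings(image, sigma, ratio=1.6):
            """Difference-of-Gaussians response and its zero crossings along rows,
            used directly as an edge map (no LoG approximation involved)."""
            dog = (ndimage.gaussian_filter(image, sigma)
                   - ndimage.gaussian_filter(image, ratio * sigma))
            sign = dog > 0
            flip = sign[:, 1:] != sign[:, :-1]                # sign change between columns
            strong = np.abs(dog[:, 1:] - dog[:, :-1]) > 1e-8  # ignore flat-region noise
            edges = np.zeros_like(sign)
            edges[:, 1:] = flip & strong
            return dog, edges

        img = np.zeros((64, 64)); img[:, 32:] = 1.0   # toy vertical step edge
        dog, edges = dog_zero_crossings(img, sigma=2.0)
        print(np.flatnonzero(edges[32]))              # columns flagged near the step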

  6. New phases of osmium carbide from evolutionary algorithm and ab initio computations

    NASA Astrophysics Data System (ADS)

    Fadda, Alessandro; Fadda, Giuseppe

    2017-09-01

    New crystal phases of osmium carbide are presented in this work. These results were found with the CA code, an evolutionary algorithm (EA) presented in a previous paper which takes full advantage of crystal symmetry by using an ad hoc search space and genetic operators. The new OsC2 and Os2C structures have a lower enthalpy than any known so far. Moreover, the layered pattern of OsC2 serves as a blueprint for building new crystals by adding or removing layers of carbon and/or osmium and generating many other Os  +  C structures like Os2C, OsC, OsC2 and OsC4. These again have a lower enthalpy than all the investigated structures, including those of the present work. The mechanical, vibrational and electronic properties are discussed as well.

  7. Delamination detection using methods of computational intelligence

    NASA Astrophysics Data System (ADS)

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

    A reliable delamination prediction scheme is indispensable in order to prevent potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and hence such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e. changes in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying tool for analysis turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets. K-means clustering is effectively employed to reduce the size of the datasets. ANN is also used via inverse modeling to determine the position, size and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.

  8. Numerical methods for problems in computational aeroacoustics

    NASA Astrophysics Data System (ADS)

    Mead, Jodi Lorraine

    1998-12-01

    A goal of computational aeroacoustics is the accurate calculation of noise from a jet in the far field. This work concerns the numerical aspects of accurately calculating acoustic waves over large distances and long time. More specifically, the stability, efficiency, accuracy, dispersion and dissipation in spatial discretizations, time stepping schemes, and absorbing boundaries for the direct solution of wave propagation problems are determined. Efficient finite difference methods developed by Tam and Webb, which minimize dispersion and dissipation, are commonly used for the spatial and temporal discretization. Alternatively, high order pseudospectral methods can be made more efficient by using the grid transformation introduced by Kosloff and Tal-Ezer. Work in this dissertation confirms that the grid transformation introduced by Kosloff and Tal-Ezer is not spectrally accurate because, in the limit, the grid transformation forces zero derivatives at the boundaries. If a small number of grid points are used, it is shown that approximations with the Chebyshev pseudospectral method with the Kosloff and Tal-Ezer grid transformation are as accurate as with the Chebyshev pseudospectral method. This result is based on the analysis of the phase and amplitude errors of these methods, and their use for the solution of a benchmark problem in computational aeroacoustics. For the grid transformed Chebyshev method with a small number of grid points it is, however, more appropriate to compare its accuracy with that of high-order finite difference methods. This comparison, for an order of accuracy of 10^-3 for a benchmark problem in computational aeroacoustics, is performed for the grid transformed Chebyshev method and the fourth order finite difference method of Tam. Solutions with the finite difference method are as accurate as, and the finite difference method is more efficient than, the Chebyshev pseudospectral method with the grid transformation.

  9. Efficient Nondomination Level Update Method for Steady-State Evolutionary Multiobjective Optimization.

    PubMed

    Li, Ke; Deb, Kalyanmoy; Zhang, Qingfu; Zhang, Qiang

    2016-11-08

    Nondominated sorting (NDS), which divides a population into several nondomination levels (NDLs), is a basic step in many evolutionary multiobjective optimization (EMO) algorithms. It has been widely studied in the generational evolution model, where environmental selection is performed after generating a whole population of offspring. However, in a steady-state evolution model, where the population is updated right after the generation of each new candidate, NDS can be extremely time consuming. This is especially severe when the number of objectives and the population size become large. In this paper, we propose an efficient NDL update method to reduce the cost of maintaining the NDL structure in steady-state EMO. Instead of performing NDS from scratch, our method only updates the NDLs of a limited number of solutions by extracting knowledge from the current NDL structure. Our NDL update method is performed twice at each iteration: once after reproduction and once after environmental selection. Extensive experiments demonstrate that, compared with five other state-of-the-art NDS methods, the proposed method avoids a significant number of unnecessary comparisons, not only on synthetic data sets but also in real optimization scenarios. Last but not least, we find that the proposed method is also useful for the generational evolution model.
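
    To make the idea concrete, here is a simplified Python sketch of inserting one new solution into existing NDLs without re-sorting from scratch (the paper's method additionally cascades solutions displaced to lower levels, which this sketch omits):

        import numpy as np

        def dominates(a, b):
            """Pareto dominance for minimization."""
            return np.all(a <= b) and np.any(a < b)

        def insert_into_ndls(ndls, f_new):
            """Place a new candidate into existing nondomination levels:
            the first level where no member dominates it."""
            for k, level in enumerate(ndls):
                if not any(dominates(f, f_new) for f in level):
                    level.append(f_new)
                    return k
            ndls.append([f_new])
            return len(ndls) - 1

        # Toy two-objective population already sorted into two levels
        ndls = [[np.array([1.0, 4.0]), np.array([3.0, 1.0])],
                [np.array([4.0, 5.0])]]
        print(insert_into_ndls(ndls, np.array([2.0, 2.0])))  # lands in level 0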

  10. Soft Computing Methods for Disulfide Connectivity Prediction

    PubMed Central

    Márquez-Chamorro, Alfonso E.; Aguilar-Ruiz, Jesús S.

    2015-01-01

    The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods. PMID:26523116

  11. Comparison of Evolutionary (Genetic) Algorithm and Adjoint Methods for Multi-Objective Viscous Airfoil Optimizations

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) Method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which together with a common gridding approach restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy and design consistency.

  12. Assessment of the relative merits of a few methods to detect evolutionary trends.

    PubMed

    Laurin, Michel

    2010-12-01

    Some of the most basic questions about the history of life concern evolutionary trends. These include determining whether or not metazoans have become more complex over time, whether or not body size tends to increase over time (the Cope-Depéret rule), or whether or not brain size has increased over time in various taxa, such as mammals and birds. Despite the proliferation of studies on such topics, assessment of the reliability of results in this field is hampered by the variability of techniques used and the lack of statistical validation of these methods. To solve this problem, simulations are performed using a variety of evolutionary models (gradual Brownian motion, speciational Brownian motion, and Ornstein-Uhlenbeck), with or without a drift of variable amplitude, with variable variance of tips, and with bounds placed close or far from the starting values and final means of simulated characters. These are used to assess the relative merits (power, Type I error rate, bias, and mean absolute value of error on slope estimate) of several statistical methods that have recently been used to assess the presence of evolutionary trends in comparative data. Results show widely divergent performance of the methods. The simple, nonphylogenetic regression (SR) and variance partitioning using phylogenetic eigenvector regression (PVR) with a broken stick selection procedure have greatly inflated Type I error rate (0.123-0.180 at a 0.05 threshold), which invalidates their use in this context. However, they have the greatest power. Most variants of Felsenstein's independent contrasts (FIC; five of which are presented) have adequate Type I error rate, although two have a slightly inflated Type I error rate with at least one of the two reference trees (0.064-0.090 error rate at a 0.05 threshold). The power of all contrast-based methods is always much lower than that of SR and PVR, except under Brownian motion with a strong trend and distant bounds. Mean absolute value of error
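
    To illustrate the kind of simulation benchmark described above, here is a hedged Python sketch that draws tip values under Brownian motion with optional drift and clamped bounds. It treats lineages as independent (a star-phylogeny simplification) rather than simulating along the reference trees, and all parameter names are ours.

        import random

        def simulate_bm_with_drift(n_tips=50, n_steps=100, sigma=1.0,
                                   drift=0.1, lower=None, upper=None, x0=0.0):
            """Tip values after n_steps of Brownian motion with drift;
            bounds, when given, are enforced by clamping (a simplification)."""
            tips = []
            for _ in range(n_tips):
                x = x0
                for _ in range(n_steps):
                    x += drift + random.gauss(0.0, sigma)
                    if lower is not None and x < lower:
                        x = lower
                    if upper is not None and x > upper:
                        x = upper
                tips.append(x)
            return tips

        # Running this many times with drift=0.0 and recording how often a
        # trend test fires gives an empirical Type I error rate at a threshold.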

  13. Evolutionary Science as a Method to Facilitate Higher Level Thinking and Reasoning in Medical Training.

    PubMed

    Graves, Joseph L; Reiber, Chris; Thanukos, Anna; Hurtado, Magdalena; Wolpaw, Terry

    2016-10-15

    Evolutionary science is indispensable for understanding biological processes. Effective medical treatment must be anchored in sound biology. However, currently the insights available from evolutionary science are not adequately incorporated in either pre-medical or medical school curricula. To illuminate how evolution may be helpful in these areas, examples in which the insights of evolutionary science are already improving medical treatment and ways in which evolutionary reasoning can be practiced in the context of medicine are provided. In order to facilitate the learning of evolutionary principles, concepts derived from evolutionary science that medical students and professionals should understand are outlined. These concepts are designed to be authoritative and at the same time easily accessible for anyone with the general biological knowledge of a first-year medical student. Thus, we conclude that medical practice informed by evolutionary principles will be more effective and lead to better patient outcomes. Furthermore, it is argued that evolutionary medicine complements general medical training because it provides an additional means by which medical students can practice the critical thinking skills that will be important in their future practice. We argue that core concepts from evolutionary science have the potential to improve critical thinking and facilitate more effective learning in medical training.

  14. Evolutionary science as a method to facilitate higher level thinking and reasoning in medical training

    PubMed Central

    Graves, Joseph L.; Reiber, Chris; Thanukos, Anna; Hurtado, Magdalena; Wolpaw, Terry

    2016-01-01

    Evolutionary science is indispensable for understanding biological processes. Effective medical treatment must be anchored in sound biology. However, currently the insights available from evolutionary science are not adequately incorporated in either pre-medical or medical school curricula. To illuminate how evolution may be helpful in these areas, examples in which the insights of evolutionary science are already improving medical treatment and ways in which evolutionary reasoning can be practiced in the context of medicine are provided. To facilitate the learning of evolutionary principles, concepts derived from evolutionary science that medical students and professionals should understand are outlined. These concepts are designed to be authoritative and at the same time easily accessible for anyone with the general biological knowledge of a first-year medical student. Thus, we conclude that medical practice informed by evolutionary principles will be more effective and lead to better patient outcomes. Furthermore, it is argued that evolutionary medicine complements general medical training because it provides an additional means by which medical students can practice the critical thinking skills that will be important in their future practice. We argue that core concepts from evolutionary science have the potential to improve critical thinking and facilitate more effective learning in medical training. PMID:27744353

  15. Review of Computational Stirling Analysis Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.

    2004-01-01

    Nuclear thermal to electric power conversion carries the promise of longer duration missions and higher scientific data transmission rates back to Earth for both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines can be developed if the losses inherent in current designs could be better understood. However, these engines are difficult to instrument, and so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted, and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of understanding of Stirling losses may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra Hi-Fi technique, is presented in detail.

  16. Computational Statistical Methods for Social Network Models

    PubMed Central

    Hunter, David R.; Krivitsky, Pavel N.; Schweinberger, Michael

    2013-01-01

    We review the broad range of recent statistical work in social network models, with emphasis on computational aspects of these methods. Particular focus is applied to exponential-family random graph models (ERGM) and latent variable models for data on complete networks observed at a single time point, though we also briefly review many methods for incompletely observed networks and networks observed at multiple time points. Although we mention far more modeling techniques than we can possibly cover in depth, we provide numerous citations to current literature. We illustrate several of the methods on a small, well-known network dataset, Sampson’s monks, providing code where possible so that these analyses may be duplicated. PMID:23828720
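
    In that same spirit of providing code, here is a toy Python sketch (ours, not taken from the review) of two classic ERGM sufficient statistics, edge and triangle counts, for a small undirected network.

        from itertools import combinations

        def ergm_stats(nodes, edges):
            """Edge and triangle counts, two common ERGM sufficient statistics."""
            E = {frozenset(e) for e in edges}
            n_edges = len(E)
            n_triangles = sum(
                1 for a, b, c in combinations(nodes, 3)
                if {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= E
            )
            return n_edges, n_triangles

        # Example: a 4-node network containing exactly one triangle
        stats = ergm_stats([1, 2, 3, 4], [(1, 2), (2, 3), (1, 3), (3, 4)])  # (4, 1)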

  17. Evolutionary Design in Biology

    NASA Astrophysics Data System (ADS)

    Wiese, Kay C.

    Much progress has been achieved in recent years in molecular biology and genetics. The sheer volume of data in the form of biological sequences has been enormous, and efficient methods for dealing with these huge amounts of data are needed. In addition, the data alone do not provide information on the workings of biological systems; hence much research effort has focused on designing mathematical and computational models to address problems from molecular biology. Often, the terms bioinformatics and computational biology are used to refer to the research fields concerned with designing solutions to molecular problems in biology. However, there is a slight distinction between bioinformatics and computational biology: the former is concerned with managing the enormous amounts of biological data and extracting information from them, while the latter is more concerned with the design and development of new algorithms to address problems such as protein or RNA folding. The boundary is blurry, though, and there is no consistent usage of the terms. We will use the term bioinformatics to encompass both fields. To cover all areas of research in bioinformatics is beyond the scope of this section, and we refer the interested reader to [2] for a general introduction. A large part of what bioinformatics is concerned with is the evolution and function of biological systems on a molecular level. Evolutionary computation and evolutionary design are concerned with developing computational systems that "mimic" certain aspects of natural evolution (mutation, crossover, selection, fitness). Much of the inner workings of natural evolutionary systems have been copied, sometimes in modified form, into evolutionary computation systems. Artificial neural networks mimic the functioning of simple brain cell clusters. Fuzzy systems are concerned with the "fuzziness" in decision making, similar to a human expert. These three computational paradigms fall into the category of
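
    As a concrete illustration of the evolutionary ingredients just named (mutation, crossover, selection, fitness), here is a minimal, generic genetic algorithm in Python; it is a textbook-style sketch with invented names, not code from any system discussed in this section.

        import random

        def evolve(fitness, length=20, pop_size=30, generations=100, p_mut=0.01):
            """Evolve bit strings toward high fitness (maximization)."""
            pop = [[random.randint(0, 1) for _ in range(length)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                ranked = sorted(pop, key=fitness, reverse=True)
                parents = ranked[:pop_size // 2]          # truncation selection
                children = []
                while len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, length)     # one-point crossover
                    child = a[:cut] + b[cut:]
                    child = [1 - g if random.random() < p_mut else g
                             for g in child]              # bit-flip mutation
                    children.append(child)
                pop = children
            return max(pop, key=fitness)

        best = evolve(fitness=sum)   # toy objective: maximize the number of 1s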

  18. Comparison of Methods of Height Anomaly Computation

    NASA Astrophysics Data System (ADS)

    Mazurova, E.; Lapshin, A.; Menshova, A.

    2012-04-01

    As of today, accurate determination of the height anomaly remains one of the most difficult problems of geodesy, despite continual refinement of mathematical methods and growth in computing power. The most effective methods of height anomaly computation are based on discrete linear transformations, such as the Fast Fourier Transform (FFT), the Short-Time Fourier Transform (STFT), and the Fast Wavelet Transform (FWT). The main drawback of the classical FFT is weak localization in the time domain. If it is necessary to determine the interval over which a frequency is present, the STFT is used; it detects both the presence of a frequency in the signal and the interval of its presence, which expands the possibilities of the method in comparison with the classical Fourier Transform. However, subject to Heisenberg's uncertainty principle, it is impossible to tell precisely which frequency is present at a given moment of time (one can speak only of a range of frequencies), and it is impossible to tell at precisely what moment of time a given frequency is present (one can speak only of a time span). A wavelet transform reduces the influence of Heisenberg's uncertainty principle on the resulting time-and-frequency representation of the signal: low frequencies are represented in more detail with respect to frequency, and high frequencies in more detail with respect to time. The paper summarizes the results of height anomaly calculations done by the FFT, STFT, and FWT methods and presents 3-D models of the calculation results. Key words: Fast Fourier Transform (FFT), Short-Time Fourier Transform (STFT), Fast Wavelet Transform (FWT), Heisenberg's uncertainty principle.
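
    To make the time-frequency tradeoff above concrete, here is a small hedged Python/NumPy/SciPy sketch (the signal and sampling rate are invented for illustration): a plain FFT reveals both tones but not when they occur, while an STFT localizes each tone within its window.

        import numpy as np
        from scipy.signal import stft

        fs = 1000.0                                   # sampling rate, Hz
        t = np.arange(0.0, 2.0, 1.0 / fs)
        # 50 Hz tone for the first second, 120 Hz tone for the second
        x = np.where(t < 1.0, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 120 * t))

        spectrum = np.abs(np.fft.rfft(x))             # global: two peaks, no timing
        f, seg_t, Zxx = stft(x, fs=fs, nperseg=256)   # windowed: peaks localized in time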

  19. SatDNA Analyzer: a computing tool for satellite-DNA evolutionary analysis.

    PubMed

    Navajas-Pérez, Rafael; Rubio-Escudero, Cristina; Aznarte, José Luis; Rejón, Manuel Ruiz; Garrido-Ramos, Manuel A

    2007-03-15

    satDNA Analyzer is a program, implemented in C++, for analyzing the patterns of variation at each nucleotide position, considered independently, among all units of a given satellite-DNA family when comparing the family between a pair of species. The program classifies each site as monomorphic or polymorphic, discriminates shared from non-shared polymorphisms, and assigns each non-shared polymorphism, according to the model proposed by Strachan et al., to one of six stages of transition during the spread of a variant repeat unit toward fixation. Furthermore, the program implements several other utilities for satellite-DNA evolutionary analysis, such as derivation of average consensus sequences, average base-pair contents, the distribution of variant sites, the transition-to-transversion ratio, and different estimates of intra-specific and inter-specific variation. A priori hypotheses on factors influencing the molecular drive process and on the rates and biases of concerted evolution can be tested with this program. Additionally, satDNA Analyzer generates an output file containing a sequence alignment without shared polymorphisms for further evolutionary analysis with different phylogenetic software packages. satDNA Analyzer is freely available at http://satdna.sourceforge.net/. SatDNA Analyzer has been designed to operate on Windows, Linux, and Mac OS X.
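
    As a rough illustration of the per-site classification described above (not the actual C++ implementation, and omitting the six Strachan stages), a column-wise pass over aligned repeat units from two species might look like this in Python:

        def classify_sites(units_a, units_b):
            """units_a, units_b: equal-length repeat-unit strings, one list per species."""
            calls = []
            for i in range(len(units_a[0])):
                col_a = {u[i] for u in units_a}       # residues seen in species A
                col_b = {u[i] for u in units_b}       # residues seen in species B
                if len(col_a) == 1 and len(col_b) == 1:
                    calls.append("monomorphic" if col_a == col_b else "fixed difference")
                elif len(col_a) > 1 and len(col_b) > 1:
                    calls.append("shared polymorphism")
                else:
                    calls.append("non-shared polymorphism")
            return calls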

  20. A new method for the asteroseismic determination of the evolutionary state of red-giant stars

    NASA Astrophysics Data System (ADS)

    Elsworth, Yvonne; Hekker, Saskia; Basu, Sarbani; Davies, Guy R.

    2017-04-01

    Determining the ages of red-giant stars is a key problem in stellar astrophysics. One of the difficulties in this determination is knowing the evolutionary state of the individual stars - i.e., have they started to burn helium in their cores? That is the topic of this paper. Asteroseismic data provide a route to this information. What we present here is a highly autonomous way of determining the evolutionary state from an analysis of the power spectrum of the light curve. The method is fast and efficient and can provide results for a large number of stars. It uses the structure of the dipole-mode oscillations, which have a mixed character in red-giant stars, to determine some measures that are used in the categorization. It does not require that all the individual components of any given mode be separately characterized. Some 6604 red-giant stars have been classified. Of these, 3566 are determined to be on the red-giant branch, 2077 are red-clump and 439 are secondary-clump stars. We do not specifically identify the low-metallicity, horizontal-branch stars. The difference between red-clump and secondary-clump stars depends on the manner in which helium burning is first initiated. We discuss how the placement of the boundary between these classifications may lead to mis-categorization of a small number of stars. The remaining 522 stars were not classified, either because they lacked sufficient power in the dipole modes (so-called depressed dipole modes) or because of conflicting values in the parameters.

  1. Computer-Aided Drug Design Methods.

    PubMed

    Yu, Wenbo; MacKerell, Alexander D

    2017-01-01

    Computational approaches are useful tools to interpret and guide experiments to expedite the antibiotic drug design process. Structure-based drug design (SBDD) and ligand-based drug design (LBDD) are the two general types of computer-aided drug design (CADD) approaches in existence. SBDD methods analyze macromolecular target 3-dimensional structural information, typically of proteins or RNA, to identify key sites and interactions that are important for their respective biological functions. Such information can then be utilized to design antibiotic drugs that can compete with essential interactions involving the target and thus interrupt the biological pathways essential for survival of the microorganism(s). LBDD methods focus on known antibiotic ligands for a target to establish a relationship between their physiochemical properties and antibiotic activities, referred to as a structure-activity relationship (SAR), information that can be used for optimization of known drugs or to guide the design of new drugs with improved activity. In this chapter, standard CADD protocols for both SBDD and LBDD are presented, with a special focus on methodologies and targets routinely studied in our laboratory for antibiotic drug discovery.

  2. Computational Fluid Dynamics-Based Design Optimization Method for Archimedes Screw Blood Pumps.

    PubMed

    Yu, Hai; Janiga, Gábor; Thévenin, Dominique

    2016-04-01

    An optimization method suitable for improving the performance of Archimedes screw axial rotary blood pumps is described in the present article. In order to achieve a more robust design and to save computational resources, this method combines the advantages of established pump design theory with modern computational fluid dynamics (CFD)-based design optimization (CFD-O) relying on evolutionary algorithms. The main purposes of this project are to: (i) integrate pump design theory within the already existing CFD-based optimization; and (ii) demonstrate that the resulting procedure is suitable for optimizing an Archimedes screw blood pump in terms of efficiency. Results obtained in this study demonstrate that the developed tool is able to meet both objectives. Finally, the resulting level of hemolysis can be numerically assessed for the optimal design, as hemolysis is an issue of overwhelming importance for blood pumps.

  3. Monte Carlo methods on advanced computer architectures

    SciTech Connect

    Martin, W.R.

    1991-12-31

    Monte Carlo methods describe a wide class of computational methods that utilize random numbers to perform a statistical simulation of a physical problem, which itself need not be a stochastic process. For example, Monte Carlo can be used to evaluate definite integrals, which are not stochastic processes, or may be used to simulate the transport of electrons in a space vehicle, which is a stochastic process. The name Monte Carlo came about during the Manhattan Project to describe the new mathematical methods being developed, which had some similarity to the games of chance played in the casinos of Monte Carlo. Particle transport Monte Carlo is just one application of Monte Carlo methods, and will be the subject of this review paper. Other applications of Monte Carlo, such as reliability studies, classical queueing theory, molecular structure, the study of phase transitions, or quantum chromodynamics calculations for basic research in particle physics, are not included in this review. The reference by Kalos is an introduction to general Monte Carlo methods, and references to other applications of Monte Carlo can be found in this excellent book. For the remainder of this paper, the term Monte Carlo will be synonymous with particle transport Monte Carlo, unless otherwise noted. 60 refs., 14 figs., 4 tabs.
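
    As a minimal illustration of the definite-integral use mentioned above (a generic sketch unrelated to particle transport), the following Python snippet estimates the integral of sin(πx) over [0, 1], whose true value is 2/π, by averaging the integrand at uniform random points; the statistical error shrinks as O(1/√n).

        import math, random

        def mc_integral(f, a, b, n=100000):
            """Crude Monte Carlo estimate of the integral of f over [a, b]."""
            total = sum(f(a + (b - a) * random.random()) for _ in range(n))
            return (b - a) * total / n

        estimate = mc_integral(lambda x: math.sin(math.pi * x), 0.0, 1.0)
        # estimate is close to 2/pi (about 0.6366) for large n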

  4. A new spectral method to compute FCN

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Huang, C. L.

    2014-12-01

    Free core nutation (FCN) is a rotational mode of an Earth with a fluid core. All traditional theoretical methods produce an FCN period near 460 days with PREM, while precise observations (VLBI + SG tides) indicate it should be near 430 days. To close this large gap, astronomers and geophysicists have proposed various assumptions, e.g., increasing the core-mantle-boundary (CMB) flattening by about 5%, a strong coupling between nutation and the geomagnetic field near the CMB, viscous coupling, or topographic coupling. Do we really need these unproven assumptions, or does the problem lie with the traditional theoretical methods themselves? Earth models (e.g., PREM) provide accurate and robust profiles of physical parameters, such as density and the Lamé parameters, but their radial derivatives, which are also used in all traditional methods to calculate normal modes (e.g., FCN), nutation, and tides of the non-rigid Earth, are not as trustworthy as the parameters themselves. A new multiple-layer spectral method is proposed and applied to the computation of normal modes to avoid these problems. This new method can handle not only a first-order ellipsoidal model but also irregular, asymmetric 3D Earth models. Our preliminary result for the FCN period is 435 sidereal days.

  5. An Exploratory Framework for Combining CFD Analysis and Evolutionary Optimization into a Single Integrated Computational Environment

    SciTech Connect

    McCorkle, Douglas S.; Bryden, Kenneth M.

    2011-01-01

    Several recent reports and workshops have identified integrated computational engineering as an emerging technology with the potential to transform engineering design. The goal is to integrate geometric models, analyses, simulations, optimization and decision-making tools, and all other aspects of the engineering process into a shared, interactive computer-generated environment that facilitates multidisciplinary and collaborative engineering. While integrated computational engineering environments can be constructed from scratch with high-level programming languages, the complexity of these proposed environments makes this type of approach prohibitively slow and expensive. Rather, a high-level software framework is needed to provide the user with the capability to construct an application in an intuitive manner using existing models and engineering tools with minimal programming. In this paper, we present an exploratory open source software framework that can be used to integrate the geometric models, computational fluid dynamics (CFD), and optimization tools needed for shape optimization of complex systems. This framework is demonstrated using the multiphase flow analysis of a complete coal transport system for an 800 MW pulverized coal power station. The framework uses engineering objects and three-dimensional visualization to enable the user to interactively design and optimize the performance of the coal transport system.

  6. Computational methods for optical molecular imaging

    PubMed Central

    Chen, Duan; Wei, Guo-Wei; Cong, Wen-Xiang; Wang, Ge

    2010-01-01

    A new computational technique, the matched interface and boundary (MIB) method, is presented to model photon propagation in biological tissue for optical molecular imaging. Optical properties differ significantly between the organs of small animals, resulting in discontinuous coefficients in the diffusion equation model; the complex organ shapes of small animals induce geometric singularities as well. The MIB method is designed as a dimension-splitting approach that decomposes a multidimensional interface problem into one-dimensional ones. The methodology simplifies the topological relation near an interface and is able to handle discontinuous coefficients and complex interfaces with geometric singularities. In the present MIB method, both the interface jump condition and the photon flux jump conditions are rigorously enforced at the interface location by using only the lowest-order jump conditions. The solution near the interface is smoothly extended across the interface so that central finite difference schemes can be employed without loss of accuracy. A wide range of numerical experiments are carried out to validate the proposed MIB method. Second-order convergence is maintained in all benchmark problems, and fourth-order convergence is also demonstrated for some three-dimensional problems. The robustness of the proposed method with respect to the strength of the linear term of the diffusion equation is also examined. The performance of the present approach is compared with that of the standard finite element method. The numerical study indicates that the proposed method is a potentially efficient and robust approach for optical molecular imaging. PMID:20485461

  7. A computational method for sharp interface advection

    PubMed Central

    Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619

  8. Computational electromagnetic methods for transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Gomez, Luis J.

    Transcranial magnetic stimulation (TMS) is a noninvasive technique used both as a research tool for cognitive neuroscience and as an FDA-approved treatment for depression. During TMS, coils positioned near the scalp generate electric fields and activate targeted brain regions. In this thesis, several computational electromagnetics methods that improve the analysis, design, and uncertainty quantification of TMS systems were developed. Analysis: A new fast direct technique for solving the large and sparse linear systems of equations (LSEs) arising from the finite difference (FD) discretization of Maxwell's quasi-static equations was developed. Following a factorization step, the solver permits computation of TMS fields inside realistic brain models in seconds, allowing for patient-specific real-time usage during TMS. The solver is an alternative to iterative methods for solving FD LSEs, which often require run-times of minutes. A new integral equation (IE) method for analyzing TMS fields was also developed. The human head is highly heterogeneous and characterized by high relative permittivities (~10^7). IE techniques for analyzing electromagnetic interactions with such media suffer from high-contrast and low-frequency breakdowns. A novel high-permittivity and low-frequency stable, internally combined volume-surface IE method was developed. The method not only applies to the analysis of high-permittivity objects, but is also the first IE tool that is stable when analyzing highly inhomogeneous negative-permittivity plasmas. Design: TMS applications call for electric fields to be sharply focused on regions that lie deep inside the brain. Unfortunately, fields generated by present-day Figure-8 coils stimulate relatively large regions near the brain surface. An optimization method for designing single-feed TMS coil-arrays capable of producing more localized and deeper stimulation was developed. Results show that the coil-arrays stimulate 2.4 cm into the head while stimulating 3

  9. The emergence of mind and brain: an evolutionary, computational, and philosophical approach.

    PubMed

    Mainzer, Klaus

    2008-01-01

    Modern philosophy of mind cannot be understood without recent developments in computer science, artificial intelligence (AI), robotics, neuroscience, biology, linguistics, and psychology. Classical philosophy of formal languages, as well as symbolic AI, assumes that all kinds of knowledge must be explicitly represented by formal or programming languages. This assumption is limited by recent insights into the biology of evolution and the developmental psychology of the human organism. Most of our knowledge is implicit and unconscious. It is not formally represented, but embodied knowledge, which is learnt by doing and understood by bodily interacting with changing environments. That is true not only for low-level skills, but even for high-level domains of categorization, language, and abstract thinking. The embodied mind is considered an emergent capacity of the brain as a self-organizing complex system. Actually, self-organization has been a successful strategy of evolution to handle the increasing complexity of the world. Genetic programs are not sufficient and cannot prepare the organism for all kinds of complex situations in the future. Self-organization and emergence are fundamental concepts in the theory of complex dynamical systems. They are also applied in organic computing as a recent research field of computer science. Therefore, cognitive science, AI, and robotics try to model the embodied mind in an artificial evolution. The paper analyzes these approaches in the interdisciplinary framework of complex dynamical systems and discusses their philosophical impact.

  10. Computational predictive methods for fracture and fatigue

    NASA Technical Reports Server (NTRS)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-01-01

    The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specification MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000-hour design service life, and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage-tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage-tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.

  11. Computational predictive methods for fracture and fatigue

    NASA Astrophysics Data System (ADS)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-09-01

    The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specification MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000-hour design service life, and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage-tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage-tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.

  12. Computational methods applied to wind tunnel optimization

    NASA Astrophysics Data System (ADS)

    Lindsay, David

    This report describes computational methods developed for optimizing the nozzle of a three-dimensional subsonic wind tunnel. This requires determination of a shape that delivers flow to the test section, typically with a speed increase by a factor of 7 or more and a velocity uniformity of 0.25% or better, in a compact length and without introducing boundary layer separation. The need for high precision, smooth solutions, and three-dimensional modeling required the development of special computational techniques. These include: (1) alternative formulations to Neumann and Dirichlet boundary conditions, to deal with overspecified, ill-posed, or cyclic problems, and to reduce the discrepancy between numerical solutions and boundary conditions; (2) modification of the Finite Element Method to obtain solutions with numerically exact conservation properties; (3) a Matlab implementation of general-degree Finite Element solvers for various element designs in two and three dimensions, exploiting vector indexing to obtain optimal efficiency; (4) derivation of optimal quadrature formulas for integration over simplexes in two and three dimensions, and development of a program for semi-automated generation of formulas for any degree and dimension; (5) a modification of a two-dimensional boundary layer formulation to provide accurate flow conservation in three dimensions, and modification of the algorithm to improve stability; (6) development of multi-dimensional spline functions to achieve smoother solutions in three dimensions by post-processing, new three-dimensional elements for C1 basis functions, and a program to assist in the design of elements with higher continuity; and (7) a development of ellipsoidal harmonics and Lamé's equation, with generalization to any dimension and a demonstration that Cartesian, cylindrical, spherical, spheroidal, and sphero-conical harmonics are all limiting cases. The report includes a description of the Finite Difference, Finite Volume, and domain remapping

  13. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1991-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

  14. Modules and methods for all photonic computing

    DOEpatents

    Schultz, David R.; Ma, Chao Hung

    2001-01-01

    A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two-dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating light through the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.

  15. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1992-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

  16. An efficient numerical method for orbit computations

    NASA Astrophysics Data System (ADS)

    Palacios, M.; Abad, A.; Elipe, A.

    1992-08-01

    A nonstandard formulation of perturbed Keplerian motion is set forth, based on the analysis by Deprit (1975) and incorporating quaternions to integrate the equations of motion. The properties of quaternions are discussed and applied to the portion of the equations of motion describing the rotations between the space frame and the departure frame. Angular momentum is assumed to be constant, and a redundant set of variables is introduced to test the equations of motion for different step sizes. The method is analyzed for the cases of artificial satellites in Keplerian circular orbits, Keplerian elliptical orbits, and zonal harmonics. The present formulation is shown to represent the dynamical behavior adequately while avoiding the difficulties associated with small inclinations. The rotations described by quaternions require fewer arithmetic operations and therefore save computation time, and the accuracy of the solutions is improved by at least two significant digits.
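
    For readers unfamiliar with the quaternion machinery alluded to above, here is a hedged, minimal Python sketch (ours, not the paper's formulation) of rotating a vector by a unit quaternion q via the Hamilton product q v q*. Composing two quaternions takes 16 multiplications versus 27 for composing two 3x3 rotation matrices, which is consistent with the arithmetic savings the abstract reports.

        import math

        def q_mul(p, q):
            """Hamilton product of quaternions p and q, as (w, x, y, z) tuples."""
            pw, px, py, pz = p
            qw, qx, qy, qz = q
            return (pw*qw - px*qx - py*qy - pz*qz,
                    pw*qx + px*qw + py*qz - pz*qy,
                    pw*qy - px*qz + py*qw + pz*qx,
                    pw*qz + px*qy - py*qx + pz*qw)

        def rotate(v, q):
            """Rotate 3-vector v by unit quaternion q (computes q v q*)."""
            qc = (q[0], -q[1], -q[2], -q[3])       # conjugate of unit q
            w = q_mul(q_mul(q, (0.0,) + tuple(v)), qc)
            return w[1:]

        # Example: rotate (1, 0, 0) by 90 degrees about the z axis
        q = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
        v = rotate((1.0, 0.0, 0.0), q)             # approximately (0, 1, 0)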

  17. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Title 47 (Telecommunication), Part 80, STATIONS IN THE MARITIME SERVICES; Standards for Computing Public Coast Station VHF Coverage; § 80.771 Method of computing coverage. Compute the +17 dBu contour as follows: (a) Determine the effective...

  18. Computer optimization techniques for NASA Langley's CSI evolutionary model's real-time control system

    NASA Technical Reports Server (NTRS)

    Elliott, Kenny B.; Ugoletti, Roberto; Sulla, Jeff

    1992-01-01

    The evolution and optimization of a real-time digital control system is presented. The control system is part of a testbed used to perform focused technology research on the interactions of spacecraft platform and instrument controllers with the flexible-body dynamics of the platform and platform appendages. The control system consists of Computer Automated Measurement and Control (CAMAC) standard data acquisition equipment interfaced to a workstation computer. The goal of this work is to optimize the control system's performance to support controls research using controllers with up to 50 states and frame rates above 200 Hz. The original system could support a 16-state controller operating at a rate of 150 Hz. By using simple yet effective software improvements, Input/Output (I/O) latencies and contention problems are reduced or eliminated in the control system. The final configuration can support a 16-state controller operating at 475 Hz. Effectively, the control system's performance was increased by a factor of three.

  19. Computational Evaluation of the Traceback Method

    ERIC Educational Resources Information Center

    Kol, Sheli; Nir, Bracha; Wintner, Shuly

    2014-01-01

    Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…

  1. Situational Transitions and Military Nurses: A Concept Analysis Using the Evolutionary Method.

    PubMed

    Chargualaf, Katie A

    2016-04-01

    Situational transitions in nursing remain a significant issue for both new graduates and experienced nurses. Although frequently discussed in current nursing literature, nursing research has focused exclusively on the transition experience of civilian (nonmilitary) nurses. With differing role and practice expectations, altered practice environments, and the risk of deployment, the outcome of negative transition experiences for military nurses is significant. The purpose of this analysis is to clarify the concept of transition, in a situational context, as it relates to military nurses by investigating its attributes, antecedents, and consequences. Rodgers' evolutionary method served as the framework for this study. The sample included 41 studies, published in English, between 2000 and 2013. Data were retrieved from the Cumulative Index to Nursing and Allied Health Literature (CINAHL), Medline, ProQuest, Ovid, and PsycINFO databases. Antecedents of situational transitions include any change in work roles or work environments. Attributes of situational transitions include journey, disequilibrium, finding balance, conditional, and pervasive. Consequences of transition range from successful to unsuccessful. Additional research investigating the specific needs and challenges unique to nurses practicing in a military environment is needed. © 2015 Wiley Periodicals, Inc.

  2. Critical thinking: concept analysis from the perspective of Rodger's evolutionary method of concept analysis

    PubMed Central

    Carbogim, Fábio da Costa; de Oliveira, Larissa Bertacchini; Püschel, Vilanice Alves de Araújo

    2016-01-01

    ABSTRACT Objective: to analyze the concept of critical thinking (CT) from Rodger's evolutionary perspective. Method: documentary research undertaken in the Cinahl, Lilacs, Bdenf, and Dedalus databases, using the keywords 'critical thinking' and 'Nursing', without limitation by year of publication. The data were analyzed in accordance with the stages of Rodger's conceptual model. Included were books and articles in full, published in Portuguese, English, or Spanish, which addressed CT in the teaching and practice of Nursing; articles which did not address aspects related to the concept of CT were excluded. Results: the sample was made up of 42 works. As a substitute term, emphasis is placed on 'analytical thinking', and, as a related factor, decision-making. In order of frequency, the most common antecedent and consequent attributes were: ability to analyze, training of the student nurse, and clinical decision-making. As the implications of CT, emphasis is placed on achieving effective results in care for the patient, family, and community. Conclusion: CT is a cognitive skill which involves analysis, logical reasoning, and clinical judgment, geared towards the resolution of problems, and standing out in the training and practice of the nurse with a view to accurate clinical decision-making and the achievement of effective results. PMID:27598376

  3. Amino acid sequence and structural comparison of BACE1 and BACE2 using evolutionary trace method.

    PubMed

    Mirsafian, Hoda; Mat Ripen, Adiratna; Merican, Amir Feisal; Bin Mohamad, Saharuddin

    2014-01-01

    Beta-amyloid precursor protein cleavage enzyme 1 (BACE1) and beta-amyloid precursor protein cleavage enzyme 2 (BACE2), members of the aspartyl protease family, are close homologues and have high similarity in their protein crystal structures. However, their enzymatic properties differ, leading to disparate clinical consequences. In order to identify the residues that are responsible for such differences, we used the evolutionary trace (ET) method to compare the amino acid conservation patterns of BACE1 and BACE2 across several mammalian species. We found that, in the BACE1 and BACE2 structures, most of the ligand binding sites are conserved, reflecting the enzymatic character these proteins share as members of the aspartyl protease family. The other conserved residues are more or less randomly localized in other parts of the structures. Four group-specific residues were identified at the ligand binding sites of BACE1 and BACE2. We postulated that these residues would be essential for the selectivity of BACE1 and BACE2 biological functions and could be sites of interest for the design of selective inhibitors targeting either BACE1 or BACE2.

  4. A review on Monte Carlo simulation methods as they apply to mutation and selection as formulated in Wright-Fisher models of evolutionary genetics.

    PubMed

    Mode, Charles J; Gallop, Robert J

    2008-02-01

    A case is made for the use of Monte Carlo simulation methods when the incorporation of mutation and natural selection into Wright-Fisher gametic sampling models renders them intractable from the standpoint of classical mathematical analysis. The paper is organized around five themes. Among these themes was that of scientific openness and clear documentation of the mathematics underlying the software, so that the results of any Monte Carlo simulation experiment may be duplicated by any interested investigator in a programming language of his choice. A second theme was the disclosure of the random number generator used in the experiments, to provide critical insight as to whether the generated uniform random variables met the criterion of independence satisfactorily. A third theme was a review of recent literature in genetics on attempts to find signatures of evolutionary processes, such as natural selection, among the millions of segments of DNA in the human genome, which may help guide the search for new drugs to treat diseases. A fourth theme involved formalization of Wright-Fisher processes in a simple form that expedited the writing of software to run Monte Carlo simulation experiments. Also included in this theme was the reporting of several illustrative Monte Carlo simulation experiments for the cases of two and three alleles at an autosomal locus, in which attempts were made to apply the theory of Wright-Fisher models to gain some understanding of how evolutionary signatures may have developed in the human genome and those of other diploid species. A fifth theme centered on recommendations that more demographic factors, such as non-constant population size, be included in future attempts to develop computer models dealing with signatures of evolutionary processes in genomes of various species. A brief review of the literature on the incorporation of demographic factors into genetic evolutionary models was also included to expedite and
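
    In the spirit of the openness theme above, here is a minimal Python sketch of one Wright-Fisher generation with genic selection, two-way mutation, and gametic sampling; the parameterization is ours, not the authors' software.

        import random

        def wf_generation(p, N, s=0.0, mu=0.0, nu=0.0):
            """One generation: p is the frequency of allele A in a diploid
            population of size N; s is the selection coefficient favoring A;
            mu is the A-to-a mutation rate, nu the a-to-A rate."""
            w = p * (1 + s) / (p * (1 + s) + (1 - p))           # selection
            w = w * (1 - mu) + (1 - w) * nu                     # mutation
            k = sum(random.random() < w for _ in range(2 * N))  # sample 2N gametes
            return k / (2 * N)

        p = 0.5
        for _ in range(1000):                                   # 1000 generations
            p = wf_generation(p, N=500, s=0.01, mu=1e-5, nu=1e-5)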

  5. Computational Studies of Protein Hydration Methods

    NASA Astrophysics Data System (ADS)

    Morozenko, Aleksandr

    It is widely appreciated that water plays a vital role in protein function. Long-range proton transfer inside proteins is usually carried out by the Grotthuss mechanism and requires a chain of hydrogen bonds composed of internal water molecules and amino acid residues of the protein. In other cases, water molecules can facilitate enzyme catalysis by becoming a temporary proton donor/acceptor. Yet a reliable way of predicting water in the protein interior is still not available to the biophysics community. This thesis presents computational studies performed to gain insight into the problem of fast and accurate prediction of potential water sites inside the internal cavities of a protein. Specifically, we focus on the task of attaining correspondence between results obtained from computational experiments and the experimental data available from X-ray structures. An overview of existing methods of predicting water molecules in the interior of a protein, along with a discussion of the trustworthiness of these predictions, is a second major subject of this thesis. The differences between water molecules in various media, particularly gas, liquid, and the protein interior, and theoretical aspects of designing an adequate model of water for the protein environment are discussed in chapters 3 and 4. In chapter 5, we discuss recently developed methods of placing water molecules into the internal cavities of a protein. We propose a new methodology based on the principle of docking water molecules to a protein body, which achieves closer agreement with the experimental data reported in protein crystal structures than other techniques available in the world of biophysical software. The new methodology is tested on a set of high-resolution crystal structures of oligopeptide-binding protein (OppA) containing a large number of resolved internal water molecules and applied to bovine heart cytochrome c oxidase in the fully

  6. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted the PSO-Snake model, has already been successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm to calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results produced by an expert.
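
    As a hedged illustration of the PSO component (a generic textbook sketch, not the PSO-Snake code; the coefficients are common defaults), the velocity and position update with inertia, cognitive, and social terms looks like this:

        import random

        def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
            """Minimize f over a dim-dimensional box using particle swarm."""
            xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
            vs = [[0.0] * dim for _ in range(n)]
            pbest = [x[:] for x in xs]                 # personal bests
            gbest = min(pbest, key=f)                  # global best
            for _ in range(iters):
                for i in range(n):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        vs[i][d] = (w * vs[i][d]
                                    + c1 * r1 * (pbest[i][d] - xs[i][d])
                                    + c2 * r2 * (gbest[d] - xs[i][d]))
                        xs[i][d] += vs[i][d]
                    if f(xs[i]) < f(pbest[i]):
                        pbest[i] = xs[i][:]
                gbest = min(pbest, key=f)              # refreshed once per sweep
            return gbest

        best = pso(lambda v: sum(x * x for x in v), dim=3)   # toy sphere objective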

  7. Computational and design methods for advanced imaging

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.

    This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. The dissertation is divided into two parts: the first discusses a new active-illumination depth sensing modality, while the second discusses a passive-illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage of this method is that the illumination and imaging axes can be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full system raytraces of raw plenoptic images, Zernike compression techniques for raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.

  8. An evolutionary computing approach for parameter estimation investigation of a model for cholera.

    PubMed

    Akman, Olcay; Schaefer, Elsa

    2015-01-01

    We consider the problem of using time-series data to inform a corresponding deterministic model and introduce the concept of genetic algorithms (GA) as a tool for parameter estimation, providing instructions for an implementation of the method that does not require access to special toolboxes or software. We give as an example a model for cholera, a disease for which there is much mechanistic uncertainty in the literature. We use GA to find parameter sets using available time-series data from the introduction of cholera in Haiti and we discuss the value of comparing multiple parameter sets with similar performances in describing the data.
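
    A minimal sketch of the GA-based parameter estimation workflow described above, runnable without special toolboxes as the authors emphasize; the exponential toy model and the data here are invented placeholders, not the cholera model or the Haiti time series.

        import math
        import random

        def fit_ga(model, data, bounds, pop=40, gens=200, p_mut=0.2):
            """Evolve parameter vectors minimizing squared error to the data."""
            def err(theta):
                return sum((model(theta, t) - y) ** 2 for t, y in enumerate(data))
            P = [[random.uniform(a, b) for a, b in bounds] for _ in range(pop)]
            for _ in range(gens):
                P.sort(key=err)
                elite = P[:pop // 2]                   # keep the better half
                offspring = []
                # blend crossover plus Gaussian mutation scaled by parent spread;
                # bounds are not re-enforced afterwards (a simplification)
                while len(elite) + len(offspring) < pop:
                    a, b = random.sample(elite, 2)
                    offspring.append([random.gauss((x + y) / 2,
                                                   abs(x - y) * p_mut + 1e-9)
                                      for x, y in zip(a, b)])
                P = elite + offspring
            return min(P, key=err)

        # Toy usage: fit y = theta0 * exp(theta1 * t) to data resembling e^(0.4 t)
        data = [1.0, 1.49, 2.23, 3.32, 4.95]
        best = fit_ga(lambda th, t: th[0] * math.exp(th[1] * t), data,
                      bounds=[(0.1, 5.0), (0.0, 1.0)])   # best approaches (1.0, 0.4)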

  9. MAE-FMD: multi-agent evolutionary method for functional module detection in protein-protein interaction networks.

    PubMed

    Ji, Jun Zhong; Jiao, Lang; Yang, Cui Cui; Lv, Jia Wei; Zhang, Ai Dong

    2014-09-30

    Studies of functional modules in a Protein-Protein Interaction (PPI) network contribute greatly to the understanding of biological mechanisms. With the development of computing science, computational approaches have played an important role in detecting functional modules. We present a new approach using multi-agent evolution for the detection of functional modules in PPI networks. The proposed approach consists of two stages: solution construction for the agents in a population, and the evolutionary process of the computational agents in a lattice environment, where each agent corresponds to a candidate solution to the functional module detection problem in a PPI network. First, the approach utilizes a connection-based encoding scheme to model an agent, and employs a random-walk behavior that merges topological characteristics with functional information to construct a solution. Next, it applies several evolutionary operators, i.e., competition, crossover, and mutation, to realize information exchange among agents as well as solution evolution. Systematic experiments have been conducted on three benchmark testing sets of yeast networks. Experimental results show that the approach is more effective than several other existing algorithms. The algorithm achieves outstanding recall, F-measure, sensitivity, and accuracy while remaining competitive on other performance measures, so it can be applied to biological studies requiring high accuracy.

  10. Evolutionary Signatures amongst Disease Genes Permit Novel Methods for Gene Prioritization and Construction of Informative Gene-Based Networks

    PubMed Central

    Priedigkeit, Nolan; Wolfe, Nicholas; Clark, Nathan L.

    2015-01-01

    Genes involved in the same function tend to have similar evolutionary histories, in that their rates of evolution covary over time. This coevolutionary signature, termed Evolutionary Rate Covariation (ERC), is calculated using only gene sequences from a set of closely related species and has demonstrated potential as a computational tool for inferring functional relationships between genes. To further define applications of ERC, we first established that roughly 55% of genetic diseases possess an ERC signature between their contributing genes. At a false discovery rate of 5%, we report 40 such diseases including cancers, developmental disorders and mitochondrial diseases. Given these coevolutionary signatures between disease genes, we then assessed ERC's ability to prioritize known disease genes out of a list of unrelated candidates. We found that in the presence of an ERC signature, the true disease gene is effectively prioritized to the top 6% of candidates on average. We then apply this strategy to a melanoma-associated region on chromosome 1 and identify MCL1 as a potential causative gene. Furthermore, to gain global insight into disease mechanisms, we used ERC to predict molecular connections between 310 nominally distinct diseases. The resulting “disease map” network associates several diseases with related pathogenic mechanisms and unveils many novel relationships between clinically distinct diseases, such as between Hirschsprung's disease and melanoma. Taken together, these results demonstrate the utility of molecular evolution as a gene discovery platform and show that evolutionary signatures can be used to build informative gene-based networks. PMID:25679399
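
    In spirit, the ERC statistic is the correlation of branch-specific relative evolutionary rates between two genes over a shared species tree. A minimal sketch, assuming the per-branch relative rates have already been estimated for each gene (the real pipeline derives them from sequence alignments and normalizes against tree-wide averages):

    ```python
    import numpy as np

    def erc(rates_a, rates_b):
        """Pearson correlation of two genes' relative rates on the same branches."""
        a, b = np.asarray(rates_a, float), np.asarray(rates_b, float)
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float(np.mean(a * b))

    # Two genes whose rates rise and fall together across 8 branches score near +1.
    print(erc([1.2, 0.8, 2.0, 0.5, 1.1, 0.9, 1.8, 0.6],
              [1.3, 0.7, 2.2, 0.6, 1.0, 1.0, 1.7, 0.5]))
    ```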

  11. Evolutionary signatures amongst disease genes permit novel methods for gene prioritization and construction of informative gene-based networks.

    PubMed

    Priedigkeit, Nolan; Wolfe, Nicholas; Clark, Nathan L

    2015-02-01

    Genes involved in the same function tend to have similar evolutionary histories, in that their rates of evolution covary over time. This coevolutionary signature, termed Evolutionary Rate Covariation (ERC), is calculated using only gene sequences from a set of closely related species and has demonstrated potential as a computational tool for inferring functional relationships between genes. To further define applications of ERC, we first established that roughly 55% of genetic diseases possess an ERC signature between their contributing genes. At a false discovery rate of 5%, we report 40 such diseases including cancers, developmental disorders and mitochondrial diseases. Given these coevolutionary signatures between disease genes, we then assessed ERC's ability to prioritize known disease genes out of a list of unrelated candidates. We found that in the presence of an ERC signature, the true disease gene is effectively prioritized to the top 6% of candidates on average. We then apply this strategy to a melanoma-associated region on chromosome 1 and identify MCL1 as a potential causative gene. Furthermore, to gain global insight into disease mechanisms, we used ERC to predict molecular connections between 310 nominally distinct diseases. The resulting "disease map" network associates several diseases with related pathogenic mechanisms and unveils many novel relationships between clinically distinct diseases, such as between Hirschsprung's disease and melanoma. Taken together, these results demonstrate the utility of molecular evolution as a gene discovery platform and show that evolutionary signatures can be used to build informative gene-based networks.

  12. Computational structural mechanics methods research using an evolving framework

    NASA Technical Reports Server (NTRS)

    Knight, N. F., Jr.; Lotts, C. G.; Gillian, R. E.

    1990-01-01

    Advanced structural analysis and computational methods that exploit high-performance computers are being developed in a computational structural mechanics research activity sponsored by the NASA Langley Research Center. These new methods are developed in an evolving framework and applied to representative complex structural analysis problems from the aerospace industry. An overview of the methods development environment is presented, and methods research areas are described. Selected application studies are also summarized.

  13. Method of performing computational aeroelastic analyses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A. (Inventor)

    2011-01-01

    Computational aeroelastic analyses typically use a mathematical model for the structural modes of a flexible structure and a nonlinear aerodynamic model that can generate a plurality of unsteady aerodynamic responses based on the structural modes for conditions defining an aerodynamic condition of the flexible structure. In the present invention, a linear state-space model is generated using a single execution of the nonlinear aerodynamic model for all of the structural modes where a family of orthogonal functions is used as the inputs. Then, static and dynamic aeroelastic solutions are generated using computational interaction between the mathematical model and the linear state-space model for a plurality of periodic points in time.

  14. Computational Methods Applied to Rational Drug Design.

    PubMed

    Ramírez, David

    2016-01-01

    Due to the synergistic relationship between medicinal chemistry, bioinformatics and molecular simulation, the development of new, accurate computational tools for small-molecule drug design has been rising over recent years. The main result is the increased number of publications in which computational techniques such as molecular docking, de novo design and virtual screening have been used to estimate the binding mode, site and energy of novel small molecules. In this work I review some tools that enable the study of biological systems at the atomistic level, providing relevant information and thereby enhancing the process of rational drug design.

  15. Computational Methods Applied to Rational Drug Design

    PubMed Central

    Ramírez, David

    2016-01-01

    Due to the synergistic relationship between medicinal chemistry, bioinformatics and molecular simulation, the development of new, accurate computational tools for small-molecule drug design has been rising over recent years. The main result is the increased number of publications in which computational techniques such as molecular docking, de novo design and virtual screening have been used to estimate the binding mode, site and energy of novel small molecules. In this work I review some tools that enable the study of biological systems at the atomistic level, providing relevant information and thereby enhancing the process of rational drug design. PMID:27708723

  16. Parallel computation with the spectral element method

    SciTech Connect

    Ma, Hong

    1995-12-01

    Spectral element models for the shallow water equations and the Navier-Stokes equations have been successfully implemented on a data-parallel supercomputer, the Connection Machine model CM-5. The nonstaggered-grid formulations for both models are described; they are shown to be especially efficient in a data-parallel computing environment.

  17. A method of billing third generation computer users

    NASA Technical Reports Server (NTRS)

    Anderson, P. N.; Hyter, D. R.

    1973-01-01

    A method is presented for charging users for the processing of their applications on third-generation digital computer systems. For background purposes, problems and goals in billing on third-generation systems are discussed. Detailed formulas are derived based on expected utilization and computer component cost. These formulas are then applied to a specific computer system (UNIVAC 1108). The method, although possessing some weaknesses, is presented as a definite improvement over the use of second-generation billing methods.

  18. Evolutionary stability on graphs.

    PubMed

    Ohtsuki, Hisashi; Nowak, Martin A

    2008-04-21

    Evolutionary stability is a fundamental concept in evolutionary game theory. A strategy is called an evolutionarily stable strategy (ESS), if its monomorphic population rejects the invasion of any other mutant strategy. Recent studies have revealed that population structure can considerably affect evolutionary dynamics. Here we derive the conditions of evolutionary stability for games on graphs. We obtain analytical conditions for regular graphs of degree k > 2. Those theoretical predictions are compared with computer simulations for random regular graphs and for lattices. We study three different update rules: birth-death (BD), death-birth (DB), and imitation (IM) updating. Evolutionary stability on sparse graphs does not imply evolutionary stability in a well-mixed population, nor vice versa. We provide a geometrical interpretation of the ESS condition on graphs.
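
    The simulations mentioned in the abstract are easy to reproduce in miniature. Below is a hedged sketch of death-birth (DB) updating for a two-strategy game on a random regular graph; the payoff matrix and selection intensity are illustrative, not the paper's:

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(1)
    payoff = np.array([[3.0, 1.0],     # payoff[x][y]: strategy x against strategy y
                       [2.0, 2.0]])

    def db_step(G, strat, w=0.1):
        """One DB update: a random node dies; its neighbors compete for the
        vacant site with probability proportional to fitness 1 - w + w * payoff."""
        dead = rng.integers(len(strat))
        nbrs = list(G.neighbors(dead))
        fit = np.array([1 - w + w * np.mean([payoff[strat[n], strat[m]]
                                             for m in G.neighbors(n)])
                        for n in nbrs])
        strat[dead] = strat[nbrs[rng.choice(len(nbrs), p=fit / fit.sum())]]

    # Can a rare mutant (strategy 1) invade residents (strategy 0) on a 3-regular graph?
    G = nx.random_regular_graph(3, 200, seed=1)
    strat = np.zeros(200, dtype=int)
    strat[rng.choice(200, 10, replace=False)] = 1
    for _ in range(20000):
        db_step(G, strat)
    print("final mutant fraction:", strat.mean())
    ```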

  19. Evolutionary stability on graphs

    PubMed Central

    Ohtsuki, Hisashi; Nowak, Martin A.

    2008-01-01

    Evolutionary stability is a fundamental concept in evolutionary game theory. A strategy is called an evolutionarily stable strategy (ESS), if its monomorphic population rejects the invasion of any other mutant strategy. Recent studies have revealed that population structure can considerably affect evolutionary dynamics. Here we derive the conditions of evolutionary stability for games on graphs. We obtain analytical conditions for regular graphs of degree k > 2. Those theoretical predictions are compared with computer simulations for random regular graphs and for lattices. We study three different update rules: birth-death (BD), death-birth (DB), and imitation (IM) updating. Evolutionary stability on sparse graphs does not imply evolutionary stability in a well-mixed population, nor vice versa. We provide a geometrical interpretation of the ESS condition on graphs. PMID:18295801

  1. LS³: A Method for Improving Phylogenomic Inferences When Evolutionary Rates Are Heterogeneous among Taxa

    PubMed Central

    Rivera-Rivera, Carlos J.; Montoya-Burgos, Juan I.

    2016-01-01

    Phylogenetic inference artifacts can occur when sequence evolution deviates from assumptions made by the models used to analyze them. The combination of strong model assumption violations and highly heterogeneous lineage evolutionary rates can become problematic in phylogenetic inference, and lead to the well-described long-branch attraction (LBA) artifact. Here, we define an objective criterion for assessing lineage evolutionary rate heterogeneity among predefined lineages: the result of a likelihood ratio test between a model in which the lineages evolve at the same rate (homogeneous model) and a model in which different lineage rates are allowed (heterogeneous model). We implement this criterion in the algorithm Locus Specific Sequence Subsampling (LS³), aimed at reducing the effects of LBA in multi-gene datasets. For each gene, LS³ sequentially removes the fastest-evolving taxon of the ingroup and tests for lineage rate homogeneity until all lineages have uniform evolutionary rates. The sequences excluded from the homogeneously evolving taxon subset are flagged as potentially problematic. The software implementation lets the user remove the flagged sequences and generate a new concatenated alignment. We tested LS³ with simulations and two real datasets containing LBA artifacts: a nucleotide dataset regarding the position of Glires within mammals and an amino-acid dataset concerning the position of nematodes within bilaterians. The initially incorrect phylogenies were corrected in all cases upon removing the data flagged by LS³. PMID:26912812
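
    The per-locus loop is straightforward once the likelihood computations are delegated to a phylogenetics package. A hedged sketch, where lnL and rate_of are hypothetical hooks standing in for the model fits LS³ performs (the degrees of freedom of the test equal the number of extra rate parameters in the heterogeneous model):

    ```python
    from scipy.stats import chi2

    def ls3_filter(alignment, ingroup, lnL, rate_of, alpha=0.05, extra_df=2):
        """LS3-style subsampling for one locus (sketch).
        lnL(alignment, taxa, heterogeneous) -> log-likelihood   [hypothetical hook]
        rate_of(alignment, taxon)          -> rate estimate     [hypothetical hook]
        Returns (kept_taxa, flagged_taxa)."""
        kept, flagged = list(ingroup), []
        while len(kept) > 3:
            lr = 2 * (lnL(alignment, kept, heterogeneous=True)
                      - lnL(alignment, kept, heterogeneous=False))
            if chi2.sf(lr, df=extra_df) >= alpha:
                break                          # lineage rates are now homogeneous
            fastest = max(kept, key=lambda t: rate_of(alignment, t))
            kept.remove(fastest)               # drop the fastest-evolving ingroup taxon
            flagged.append(fastest)
        return kept, flagged
    ```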

  2. Ideal and computer mathematics applied to meshfree methods

    NASA Astrophysics Data System (ADS)

    Kansa, E.

    2016-10-01

    Early numerical methods for solving ordinary and partial differential equations relied upon human computers who used mechanical devices. The algorithms changed little over the evolution of electronic computers and achieved only low-order convergence rates. A meshfree scheme that converges exponentially for such problems was developed using the latest computational science toolkit.

  3. Soft computing methods for geoidal height transformation

    NASA Astrophysics Data System (ADS)

    Akyilmaz, O.; Özlüdemir, M. T.; Ayan, T.; Çelik, R. N.

    2009-07-01

    Soft computing techniques, such as fuzzy logic and artificial neural network (ANN) approaches, have enabled researchers to create precise models for use in many scientific and engineering applications. Applications in geodetic studies include the estimation of Earth rotation parameters and the determination of mean sea level changes. Another important field of geodesy in which these computing techniques can be applied is geoidal height transformation. We report here our use of a conventional polynomial model, the Adaptive Network-based Fuzzy (or, in some publications, Adaptive Neuro-Fuzzy) Inference System (ANFIS), an ANN, and a modified ANN approach to approximate geoid heights. These approximation models have been tested on a number of test points. The results obtained through the transformation processes from ellipsoidal heights into local levelling heights have also been compared.
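
    The conventional polynomial model used as the baseline is a small least-squares problem. A sketch, assuming local benchmark coordinates and measured geoid-ellipsoid separations (a second-order corrector surface; the paper's ANFIS and ANN models replace this fit with learned ones):

    ```python
    import numpy as np

    def design(x, y):
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    def fit_corrector_surface(x, y, dN):
        """Fit N(x, y) = a0 + a1 x + a2 y + a3 xy + a4 x^2 + a5 y^2 to benchmark data."""
        coeffs, *_ = np.linalg.lstsq(design(x, y), dN, rcond=None)
        return coeffs

    # dN = ellipsoidal height - levelled height at each benchmark
    x = np.array([0.0, 1.0, 0.0, 1.0, 0.5])
    y = np.array([0.0, 0.0, 1.0, 1.0, 0.5])
    dN = np.array([40.1, 40.3, 40.0, 40.2, 40.15])
    coeffs = fit_corrector_surface(x, y, dN)
    print(design(np.array([0.25]), np.array([0.75])) @ coeffs)  # interpolated geoid height
    ```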

  4. Training in Methods in Computational Neuroscience

    DTIC Science & Technology

    1992-08-29

    ...and synaptic transmission. Bruce McNaughton: long-term potentiation (LTP), experimental facts and mathematical and computer models. John Lisman... connected to the MBL ethernet. Faculty affiliations, 1993 course directors: David Kleinfeld, AT&T Bell Laboratories, Murray Hill, NJ; David W. Tank, AT&T Bell...; University of Rochester, Rochester, NY; David A. McCormick, Yale University School of Medicine, New Haven, CT; Bruce L. McNaughton, University of Arizona

  5. Soft computing methods in design of superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1995-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modeled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.

  6. Soft Computing Methods in Design of Superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1996-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.

  7. Computational Methods for Material Failure Processes

    DTIC Science & Technology

    1994-02-01

    Belytschko, "Advances in Computational Mechanics," Nuclear Engineering and Design, 134, pp. 1-22, 1992. T. Belytschko and N. D. Gilbertsen, "Implementtion...band along the normal direction. 50 N4 N3 N4 N? N3 N4 N3 .41 Fision NS ’, Mz 46 Fusion NI NZ NI NS NZ NI NZ iM MI MZ Mi NI NZ Ni N3 NZ NI NZ Fig. 2.1

  8. Statistical methods and computing for big data

    PubMed Central

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with a focus on the open-source R language and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay. PMID:27695593
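
    The online-updating idea is easiest to see for least squares, where chunk-wise accumulation of X'X and X'y reproduces the full-data estimate exactly. A minimal sketch (ordinary least squares only; the paper's extension covers variable-selection criteria as well):

    ```python
    import numpy as np

    class OnlineOLS:
        """Exact least squares over a data stream: only X'X and X'y are stored."""
        def __init__(self, p):
            self.xtx, self.xty = np.zeros((p, p)), np.zeros(p)

        def update(self, X_chunk, y_chunk):
            self.xtx += X_chunk.T @ X_chunk
            self.xty += X_chunk.T @ y_chunk

        def coef(self):
            return np.linalg.solve(self.xtx, self.xty)

    # Stream 100 chunks of 1,000 rows; the result matches full-data OLS.
    rng = np.random.default_rng(2)
    beta = np.array([1.0, -2.0, 0.5])
    ols = OnlineOLS(3)
    for _ in range(100):
        X = rng.normal(size=(1000, 3))
        ols.update(X, X @ beta + rng.normal(size=1000))
    print(ols.coef())
    ```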

  9. Computational Methods for Analyzing Health News Coverage

    ERIC Educational Resources Information Center

    McFarlane, Delano J.

    2011-01-01

    Researchers that investigate the media's coverage of health have historically relied on keyword searches to retrieve relevant health news coverage, and manual content analysis methods to categorize and score health news text. These methods are problematic. Manual content analysis methods are labor intensive, time consuming, and inherently…

  10. Computing methods in applied sciences and engineering. VII

    SciTech Connect

    Glowinski, R.; Lions, J.L.

    1986-01-01

    The design of computers with fast memories, capable of up to one billion floating-point operations per second, is important to ongoing attempts to solve problems in scientific computing. The role of numerical algorithm designers is important due to the architectures and programming necessary to utilize the full potential of these machines. Efficient use of such computers requires sophisticated programming tools, and in the case of parallel computers, algorithmic concepts have to be introduced. These new methods and concepts are presented.

  11. Computational Analyses of an Evolutionary Arms Race between Mammalian Immunity Mediated by Immunoglobulin A and Its Subversion by Bacterial Pathogens

    PubMed Central

    Pinheiro, Ana; Woof, Jenny M.; Abi-Rached, Laurent; Parham, Peter; Esteves, Pedro J.

    2013-01-01

    IgA is the predominant immunoglobulin isotype in mucosal tissues and external secretions, playing important roles both in defense against pathogens and in maintenance of commensal microbiota. Considering the complexity of its interactions with the surrounding environment, IgA is a likely target for diversifying or positive selection. To investigate this possibility, the action of natural selection on IgA was examined in depth with six different methods: CODEML from the PAML package and the SLAC, FEL, REL, MEME and FUBAR methods implemented in the Datamonkey webserver. In considering just primate IgA, these analyses show that diversifying selection targeted five positions of the Cα1 and Cα2 domains of IgA. Extending the analysis to include other mammals identified 18 positively selected sites: ten in Cα1, five in Cα2 and three in Cα3. All but one of these positions display variation in polarity and charge. Their structural locations suggest they indirectly influence the conformation of sites on IgA that are critical for interaction with host IgA receptors and also with proteins produced by mucosal pathogens that prevent their elimination by IgA-mediated effector mechanisms. Demonstrating the plasticity of IgA in the evolution of different groups of mammals, only two of the eighteen selected positions in all mammals are included in the five selected positions in primates. That IgA residues subject to positive selection impact sites targeted both by host receptors and subversive pathogen ligands highlights the evolutionary arms race playing out between mammals and pathogens, and further emphasizes the importance of IgA in protection against mucosal pathogens. PMID:24019941

  12. Integral Deferred Correction methods for scientific computing

    NASA Astrophysics Data System (ADS)

    Morton, Maureen Marilla

    Since high order numerical methods frequently can attain accurate solutions more efficiently than low order methods, we develop and analyze new high order numerical integrators for the time discretization of ordinary and partial differential equations. Our novel methods address some of the issues surrounding high order numerical time integration, such as the difficulty of constructing many popular methods and handling the effects of the disparate behaviors produced by different terms in the equations to be solved. We are motivated by the simplicity of how Deferred Correction (DC) methods achieve high order accuracy [72, 27]. DC methods are numerical time integrators that, rather than calculating tedious coefficients for order conditions, instead construct high order accurate solutions by iteratively improving a low order preliminary numerical solution. With each iteration, an error equation is solved, the error decreases, and the order of accuracy increases. Later, DC methods were adjusted to include an integral formulation of the residual, which stabilizes the method. These Spectral Deferred Correction (SDC) methods [25] motivated Integral Deferred Correction (IDC) methods. Typically, SDC methods are limited to increasing the order of accuracy by one with each iteration due to smoothness properties imposed by the grid spacing. However, under mild assumptions, explicit IDC methods allow any explicit rth order Runge-Kutta (RK) method to be used within each iteration, and an order of accuracy increase of r is then attained after each iteration [18]. We extend these results to the construction of implicit IDC methods that use implicit RK methods, and we prove analogous results for the order of convergence. One means of solving equations with disparate parts is semi-implicit integrators, which handle a "fast" part implicitly and a "slow" part explicitly. We incorporate additive RK (ARK) integrators into the iterations of IDC methods in order to construct new arbitrary order
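
    To make the "iteratively improve a low order solution" idea concrete, here is a hedged sketch of one explicit SDC step: a forward-Euler predictor across substeps, then correction sweeps driven by the integral of the interpolated residual (uniform substeps for simplicity; each sweep raises the formal order by one):

    ```python
    import numpy as np

    def integration_matrix(nodes):
        """S[m, j] = integral of the j-th Lagrange basis over [nodes[m], nodes[m+1]]."""
        M = len(nodes)
        S = np.zeros((M - 1, M))
        for j in range(M):
            others = np.delete(nodes, j)
            coeffs = np.poly(others) / np.prod(nodes[j] - others)  # Lagrange basis l_j
            antider = np.polyint(coeffs)
            for m in range(M - 1):
                S[m, j] = np.polyval(antider, nodes[m + 1]) - np.polyval(antider, nodes[m])
        return S

    def sdc_step(f, t0, y0, dt, n_nodes=4, sweeps=3):
        tau = t0 + np.linspace(0.0, dt, n_nodes)
        S = integration_matrix(tau)
        y = np.full(n_nodes, float(y0))
        for m in range(n_nodes - 1):                     # forward-Euler predictor
            y[m + 1] = y[m] + (tau[m + 1] - tau[m]) * f(tau[m], y[m])
        for _ in range(sweeps):                          # correction sweeps
            fold = np.array([f(tm, ym) for tm, ym in zip(tau, y)])
            for m in range(n_nodes - 1):
                h = tau[m + 1] - tau[m]
                y[m + 1] = y[m] + h * (f(tau[m], y[m]) - fold[m]) + S[m] @ fold
        return y[-1]

    # y' = -y, y(0) = 1: one SDC step of size 0.5 versus the exact exp(-0.5)
    print(sdc_step(lambda t, y: -y, 0.0, 1.0, 0.5), np.exp(-0.5))
    ```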

  13. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  14. Computational Methods for Jet Noise Simulation

    NASA Technical Reports Server (NTRS)

    Goodrich, John W. (Technical Monitor); Hagstrom, Thomas

    2003-01-01

    The purpose of our project is to develop, analyze, and test novel numerical technologies central to the long term goal of direct simulations of subsonic jet noise. Our current focus is on two issues: accurate, near-field domain truncations and high-order, single-step discretizations of the governing equations. The Direct Numerical Simulation (DNS) of jet noise poses a number of extreme challenges to computational technique. In particular, the problem involves multiple temporal and spatial scales as well as flow instabilities and is posed on an unbounded spatial domain. Moreover, the basic phenomenon of interest, the radiation of acoustic waves to the far field, involves only a minuscule fraction of the total energy. The best current simulations of jet noise are at low Reynolds number. It is likely that an increase of one to two orders of magnitude will be necessary to reach a regime where the separation between the energy-containing and dissipation scales is sufficient to make the radiated noise essentially independent of the Reynolds number. Such an increase in resolution cannot be obtained in the near future solely through increases in computing power. Therefore, new numerical methodologies of maximal efficiency and accuracy are required.

  15. An experimental unification of reservoir computing methods.

    PubMed

    Verstraeten, D; Schrauwen, B; D'Haene, M; Stroobandt, D

    2007-04-01

    Three different uses of a recurrent neural network (RNN) as a reservoir that is not trained but instead read out by a simple external classification layer have been described in the literature: Liquid State Machines (LSMs), Echo State Networks (ESNs) and the Backpropagation Decorrelation (BPDC) learning rule. Individual descriptions of these techniques exist, but an overview is still lacking. Here, we present a series of experimental results comparing all three implementations, and draw conclusions about the relation between a broad range of reservoir parameters and network dynamics, memory, node complexity and performance on a variety of benchmark tests with different characteristics. Next, we introduce a new measure for the reservoir dynamics based on Lyapunov exponents. Unlike previous measures in the literature, this measure depends on the dynamics of the reservoir in response to the inputs and, in the cases we tried, it indicates an optimal value for the global scaling of the weight matrix, irrespective of the standard measures. We also describe the Reservoir Computing Toolbox that was used for these experiments, which implements all these types of reservoir computing and allows easy simulation of a wide range of reservoir topologies for a number of benchmarks.
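
    The shared core of the three methods — a fixed random recurrent network read out by a trained linear layer — fits in a few lines. A minimal echo state network sketch (NumPy, not the Reservoir Computing Toolbox the paper describes; sizes and scalings are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def esn_fit(u, y, n_res=100, rho=0.9, ridge=1e-6):
        """Run a random reservoir over input series u; train only the linear readout."""
        w_in = rng.uniform(-0.5, 0.5, size=n_res)
        W = rng.normal(size=(n_res, n_res))
        W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius
        x, states = np.zeros(n_res), []
        for ut in u:                                      # reservoir is never trained
            x = np.tanh(W @ x + w_in * ut)
            states.append(x.copy())
        X = np.array(states)
        # ridge-regression readout: the only trained part
        w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
        return w_in, W, w_out

    # Toy task: one-step-ahead prediction of a sine wave.
    t = np.linspace(0, 20 * np.pi, 2000)
    w_in, W, w_out = esn_fit(np.sin(t[:-1]), np.sin(t[1:]))
    ```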

  16. Three parallel computation methods for structural vibration analysis

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf; Bostic, Susan; Patrick, Merrell; Mahajan, Umesh; Ma, Shing

    1988-01-01

    The Lanczos (1950), multisectioning, and subspace iteration sequential methods for vibration analysis, presently used as bases for three parallel algorithms, are shown on three example problems to maintain reasonable accuracy in the computation of vibration frequencies. Significant reductions in computation time are obtained as the number of processors increases. The performance of each method is analyzed in order to characterize relative strengths and weaknesses, as well as to identify those parameters that most strongly affect computation efficiency.
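
    The Lanczos method referenced here is what modern sparse eigensolvers expose directly. A sketch of extracting the lowest natural frequencies of a toy spring-mass chain from its stiffness and mass matrices with SciPy's Lanczos-based solver (the paper's structures are, of course, far larger finite-element models):

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigsh

    n = 500                                                          # toy spring-mass chain
    K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()   # stiffness matrix
    M = diags([1.0], [0], shape=(n, n)).tocsc()                      # (lumped) mass matrix

    # Shift-invert Lanczos for the 5 smallest eigenvalues of K v = w M v
    w, v = eigsh(K, k=5, M=M, sigma=0.0, which="LM")
    print("lowest natural frequencies:", np.sqrt(w))
    ```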

  17. Statistical analysis and definition of blockages-prediction formulae for the wastewater network of Oslo by evolutionary computing.

    PubMed

    Ugarelli, Rita; Kristensen, Stig Morten; Røstum, Jon; Saegrov, Sveinung; Di Federico, Vittorio

    2009-01-01

    Oslo Vann og Avløpsetaten (Oslo VAV), the water/wastewater utility in the Norwegian capital city of Oslo, is assessing future strategies for selecting the most reliable materials for wastewater networks, taking into account not only the technical performance of the materials but also their performance under the operational conditions of the system. The research project undertaken by the SINTEF Group, the largest research organisation in Scandinavia, NTNU (Norges Teknisk-Naturvitenskapelige Universitet) and Oslo VAV adopts several approaches to understand the reasons for failures that may impact flow capacity, by analysing historical data for blockages in Oslo. The aim of the study was to understand whether there is a relationship between the performance of the pipeline and a number of specific attributes such as age, material and diameter, to name a few. This paper presents the characteristics of the available data set and discusses the results obtained by two different approaches: a traditional statistical analysis, segregating the pipes into classes each with the same explanatory variables, and an Evolutionary Polynomial Regression (EPR) model, developed by the Technical University of Bari and the University of Exeter, to identify the possible influence of a pipe's attributes on the total number of predicted blockages in a period of time. Starting from a detailed analysis of the available data for blockage events, the most important variables are identified and a classification scheme is adopted. From the statistical analysis, it can be stated that age, size and function do seem to have a marked influence on the proneness of a pipeline to blockages but, for the reduced sample available, it is difficult to say which variable is more influential. If we look at the total number of blockages, the oldest class seems to be the most prone to blockages; looking instead at blockage rates (number of blockages per km per year), it is the youngest class that shows the highest blockage rate
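
    The per-class blockage rate the authors compare (blockages per km per year) is a simple normalization, but an easy one to get wrong. A sketch with hypothetical column names and made-up numbers, assuming a common 10-year observation window:

    ```python
    import pandas as pd

    pipes = pd.DataFrame({                       # hypothetical inventory extract
        "age_class":   ["<1940", "<1940", "1940-70", "1970-2000"],
        "length_m":    [800, 1200, 2500, 3100],
        "n_blockages": [6, 9, 11, 14],
    })

    g = pipes.groupby("age_class").sum(numeric_only=True)
    rates = g["n_blockages"] / (g["length_m"] / 1000) / 10   # per km per year
    print(rates.sort_values(ascending=False))
    ```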

  18. Discontinuous Galerkin Methods: Theory, Computation and Applications

    SciTech Connect

    Cockburn, B.; Karniadakis, G. E.; Shu, C-W

    2000-12-31

    This volume contains a survey article for Discontinuous Galerkin Methods (DGM) by the editors as well as 16 papers by invited speakers and 32 papers by contributed speakers of the First International Symposium on Discontinuous Galerkin Methods. It covers theory, applications, and implementation aspects of DGM.

  19. Domain identification in impedance computed tomography by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1990-01-01

    A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.

  20. Classical versus Computer Algebra Methods in Elementary Geometry

    ERIC Educational Resources Information Center

    Pech, Pavel

    2005-01-01

    Computer algebra methods based on results of commutative algebra, like Groebner bases of ideals and elimination of variables, make it possible to solve complex, elementary and non-elementary problems of geometry which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…

  1. Overview of computational structural methods for modern military aircraft

    NASA Technical Reports Server (NTRS)

    Kudva, J. N.

    1992-01-01

    Computational structural methods are essential for designing modern military aircraft. This briefing deals with computational structural methods (CSM) currently used. First a brief summary of modern day aircraft structural design procedures is presented. Following this, several ongoing CSM related projects at Northrop are discussed. Finally, shortcomings in this area, future requirements, and summary remarks are given.

  2. 12 CFR 227.25 - Unfair balance computation method.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Unfair balance computation method. 227.25... Practices Rule § 227.25 Unfair balance computation method. (a) General rule. Except as provided in paragraph (b) of this section, a bank must not impose finance charges on balances on a consumer credit card...

  3. Lattice gas methods for computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Sparrow, Victor W.

    1995-01-01

    This paper presents the lattice gas solution to the category 1 problems of the ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics. The first and second problems were solved for Delta t = Delta x = 1, and additionally the second problem was solved for Delta t = 1/4 and Delta x = 1/2. The results are striking: even for these large time and space grids the lattice gas numerical solutions are almost indistinguishable from the analytical solutions. A simple bug in the Mathematica code was found in the solutions submitted for comparison, and the comparison plots shown at the end of this volume show the bug. An Appendix to the present paper shows an example lattice gas solution with and without the bug.

  4. An Automatic Method for Predicting Transmembrane Protein Structures Using Cryo-EM and Evolutionary Data

    PubMed Central

    Fleishman, Sarel J.; Harrington, Susan; Friesner, Richard A.; Honig, Barry; Ben-Tal, Nir

    2004-01-01

    The transmembrane (TM) domains of many integral membrane proteins are composed of α-helix bundles. Structure determination at high resolution (<4 Å) of TM domains is still exceedingly difficult experimentally. Hence, some TM-protein structures have only been solved at intermediate (5–10 Å) or low (>10 Å) resolutions using, for example, cryo-electron microscopy (cryo-EM). These structures reveal the packing arrangement of the TM domain, but cannot be used to determine the positions of individual amino acids. The observation that typically, the lipid-exposed faces of TM proteins are evolutionarily more variable and less charged than their core provides a simple rule for orienting their constituent helices. Based on this rule, we developed score functions and automated methods for orienting TM helices, for which locations and tilt angles have been determined using, e.g., cryo-EM data. The method was parameterized with the aim of retrieving the native structure of bacteriorhodopsin among near- and far-from-native templates. It was then tested on proteins that differ from bacteriorhodopsin in their sequences, architectures, and functions, such as the acetylcholine receptor and rhodopsin. The predicted structures were within 1.5–3.5 Å from the native state in all cases. We conclude that the computational method can be used in conjunction with cryo-EM data to obtain approximate model structures of TM domains of proteins for which a sufficiently heterogeneous set of homologs is available. We also show that in those proteins in which relatively short loops connect neighboring helices, the scoring functions can discriminate between near- and far-from-native conformations even without the constraints imposed on helix locations and tilt angles that are derived from cryo-EM. PMID:15339802

  5. Architectural Room Planning Support System using Methods of Generating Spatial Layout Plans and Evolutionary Multi-objective Optimization

    NASA Astrophysics Data System (ADS)

    Inoue, Makoto; Takagi, Hideyuki

    Firstly, we propose a spatial planning algorithm, inspired by cellular automata and spatial growth rules, for spatial planning support, i.e. generating multiple subspaces and making their layouts. Its features are that there are fewer restrictions on the shapes, sizes, and positions of the generated subspaces, and that the gap sizes among the subspaces are controllable. We also show the framework of our final spatial planning support system, which consists of (1) a spatial layout generator, including the above algorithm and rules as main parts and a visualization part generating layout diagrams, and (2) an optimization part whose main components, evolutionary multi-objective optimization (EMO) and interactive evolutionary computation, optimize the generated spatial plans. Secondly, we build a concrete architectural room planning support system based on parts of this framework and confirm experimentally that the EMO makes the generated architectural room plans converge. We evaluate the performance of the system using two EMOs with four and six objectives, respectively. We also evaluate the effect of introducing a niche technique into the EMO to obtain a variety of architectural room plans. The experiments showed the convergence of each objective over generations and the variety of architectural room plans among individuals with higher scores. This evaluation implies that the combination of our proposed spatial planning algorithms and spatial growth rules is applicable to spatial planning support systems.

  6. Computational Methods for Complex Flow Fields.

    DTIC Science & Technology

    1986-06-28

    James J. Riley Joel H . Ferziger "Turbulent Flow Simulation - Future Needs" Micha Wolfshtein " Numerical Calculation of the Reynolds Stress and Turbulent...July 1983. Also in RECENT ADVANCES IN NUMERICAL METHODS IN FLUIDS, Vol. 3, Editor W.G. Habashi, Pineridge Press. 2. Usab, W.J., "Embedded Mesh Solutions...ridiaconal matrices applicable to approximane factorization methods . E:xlicit algcrit-s are also easier to adapz to multiProcessor arcr.itectures as the

  7. COMSAC: Computational Methods for Stability and Control. Part 1

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    Work on stability and control included the following reports: Introductory Remarks; Introduction to Computational Methods for Stability and Control (COMSAC); Stability & Control Challenges for COMSAC: A NASA Langley Perspective; Emerging CFD Capabilities and Outlook: A NASA Langley Perspective; The Role for Computational Fluid Dynamics for Stability and Control: Is it Time?; Northrop Grumman Perspective on COMSAC; Boeing Integrated Defense Systems Perspective on COMSAC; Computational Methods in Stability and Control: WPAFB Perspective; Perspective: Raytheon Aircraft Company; A Greybeard's View of the State of Aerodynamic Prediction; Computational Methods for Stability and Control: A Perspective; Boeing TacAir Stability and Control Issues for Computational Fluid Dynamics; NAVAIR S&C Issues for CFD; An S&C Perspective on CFD; Issues, Challenges & Payoffs: A Boeing User's Perspective on CFD for S&C; and Stability and Control in Computational Simulations for Conceptual and Preliminary Design: the Past, Today, and Future?

  8. An integrative method for testing form-function linkages and reconstructed evolutionary pathways of masticatory specialization.

    PubMed

    Tseng, Z Jack; Flynn, John J

    2015-06-06

    Morphology serves as a ubiquitous proxy in macroevolutionary studies to identify potential adaptive processes and patterns. Inferences of functional significance of phenotypes or their evolution are overwhelmingly based on data from living taxa. Yet, correspondence between form and function has been tested in only a few model species, and those linkages are highly complex. The lack of explicit methodologies to integrate form and function analyses within a deep-time and phylogenetic context weakens inferences of adaptive morphological evolution, by invoking but not testing form-function linkages. Here, we provide a novel approach to test mechanical properties at reconstructed ancestral nodes/taxa and the strength and direction of evolutionary pathways in feeding biomechanics, in a case study of carnivorous mammals. Using biomechanical profile comparisons that provide functional signals for the separation of feeding morphologies, we demonstrate, using experimental optimization criteria on estimation of strength and direction of functional changes on a phylogeny, that convergence in mechanical properties and degree of evolutionary optimization can be decoupled. This integrative approach is broadly applicable to other clades, by using quantitative data and model-based tests to evaluate interpretations of function from morphology and functional explanations for observed macroevolutionary pathways. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  9. An integrative method for testing form–function linkages and reconstructed evolutionary pathways of masticatory specialization

    PubMed Central

    Tseng, Z. Jack; Flynn, John J.

    2015-01-01

    Morphology serves as a ubiquitous proxy in macroevolutionary studies to identify potential adaptive processes and patterns. Inferences of functional significance of phenotypes or their evolution are overwhelmingly based on data from living taxa. Yet, correspondence between form and function has been tested in only a few model species, and those linkages are highly complex. The lack of explicit methodologies to integrate form and function analyses within a deep-time and phylogenetic context weakens inferences of adaptive morphological evolution, by invoking but not testing form–function linkages. Here, we provide a novel approach to test mechanical properties at reconstructed ancestral nodes/taxa and the strength and direction of evolutionary pathways in feeding biomechanics, in a case study of carnivorous mammals. Using biomechanical profile comparisons that provide functional signals for the separation of feeding morphologies, we demonstrate, using experimental optimization criteria on estimation of strength and direction of functional changes on a phylogeny, that convergence in mechanical properties and degree of evolutionary optimization can be decoupled. This integrative approach is broadly applicable to other clades, by using quantitative data and model-based tests to evaluate interpretations of function from morphology and functional explanations for observed macroevolutionary pathways. PMID:25994295

  10. Cancer Biomarkers from Genome-Scale DNA Methylation: Comparison of Evolutionary and Semantic Analysis Methods

    PubMed Central

    Valavanis, Ioannis; Pilalis, Eleftherios; Georgiadis, Panagiotis; Kyrtopoulos, Soterios; Chatziioannou, Aristotelis

    2015-01-01

    DNA methylation profiling exploits microarray technologies, thus yielding a wealth of high-volume data. Here, an intelligent framework is applied, encompassing epidemiological genome-scale DNA methylation data produced from the Illumina’s Infinium Human Methylation 450K Bead Chip platform, in an effort to correlate interesting methylation patterns with cancer predisposition and, in particular, breast cancer and B-cell lymphoma. Feature selection and classification are employed in order to select, from an initial set of ~480,000 methylation measurements at CpG sites, predictive cancer epigenetic biomarkers and assess their classification power for discriminating healthy versus cancer related classes. Feature selection exploits evolutionary algorithms or a graph-theoretic methodology which makes use of the semantics information included in the Gene Ontology (GO) tree. The selected features, corresponding to methylation of CpG sites, attained moderate-to-high classification accuracies when imported to a series of classifiers evaluated by resampling or blindfold validation. The semantics-driven selection revealed sets of CpG sites performing similarly with evolutionary selection in the classification tasks. However, gene enrichment and pathway analysis showed that it additionally provides more descriptive sets of GO terms and KEGG pathways regarding the cancer phenotypes studied here. Results support the expediency of this methodology regarding its application in epidemiological studies. PMID:27600245
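
    The evolutionary arm of the feature selection can be sketched as a bitmask GA over CpG columns, scored by cross-validated accuracy. A hedged toy version (scikit-learn scorer; sizes, rates, and the absence of any GO semantics are all simplifications of the paper's pipeline):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)

    def evolve_features(X, y, n_keep=20, pop=30, gens=40):
        """Evolve a boolean mask over columns of X (CpG sites) maximizing CV accuracy.
        The number of selected sites may drift away from n_keep across generations."""
        n = X.shape[1]
        masks = np.array([rng.permutation(n) < n_keep for _ in range(pop)])

        def score(m):
            if m.sum() == 0:
                return 0.0
            clf = LogisticRegression(max_iter=1000)
            return cross_val_score(clf, X[:, m], y, cv=3).mean()

        for _ in range(gens):
            s = np.array([score(m) for m in masks])
            elite = masks[np.argsort(s)[-(pop // 2):]]
            kids = []
            for _ in range(pop - len(elite)):
                a, b = elite[rng.integers(len(elite), size=2)]
                child = np.where(rng.random(n) < 0.5, a, b)   # uniform crossover
                child[rng.integers(n)] ^= True                # point mutation
                kids.append(child)
            masks = np.vstack([elite, kids])
        return masks[np.argmax([score(m) for m in masks])]
    ```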

  11. Assessment of gene order computing methods for Alzheimer's disease

    PubMed Central

    2013-01-01

    Background: Computational genomics of Alzheimer disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher-quality gene clustering patterns than most other clustering methods. However, there are few available gene order computing methods, such as Genetic Algorithm (GA) and Ant Colony Optimization (ACO). Further, their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods: Using different distance formulas (Pearson distance, Euclidean distance, and the squared Euclidean distance) and other conditions, gene orders were calculated by ACO and GA (including standard GA and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results: Compared to the GA methods tested in this study, ACO fits the AD microarray data the best when calculating gene order. In addition, the following features were revealed: different distance formulas generated gene orders of different quality, and the commonly used Pearson distance was not the best distance formula when used with both GA and ACO methods for AD microarray data. Conclusion: Compared with Pearson distance and Euclidean distance, the squared Euclidean distance generated the best-quality gene order computed by GA and ACO methods. PMID:23369541
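
    The three distance formulas under comparison are one-liners; the study's finding is that the choice matters. A quick reference implementation:

    ```python
    import numpy as np

    def pearson_distance(a, b):
        return 1.0 - np.corrcoef(a, b)[0, 1]

    def euclidean_distance(a, b):
        return float(np.linalg.norm(a - b))

    def squared_euclidean_distance(a, b):        # the best performer in this study
        return float(np.sum((a - b) ** 2))

    a = np.array([1.0, 2.0, 3.0, 4.0])
    b = np.array([1.5, 1.8, 3.2, 3.9])
    print(pearson_distance(a, b), euclidean_distance(a, b), squared_euclidean_distance(a, b))
    ```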

  12. Method for transferring data from an unsecured computer to a secured computer

    DOEpatents

    Nilsen, Curt A.

    1997-01-01

    A method is described for transferring data from an unsecured computer to a secured computer. The method includes transmitting the data and then receiving the data. Next, the data is retransmitted and rereceived. Then, it is determined if errors were introduced when the data was transmitted by the unsecured computer or received by the secured computer. Similarly, it is determined if errors were introduced when the data was retransmitted by the unsecured computer or rereceived by the secured computer. A warning signal is emitted from a warning device coupled to the secured computer if (i) an error was introduced when the data was transmitted or received, and (ii) an error was introduced when the data was retransmitted or rereceived.

  13. Three-dimensional protein structure prediction: Methods and computational strategies.

    PubMed

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

    A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as a solution to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first principle methods without database information; (b) first principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal in this work is to review the methods and computational strategies that are currently used in 3-D protein prediction.

  14. Evolutionary Phylogenetic Networks: Models and Issues

    NASA Astrophysics Data System (ADS)

    Nakhleh, Luay

    Phylogenetic networks are special graphs that generalize phylogenetic trees to allow for modeling of non-treelike evolutionary histories. The ability to sequence multiple genetic markers from a set of organisms, and the conflicting evolutionary signals that these markers provide in many cases, have propelled research and interest in phylogenetic networks to the forefront of computational phylogenetics. Nonetheless, the term 'phylogenetic network' has been generically used to refer to a class of models whose core shared property is tree generalization. Several excellent surveys of the different flavors of phylogenetic networks and methods for their reconstruction have been written recently. However, unlike these surveys, this chapter focuses specifically on one type of phylogenetic network, namely evolutionary phylogenetic networks, which explicitly model reticulate evolutionary events. Further, the chapter focuses less on surveying existing tools, and addresses in more detail issues that are central to the accurate reconstruction of phylogenetic networks.

  15. Transonic Flow Computations Using Nonlinear Potential Methods

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    This presentation describes the state of transonic flow simulation using nonlinear potential methods for external aerodynamic applications. The presentation begins with a review of the various potential equation forms (with emphasis on the full potential equation) and includes a discussion of pertinent mathematical characteristics and all derivation assumptions. The impact of the derivation assumptions on simulation accuracy, especially with respect to shock wave capture, is discussed. Key characteristics of all numerical algorithm types used for solving nonlinear potential equations, including steady, unsteady, space marching, and design methods, are described. Both spatial discretization and iteration scheme characteristics are examined. Numerical results for various aerodynamic applications are included throughout the presentation to highlight key discussion points. The presentation ends with concluding remarks and recommendations for future work. Overall, nonlinear potential solvers are efficient, highly developed and routinely used in the aerodynamic design environment for cruise conditions. Published by Elsevier Science Ltd. All rights reserved.

  16. Computational Methods for Probabilistic Target Tracking Problems

    DTIC Science & Technology

    2007-09-01

    Undergraduate students: Ms. Angela Edwards, Mr. Bryahn Ivery, Mr. Dustin Lupton, Mr. James Pender, Mr. Terrell Felder, Ms. Krystal Knight. Under... two more graduate students, Mr. Ricardo Bernal and Ms. Alisha Williams, and two more undergraduate students, Ms. Krystal Knight and Mr. Terrell Felder... Technical State University, April 24, 2006. "Using Tree Based Methods to Classify Messages", Terrell A. Felder, Math Awareness Mini-Conference

  17. Fast calculation method for spherical computer-generated holograms.

    PubMed

    Tachiki, Mark L; Sando, Yusuke; Itoh, Masahide; Yatagai, Toyohiko

    2006-05-20

    The synthesis of spherical computer-generated holograms is investigated. To deal with the staggering calculation times required to synthesize the hologram, a fast calculation method for approximating the hologram distribution is proposed. In this method, the diffraction integral is approximated as a convolution integral, allowing computation using the fast-Fourier-transform algorithm. The principles of the fast calculation method, the error in the approximation, and results from simulations are presented.
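
    The computational pattern — approximate the diffraction integral as a convolution so the FFT applies — is the same one used in planar angular-spectrum propagation, sketched below; the paper's contribution is the corresponding approximation for spherical holograms, which this snippet does not reproduce:

    ```python
    import numpy as np

    def angular_spectrum(u0, wavelength, z, dx):
        """Propagate field u0 a distance z: convolution done as a product in Fourier space."""
        n = u0.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        H = np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0)))
        H[arg < 0] = 0.0                        # drop evanescent components
        return np.fft.ifft2(np.fft.fft2(u0) * H)

    # 1 mm square aperture sampled at 10 um, propagated 5 cm at 633 nm
    u0 = np.zeros((512, 512), dtype=complex)
    u0[206:306, 206:306] = 1.0
    u1 = angular_spectrum(u0, 633e-9, 0.05, 10e-6)
    ```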

  18. The evolutionary relationships and age of Homo naledi: An assessment using dated Bayesian phylogenetic methods.

    PubMed

    Dembo, Mana; Radovčić, Davorka; Garvin, Heather M; Laird, Myra F; Schroeder, Lauren; Scott, Jill E; Brophy, Juliet; Ackermann, Rebecca R; Musiba, Charles M; de Ruiter, Darryl J; Mooers, Arne Ø; Collard, Mark

    2016-08-01

    Homo naledi is a recently discovered species of fossil hominin from South Africa. A considerable amount is already known about H. naledi but some important questions remain unanswered. Here we report a study that addressed two of them: "Where does H. naledi fit in the hominin evolutionary tree?" and "How old is it?" We used a large supermatrix of craniodental characters for both early and late hominin species and Bayesian phylogenetic techniques to carry out three analyses. First, we performed a dated Bayesian analysis to generate estimates of the evolutionary relationships of fossil hominins including H. naledi. Then we employed Bayes factor tests to compare the strength of support for hypotheses about the relationships of H. naledi suggested by the best-estimate trees. Lastly, we carried out a resampling analysis to assess the accuracy of the age estimate for H. naledi yielded by the dated Bayesian analysis. The analyses strongly supported the hypothesis that H. naledi forms a clade with the other Homo species and Australopithecus sediba. The analyses were more ambiguous regarding the position of H. naledi within the (Homo, Au. sediba) clade. A number of hypotheses were rejected, but several others were not. Based on the available craniodental data, Homo antecessor, Asian Homo erectus, Homo habilis, Homo floresiensis, Homo sapiens, and Au. sediba could all be the sister taxon of H. naledi. According to the dated Bayesian analysis, the most likely age for H. naledi is 912 ka. This age estimate was supported by the resampling analysis. Our findings have a number of implications. Most notably, they support the assignment of the new specimens to Homo, cast doubt on the claim that H. naledi is simply a variant of H. erectus, and suggest H. naledi is younger than has been previously proposed.

  19. The causal pie model: an epidemiological method applied to evolutionary biology and ecology.

    PubMed

    Wensink, Maarten; Westendorp, Rudi G J; Baudisch, Annette

    2014-05-01

    A general concept for thinking about causality facilitates swift comprehension of results, and the vocabulary that belongs to the concept is instrumental in cross-disciplinary communication. The causal pie model has fulfilled this role in epidemiology and could be of similar value in evolutionary biology and ecology. In the causal pie model, outcomes result from sufficient causes. Each sufficient cause is made up of a "causal pie" of "component causes". Several different causal pies may exist for the same outcome. If and only if all component causes of a sufficient cause are present, that is, a causal pie is complete, does the outcome occur. The effect of a component cause hence depends on the presence of the other component causes that constitute some causal pie. Because all component causes are equally and fully causative for the outcome, the sum of causes for some outcome exceeds 100%. The causal pie model provides a way of thinking that maps into a number of recurrent themes in evolutionary biology and ecology: It charts when component causes have an effect and are subject to natural selection, and how component causes affect selection on other component causes; which partitions of outcomes with respect to causes are feasible and useful; and how to view the composition of a(n apparently homogeneous) population. The diversity of specific results that is directly understood from the causal pie model is a test for both the validity and the applicability of the model. The causal pie model provides a common language in which results across disciplines can be communicated and serves as a template along which future causal analyses can be made.
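
    The model's logic is a disjunction of conjunctions, which a few lines make concrete (pies and component causes here are arbitrary labels):

    ```python
    # The outcome occurs iff at least one sufficient cause (pie) is complete.
    pies = [{"A", "B"}, {"A", "C", "D"}]   # two sufficient causes for one outcome

    def outcome(present):
        return any(pie <= present for pie in pies)

    print(outcome({"A", "B"}))      # True: the first pie is complete
    print(outcome({"A", "C"}))      # False: no pie is complete (D is missing)
    ```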

  20. The causal pie model: an epidemiological method applied to evolutionary biology and ecology

    PubMed Central

    Wensink, Maarten; Westendorp, Rudi G J; Baudisch, Annette

    2014-01-01

    A general concept for thinking about causality facilitates swift comprehension of results, and the vocabulary that belongs to the concept is instrumental in cross-disciplinary communication. The causal pie model has fulfilled this role in epidemiology and could be of similar value in evolutionary biology and ecology. In the causal pie model, outcomes result from sufficient causes. Each sufficient cause is made up of a “causal pie” of “component causes”. Several different causal pies may exist for the same outcome. If and only if all component causes of a sufficient cause are present, that is, a causal pie is complete, does the outcome occur. The effect of a component cause hence depends on the presence of the other component causes that constitute some causal pie. Because all component causes are equally and fully causative for the outcome, the sum of causes for some outcome exceeds 100%. The causal pie model provides a way of thinking that maps into a number of recurrent themes in evolutionary biology and ecology: It charts when component causes have an effect and are subject to natural selection, and how component causes affect selection on other component causes; which partitions of outcomes with respect to causes are feasible and useful; and how to view the composition of a(n apparently homogeneous) population. The diversity of specific results that is directly understood from the causal pie model is a test for both the validity and the applicability of the model. The causal pie model provides a common language in which results across disciplines can be communicated and serves as a template along which future causal analyses can be made. PMID:24963386

  1. Training in Methods in Computational Neuroscience

    DTIC Science & Technology

    1989-11-14

    this length of course in the future. Much improved over last year’s course was the existence of a text: Methods in Neuronal Modeling, edited by...the Single Neuron a one-day workshop held on August 12, 1989 sponsored by the Office of Naval Research Participants: Thomas McKenna Office of Naval...IDAN SEGEV Introduction to cable theory; Rall’s model of neurons; d^(3/2) law 11:15 am CLAY ARMSTRONG Relating stochastic single channels to

  2. Implicit methods for computing chemically reacting flow

    NASA Astrophysics Data System (ADS)

    Li, C. P.

    Modeling the inviscid air flow and its constituents over a hypersonically flying body requires a large system of Euler and chemical rate equations in three spatial coordinates. In most cases, the simplest approach to solve for the variables would be based on explicit integration of the governing equations. But the standard techniques are not suitable for this purpose because the integration step size must be inordinately small in order to maintain numerical stability. The difficulty is due to the stiff character of the difference equations, as there exists a large spectrum of spatial and temporal scales in the approximation of physical phenomena by numerical methods. For instance, in the calculation of gradients caused by shock and by cooled wall on a coarse grid, unchecked numerical errors eventually will lead to violent instability, and in calculations of species near chemical equilibrium, a small error in one species will give rise to a large error in the source term for other species. Despite the different nature of the stiffness in a complex system of equations, the most effective approach is believed to be implicit integration. The step increment is no longer dictated by the stability criteria for explicit methods, but instead is dictated by the degree of linearization introduced to the governing equations and by the order of desired accuracy. The linearization is enacted by means of Jacobian matrices, resulting from the differentiation of the flux as well as the rate production terms with respect to dependent variables. The backward Euler scheme is then applied to discretize the partial differential equations and to convert them into a system of linear difference equations in vector form. As this particular approach has the A-stable property, it is the one recommended by Lomax and Bailey (1) for one-dimensional nonequilibrium flow studies. However, in the practice of solving flow problems in multidimensions, it was not clear then how to deal with the mammoth
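
    The recipe summarized above (linearize the rate terms with a Jacobian, then advance with the A-stable backward Euler scheme) can be sketched on a toy stiff kinetics system; the rate constants below are illustrative, not from the paper:

      import numpy as np

      # Backward Euler with Newton linearization for a stiff toy kinetics
      # system y' = f(y); rate constants are hypothetical.
      k1, k2 = 1.0e4, 1.0   # fast and slow rates

      def f(y):
          a, b = y
          return np.array([-k1 * a + k2 * b, k1 * a - k2 * b])

      def jac(y):
          return np.array([[-k1, k2], [k1, -k2]])

      def backward_euler_step(y, dt, newton_iters=5):
          # Solve y_new - y - dt * f(y_new) = 0 by Newton's method.
          y_new = y.copy()
          I = np.eye(len(y))
          for _ in range(newton_iters):
              r = y_new - y - dt * f(y_new)
              y_new = y_new - np.linalg.solve(I - dt * jac(y_new), r)
          return y_new

      y = np.array([1.0, 0.0])
      for _ in range(10):                # stable even though dt >> 1/k1
          y = backward_euler_step(y, dt=0.1)
      print(y)                           # ~[1e-4, 0.9999]: near equilibrium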

  3. Multiscale methods for computational RNA enzymology.

    PubMed

    Panteva, Maria T; Dissanayake, Thakshila; Chen, Haoyuan; Radak, Brian K; Kuechler, Erich R; Giambaşu, George M; Lee, Tai-Sung; York, Darrin M

    2015-01-01

    RNA catalysis is of fundamental importance to biology and yet remains ill-understood due to its complex nature. The multidimensional "problem space" of RNA catalysis includes both local and global conformational rearrangements, changes in the ion atmosphere around nucleic acids and metal ion binding, dependence on potentially correlated protonation states of key residues, and bond breaking/forming in the chemical steps of the reaction. The goal of this chapter is to summarize and apply multiscale modeling methods in an effort to target the different parts of the RNA catalysis problem space while also addressing the limitations and pitfalls of these methods. Classical molecular dynamics simulations, reference interaction site model calculations, constant pH molecular dynamics (CpHMD) simulations, Hamiltonian replica exchange molecular dynamics, and quantum mechanical/molecular mechanical simulations will be discussed in the context of the study of RNA backbone cleavage transesterification. This reaction is catalyzed by both RNA and protein enzymes, and here we examine the different mechanistic strategies taken by the hepatitis delta virus ribozyme and RNase A. © 2015 Elsevier Inc. All rights reserved.

  4. Coarse-graining methods for computational biology.

    PubMed

    Saunders, Marissa G; Voth, Gregory A

    2013-01-01

    Connecting the molecular world to biology requires understanding how molecular-scale dynamics propagate upward in scale to define the function of biological structures. To address this challenge, multiscale approaches, including coarse-graining methods, become necessary. We discuss here the theoretical underpinnings and history of coarse-graining and summarize the state of the field, organizing key methodologies based on an emerging paradigm for multiscale theory and modeling of biomolecular systems. This framework involves an integrated, iterative approach to couple information from different scales. The primary steps, which coincide with key areas of method development, include developing first-pass coarse-grained models guided by experimental results, performing numerous large-scale coarse-grained simulations, identifying important interactions that drive emergent behaviors, and finally reconnecting to the molecular scale by performing all-atom molecular dynamics simulations guided by the coarse-grained results. The coarse-grained modeling can then be extended and refined, with the entire loop repeated iteratively if necessary.

  5. Multiscale methods for computational RNA enzymology

    PubMed Central

    Panteva, Maria T.; Dissanayake, Thakshila; Chen, Haoyuan; Radak, Brian K.; Kuechler, Erich R.; Giambaşu, George M.; Lee, Tai-Sung; York, Darrin M.

    2016-01-01

    RNA catalysis is of fundamental importance to biology and yet remains ill-understood due to its complex nature. The multi-dimensional “problem space” of RNA catalysis includes both local and global conformational rearrangements, changes in the ion atmosphere around nucleic acids and metal ion binding, dependence on potentially correlated protonation states of key residues and bond breaking/forming in the chemical steps of the reaction. The goal of this article is to summarize and apply multiscale modeling methods in an effort to target the different parts of the RNA catalysis problem space while also addressing the limitations and pitfalls of these methods. Classical molecular dynamics (MD) simulations, reference interaction site model (RISM) calculations, constant pH molecular dynamics (CpHMD) simulations, Hamiltonian replica exchange molecular dynamics (HREMD) and quantum mechanical/molecular mechanical (QM/MM) simulations will be discussed in the context of the study of RNA backbone cleavage transesterification. This reaction is catalyzed by both RNA and protein enzymes, and here we examine the different mechanistic strategies taken by the hepatitis delta virus ribozyme (HDVr) and RNase A. PMID:25726472

  6. Testing and Validation of Computational Methods for Mass Spectrometry.

    PubMed

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  7. Developing a multimodal biometric authentication system using soft computing methods.

    PubMed

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision.

  8. Customizing computational methods for visual analytics with big data.

    PubMed

    Choo, Jaegul; Park, Haesun

    2013-01-01

    The volume of available data has been growing exponentially, increasing the complexity and obscurity of data problems. In response, visual analytics (VA) has gained attention, yet its solutions haven't scaled well for big data. Computational methods can improve VA's scalability by giving users compact, meaningful information about the input data. However, the significant computation time these methods require hinders real-time interactive visualization of big data. By addressing crucial discrepancies between these methods and VA regarding precision and convergence, researchers have proposed ways to customize them for VA. These approaches, which include low-precision computation and iteration-level interactive visualization, ensure real-time interactive VA for big data.

  9. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  10. Computer systems and methods for visualizing data

    DOEpatents

    Stolte, Chris; Hanrahan, Patrick

    2010-07-13

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.

  11. Computer systems and methods for visualizing data

    DOEpatents

    Stolte, Chris; Hanrahan, Patrick

    2013-01-29

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.

  12. Method to Compute CT System MTF

    SciTech Connect

    Kallman, Jeffrey S.

    2016-05-03

    The modulation transfer function (MTF) is the normalized spatial frequency representation of the point spread function (PSF) of the system. Point objects are hard to come by, so typically the PSF is determined by taking the numerical derivative of the system's response to an edge. This is the method we use, and we typically use it with cylindrical objects. Given a cylindrical object, we first put an active contour around it, as shown in Figure 1(a). The active contour lets us know where the boundary of the test object is. We next set a threshold (Figure 1(b)) and determine the center of mass of the above threshold voxels. For the purposes of determining the center of mass, each voxel is weighted identically (not by voxel value).
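
    The edge-derivative recipe can be sketched in a few lines; the synthetic blurred edge here stands in for the measured cylinder boundary response:

      import numpy as np

      # Sketch of MTF from an edge response: differentiate the edge spread
      # function (ESF) to get the spread function, then take the normalized
      # |FFT|. The smoothed synthetic edge stands in for measured data.
      x = np.linspace(-5, 5, 512)
      esf = 0.5 * (1 + np.tanh(x / 0.8))     # hypothetical blurred edge

      psf = np.gradient(esf, x)              # numerical derivative of the ESF
      mtf = np.abs(np.fft.rfft(psf))
      mtf /= mtf[0]                          # normalize to unity at zero frequency
      freqs = np.fft.rfftfreq(len(psf), d=x[1] - x[0])
      print(freqs[:5], mtf[:5])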

  13. Computational methods to identify new antibacterial targets.

    PubMed

    McPhillie, Martin J; Cain, Ricky M; Narramore, Sarah; Fishwick, Colin W G; Simmons, Katie J

    2015-01-01

    The development of resistance to all current antibiotics in the clinic means there is an urgent unmet need for novel antibacterial agents with new modes of action. One of the best ways of finding these is to identify new essential bacterial enzymes to target. The advent of a number of in silico tools has aided classical methods of discovering new antibacterial targets, and these programs are the subject of this review. Many of these tools apply a cheminformatic approach, utilizing the structural information of either ligand or protein, chemogenomic databases, and docking algorithms to identify putative antibacterial targets. Considering the wealth of potential drug targets identified from genomic research, these approaches are perfectly placed to mine this rich resource and complement drug discovery programs.

  14. Computational methods for internal flows with emphasis on turbomachinery

    NASA Technical Reports Server (NTRS)

    Mcnally, W. D.; Sockol, P. M.

    1981-01-01

    Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes. The inviscid methods deal specifically with turbomachinery applications. Viscous methods deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.

  15. Full Discretisations for Nonlinear Evolutionary Inequalities Based on Stiffly Accurate Runge-Kutta and hp-Finite Element Methods.

    PubMed

    Gwinner, J; Thalhammer, M

    The convergence of full discretisations by implicit Runge-Kutta and nonconforming Galerkin methods applied to nonlinear evolutionary inequalities is studied. The scope of applications includes differential inclusions governed by a nonlinear operator that is monotone and fulfills a certain growth condition. A basic assumption on the considered class of stiffly accurate Runge-Kutta time discretisations is a stability criterion which is in particular satisfied by the Radau IIA and Lobatto IIIC methods. In order to allow nonconforming hp-finite element approximations of unilateral constraints, set convergence of convex subsets in the sense of Glowinski-Mosco-Stummel is utilised. An appropriate formulation of the fully discrete variational inequality is deduced on the basis of a characteristic example of use, a Signorini-type initial-boundary value problem. Under hypotheses close to the existence theory of nonlinear first-order evolutionary equations and inequalities involving a monotone main part, a convergence result for the piecewise constant in time interpolant is established.

  16. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    Title 47 (Telecommunication), § 80.771: Method of computing coverage. Federal Communications Commission, Safety and Special Radio Services, Stations in the Maritime Services, Standards for Computing Public Coast Station VHF Coverage.

  17. Computer program uses characteristics method for free-jet investigation

    NASA Technical Reports Server (NTRS)

    Craidon, C. B.

    1967-01-01

    Computer program computes the free-jet boundary contours and other flow properties within the exhaust plume from highly underexpanded nozzles operating in near-vacuum conditions. The calculations are made by the method of characteristics which makes use of three-dimensional irrotational equations of flow.

  18. METHODOLOGICAL NOTES: Computer viruses and methods of combatting them

    NASA Astrophysics Data System (ADS)

    Landsberg, G. L.

    1991-02-01

    This article examines the current virus situation for personal computers and time-sharing computers. Basic methods of combatting viruses are presented. Specific recommendations are given to eliminate the most widespread viruses. A short description is given of a universal antiviral system, PHENIX, which has been developed.

  19. GAP Noise Computation By The CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Chang, Sin-Chung; Wang, Xiao Y.; Jorgenson, Philip C. E.

    2001-01-01

    A typical gap noise problem is considered in this paper using the new space-time conservation element and solution element (CE/SE) method. Implementation of the computation is straightforward. No turbulence model, LES (large eddy simulation) or a preset boundary layer profile is used, yet the computed frequency agrees well with the experimental one.

  20. Computational methods for the HZETRN code.

    PubMed

    Tweed, J; Walker, S A; Wilson, J W; Cucinotta, F A; Tripathi, R K; Blattnig, S; Mertens, C J

    2005-01-01

    Asymptotic expansion has been used to simplify the transport of high charge and energy ions for broad beam applications in the laboratory and space. The solution of the lowest order asymptotic term is then related to a Green's function for energy loss and straggling coupled to nuclear attenuation, providing the lowest order term in a rapidly converging Neumann series in which higher order collision terms are related to the fragmentation events including energy dispersion and downshift. The first and second Neumann corrections were evaluated numerically as a standard for further analytic approximation. The first Neumann correction is accurately evaluated over the saddle point, whose width is determined by the energy dispersion and located at the downshifted ion collision energy. Introduction of the first Neumann correction leads to significant simplification of the second correction term, allowing application of the mean value theorem and a second saddle point approximation. The regular spectral dependence of the second correction lends hope to simple approximation of higher corrections. At sufficiently high energy, nuclear cross-section variations are small, allowing non-perturbative methods to all orders, and renormalization of the second corrections allows accurate evaluation of the full Neumann series. © 2005 COSPAR. Published by Elsevier Ltd. All rights reserved.

  1. Mathematical Methods in the Atmospheric Sciences and Related Computational Methods.

    DTIC Science & Technology

    1979-11-01

    AD-A076 242, Society for Industrial and Applied Mathematics, Philadelphia. Partial list of lectures: "Hydrodynamic Aspects of Turbulence," C.E. Leith, NCAR; "Statistical Properties of Climate Systems," E.N. Lorenz, MIT (represented by R. Errico); "...Differential Equations," D. Gottlieb, Tel-Aviv University; "Spectral Methods for Partial Differential Equations," H.-O. Kreiss, CIT; "Initialization Methods for..."

  2. Multimodal neuroimaging computing: the workflows, methods, and platforms.

    PubMed

    Liu, Sidong; Cai, Weidong; Liu, Siqi; Zhang, Fan; Fulham, Michael; Feng, Dagan; Pujol, Sonia; Kikinis, Ron

    The last two decades have witnessed explosive growth in the development and use of noninvasive neuroimaging technologies that advance research on the human brain under normal and pathological conditions. Multimodal neuroimaging has become a major driver of current neuroimaging research due to the recognition of the clinical benefits of multimodal data and better access to hybrid devices. Multimodal neuroimaging computing is very challenging, and requires sophisticated computing to address the variations in spatiotemporal resolution and merge the biophysical/biochemical information. We review the current workflows and methods for multimodal neuroimaging computing, and also demonstrate how to conduct research using the established neuroimaging computing packages and platforms.

  3. Multimodal neuroimaging computing: the workflows, methods, and platforms.

    PubMed

    Liu, Sidong; Cai, Weidong; Liu, Siqi; Zhang, Fan; Fulham, Michael; Feng, Dagan; Pujol, Sonia; Kikinis, Ron

    2015-09-01

    The last two decades have witnessed explosive growth in the development and use of noninvasive neuroimaging technologies that advance research on the human brain under normal and pathological conditions. Multimodal neuroimaging has become a major driver of current neuroimaging research due to the recognition of the clinical benefits of multimodal data and better access to hybrid devices. Multimodal neuroimaging computing is very challenging, and requires sophisticated computing to address the variations in spatiotemporal resolution and merge the biophysical/biochemical information. We review the current workflows and methods for multimodal neuroimaging computing, and also demonstrate how to conduct research using the established neuroimaging computing packages and platforms.

  4. MOEPGA: A novel method to detect protein complexes in yeast protein-protein interaction networks based on MultiObjective Evolutionary Programming Genetic Algorithm.

    PubMed

    Cao, Buwen; Luo, Jiawei; Liang, Cheng; Wang, Shulin; Song, Dan

    2015-10-01

    The identification of protein complexes in protein-protein interaction (PPI) networks has greatly advanced our understanding of biological organisms. Existing computational methods to detect protein complexes are usually based on specific network topological properties of PPI networks. However, due to the inherent complexity of the network structures, the identification of protein complexes may not be fully addressed by using a single network topological property. In this study, we propose a novel MultiObjective Evolutionary Programming Genetic Algorithm (MOEPGA) which integrates multiple network topological features to detect biologically meaningful protein complexes. Our approach first systematically analyzes the multiobjective problem in terms of identifying protein complexes from PPI networks, then constructs the objective function of the iterative algorithm based on three common topological properties of protein complexes from the benchmark dataset; finally, we describe our algorithm, which mainly consists of three steps: population initialization, subgraph mutation, and subgraph selection. To show the utility of our method, we compared MOEPGA with several state-of-the-art algorithms on two yeast PPI datasets. The experimental results demonstrate that the proposed method can not only find more protein complexes but also achieve higher accuracy in terms of F-score. Moreover, our approach can cover a certain number of proteins in the input PPI network in terms of the normalized clustering score. Taken together, our method can serve as a powerful framework to detect protein complexes in yeast PPI networks, thereby facilitating the identification of the underlying biological functions. Copyright © 2015 Elsevier Ltd. All rights reserved.
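
    A toy sketch of the initialize/mutate/select loop outlined above, with an illustrative five-node graph and a scalarized stand-in for the authors' multiobjective selection:

      import random

      # Toy sketch of an evolutionary subgraph search in the spirit of
      # MOEPGA: a population of candidate complexes (node subsets) is
      # mutated and ranked on topological objectives. The graph, the
      # scalarized fitness, and all weights are illustrative assumptions.
      graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}

      def density(nodes):
          n = len(nodes)
          if n < 2:
              return 0.0
          edges = sum(len(graph[u] & nodes) for u in nodes) / 2
          return edges / (n * (n - 1) / 2)

      def fitness(nodes):
          # Weighted sum stands in for true Pareto-based multiobjective selection.
          return 0.7 * density(nodes) + 0.3 * len(nodes) / len(graph)

      def mutate(nodes):
          new = set(nodes)
          new.symmetric_difference_update({random.choice(list(graph))})  # toggle a node
          return new if len(new) >= 2 else set(nodes)

      random.seed(1)
      population = [set(random.sample(list(graph), 3)) for _ in range(20)]
      for _ in range(50):                      # subgraph mutation + selection
          population += [mutate(s) for s in population]
          population.sort(key=fitness, reverse=True)
          population = population[:20]
      print(population[0], round(fitness(population[0]), 3))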

  5. Computer based safety training: an investigation of methods

    PubMed Central

    Wallen, E; Mulloy, K

    2005-01-01

    Background: Computer based methods are increasingly being used for training workers, although our understanding of how to structure this training has not kept pace with the changing abilities of computers. Information on a computer can be presented in many different ways and the style of presentation can greatly affect learning outcomes and the effectiveness of the learning intervention. Many questions about how adults learn from different types of presentations and which methods best support learning remain unanswered. Aims: To determine if computer based methods, which have been shown to be effective on younger students, can also be an effective method for older workers in occupational health and safety training. Methods: Three versions of a computer based respirator training module were developed and presented to manufacturing workers: one consisting of text only; one with text, pictures, and animation; and one with narration, pictures, and animation. After instruction, participants were given two tests: a multiple choice test measuring low level, rote learning; and a transfer test measuring higher level learning. Results: Participants receiving the concurrent narration with pictures and animation scored significantly higher on the transfer test than did workers receiving the other two types of instruction. There were no significant differences between groups on the multiple choice test. Conclusions: Narration with pictures and text may be a more effective method for training workers about respirator safety than other popular methods of computer based training. Further study is needed to determine the conditions for the effective use of this technology. PMID:15778259

  6. Method for computing coupled-channels Gamow-state energies

    SciTech Connect

    He, G.; Fink, P.; Landau, R.H.

    1989-09-01

    The bound states and resonances of a two-particle system occur at the complex energies for which the system's {ital T} matrix has poles. Presented is a more efficient method of computing these energies for symmetric potential interactions.

  7. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1972-01-01

    Iterative computer aided procedure was developed which provides for identification of boiler transfer functions using frequency response data. Method uses frequency response data to obtain satisfactory transfer function for both high and low vapor exit quality data.

  8. Platform-independent method for computer aided schematic drawings

    DOEpatents

    Vell, Jeffrey L [Slingerlands, NY; Siganporia, Darius M [Clifton Park, NY; Levy, Arthur J [Fort Lauderdale, FL

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  9. The evolutionary forest algorithm.

    PubMed

    Leman, Scotland C; Uyenoyama, Marcy K; Lavine, Michael; Chen, Yuguo

    2007-08-01

    Gene genealogies offer a powerful context for inferences about the evolutionary process based on presently segregating DNA variation. In many cases, it is the distribution of population parameters, marginalized over the effectively infinite-dimensional tree space, that is of interest. Our evolutionary forest (EF) algorithm uses Monte Carlo methods to generate posterior distributions of population parameters. A novel feature is the updating of parameter values based on a probability measure defined on an ensemble of histories (a forest of genealogies), rather than a single tree. The EF algorithm generates samples from the correct marginal distribution of population parameters. Applied to actual data from closely related fruit fly species, it rapidly converged to posterior distributions that closely approximated the exact posteriors generated through massive computational effort. Applied to simulated data, it generated credible intervals that covered the actual parameter values in accordance with the nominal probabilities. A C++ implementation of this method is freely accessible at http://www.isds.duke.edu/~scl13

  10. Method and computer program product for maintenance and modernization backlogging

    DOEpatents

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
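
    As a worked illustration of the claimed per-period arithmetic (the dollar figures are hypothetical):

      # Direct transcription of the claimed relation: future facility
      # conditions are the sum of the three time-period-specific terms.
      def future_facility_conditions(maintenance_cost, modernization_factor,
                                     backlog_factor):
          return maintenance_cost + modernization_factor + backlog_factor

      print(future_facility_conditions(120_000.0, 35_000.0, 18_500.0))  # 173500.0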

  11. Review of parallel computing methods and tools for FPGA technology

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radosław; Linczuk, Maciej; Pozniak, Krzysztof; Romaniuk, Ryszard

    2013-10-01

    Parallel computing is emerging as an important area of research in computer architectures and software systems. Many algorithms can be greatly accelerated using parallel computing techniques. Specialized parallel computer architectures are used for accelerating specific tasks. High-Energy Physics Experiments measuring systems often use FPGAs for fine-grained computation. FPGA combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible, and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This paper presents existing methods and tools for fine-grained computation implemented in FPGA using Behavioral Description and High Level Programming Languages.

  12. Lebesgue averaging method in serial computations of atmospheric radiation

    NASA Astrophysics Data System (ADS)

    Aristova, E. N.; Gertsev, M. N.; Shilkov, A. V.

    2017-06-01

    The Lebesgue averaging method was applied to the numerical simulation of the radiative transfer equation. It was found that the method ensures good accuracy, while the amount of computation with respect to the energy variable is reduced by more than three orders of magnitude. "Fast" simplified techniques for the Lebesgue processing of photon absorption cross sections in serial computations of atmospheric radiation were examined. Attention was given to the convenience of using these techniques, including for experienced users.

  13. Panel-Method Computer Code For Potential Flow

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.; Dudley, Michael R.; Iguchi, Steven K.

    1992-01-01

    Low-order panel method used to reduce computation time. Panel code PMARC (Panel Method Ames Research Center) numerically simulates flow field around or through complex three-dimensional bodies such as complete aircraft models or wind tunnel. Based on potential-flow theory. Facilitates addition of new features to code and tailoring of code to specific problems and computer-hardware constraints. Written in standard FORTRAN 77.

  14. Computer Simulation Methods for Defect Configurations and Nanoscale Structures

    SciTech Connect

    Gao, Fei

    2010-01-01

    This chapter will describe general computer simulation methods, including ab initio calculations, molecular dynamics, and the kinetic Monte-Carlo method, and their applications to the calculations of defect configurations in various materials (metals, ceramics and oxides) and the simulations of nanoscale structures due to ion-solid interactions. The multiscale theory, modeling, and simulation techniques (both time scale and space scale) will be emphasized, and comparisons between computer simulation results and experimental observations will be made.

  15. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system. 15 figs.

  16. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.

  17. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1999-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  18. Computational Performance and Statistical Accuracy of *BEAST and Comparisons with Other Methods

    PubMed Central

    Ogilvie, Huw A.; Heled, Joseph; Xie, Dong; Drummond, Alexei J.

    2016-01-01

    Under the multispecies coalescent model of molecular evolution, gene trees have independent evolutionary histories within a shared species tree. In comparison, supermatrix concatenation methods assume that gene trees share a single common genealogical history, thereby equating gene coalescence with species divergence. The multispecies coalescent is supported by previous studies which found that its predicted distributions fit empirical data, and that concatenation is not a consistent estimator of the species tree. *BEAST, a fully Bayesian implementation of the multispecies coalescent, is popular but computationally intensive, so the increasing size of phylogenetic data sets is both a computational challenge and an opportunity for better systematics. Using simulation studies, we characterize the scaling behavior of *BEAST, and enable quantitative prediction of the impact increasing the number of loci has on both computational performance and statistical accuracy. Follow-up simulations over a wide range of parameters show that the statistical performance of *BEAST relative to concatenation improves both as branch length is reduced and as the number of loci is increased. Finally, using simulations based on estimated parameters from two phylogenomic data sets, we compare the performance of a range of species tree and concatenation methods to show that using *BEAST with tens of loci can be preferable to using concatenation with thousands of loci. Our results provide insight into the practicalities of Bayesian species tree estimation, the number of loci required to obtain a given level of accuracy and the situations in which supermatrix or summary methods will be outperformed by the fully Bayesian multispecies coalescent. PMID:26821913

  19. An efficient method for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.

  20. Investigating preferences for color-shape combinations with gaze driven optimization method based on evolutionary algorithms

    PubMed Central

    Holmes, Tim; Zanker, Johannes M.

    2013-01-01

    Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioral measures that directly reflect subjective choice. To determine individual preferences for simple composition rules we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been demonstrated as a tool to identify aesthetic preferences (Holmes and Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combination of color and shape which has been promoted in the Bauhaus arts school. We used the same three shapes (square, circle, triangle) used by Kandinsky (1923), with the three color palette from the original experiment (A), an extended seven color palette (B), and eight different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with eight stimuli of different shapes, colors and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in such an unbiased experimental design and with an extended set of possible combinations. We tested six participants extensively on the different conditions and found consistent preferences for color-shape combinations for individuals, but little evidence at the group level for clear color/shape preference consistent with Kandinsky's claims, apart from some weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of color and shapes, but also that these associations are robust within a single individual. These individual differences go some way toward challenging the claims of the universal preference for color/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the

  1. Investigating preferences for color-shape combinations with gaze driven optimization method based on evolutionary algorithms.

    PubMed

    Holmes, Tim; Zanker, Johannes M

    2013-01-01

    Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioral measures that directly reflect subjective choice. To determine individual preferences for simple composition rules we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been demonstrated as a tool to identify aesthetic preferences (Holmes and Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combination of color and shape which has been promoted in the Bauhaus arts school. We used the same three shapes (square, circle, triangle) used by Kandinsky (1923), with the three color palette from the original experiment (A), an extended seven color palette (B), and eight different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with eight stimuli of different shapes, colors and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in such an unbiased experimental design and with an extended set of possible combinations. We tested six participants extensively on the different conditions and found consistent preferences for color-shape combinations for individuals, but little evidence at the group level for clear color/shape preference consistent with Kandinsky's claims, apart from some weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of color and shapes, but also that these associations are robust within a single individual. These individual differences go some way toward challenging the claims of the universal preference for color/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the

  2. Computational Methods for Configurational Entropy Using Internal and Cartesian Coordinates.

    PubMed

    Hikiri, Simon; Yoshidome, Takashi; Ikeguchi, Mitsunori

    2016-12-13

    The configurational entropy of solute molecules is a crucially important quantity to study various biophysical processes. Consequently, it is necessary to establish an efficient quantitative computational method to calculate configurational entropy as accurately as possible. In the present paper, we investigate the quantitative performance of the quasi-harmonic and related computational methods, including widely used methods implemented in popular molecular dynamics (MD) software packages, compared with the Clausius method, which is capable of accurately computing the change of the configurational entropy upon temperature change. Notably, we focused on the choice of the coordinate systems (i.e., internal or Cartesian coordinates). The Boltzmann-quasi-harmonic (BQH) method using internal coordinates outperformed all the six methods examined here. The introduction of improper torsions in the BQH method improves its performance, and anharmonicity of proper torsions in proteins is identified to be the origin of the superior performance of the BQH method. In contrast, widely used methods implemented in MD packages show rather poor performance. In addition, the enhanced sampling of replica-exchange MD simulations was found to be efficient for the convergent behavior of entropy calculations. Also in folding/unfolding transitions of a small protein, Chignolin, the BQH method was reasonably accurate. However, the independent term without the correlation term in the BQH method was most accurate for the folding entropy among the methods considered in this study, because the QH approximation of the correlation term in the BQH method was no longer valid for the divergent unfolded structures.
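
    For orientation, a generic Gaussian estimate of configurational entropy from a sampled coordinate covariance matrix, i.e., the quasi-harmonic idea in its simplest form; this is not the authors' BQH method:

      import numpy as np

      # Generic Gaussian configurational-entropy estimate from a coordinate
      # covariance matrix: S = (kB/2) * ln((2*pi*e)^N * det(Sigma)).
      # Illustrates the quasi-harmonic idea only, not the BQH method.
      kB = 1.0                      # work in units of kB

      samples = np.random.default_rng(0).normal(size=(5000, 3))  # fake trajectory
      sigma = np.cov(samples, rowvar=False)
      n = sigma.shape[0]
      sign, logdet = np.linalg.slogdet(sigma)
      S = 0.5 * kB * (n * np.log(2 * np.pi * np.e) + logdet)
      print(S)                      # ~4.26 kB for a 3-D unit Gaussian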

  3. Method for computing the optimal signal distribution and channel capacity.

    PubMed

    Shapiro, E G; Shapiro, D A; Turitsyn, S K

    2015-06-15

    An iterative method for computing the channel capacity of both discrete and continuous input, continuous output channels is proposed. The efficiency of the new method is demonstrated in comparison with the classical Blahut-Arimoto algorithm for several known channels. Moreover, we also present a hybrid method combining the advantages of both the Blahut-Arimoto algorithm and our iterative approach. The new method is especially efficient for channels with an a priori unknown discrete input alphabet.
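
    A minimal sketch of the classical Blahut-Arimoto iteration for a discrete memoryless channel; the binary symmetric channel is an illustrative choice, not taken from the paper:

      import numpy as np

      # Minimal Blahut-Arimoto iteration for the capacity of a discrete
      # memoryless channel with transition matrix P[y|x] (rows sum to 1).
      def blahut_arimoto(P, iters=200):
          m = P.shape[0]
          p = np.full(m, 1.0 / m)                  # input distribution
          for _ in range(iters):
              q = p @ P                            # output distribution
              # D[x] = sum_y P[y|x] * log(P[y|x] / q[y])  (in nats)
              D = np.sum(P * np.log(np.where(P > 0, P / q, 1.0)), axis=1)
              p = p * np.exp(D)
              p /= p.sum()
          return np.sum(p * D) / np.log(2)         # capacity in bits

      bsc = np.array([[0.9, 0.1], [0.1, 0.9]])     # binary symmetric channel
      print(blahut_arimoto(bsc))                   # ~0.531 = 1 - H(0.1) bits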

  4. Research data collection methods: from paper to tablet computers.

    PubMed

    Wilcox, Adam B; Gallagher, Kathleen D; Boden-Albala, Bernadette; Bakken, Suzanne R

    2012-07-01

    Primary data collection is a critical activity in clinical research. Even with significant advances in technical capabilities, clear benefits of use, and even user preferences for using electronic systems for collecting primary data, paper-based data collection is still common in clinical research settings. However, with recent developments in both clinical research and tablet computer technology, the comparative advantages and disadvantages of data collection methods should be determined. Our objective is to describe case studies using multiple methods of data collection, including next-generation tablets, and to consider their various advantages and disadvantages. We reviewed 5 modern case studies using primary data collection, with methods ranging from paper to next-generation tablet computers. We performed semistructured telephone interviews with each project, which considered factors relevant to data collection. We address specific issues with workflow, implementation, and security for these different methods, and identify differences in implementation that led to different technology considerations for each case study. There remain multiple methods for primary data collection, each with its own strengths and weaknesses. Two recent methods are electronic health record templates and next-generation tablet computers. Electronic health record templates can link data directly to medical records, but are notably difficult to use. Current tablet computers are substantially different from previous technologies with regard to user familiarity and software cost. The use of cloud-based storage for tablet computers, however, creates a specific challenge for clinical research that must be considered but can be overcome.

  5. Big data mining analysis method based on cloud computing

    NASA Astrophysics Data System (ADS)

    Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In the era of information explosion, the very large scale, discreteness, and un- or semi-structured nature of big data have gone far beyond what traditional data management methods can handle. With the arrival of the cloud computing era, cloud computing provides a new technical way to analyze massive data, which can effectively solve the problem that traditional data mining methods cannot adapt to massive data. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology for data mining, designs a mining algorithm for association rules based on the MapReduce parallel processing architecture, and carries out experimental verification. The parallel association rule mining algorithm based on a cloud computing platform can greatly improve the execution speed of data mining.
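
    The MapReduce formulation amounts to mappers emitting (itemset, 1) pairs and reducers summing supports per itemset. A single-process mimic on toy transactions shows the shape of the computation:

      from collections import Counter
      from itertools import combinations

      # Single-process mimic of the MapReduce support-counting step for
      # association-rule mining; the transactions are toy data.
      transactions = [
          {"milk", "bread"}, {"milk", "eggs"},
          {"bread", "eggs"}, {"milk", "bread", "eggs"},
      ]

      # Map: each transaction emits (itemset, 1) for every candidate pair.
      emitted = [(pair, 1) for t in transactions
                 for pair in combinations(sorted(t), 2)]

      # Shuffle/Reduce: sum counts per itemset (done by reducers in parallel).
      support = Counter()
      for pair, count in emitted:
          support[pair] += count

      min_support = 2
      print([(p, c) for p, c in support.items() if c >= min_support])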

  6. Method for implementation of recursive hierarchical segmentation on parallel computers

    NASA Technical Reports Server (NTRS)

    Tilton, James C. (Inventor)

    2005-01-01

    A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates with when performing a convergence check.
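
    The claimed level logic (recurse in parallel above the intermediate level, serially below it, and stop dividing at the bottom level) can be sketched independently of the segmentation step itself; segment_section below is a hypothetical placeholder:

      from concurrent.futures import ThreadPoolExecutor

      # Sketch of the claimed recursion control: parallel above the
      # intermediate level, serial below it, no further division at the
      # bottom level. segment_section is a hypothetical placeholder.
      BOTTOM_LEVEL = 4
      INTERMEDIATE_LEVEL = 2

      def segment_section(section):
          return sorted(section)          # stand-in for the real segmentation step

      def recurse(section, level=0):
          if level >= BOTTOM_LEVEL or len(section) < 2:
              return segment_section(section)
          mid = len(section) // 2
          halves = [section[:mid], section[mid:]]
          if level < INTERMEDIATE_LEVEL:  # parallel implementation
              with ThreadPoolExecutor(max_workers=2) as pool:
                  return list(pool.map(lambda h: recurse(h, level + 1), halves))
          return [recurse(h, level + 1) for h in halves]   # serial implementation

      print(recurse(list(range(16))))     # nested per-section results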

  7. 3D computational mechanics elucidate the evolutionary implications of orbit position and size diversity of early amphibians.

    PubMed

    Marcé-Nogué, Jordi; Fortuny, Josep; De Esteban-Trivigno, Soledad; Sánchez, Montserrat; Gil, Lluís; Galobart, Àngel

    2015-01-01

    For the first time in vertebrate palaeontology, the potential of joining Finite Element Analysis (FEA) and Parametrical Analysis (PA) is used to shed new light on two different cranial parameters from the orbits to evaluate their biomechanical role and evolutionary patterns. The early tetrapod group of Stereospondyls, one of the largest groups of Temnospondyls, is used as a case study because its orbit position and size vary hugely among the members of this group. An adult skull of Edingerella madagascariensis was analysed using two different cases of boundary and loading conditions in order to quantify stress and deformation response under a bilateral bite and during skull raising. Firstly, the variation of the original geometry of its orbits was introduced in the models producing new FEA results, allowing the exploration of the ecomorphology, feeding strategy and evolutionary patterns of these top predators. Secondly, the quantitative results were analysed in order to check if the orbit size and position were correlated with different stress patterns. These results revealed that in most of the cases the stress distribution is not affected by changes in the size and position of the orbit. This finding supports the high mechanical plasticity of this group during the Triassic period. The absence of mechanical constraints regarding the orbit probably promoted the ecomorphological diversity acknowledged for this group, as well as its ecological niche differentiation in the terrestrial Triassic ecosystems in clades such as lydekkerinids, trematosaurs, capitosaurs or metoposaurs.

  8. 3D Computational Mechanics Elucidate the Evolutionary Implications of Orbit Position and Size Diversity of Early Amphibians

    PubMed Central

    Marcé-Nogué, Jordi; Fortuny, Josep; De Esteban-Trivigno, Soledad; Sánchez, Montserrat; Gil, Lluís; Galobart, Àngel

    2015-01-01

    For the first time in vertebrate palaeontology, the potential of joining Finite Element Analysis (FEA) and Parametrical Analysis (PA) is used to shed new light on two different cranial parameters from the orbits to evaluate their biomechanical role and evolutionary patterns. The early tetrapod group of Stereospondyls, one of the largest groups of Temnospondyls, is used as a case study because its orbit position and size vary hugely among the members of this group. An adult skull of Edingerella madagascariensis was analysed using two different cases of boundary and loading conditions in order to quantify stress and deformation response under a bilateral bite and during skull raising. Firstly, the variation of the original geometry of its orbits was introduced in the models producing new FEA results, allowing the exploration of the ecomorphology, feeding strategy and evolutionary patterns of these top predators. Secondly, the quantitative results were analysed in order to check if the orbit size and position were correlated with different stress patterns. These results revealed that in most of the cases the stress distribution is not affected by changes in the size and position of the orbit. This finding supports the high mechanical plasticity of this group during the Triassic period. The absence of mechanical constraints regarding the orbit probably promoted the ecomorphological diversity acknowledged for this group, as well as its ecological niche differentiation in the terrestrial Triassic ecosystems in clades such as lydekkerinids, trematosaurs, capitosaurs or metoposaurs. PMID:26107295

  9. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

  10. Computer Subroutines for Analytic Rotation by Two Gradient Methods.

    ERIC Educational Resources Information Center

    van Thillo, Marielle

    Two computer subroutine packages for the analytic rotation of a factor matrix, A(p x m), are described. The first program uses the Fletcher (1970) gradient method, and the second uses the Polak-Ribiere (Polak, 1971) gradient method. The calculations in both programs involve the optimization of a function of free parameters. The result is a…

  11. Calculating PI Using Historical Methods and Your Personal Computer.

    ERIC Educational Resources Information Center

    Mandell, Alan

    1989-01-01

    Provides a software program for determining PI to the 15th place after the decimal. Explores the history of determining the value of PI from Archimedes to present computer methods. Investigates Wallis's, Leibniz's, and Buffon's methods. Written for Tandy GW-BASIC (IBM compatible) with 384K. Suggestions for Apple II's are given. (MVL)
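
    The original program targets GW-BASIC; a Python sketch of two of the historical methods mentioned, Leibniz's series and Wallis's product, might look like this:

      # Leibniz series and Wallis product for pi; term counts are arbitrary.
      def leibniz_pi(terms):
          # pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
          return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

      def wallis_pi(terms):
          # pi/2 = prod_{k>=1} (2k * 2k) / ((2k - 1) * (2k + 1))
          prod = 1.0
          for k in range(1, terms + 1):
              prod *= (2 * k) ** 2 / ((2 * k - 1) * (2 * k + 1))
          return 2 * prod

      print(leibniz_pi(1_000_000), wallis_pi(1_000_000))  # both converge slowly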

  12. A Hybrid Soft-computing Method for Image Analysis of Digital Plantar Scanners

    PubMed Central

    Razjouyan, Javad; Khayat, Omid; Siahi, Mehdi; Mansouri, Ali Alizadeh

    2013-01-01

    Digital foot scanners have been developed in recent years to provide anthropometrists with a digital image of the insole, with pressure distribution and anthropometric information. In this paper, a hybrid algorithm combining a gray level spatial correlation (GLSC) histogram and Shanbag entropy is presented for the analysis of scanned foot images. An evolutionary algorithm is also employed to find the optimum parameters of the GLSC and the transform function of the membership values. The resulting binary (thresholded) images then undergo anthropometric measurement, taking into account the scale factor from pixel size to metric scale. The proposed method is finally applied to plantar images obtained by scanning the feet of randomly selected subjects with a foot scanner system, the experimental setup described in the paper. Running computation time and the effects of the GLSC parameters are investigated in the simulation results. PMID:24083133

  13. Methods and systems for providing reconfigurable and recoverable computing resources

    NASA Technical Reports Server (NTRS)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2010-01-01

    A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors need to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different than the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one or more processors fails.

  15. A Lanczos eigenvalue method on a parallel computer

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.; Fulton, Robert E.

    1987-01-01

    Eigenvalue analysis of complex structures is a computationally intensive task which can benefit significantly from new and emerging parallel computers. This study reports on a parallel computer implementation of the Lanczos method for free vibration analysis. The approach used here subdivides the major Lanczos calculation tasks into subtasks and introduces parallelism down to the subtask level, for example in matrix decomposition and forward/backward substitution. The method was implemented on a commercial parallel computer and results were obtained for a long flexible space structure. While parallel computing efficiency for the Lanczos method was good for a moderate number of processors on the test problem, the greatest reduction in time was realized for the decomposition of the stiffness matrix, a calculation which took 70 percent of the time in the sequential program but only 25 percent of the time on eight processors. For a sample calculation of the twenty lowest frequencies of a 486-degree-of-freedom problem, the total sequential computing time was reduced by almost a factor of ten using 16 processors.
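
    A minimal serial sketch of the Lanczos iteration in Python/NumPy follows. It is an illustration of the underlying algorithm, not the parallel Fortran implementation the study describes; in the vibration setting the operator A would be assembled from the stiffness and mass matrices.

        import numpy as np

        def lanczos_ritz(A, k, seed=0):
            # Project symmetric A onto a k-dimensional Krylov subspace; the
            # eigenvalues of the small tridiagonal T approximate A's extreme ones.
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            q = rng.standard_normal(n)
            q /= np.linalg.norm(q)
            Q = np.zeros((n, k))
            alpha, beta = np.zeros(k), np.zeros(k)
            q_prev, b = np.zeros(n), 0.0
            for j in range(k):
                Q[:, j] = q
                w = A @ q - b * q_prev   # plain multiply here; the study's version
                alpha[j] = q @ w         # solves with the stiffness matrix instead
                w -= alpha[j] * q
                w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # full reorthogonalization
                b = np.linalg.norm(w)
                if b < 1e-12:            # invariant subspace found early
                    k = j + 1
                    break
                beta[j] = b
                q_prev, q = q, w / b
            T = (np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1)
                 + np.diag(beta[:k - 1], -1))
            return np.linalg.eigvalsh(T)  # Ritz values

    In the study's generalized eigenproblem, the matrix-vector step is a solve with the decomposed stiffness matrix rather than a plain multiplication, which is why matrix decomposition and forward/backward substitution dominate the runtime and are the natural targets for subtask-level parallelism.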

  16. Water demand forecasting: review of soft computing methods.

    PubMed

    Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R

    2017-07-01

    Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. While ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.

  17. An Evolutionary Method for Financial Forecasting in Microscopic High-Speed Trading Environment.

    PubMed

    Huang, Chien-Feng; Li, Hsu-Chih

    2017-01-01

    The advancement of information technology in financial applications has led to fast market-driven events that prompt flash decision-making and actions issued by computer algorithms. As a result, today's markets experience intense activity in a highly dynamic environment, where trading systems respond to one another at a much faster pace than before. This new breed of technology involves the implementation of high-speed trading strategies which generate a significant portion of the activity in financial markets and present researchers with a wealth of information not available in traditional low-speed trading environments. In this study, we aim at developing feasible computational intelligence methodologies, particularly genetic algorithms (GA), to shed light on high-speed trading research using price data of stocks at the microscopic level. Our empirical results show that the proposed GA-based system is able to improve the accuracy of price-movement prediction significantly, and we expect this GA-based methodology to advance the current state of research for high-speed trading and other relevant financial applications.
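
    As a hedged illustration (this record does not give the paper's actual chromosome encoding or trading rules), a genetic algorithm for directional price prediction can be sketched as selection, crossover, and mutation over candidate predictor weights; the lagged-return features below are hypothetical:

        import numpy as np

        rng = np.random.default_rng(1)

        def fitness(w, X, y):
            # Directional accuracy of a linear up/down predictor.
            return np.mean((X @ w > 0) == (y > 0))

        def evolve(X, y, pop=50, gens=100, mut=0.1):
            P = rng.standard_normal((pop, X.shape[1]))
            for _ in range(gens):
                f = np.array([fitness(w, X, y) for w in P])
                parents = P[np.argsort(f)[::-1][:pop // 2]]        # truncation selection
                pairs = rng.integers(0, len(parents), (pop - len(parents), 2))
                mask = rng.random((len(pairs), X.shape[1])) < 0.5  # uniform crossover
                children = np.where(mask, parents[pairs[:, 0]], parents[pairs[:, 1]])
                children += mut * rng.standard_normal(children.shape)  # Gaussian mutation
                P = np.vstack([parents, children])
            f = np.array([fitness(w, X, y) for w in P])
            return P[np.argmax(f)], f.max()

        # Toy usage: predict next-tick direction from five lagged returns.
        r = rng.standard_normal(1000) * 1e-3
        X = np.column_stack([r[i:-(5 - i)] for i in range(5)])
        y = r[5:]
        w_best, train_acc = evolve(X, y)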

  19. Pulmonary CT image classification with evolutionary programming.

    PubMed

    Madsen, M T; Uppaluri, R; Hoffman, E A; McLennan, G

    1999-12-01

    It is often difficult to classify information in medical images from derived features. The purpose of this research was to investigate the use of evolutionary programming as a tool for selecting important features and generating algorithms to classify computed tomographic (CT) images of the lung. Training and test sets consisting of 17 features derived from multiple lung CT images were generated, along with an indicator of the target area from which the features originated. The feature set included five parameters based on histogram analysis, 11 parameters based on run-length and co-occurrence matrix measures, and the fractal dimension. Two classification experiments were performed. In the first, the classification task was to distinguish between the subtle but known differences between anterior and posterior portions of transverse lung CT sections. The second classification task was to distinguish normal lung CT images from emphysematous images. The performance of the evolutionary programming approach was compared with that of three statistical classifiers that used the same training and test sets. Evolutionary programming produced solutions that compared favorably with those of the statistical classifiers. In separating the anterior from the posterior lung sections, the evolutionary programming results were better than two of the three statistical approaches. The evolutionary programming approach correctly identified all the normal and abnormal lung images, and accomplished this using fewer features than the best statistical method. The results of this study demonstrate the utility of evolutionary programming as a tool for developing classification algorithms.
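
    The record names evolutionary programming, mutation plus selection without crossover, as the engine for feature selection and classifier generation. A minimal sketch of that idea, with a hypothetical linear classifier standing in for the paper's evolved algorithms:

        import numpy as np

        rng = np.random.default_rng(2)

        def accuracy(mask, w, X, y):
            # Linear classifier restricted to the selected feature subset.
            if not mask.any():
                return 0.0
            return np.mean(((X[:, mask] @ w[mask]) > 0) == y)

        def evolve_ep(X, y, pop=30, gens=200):
            n = X.shape[1]
            masks = rng.random((pop, n)) < 0.5
            ws = rng.standard_normal((pop, n))
            for _ in range(gens):
                # Evolutionary programming: each parent spawns one mutated
                # child; there is no crossover, only mutation and survival.
                child_masks = masks ^ (rng.random((pop, n)) < 1.0 / n)  # bit flips
                child_ws = ws + 0.1 * rng.standard_normal((pop, n))     # Gaussian
                all_masks = np.vstack([masks, child_masks])
                all_ws = np.vstack([ws, child_ws])
                f = np.array([accuracy(m, w, X, y)
                              for m, w in zip(all_masks, all_ws)])
                keep = np.argsort(f)[::-1][:pop]     # truncation survival
                masks, ws = all_masks[keep], all_ws[keep]
            return masks[0], ws[0]

        # Toy usage: 100 samples with 17 hypothetical texture features.
        X = rng.standard_normal((100, 17))
        y = (X[:, 0] + 0.5 * X[:, 3]) > 0
        best_mask, best_w = evolve_ep(X, y)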

  20. The Pharmaco-, Population- and Evolutionary Dynamics of Multi-drug Therapy: Experiments with S. aureus and E. coli and Computer Simulations

    PubMed Central

    Ankomah, Peter; Johnson, Paul J. T.; Levin, Bruce R.

    2013-01-01

    There are both pharmacodynamic and evolutionary reasons to use multiple rather than single antibiotics to treat bacterial infections; in combination, antibiotics can be more effective in killing target bacteria as well as in preventing the emergence of resistance. Nevertheless, with few exceptions like tuberculosis, combination therapy is rarely used for bacterial infections. One reason for this is a relative dearth of the pharmaco-, population- and evolutionary dynamic information needed for the rational design of multi-drug treatment protocols. Here, we use in vitro pharmacodynamic experiments, mathematical models and computer simulations to explore the relative efficacies of different two-drug regimens in clearing bacterial infections and the conditions under which multi-drug therapy will prevent the ascent of resistance. We estimate the parameters and explore the fit of Hill functions to compare the pharmacodynamics of antibiotics of four different classes, individually and in pairs, during cidal experiments with pathogenic strains of Staphylococcus aureus and Escherichia coli. We also consider the relative efficacy of these antibiotics and antibiotic pairs in reducing the level of phenotypically resistant but genetically susceptible, persister, subpopulations. Our results provide compelling support for the proposition that the nature and form of the interactions between drugs of different classes (synergy, antagonism, suppression and additivity) have to be determined empirically and cannot be inferred from what is known about the pharmacodynamics or mode of action of these drugs individually. Monte Carlo simulations of within-host treatment incorporating these pharmacodynamic results and clinically relevant refuge subpopulations of bacteria indicate that: (i) the form of drug-drug interactions can profoundly affect the rate at which infections are cleared, (ii) two-drug therapy can prevent treatment failure even when bacteria resistant to single drugs are present

  1. Hybrid Tuning of an Evolutionary Algorithm for Sensor Allocation

    DTIC Science & Technology

    2011-06-01

    survey of tuning methods for evolutionary algorithms can be found in [10] where algorithmic and search approaches are distinguished. The main charac… Yilmaz, B. N. McQuay, H. Yu, A. S. Wu, and J. C. Sciortino, "Evolving sensor suites for enemy radar detection," in Genetic and Evolutionary Computation… GECCO, 2003. [3] T. Shima and C. Schumacher, "Assigning cooperating UAVs to simultaneous tasks on consecutive targets using genetic algorithms"

  2. The spectral-element method, Beowulf computing, and global seismology.

    PubMed

    Komatitsch, Dimitri; Ritsema, Jeroen; Tromp, Jeroen

    2002-11-29

    The propagation of seismic waves through Earth can now be modeled accurately with the recently developed spectral-element method. This method takes into account heterogeneity in Earth models, such as three-dimensional variations of seismic wave velocity, density, and crustal thickness. The method is implemented on relatively inexpensive clusters of personal computers, so-called Beowulf machines. This combination of hardware and software enables us to simulate broadband seismograms without intrinsic restrictions on the level of heterogeneity or the frequency content.

  3. A stochastic method for computing hadronic matrix elements

    DOE PAGES

    Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; ...

    2014-01-24

    In this study, we present a stochastic method for the calculation of baryon 3-point functions which is an alternative to the typically used sequential method and offers more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume, and we find a favorable signal-to-noise ratio, suggesting that the stochastic method can be extended to large volumes, providing an efficient approach to compute hadronic matrix elements and form factors.

  4. Computational methods to obtain time optimal jet engine control

    NASA Technical Reports Server (NTRS)

    Basso, R. J.; Leake, R. J.

    1976-01-01

    Dynamic Programming and the Fletcher-Reeves Conjugate Gradient Method are two existing methods which can be applied to solve a general class of unconstrained fixed time, free right end optimal control problems. New techniques are developed to adapt these methods to solve a time optimal control problem with state variable and control constraints. Specifically, they are applied to compute a time optimal control for a jet engine control problem.
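
    For the second of these methods, a minimal unconstrained Fletcher-Reeves sketch in Python is shown below; the paper's actual contribution, the adaptation to time-optimal control with state and control constraints, is not reproduced here:

        import numpy as np

        def fletcher_reeves(f, grad, x0, tol=1e-8, max_iter=200):
            # Fletcher-Reeves nonlinear conjugate gradients with a simple
            # backtracking (Armijo) line search.
            x = np.asarray(x0, dtype=float)
            g = grad(x)
            d = -g
            for _ in range(max_iter):
                if g @ d >= 0:                  # safeguard: restart downhill
                    d = -g
                t, fx = 1.0, f(x)
                while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-12:
                    t *= 0.5                    # backtrack to sufficient decrease
                x = x + t * d
                g_new = grad(x)
                if np.linalg.norm(g_new) < tol:
                    break
                beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves coefficient
                d = -g_new + beta * d
                g = g_new
            return x

        # Toy usage on a 2-D quadratic.
        Q = np.array([[3.0, 1.0], [1.0, 2.0]])
        x_star = fletcher_reeves(lambda x: 0.5 * x @ Q @ x, lambda x: Q @ x, [1.0, 1.0])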

  5. Software for computing eigenvalue bounds for iterative subspace matrix methods

    NASA Astrophysics Data System (ADS)

    Shepard, Ron; Minkoff, Michael; Zhou, Yunkai

    2005-07-01

    This paper describes software for computing eigenvalue bounds for the standard and generalized hermitian eigenvalue problems as described in [Y. Zhou, R. Shepard, M. Minkoff, Computing eigenvalue bounds for iterative subspace matrix methods, Comput. Phys. Comm. 167 (2005) 90-102]. The software discussed in this manuscript applies to any subspace method, including Lanczos, Davidson, SPAM, Generalized Davidson Inverse Iteration, Jacobi-Davidson, and the Generalized Jacobi-Davidson methods, and it is applicable to either outer or inner eigenvalues. This software can be applied during the subspace iterations in order to truncate the iterative process and to avoid unnecessary effort when converging specific eigenvalues to a required target accuracy, and it can be applied to the final set of Ritz values to assess the accuracy of the converged results.
    Program summary
    Title of program: SUBROUTINE BOUNDS_OPT
    Catalogue identifier: ADVE
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVE
    Computers: any computer that supports a Fortran 90 compiler
    Operating systems: any operating system that supports a Fortran 90 compiler
    Programming language: Standard Fortran 90
    High-speed storage required: 5m+5 working-precision and 2m+7 integer words for m Ritz values
    No. of bits in a word: the floating-point working precision is parameterized with the symbolic constant WP
    No. of lines in distributed program, including test data, etc.: 2452
    No. of bytes in distributed program, including test data, etc.: 281 543
    Distribution format: tar.gz
    Nature of physical problem: the computational solution of eigenvalue problems using iterative subspace methods has widespread applications in the physical sciences and engineering as well as other areas of mathematical modeling (economics, social sciences, etc.). The accuracy of the solution of such problems and the utility of those errors is a fundamental problem that is of

  6. Applications of computer-intensive statistical methods to environmental research.

    PubMed

    Pitt, D G; Kreutzweiser, D P

    1998-02-01

    Conventional statistical approaches rely heavily on the properties of the central limit theorem to bridge the gap between the characteristics of a sample and some theoretical sampling distribution. Problems associated with nonrandom sampling, unknown population distributions, heterogeneous variances, small sample sizes, and missing data jeopardize the assumptions of such approaches and cast skepticism on conclusions. Conventional nonparametric alternatives offer freedom from distribution assumptions, but design limitations and loss of power can be serious drawbacks. With the data-processing capacity of today's computers, a new dimension of distribution-free statistical methods has evolved that addresses many of the limitations of conventional parametric and nonparametric methods. Computer-intensive statistical methods involve reshuffling, resampling, or simulating a data set thousands of times to empirically define a sampling distribution for a chosen test statistic. The only assumption necessary for valid results is the random assignment of experimental units to the test groups or treatments. Application to a real data set illustrates the advantages of these methods, including freedom from distribution assumptions without loss of power, complete choice over test statistics, easy adaptation to design complexities and missing data, and considerable intuitive appeal. The illustrations also reveal that computer-intensive methods can be more time consuming than conventional methods and the amount of computer code required to orchestrate reshuffling, resampling, or simulation procedures can be appreciable.
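
    A minimal sketch of the reshuffling idea described here, building an empirical null distribution for a chosen test statistic by permuting group labels, might look like this in Python:

        import numpy as np

        rng = np.random.default_rng(3)

        def permutation_test(a, b, n_resamples=10000):
            # Two-sample permutation test on the difference of means. The only
            # assumption is random assignment of units to the two groups.
            observed = a.mean() - b.mean()
            pooled = np.concatenate([a, b])
            count = 0
            for _ in range(n_resamples):
                perm = rng.permutation(pooled)
                diff = perm[:len(a)].mean() - perm[len(a):].mean()
                if abs(diff) >= abs(observed):
                    count += 1
            return observed, (count + 1) / (n_resamples + 1)  # two-sided p-value

        # Toy usage with small, non-normal, unequal-sized samples.
        a = rng.exponential(1.0, 12)
        b = rng.exponential(1.5, 9)
        diff, p_value = permutation_test(a, b)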

  7. Eco-Evo PVAs: Incorporating Eco-Evolutionary Processes into Population Viability Models

    EPA Science Inventory

    We synthesize how advances in computational methods and population genomics can be combined within an Ecological-Evolutionary (Eco-Evo) PVA model. Eco-Evo PVA models are powerful new tools for understanding the influence of evolutionary processes on plant and animal population pe...

  9. Practical Use of Computationally Frugal Model Analysis Methods

    DOE PAGES

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; ...

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.

  11. Fully consistent CFD methods for incompressible flow computations

    NASA Astrophysics Data System (ADS)

    Kolmogorov, D. K.; Shen, W. Z.; Sørensen, N. N.; Sørensen, J. N.

    2014-06-01

    Nowadays, collocated-grid CFD methods are among the most efficient tools for computing flows past wind turbines. To ensure robustness, these methods require special attention to the well-known problem of pressure-velocity coupling. To enforce pressure-velocity coupling on collocated grids, many commercial codes use the so-called momentum interpolation method of Rhie and Chow [1]. As is known, this method and some of its widely used modifications result in solutions that depend on the time step at convergence. In this paper, the magnitude of this dependence is shown to contribute about 0.5% of the total error in a typical turbulent flow computation. If coarse grids are used, however, the standard interpolation methods exhibit much stronger inconsistent behavior. To overcome the problem, a recently developed interpolation method that is independent of the time step is used. It is shown that, in comparison to another time-step-independent method, this method may enhance the convergence rate of the SIMPLEC algorithm by up to 25%. The method is verified using turbulent flow computations around a NACA 64618 airfoil and the roll-up of a shear layer, which may appear in wind turbine wakes.

  12. Optimal error estimates for high order Runge-Kutta methods applied to evolutionary equations

    SciTech Connect

    McKinney, W.R.

    1989-01-01

    Fully discrete approximations to 1-periodic solutions of the generalized Korteweg-de Vries and Cahn-Hilliard equations are analyzed. These approximations are generated by an implicit Runge-Kutta method for the temporal discretization and a Galerkin finite element method for the spatial discretization. Furthermore, these approximations may be of arbitrarily high order. In particular, it is shown that the well-known order-reduction phenomenon afflicting implicit Runge-Kutta methods does not occur. Numerical results supporting these optimal error estimates for the Korteweg-de Vries equation and indicating the existence of a slow-motion manifold for the Cahn-Hilliard equation are also provided.

  13. A novel evolutionary approach to image enhancement filter design: method and applications.

    PubMed

    Hong, Jin-Hyuk; Cho, Sung-Bae; Cho, Ung-Keun

    2009-12-01

    Image enhancement is an important issue in digital image processing. Various approaches have been developed to solve image enhancement problems, but most of them require deep expert knowledge to design appropriate image filters. To automatically design a filter, we propose a novel approach based on the genetic algorithm that optimizes a set of standard filters by determining their types and order. Moreover, the proposed method is able to manage various types of noise factors. We applied the proposed method to local and global image enhancement problems such as impulsive noise reduction, interpolation, and orientation enhancement. In terms of subjective and objective evaluations, the results show the superiority of the proposed method.

  14. Determinant Computation on the GPU using the Condensation Method

    NASA Astrophysics Data System (ADS)

    Anisul Haque, Sardar; Moreno Maza, Marc

    2012-02-01

    We report on a GPU implementation of the condensation method designed by Abdelmalek Salem and Kouachi Said for computing the determinant of a matrix. We consider two types of coefficients: modular integers and floating-point numbers. We evaluate the performance of our code by measuring its effective bandwidth, and we argue that it is numerically stable in the floating-point case. In addition, we compare our code with serial implementations of determinant computation from well-known mathematical packages. Our results suggest that a GPU implementation of the condensation method has large potential for improving those packages in terms of running time and numerical stability.
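
    The Salem-Kouachi scheme itself is not reproduced in this record. As a hedged stand-in from the same family, the classical Dodgson condensation below also reduces the matrix one order per step using 2x2 minors; it illustrates why condensation maps well to GPUs, since every entry of each condensed matrix can be computed independently:

        import numpy as np

        def dodgson_det(A):
            # Classical Dodgson condensation: shrink the matrix one order per
            # step using 2x2 minors, dividing elementwise by the interior of
            # the matrix from two steps earlier. This variant breaks down if
            # an interior entry becomes zero, one of the issues the
            # Salem-Kouachi formulation addresses.
            A = np.asarray(A, dtype=float)
            cur = A.copy()
            prev = np.ones((A.shape[0] + 1, A.shape[0] + 1))
            while cur.shape[0] > 1:
                minors = cur[:-1, :-1] * cur[1:, 1:] - cur[:-1, 1:] * cur[1:, :-1]
                prev, cur = cur, minors / prev[1:-1, 1:-1]
            return cur[0, 0]

        # Sanity check against LAPACK's determinant.
        A = np.array([[2.0, 1.0, 3.0], [0.0, 4.0, 1.0], [5.0, 2.0, 2.0]])
        assert np.isclose(dodgson_det(A), np.linalg.det(A))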

  15. Comparison of three computer methods of sperm head analysis.

    PubMed

    Goulart, Ariadne Rodrigues; de Alencar Hausen, Moema; Monteiro-Leal, Luiz Henrique

    2003-09-01

    Sperm heads were analyzed using three different computer morphometric tools under varied experimental conditions to identify the most reliable and secure strategy among them. The study consisted of controlled experiments on sperm morphology analysis from volunteers, conducted in a laboratory of microscopy and image processing, using ten human semen samples donated by different zoospermic men. Semen samples were collected by masturbation after ≥72 hours of abstinence. Spermatozoon head morphology was compared across different video-microscopy systems, three computer programs, various staining conditions, and manipulation by different operators. Nonbiological material in the form of latex beads was also used. The data obtained suggest that the semiautomatic computer program is the most reliable and secure method for performing sperm analysis; it is also fast compared with manual methods. Computer systems for sperm analysis should incorporate a step of interactive object identification to work properly, allowing the operator to confirm or correct possible computer misidentification. The latex beads were used to confirm the capability of all three computer programs to correctly evaluate nonbiological material.

  16. Support Vector Machine-based method for predicting subcellular localization of mycobacterial proteins using evolutionary information and motifs.

    PubMed

    Rashid, Mamoon; Saha, Sudipto; Raghava, Gajendra Ps

    2007-09-13

    In the past, a number of methods have been developed for predicting the subcellular localization of eukaryotic, prokaryotic (Gram-negative and Gram-positive bacterial) and human proteins, but no method had been developed for mycobacterial proteins, which may represent a repertoire of potent immunogens of this dreaded pathogen. In this study, an attempt has been made to develop a method for predicting the subcellular localization of mycobacterial proteins. The models were trained and tested on 852 mycobacterial proteins and evaluated using the five-fold cross-validation technique. First, an SVM (Support Vector Machine) model was developed using amino acid composition; an overall accuracy of 82.51% was achieved, with an average accuracy (mean of class-wise accuracies) of 68.47%. In order to utilize evolutionary information, an SVM model was developed using PSSM (Position-Specific Scoring Matrix) profiles obtained from PSI-BLAST (Position-Specific Iterated BLAST); the overall accuracy achieved was 86.62%, with an average accuracy of 73.71%. In addition, HMM (Hidden Markov Model), MEME/MAST (Multiple Em for Motif Elicitation / Motif Alignment and Search Tool) and hybrid models that combined two or more models were also developed. We achieved a maximum overall accuracy of 86.8%, with an average accuracy of 89.00%, using a combination of the PSSM-based SVM model and MEME/MAST. The performance of our method was compared with that of existing methods developed for predicting the subcellular localization of Gram-positive bacterial proteins. A highly accurate method has thus been developed for predicting the subcellular localization of mycobacterial proteins; it also predicts an important class of proteins, namely membrane-attached proteins. This method will be useful in annotating newly sequenced or hypothetical mycobacterial proteins. Based on the above study, a freely accessible web server, TBpred (http://www.imtech.res.in/raghava/tbpred/), has been developed.
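
    The composition-based SVM step can be sketched as follows. The sequences, labels, and hyperparameters are placeholders rather than the study's data or tuned values, and the PSSM-based model would replace the composition vector with PSI-BLAST profile features:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

        def aa_composition(seq):
            # 20-dimensional amino acid composition feature vector.
            seq = seq.upper()
            return np.array([seq.count(a) / len(seq) for a in AMINO_ACIDS])

        # Hypothetical toy data; the study used 852 labelled mycobacterial proteins.
        sequences = ["MKTAYIAKQR", "GGLLLAAVVI", "MDEKRRHSTQ", "PLIVGGAALL"]
        labels = [0, 1, 0, 1]   # e.g., 0 = cytoplasmic, 1 = membrane-attached

        X = np.array([aa_composition(s) for s in sequences])
        clf = SVC(kernel="rbf", C=10.0, gamma="scale")
        # The paper reports five-fold cross-validation; with real data use cv=5.
        scores = cross_val_score(clf, X, labels, cv=2)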

  17. Measuring coherence of computer-assisted likelihood ratio methods.

    PubMed

    Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H

    2015-04-01

    Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used.

  18. The continuous slope-area method for computing event hydrographs

    USGS Publications Warehouse

    Smith, Christopher F.; Cordova, Jeffrey T.; Wiele, Stephen M.

    2010-01-01

    The continuous slope-area (CSA) method expands the slope-area method of computing peak discharge to a complete flow event. Continuously recording pressure transducers installed at three or more cross sections provide water-surface slopes and stage during an event that can be used, with cross-section surveys and estimates of channel roughness, to compute a continuous discharge hydrograph. The CSA method has been made feasible by the availability of low-cost recording pressure transducers that provide a continuous record of stage. The CSA method was implemented on the Babocomari River in Arizona in 2002 to monitor streamflow in a channel reach by installing eight pressure transducers in four cross sections within the reach. Continuous discharge hydrographs were constructed for five streamflow events during 2002-2006. Results from this study indicate that the CSA method can be used to obtain continuous hydrographs, and that rating curves can be generated from streamflow events.
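
    At the core of the method is the classical slope-area computation, which combines Manning's equation with the measured water-surface slope. A hedged sketch for a single subsection in SI units (the USGS procedure also subdivides the reach and applies velocity-head corrections, omitted here):

        import math

        def slope_area_discharge(area, wetted_perimeter, slope, n_manning):
            # Manning's equation: Q = (1/n) * A * R^(2/3) * S^(1/2), with
            # area A (m^2), wetted perimeter (m), water-surface slope S
            # (dimensionless, from paired transducers), and roughness n.
            R = area / wetted_perimeter          # hydraulic radius
            return (1.0 / n_manning) * area * R ** (2.0 / 3.0) * math.sqrt(slope)

        # Toy usage: one time step of a hydrograph, geometry from the surveys.
        Q = slope_area_discharge(area=14.2, wetted_perimeter=12.5,
                                 slope=0.0018, n_manning=0.035)

    Applying this at every recorded time step, with the geometry interpolated from the surveyed cross sections at the observed stage, yields the continuous hydrograph.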

  19. Numerical methods for solving ODEs on the infinity computer

    NASA Astrophysics Data System (ADS)

    Mazzia, F.; Sergeyev, Ya. D.; Iavernaro, F.; Amodio, P.; Mukhametzhanov, M. S.

    2016-10-01

    New algorithms for the numerical solution of Ordinary Differential Equations (ODEs) with initial conditions are proposed. They are designed to work on a new kind of supercomputer, the Infinity Computer, which is able to deal numerically with finite, infinite and infinitesimal numbers. Owing to this capability, the Infinity Computer allows one to calculate the exact derivatives of functions using infinitesimal values of the stepsize. As a consequence, the new methods are able to work with the exact values of the derivatives instead of their approximations. Within this context, variants of one-step multi-point methods closely related to the classical Taylor formulae and to the Obrechkoff methods are considered. To give numerical evidence of the theoretical results, test problems are solved by means of the new methods and the results are compared with the performance of classical methods.

  20. New computational methods and algorithms for semiconductor science and nanotechnology

    NASA Astrophysics Data System (ADS)

    Gamoke, Benjamin C.

    The design and implementation of sophisticated computational methods and algorithms are critical for solving problems in nanotechnology and semiconductor science. Two key methods will be described to overcome challenges in contemporary surface science. The first method focuses on accurately cancelling interactions in a molecular system, such as modeling adsorbates on periodic surfaces at low coverages, a problem for which current methodologies are computationally inefficient. The second method pertains to the accurate calculation of core-ionization energies through X-ray photoelectron spectroscopy. This development can provide the assignment of peaks in X-ray photoelectron spectra, which can determine the chemical composition and bonding environment of surface species. Finally, illustrative surface-adsorbate and gas-phase studies using the developed methods will also be featured.

  1. Critical thinking: concept analysis from the perspective of Rodger's evolutionary method of concept analysis.

    PubMed

    Carbogim, Fábio da Costa; Oliveira, Larissa Bertacchini de; Püschel, Vilanice Alves de Araújo

    2016-09-01

    To analyze the concept of critical thinking (CT) from Rodger's evolutionary perspective, documentary research was undertaken in the Cinahl, Lilacs, Bdenf and Dedalus databases, using the keywords 'critical thinking' and 'Nursing', without limitation by year of publication. The data were analyzed in accordance with the stages of Rodger's conceptual model. Books and full articles published in Portuguese, English or Spanish which addressed CT in the teaching and practice of Nursing were included; articles which did not address aspects related to the concept of CT were excluded. The sample was made up of 42 works. As a substitute term, emphasis is placed on 'analytical thinking', and, as a related factor, decision-making. In order, the most frequent preceding and consequent attributes were: ability to analyze, training of the student nurse, and clinical decision-making. As the implications of CT, emphasis is placed on achieving effective results in care for the patient, family and community. CT is a cognitive skill which involves analysis, logical reasoning and clinical judgment, geared towards the resolution of problems, and standing out in the training and practice of the nurse with a view to accurate clinical decision-making and the achievement of effective results.

  2. Practical Use of Computationally Frugal Model Analysis Methods.

    PubMed

    Hill, Mary C; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen

    2016-03-01

    Three challenges compromise the utility of mathematical models of groundwater and other environmental systems: (1) a dizzying array of model analysis methods and metrics make it difficult to compare evaluations of model adequacy, sensitivity, and uncertainty; (2) the high computational demands of many popular model analysis methods (requiring 1000s, 10,000s, or more model runs) make them difficult to apply to complex models; and (3) many models are plagued by unrealistic nonlinearities arising from the numerical model formulation and implementation. This study proposes a strategy to address these challenges through a careful combination of model analysis and implementation methods. In this strategy, computationally frugal model analysis methods (often requiring a few dozen parallelizable model runs) play a major role, and computationally demanding methods are used for problems where (relatively) inexpensive diagnostics suggest the frugal methods are unreliable. We also argue in favor of detecting and, where possible, eliminating unrealistic model nonlinearities; this increases the realism of the model itself and facilitates the application of frugal methods. Literature examples are used to demonstrate the use of frugal methods and associated diagnostics. We suggest that the strategy proposed in this paper would allow the environmental sciences community to achieve greater transparency and falsifiability of environmental models, and to obtain greater scientific insight from ongoing and future modeling efforts.
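
    Many frugal methods are built on local, derivative-based statistics obtained from a handful of model runs. A hedged sketch of one such diagnostic, scaled sensitivities from one-sided finite differences (one common scaling; roughly one extra model run per parameter):

        import numpy as np

        def scaled_sensitivities(model, params, rel_step=0.01):
            # s_ij = (dy_i / dp_j) * p_j / y_i for a model mapping parameters
            # to simulated observations; assumes nonzero params and outputs.
            params = np.asarray(params, dtype=float)
            y0 = np.asarray(model(params), dtype=float)
            S = np.empty((y0.size, params.size))
            for j, p in enumerate(params):
                perturbed = params.copy()
                perturbed[j] = p * (1.0 + rel_step)   # one extra run per parameter
                dy = (np.asarray(model(perturbed)) - y0) / (p * rel_step)
                S[:, j] = dy * p / y0
            return S

        # Toy usage with a hypothetical two-parameter exponential-decay model.
        model = lambda p: np.array([p[0] * np.exp(-p[1] * t) for t in (1.0, 2.0, 4.0)])
        S = scaled_sensitivities(model, [5.0, 0.3])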

  3. An Accurate and Efficient Method of Computing Differential Seismograms

    NASA Astrophysics Data System (ADS)

    Hu, S.; Zhu, L.

    2013-12-01

    Inversion of seismic waveforms for Earth structure usually requires computing partial derivatives of seismograms with respect to velocity model parameters. We developed an accurate and efficient method to calculate differential seismograms for multi-layered elastic media, based on the Thomson-Haskell propagator matrix technique. We first derived the partial derivatives of the Haskell matrix and its compound matrix with respect to the layer parameters (P-wave velocity, shear-wave velocity, and density). We then derived the partial derivatives of the surface displacement kernels in the frequency-wavenumber domain. The differential seismograms are obtained using the frequency-wavenumber double integration method. The implementation is computationally efficient: the total computing time is proportional to the time of computing the seismogram itself, with a constant that is independent of the number of layers in the model. We verified the correctness of the results by comparing with differential seismograms computed using the finite-difference method. Our results are more accurate because of the analytical nature of the derived partial derivatives.

  4. A comparative study of computational methods in cosmic gas dynamics

    NASA Technical Reports Server (NTRS)

    Van Albada, G. D.; Van Leer, B.; Roberts, W. W., Jr.

    1982-01-01

    Many theoretical investigations of fluid flows in astrophysics require extensive numerical calculations. The selection of an appropriate computational method is, therefore, important for the astronomer who has to solve an astrophysical flow problem. The present investigation aims to provide an informational basis for such a selection by comparing a variety of numerical methods on a test problem. The test problem involves a simple, one-dimensional model of the gas flow in a spiral galaxy. The numerical methods considered include the beam scheme, Godunov's method (G), the second-order flux-splitting method (FS2), MacCormack's method, and the flux-corrected transport methods of Boris and Book (1973). It is found that the best second-order method (FS2) outperforms the best first-order method (G) by a huge margin.

  5. Computer controlled fluorometer device and method of operating same

    DOEpatents

    Kolber, Z.; Falkowski, P.

    1990-07-17

    A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means. 13 figs.

  6. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2004-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  9. A Spectral Time-Domain Method for Computational Electrodynamics

    NASA Astrophysics Data System (ADS)

    Lambers, James V.

    2009-09-01

    We present a new approach to the numerical solution of Maxwell's equations in the case of spatially-varying electric permittivity and/or magnetic permeability, based on Krylov subspace spectral (KSS) methods. KSS methods for scalar equations compute each Fourier coefficient of the solution using techniques developed by Gene Golub and Gérard Meurant for approximating elements of functions of matrices by Gaussian quadrature in the spectral, rather than physical, domain. We show how they can be generalized to coupled systems of equations, such as Maxwell's equations, by choosing appropriate basis functions that, while induced by this coupling, still allow efficient and robust computation of the Fourier coefficients of each spatial component of the electric and magnetic fields. We also discuss the implementation of appropriate boundary conditions for simulation on infinite computational domains, and how discontinuous coefficients can be handled.

  10. Computational Methods for CLIP-seq Data Processing.

    PubMed

    Reyes-Herrera, Paula H; Ficarra, Elisa

    2014-01-01

    RNA-binding proteins (RBPs) are at the core of post-transcriptional regulation and thus of gene expression control at the RNA level. One of the principal challenges in the field of gene expression regulation is to understand the mechanisms of action of RBPs. As a result of the recent evolution of experimental techniques, it is now possible to obtain the RNA regions recognized by RBPs on a transcriptome-wide scale. In fact, CLIP-seq protocols use the joint action of CLIP (crosslinking immunoprecipitation) and high-throughput sequencing to recover the transcriptome-wide set of interaction regions for a particular protein. Nevertheless, computational methods are necessary to process CLIP-seq experimental data and are key to advancing the understanding of gene regulatory mechanisms. Considering the importance of computational methods in this area, we present a review of the current status of computational approaches used and proposed for CLIP-seq data.

  12. Methods of parallel computation applied on granular simulations

    NASA Astrophysics Data System (ADS)

    Martins, Gustavo H. B.; Atman, Allbens P. F.

    2017-06-01

    Every year, parallel computing becomes cheaper and more accessible. As a consequence, its applications are spreading across all research areas. Granular materials are a promising area for parallel computing. To support this statement we study the impact of parallel computing on simulations of the BNE (Brazil Nut Effect). This effect is the remarkable rising of an intruder confined in a granular medium when vertically shaken against gravity. By means of DEM (Discrete Element Method) simulations, we study the code performance, testing different methods to improve clock time. A comparison between serial and parallel algorithms, using OpenMP®, is also shown. The best improvement was obtained by optimizing the function that finds contacts using Verlet's cells.
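
    The contact-finding optimization mentioned at the end is the classic cell-list idea: bin the particles into cells no smaller than the contact distance so that candidate contacts need only be sought in neighbouring cells. A hedged serial sketch in Python (the study's code is OpenMP-parallel):

        import numpy as np
        from collections import defaultdict
        from itertools import product

        def find_contacts(pos, radius):
            # Verlet-cell search for pairs of equal disks closer than 2*radius;
            # no periodic boundaries in this sketch.
            cell = 2.0 * radius
            grid = defaultdict(list)
            for i, p in enumerate(pos):
                grid[tuple((p // cell).astype(int))].append(i)
            contacts = []
            for (cx, cy), members in grid.items():
                for dx, dy in product((-1, 0, 1), repeat=2):
                    for j in grid.get((cx + dx, cy + dy), ()):
                        for i in members:
                            if i < j and np.linalg.norm(pos[i] - pos[j]) < cell:
                                contacts.append((i, j))
            return contacts

        # Toy usage: 200 disks of radius 0.1 scattered in a 10 x 10 box.
        rng = np.random.default_rng(4)
        pairs = find_contacts(rng.uniform(0.0, 10.0, size=(200, 2)), radius=0.1)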

  13. Using THz Spectroscopy, Evolutionary Network Analysis Methods, and MD Simulation to Map the Evolution of Allosteric Communication Pathways in c-Type Lysozymes

    PubMed Central

    Woods, Kristina N.; Pfeffer, Juergen

    2016-01-01

    It is now widely accepted that protein function is intimately tied with the navigation of energy landscapes. In this framework, a protein sequence is not described by a distinct structure but rather by an ensemble of conformations. And it is through this ensemble that evolution is able to modify a protein's function by altering its landscape. Hence, the evolution of protein functions involves selective pressures that adjust the sampling of the conformational states. In this work, we focus on elucidating the evolutionary pathway that shaped the function of individual proteins that make up the mammalian c-type lysozyme subfamily. Using both experimental and computational methods, we map out specific intermolecular interactions that direct the sampling of conformational states and accordingly, also underlie shifts in the landscape that are directly connected with the formation of novel protein functions. By contrasting three representative proteins in the family, we identify molecular mechanisms that are associated with the selectivity of enhanced antimicrobial properties and consequently, divergent protein function. Namely, we link the extent of localized fluctuations involving the loop separating helices A and B with shifts in the equilibrium of the ensemble of conformational states that mediate interdomain coupling and concurrently moderate substrate binding affinity. This work reveals unique insights into the molecular level mechanisms that promote the progression of interactions that connect the immune response to infection with the nutritional properties of lactation, while also providing a deeper understanding about how evolving energy landscapes may define present-day protein function. PMID:26337549

  14. Solving evolutionary-type differential equations and physical problems using the operator method

    NASA Astrophysics Data System (ADS)

    Zhukovsky, K. V.

    2017-01-01

    We present a general operator method based on the advanced technique of the inverse derivative operator for solving a wide range of problems described by some classes of differential equations. We construct and use inverse differential operators to solve several differential equations. We obtain operator identities involving an inverse derivative operator, integral transformations, and generalized forms of orthogonal polynomials and special functions. We present examples of using the operator method to construct solutions of equations containing linear and quadratic forms of a pair of operators satisfying Heisenberg-type relations and solutions of various modifications of partial differential equations of the Fourier heat conduction type, Fokker-Planck type, Black-Scholes type, etc. We demonstrate using the operator technique to solve several physical problems related to the charge motion in quantum mechanics, heat propagation, and the dynamics of the beams in accelerators.
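
    As a hedged, textbook-style illustration of the operator technique (standard material, not quoted from the paper): the inverse derivative operator acts as an integral, and evolution equations are solved by exponentiating the spatial operator. For the heat equation, in LaTeX:

        % Operational solution of the heat equation u_t = u_{xx}, u(x,0) = f(x):
        \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}
        \quad\Longrightarrow\quad
        u(x,t) = e^{\,t\,\partial_x^2} f(x)
               = \frac{1}{2\sqrt{\pi t}}
                 \int_{-\infty}^{\infty} e^{-\frac{(x-\xi)^2}{4t}} f(\xi)\, d\xi .

    The exponential operator here is evaluated through the Gauss-Weierstrass (heat-kernel) transform; analogous operator identities handle the Fokker-Planck and Black-Scholes type equations mentioned in the abstract.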

  15. PSD computations using Welch's method. [Power Spectral Density (PSD)

    SciTech Connect

    Solomon, Jr, O M

    1991-12-01

    This report describes Welch's method for computing Power Spectral Densities (PSDs). We first describe the bandpass filter method, which uses filtering, squaring, and averaging operations to estimate a PSD. Second, we delineate the relationship of Welch's method to the bandpass filter method. Third, the frequency-domain signal-to-noise ratio for a sine wave in white noise is derived. This derivation includes the computation of the noise floor due to quantization noise. The signal-to-noise ratio and noise floor depend on the FFT length and window. Fourth, the variance of the Welch PSD is discussed via chi-square random variables and degrees of freedom. This report contains many examples, figures and tables to illustrate the concepts. 26 refs.
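
    A minimal sketch of Welch's method as described, segmenting, windowing, squaring, and averaging, using SciPy's implementation:

        import numpy as np
        from scipy.signal import welch

        fs = 1000.0                          # sample rate (Hz)
        t = np.arange(0.0, 4.0, 1.0 / fs)
        rng = np.random.default_rng(5)
        x = np.sin(2 * np.pi * 50.0 * t) + 0.5 * rng.standard_normal(t.size)

        # Hann window, 1024-sample segments, 50% overlap: averaging the squared
        # windowed FFTs of the segments trades frequency resolution for lower
        # variance, exactly the trade-off discussed via degrees of freedom.
        f, Pxx = welch(x, fs=fs, window="hann", nperseg=1024, noverlap=512)
        peak_hz = f[np.argmax(Pxx)]          # should sit near the 50 Hz sine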

  16. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.

  17. Interactive method for computation of viscous flow with recirculation

    NASA Technical Reports Server (NTRS)

    Brandeis, J.; Rom, J.

    1981-01-01

    An interactive method is proposed for the solution of two-dimensional, laminar flow fields with identifiable regions of recirculation, such as the shear-layer-driven cavity flow. The method treats the flow field as composed of two regions, with an appropriate mathematical model adopted for each region. The shear layer is computed by the compressible boundary layer equations, and the slowly recirculating flow by the incompressible Navier-Stokes equations. The flow field is solved iteratively by matching the local solutions in the two regions. For this purpose a new matching method utilizing an overlap between the two computational regions is developed, and shown to be most satisfactory. Matching of the two velocity components, as well as the change in velocity with respect to depth is amply accomplished using the present approach, and the stagnation points corresponding to separation and reattachment of the dividing streamline are computed as part of the interactive solution. The interactive method is applied to the test problem of a shear layer driven cavity. The computational results are used to show the validity and applicability of the present approach.

  18. EQUILIBRIUM AND NONEQUILIBRIUM FOUNDATIONS OF FREE ENERGY COMPUTATIONAL METHODS

    SciTech Connect

    C. JARZYNSKI

    2001-03-01

    Statistical mechanics provides a rigorous framework for the numerical estimation of free energy differences in complex systems such as biomolecules. This paper presents a brief review of the statistical mechanical identities underlying a number of techniques for computing free energy differences. Both equilibrium and nonequilibrium methods are covered.
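
    One widely used nonequilibrium identity in this class, stated here from the general literature since the abstract does not reproduce formulas, is the Jarzynski equality, which recovers an equilibrium free energy difference from an average over nonequilibrium work values:

        % Jarzynski equality: W is the work performed in one realization of the
        % switching process; the average is over realizations initialized in
        % equilibrium at inverse temperature beta.
        \left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F},
        \qquad \beta = \frac{1}{k_{B} T} .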

  19. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
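
    As a hedged sketch of the multigrid concept the report implements, here is a single two-grid correction cycle for a 1-D Poisson problem, far simpler than the Proteus Euler/Navier-Stokes setting:

        import numpy as np

        def two_grid_poisson(f, u, n_smooth=3, omega=2.0 / 3.0):
            # One two-grid cycle for -u'' = f on (0, 1), zero boundary values;
            # f and u hold the n interior points, with n odd so the grids nest.
            n = f.size
            h = 1.0 / (n + 1)

            def smooth(u, iters):
                for _ in range(iters):       # weighted Jacobi smoothing
                    left = np.r_[0.0, u[:-1]]
                    right = np.r_[u[1:], 0.0]
                    u = (1 - omega) * u + omega * 0.5 * (left + right + h * h * f)
                return u

            u = smooth(u, n_smooth)          # pre-smoothing damps high frequencies
            r = f - (2 * u - np.r_[0.0, u[:-1]] - np.r_[u[1:], 0.0]) / h**2
            rc = 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])   # full weighting
            nc, H = rc.size, 2 * h           # direct coarse solve; a V-cycle recurses
            Ac = (2 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / H**2
            ec = np.linalg.solve(Ac, rc)
            e = np.zeros(n)                  # prolong by linear interpolation
            e[1::2] = ec
            e[0::2] = 0.5 * (np.r_[0.0, ec] + np.r_[ec, 0.0])
            return smooth(u + e, n_smooth)   # post-smoothing

        # Toy usage: repeated cycles on a 63-point grid converge geometrically.
        n = 63
        x = np.linspace(0.0, 1.0, n + 2)[1:-1]
        f = np.pi**2 * np.sin(np.pi * x)     # exact solution is sin(pi * x)
        u = np.zeros(n)
        for _ in range(10):
            u = two_grid_poisson(f, u)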

  20. Decluttering Methods for Computer-Generated Graphic Displays

    NASA Technical Reports Server (NTRS)

    Schultz, E. Eugene, Jr.

    1986-01-01

    Symbol simplification and contrasting enhance a viewer's ability to detect a particular symbol. The report describes experiments designed to indicate how various decluttering methods affect viewers' abilities to distinguish essential from nonessential features on computer-generated graphic displays. Results indicate that partial removal of nonessential graphic features through symbol simplification is as effective for decluttering as total removal of nonessential graphic features.

  2. A multistep screening method to identify genes using evolutionary transcriptome of plants.

    PubMed

    Kim, Chang-Kug; Lim, Hye-Min; Na, Jong-Kuk; Choi, Ji-Weon; Sohn, Seong-Han; Park, Soo-Chul; Kim, Young-Hwan; Kim, Yong-Kab; Kim, Dool-Yi

    2014-01-01

    We introduced a multistep screening method to identify the genes in plants using microarrays and ribonucleic acid (RNA)-seq transcriptome data. Our method describes the process for identifying genes using the salt-tolerance response pathways of the potato (Solanum tuberosum) plant. Gene expression was analyzed using microarrays and RNA-seq experiments that examined three potato lines (high, intermediate, and low salt tolerance) under conditions of salt stress. We screened the orthologous genes and pathway genes involved in salinity-related biosynthetic pathways, and identified nine potato genes that were candidates for salinity-tolerance pathways. The nine genes were selected to characterize their phylogenetic reconstruction with homologous genes of Arabidopsis thaliana, and a Circos diagram was generated to understand the relationships among the selected genes. The involvement of the selected genes in salt-tolerance pathways was verified by reverse transcription polymerase chain reaction analysis. One candidate potato gene was selected for physiological validation by generating dehydration-responsive element-binding 1 (DREB1)-overexpressing transgenic potato plants. The DREB1 overexpression lines exhibited increased salt tolerance and plant growth when compared to that of the control. Although the nine genes identified by our multistep screening method require further characterization and validation, this study demonstrates the power of our screening strategy after the initial identification of genes using microarrays and RNA-seq experiments.

  3. Job-shop scheduling with a combination of evolutionary and heuristic methods

    NASA Astrophysics Data System (ADS)

    Patkai, Bela; Torvinen, Seppo

    1999-08-01

    Since almost all scheduling problems are NP-hard--they cannot be solved in polynomial time--companies that need a realistic scheduling system face serious limitations of the available methods for finding an optimal schedule, especially if the given environment requires adaptation to dynamic variations. Exact methods do find an optimal schedule, but the size of the problem they can solve is very limited, which rules out the required scalability. The solution presented in this paper is a simple, multi-pass heuristic method that aims to avoid the limitations of other well-known formulations. Even though dispatching rules are fast and provide near-optimal solutions in most cases, they are severely limited in efficiency, especially when the schedule builder must satisfy a significant number of constraints. That is the main motivation for adding a simplified genetic algorithm to the dispatching rules, which--due to its stochastic nature--belongs to the heuristic methods, too. The scheduling problem is that of a middle-sized Finnish factory; throughout the investigations, its up-to-date manufacturing data have been used for the sake of realistic calculations.
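
    A toy illustration of the combination described here: the chromosome is a sequence of dispatching-rule choices, one per scheduling decision, and fitness is the makespan produced by decoding that sequence with a list scheduler. The three-job instance, the rule set, and the GA settings below are invented for the sketch and are not from the paper.

        import random

        # Toy job-shop: each job is a list of (machine, duration) operations.
        JOBS = [[(0, 3), (1, 2), (2, 2)],
                [(0, 2), (2, 1), (1, 4)],
                [(1, 4), (2, 3), (0, 1)]]
        RULES = ["SPT", "LPT", "FIFO"]      # dispatching rules the GA chooses among
        N_OPS = sum(len(j) for j in JOBS)

        def makespan(chromosome):
            # Decode: at each decision, pick the next schedulable operation
            # according to the dispatching rule named by the current gene.
            next_op = [0] * len(JOBS)       # next operation index per job
            job_ready = [0] * len(JOBS)     # time each job's previous op finishes
            mach_ready = [0] * 3            # time each machine becomes free
            for gene in chromosome:
                ready = [j for j in range(len(JOBS)) if next_op[j] < len(JOBS[j])]
                if RULES[gene] == "SPT":    # shortest processing time first
                    j = min(ready, key=lambda j: JOBS[j][next_op[j]][1])
                elif RULES[gene] == "LPT":  # longest processing time first
                    j = max(ready, key=lambda j: JOBS[j][next_op[j]][1])
                else:                       # FIFO: earliest-ready job
                    j = min(ready, key=lambda j: job_ready[j])
                m, d = JOBS[j][next_op[j]]
                start = max(job_ready[j], mach_ready[m])
                job_ready[j] = mach_ready[m] = start + d
                next_op[j] += 1
            return max(job_ready)

        def ga(pop_size=30, generations=40, p_mut=0.1):
            pop = [[random.randrange(len(RULES)) for _ in range(N_OPS)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=makespan)                      # elitist selection
                survivors = pop[:pop_size // 2]
                children = []
                while len(children) < pop_size - len(survivors):
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, N_OPS)
                    child = a[:cut] + b[cut:]               # one-point crossover
                    child = [random.randrange(len(RULES))
                             if random.random() < p_mut else g
                             for g in child]                # mutation
                    children.append(child)
                pop = survivors + children
            return min(pop, key=makespan)

        best = ga()
        print("best rules:", [RULES[g] for g in best], "makespan:", makespan(best))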

  4. Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods

    PubMed Central

    Benevides, Leandro de Jesus; de Carvalho, Daniel Santana; Andrade, Roberto Fernandes Silva; Bomfim, Gilberto Cafezeiro; Fernandes, Flora Maria de Campos

    2016-01-01

    Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how various apo E proteins are related in groups of organisms and whether they evolved from a common ancestor. Here, we aimed at performing a phylogenetic study on apo E-carrying organisms. We employed a classical and robust method, Maximum Likelihood (ML), and compared the results with those of a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from the ML method, as well as from the constructed networks, showed two different groups: one with mammals only (C1) and another with fish (C2), and a single node with the single sequence available for an amphibian. The agreement between the results from the different methods shows that the complex networks approach is effective in phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups. PMID:27560837

  5. Computational biology in the cloud: methods and new insights from computing at scale.

    PubMed

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available.

  6. Stress intensity estimates by a computer assisted photoelastic method

    NASA Technical Reports Server (NTRS)

    Smith, C. W.

    1977-01-01

    Following an introductory history, the frozen stress photoelastic method is reviewed together with analytical and experimental aspects of cracks in photoelastic models. Analytical foundations are then presented upon which a computer assisted frozen stress photoelastic technique is based for extracting estimates of stress intensity factors from three-dimensional cracked body problems. The use of the method is demonstrated for two currently important three-dimensional crack problems.

  7. Method and system for environmentally adaptive fault tolerant computing

    NASA Technical Reports Server (NTRS)

    Copenhaver, Jason L. (Inventor); Ramos, Jeremy (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
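
    A minimal sketch of the measure/assess/reconfigure loop the abstract describes. The patent claims the general scheme, not this code: the condition measured, the sensitivity model, and every threshold and name below are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class FaultToleranceConfig:
            tmr_enabled: bool          # triple modular redundancy on/off
            scrub_interval_s: float    # memory scrubbing period

        def choose_config(radiation_flux: float, sensitivity: float) -> FaultToleranceConfig:
            # Reconfigure fault tolerance from a measured environmental
            # condition; thresholds here are invented for illustration.
            expected_upset_rate = radiation_flux * sensitivity   # upsets per second
            if expected_upset_rate > 1e-3:
                return FaultToleranceConfig(tmr_enabled=True, scrub_interval_s=1.0)
            return FaultToleranceConfig(tmr_enabled=False, scrub_interval_s=60.0)

        # e.g. a flux spike (think South Atlantic Anomaly) switches TMR on:
        print(choose_config(radiation_flux=5e3, sensitivity=1e-6))
        print(choose_config(radiation_flux=1e1, sensitivity=1e-6))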

  8. Ordering Methods for Sparse Matrices and Vector Computers.

    DTIC Science & Technology

    1986-08-15

    H. D. Simon, "Incomplete LU Preconditioners for Conjugate-Gradient-Type Iterative Methods," Proceedings of the Eighth SPE Symposium on Reservoir Simulation, Dallas, Texas, February 1985. Recent presentations at professional meetings: C. Ashcraft, "The Solution of Banded Systems of Equations in ..."

  9. Revisiting Seismic Tomography Through Direct Methods and High Performance Computing

    NASA Astrophysics Data System (ADS)

    Ishii, M.; Bogiatzis, P.; Davis, T. A.

    2015-12-01

    Over the last two decades, the rapid increase in data availability and computational power significantly increased the number of data and model parameters that can be investigated in seismic tomography problems. Often, the model space consists of 10^5-10^6 unknown parameters and there are comparable numbers of observations, making direct computational methods such as the singular value decomposition prohibitively expensive or impossible, leaving iterative solvers as the only alternative option. Among the disadvantages of the iterative algorithms is that the inverse of the matrix that defines the system is not explicitly formed. As a consequence, the model resolution and covariance matrices, that are crucial for the quantitative assessment of the uncertainty of the tomographic models, cannot be computed. Despite efforts in finding computationally affordable approximations of these matrices, challenges remain, and approaches such as the checkerboard resolution tests continue to be used. Based upon recent developments in sparse algorithms and high performance computing resources, we demonstrate that direct methods are becoming feasible for large seismic tomography problems, and apply the technique to obtain a regional P-wave tomography model and its full resolution matrix with 267,520 parameters. Furthermore, we show that the structural analysis of the forward operators of the seismic tomography problems can provide insights into the inverse problem, and allows us to determine and exploit approximations that yield accurate solutions.
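
    The computation the abstract refers to can be shown at toy scale: for a damped least-squares tomographic inversion, the model resolution matrix has an explicit closed form, but evaluating it requires direct (rather than iterative) linear algebra. The sizes, sparsity, and damping below are synthetic stand-ins for a problem that in practice has 10^5-10^6 parameters.

        import numpy as np

        rng = np.random.default_rng(0)
        n_rays, n_cells = 200, 50        # tiny stand-in for a real ray-path matrix
        G = rng.random((n_rays, n_cells)) * (rng.random((n_rays, n_cells)) < 0.1)
        lam = 0.5                        # damping parameter

        # Damped least squares: m_est = (G^T G + lam^2 I)^(-1) G^T d,
        # so the model resolution matrix is R = (G^T G + lam^2 I)^(-1) G^T G.
        GtG = G.T @ G
        R = np.linalg.solve(GtG + lam**2 * np.eye(n_cells), GtG)

        # Diagonal of R: 1 means a perfectly resolved cell, 0 an unconstrained one.
        print("resolution diag (first 5 cells):", np.round(np.diag(R)[:5], 3))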

  10. Multilevel Iterative Methods in Nonlinear Computational Plasma Physics

    NASA Astrophysics Data System (ADS)

    Knoll, D. A.; Finn, J. M.

    1997-11-01

    Many applications in computational plasma physics involve the implicit numerical solution of coupled systems of nonlinear partial differential equations or integro-differential equations. Such problems arise in MHD, systems of Vlasov-Fokker-Planck equations, and edge plasma fluid equations. We have been developing matrix-free Newton-Krylov algorithms for such problems and have applied these algorithms to the edge plasma fluid equations [1,2] and to the Vlasov-Fokker-Planck equation [3]. Recently we have found that with increasing grid refinement, the number of Krylov iterations required per Newton iteration has grown unmanageable [4]. This has led us to the study of multigrid methods as a means of preconditioning matrix-free Newton-Krylov methods. In this poster we will give details of the general multigrid preconditioned Newton-Krylov algorithm, as well as algorithm performance details on problems of interest in the areas of magnetohydrodynamics and edge plasma physics. Work supported by US DoE 1. Knoll and McHugh, J. Comput. Phys., 116, pg. 281 (1995) 2. Knoll and McHugh, Comput. Phys. Comm., 88, pg. 141 (1995) 3. Mousseau and Knoll, J. Comput. Phys. (1997) (to appear) 4. Knoll and McHugh, SIAM J. Sci. Comput. 19, (1998) (to appear)

  11. Analysis and optimization of cyclic methods in orbit computation

    NASA Technical Reports Server (NTRS)

    Pierce, S.

    1973-01-01

    The mathematical analysis and computation of the K=3, order 4; K=4, order 6; and K=5, order 7 cyclic methods and the K=5, order 6 Cowell method and some results of optimizing the 3 backpoint cyclic multistep methods for solving ordinary differential equations are presented. Cyclic methods have the advantage over traditional methods of having higher order for a given number of backpoints while at the same time having more free parameters. After considering several error sources the primary source for the cyclic methods has been isolated. The free parameters for three backpoint methods were used to minimize the effects of some of these error sources. They now yield more accuracy with the same computing time as Cowell's method on selected problems. This work is being extended to the five backpoint methods. The analysis and optimization are more difficult here since the matrices are larger and the dimension of the optimizing space is larger. Indications are that the primary error source can be reduced. This will still leave several parameters free to minimize other sources.

  12. The ensemble switch method for computing interfacial tensions.

    PubMed

    Schmitz, Fabian; Virnau, Peter

    2015-04-14

    We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The case of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension.
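
    The ensemble switch method is a thermodynamic integration: a coupling parameter switches the Hamiltonian between two ensembles, and the free-energy difference (from which the interfacial tension follows) is a quadrature over sampled averages. A schematic sketch of that quadrature step, with a synthetic smooth function standing in for the Monte Carlo averages a real implementation would produce:

        import numpy as np

        # Thermodynamic integration: with H(kappa) = (1-kappa) H_A + kappa H_B,
        #     Delta F = integral_0^1 < H_B - H_A >_kappa  d kappa,
        # where each average comes from a Monte Carlo run at fixed kappa.  In
        # the ensemble switch method, Delta F per interface area gives the
        # interfacial tension.

        def mc_average(kappa):
            # Stand-in for a Monte Carlo estimate of <H_B - H_A> at coupling
            # kappa; a real implementation would sample configurations here.
            return 2.0 - 1.5 * kappa + 0.3 * kappa**2

        kappas = np.linspace(0.0, 1.0, 21)
        averages = np.array([mc_average(k) for k in kappas])
        delta_F = np.trapz(averages, kappas)      # trapezoidal quadrature
        print(f"Delta F = {delta_F:.4f}")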

  13. Digital data storage systems, computers, and data verification methods

    DOEpatents

    Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.

    2005-12-27

    Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
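
    A minimal sketch of the claimed comparison scheme: hash a portion of the database at two moments in time and compare. SHA-256 and the row encoding are illustrative choices, not specified by the patent.

        import hashlib

        def snapshot_hash(records):
            # Hash a portion of a dynamic database (here, a list of rows).
            h = hashlib.sha256()
            for row in records:
                h.update(repr(row).encode("utf-8"))
            return h.hexdigest()

        db_portion = [("id1", "alpha"), ("id2", "beta")]
        first_hash = snapshot_hash(db_portion)    # initial moment in time

        db_portion[1] = ("id2", "gamma")          # data changes later
        second_hash = snapshot_hash(db_portion)   # subsequent moment in time

        print("unchanged" if first_hash == second_hash else "data was modified")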

  14. Computing the Casimir energy using the point-matching method

    SciTech Connect

    Lombardo, F. C.; Mazzitelli, F. D.; Vazquez, M.; Villar, P. I.

    2009-09-15

    We use a point-matching approach to numerically compute the Casimir interaction energy for a waveguide formed by two perfect conductors of arbitrary section. We present the method and describe the procedure used to obtain the numerical results. First, our technique is tested on geometries with known solutions, such as concentric and eccentric cylinders. Then, we apply the point-matching technique to compute the Casimir interaction energy for new geometries such as concentric corrugated cylinders and cylinders inside conductors with focal lines.

  15. The ensemble switch method for computing interfacial tensions

    SciTech Connect

    Schmitz, Fabian; Virnau, Peter

    2015-04-14

    We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The case of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension.

  16. An effective method for computing the noise in biochemical networks

    PubMed Central

    Zhang, Jiajun; Nie, Qing; He, Miao; Zhou, Tianshou

    2013-01-01

    We present a simple yet effective method, which is based on power series expansion, for computing exact binomial moments that can in turn be used to compute steady-state probability distributions as well as the noise in linear or nonlinear biochemical reaction networks. When the method is applied to representative reaction networks such as the ON-OFF models of gene expression, gene models of promoter progression, gene auto-regulatory models, and common signaling motifs, the exact formulae for computing the intensities of noise in the species of interest or steady-state distributions are analytically given. Interestingly, we find that positive (negative) feedback does not enlarge (reduce) noise as claimed in previous works but has a counter-intuitive effect and that the multi-OFF (or ON) mechanism always attenuates the noise in contrast to the common ON-OFF mechanism and can modulate the noise to the lowest level independently of the mRNA mean. Apart from its power in deriving analytical expressions for distributions and noise, our method is programmable and has apparent advantages in reducing computational cost. PMID:23464139

  17. Computational Methods for Structural Mechanics and Dynamics, part 1

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.

  18. Computation of Pressurized Gas Bearings Using CE/SE Method

    NASA Technical Reports Server (NTRS)

    Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.

    2003-01-01

    The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. The numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data was obtained, thus validating the CE/SE method for solving such problems.

  19. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
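
    The harmonic-grouping stage can be sketched in a few lines, using an ordinary FFT in place of the paper's RTFI: a pitch-salience curve sums spectral energy at integer multiples of each candidate pitch, and preliminary estimates are its local maxima. The test signal, harmonic weights, and threshold are invented for the sketch; a second stage (as in the paper) would prune octave and sub-harmonic errors.

        import numpy as np

        fs = 8000
        t = np.arange(0, 0.5, 1/fs)
        # Two simultaneous notes (220 Hz and 330 Hz), three harmonics each,
        # with harmonic amplitudes decaying as 1/h.
        sig = sum(np.sin(2*np.pi*f0*h*t)/h for f0 in (220.0, 330.0) for h in (1, 2, 3))

        spec = np.abs(np.fft.rfft(sig * np.hanning(sig.size)))
        freqs = np.fft.rfftfreq(sig.size, 1/fs)

        def salience(f0, n_harm=4):
            # Harmonic grouping: accumulate spectral energy at h*f0, weighted 1/h.
            return sum(spec[np.argmin(np.abs(freqs - h*f0))]/h
                       for h in range(1, n_harm + 1))

        candidates = np.arange(150.0, 500.0, 2.0)
        sal = np.array([salience(f) for f in candidates])

        # Preliminary estimation: thresholded local maxima of the salience curve.
        peaks = [f for i, f in enumerate(candidates[1:-1], 1)
                 if sal[i] > sal[i-1] and sal[i] > sal[i+1] and sal[i] > 0.6*sal.max()]
        print("estimated pitches (Hz):", peaks)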

  20. A Method for Weight Multiplicity Computation Based on Berezin Quantization

    NASA Astrophysics Data System (ADS)

    Bar-Moshe, David

    2009-09-01

    Let G be a compact semisimple Lie group and T be a maximal torus of G. We describe a method for weight multiplicity computation in unitary irreducible representations of G, based on the theory of Berezin quantization on G/T. Let Γhol(Lλ) be the reproducing kernel Hilbert space of holomorphic sections of the homogeneous line bundle Lλ over G/T associated with the highest weight λ of the irreducible representation πλ of G. The multiplicity of a weight m in πλ is computed from the functional analytic structure of the Berezin symbol of the projector in Γhol(Lλ) onto the subspace of weight m. We describe a method for the construction of this symbol and the evaluation of the weight multiplicity as the rank of a Hermitian form. The application of this method is illustrated in a number of examples.

  1. Method of computer-aided measurement in a shooting range

    NASA Astrophysics Data System (ADS)

    Liu, Chanlao; Zhang, Yun; Xiong, Rensheng; Sun, Yishang

    2000-10-01

    In view of the blind spots in arguments over photoelectric measurement schemes and the danger of live-shell measurement in a shooting range, this paper provides a computer-aided measurement method that guides the argument of measurement schemes and the research and production of equipment, and drives the visualization and standardization of the measurement process. Computer-aided measurement in a shooting range can be divided into mathematical simulation of target motion, mathematical simulation of the measurement method, mathematical simulation of the photoelectric system, animated display of the measurement process, and so on. By adding random jamming, Gaussian white noise, and so on, the live measurement environment and conditions were reproduced. Time-series pictures were obtained by mathematical discretization. By controlling the time variation and time unification of several pieces of equipment, an animated display of the measurement process was built. The programming language was MATLAB. The method was validated by successfully simulating the intersection measurement of an antiaircraft gun shell's trajectory.

  2. Variational-moment method for computing magnetohydrodynamic equilibria

    SciTech Connect

    Lao, L.L.

    1983-08-01

    A fast yet accurate method to compute magnetohydrodynamic equilibria is provided by the variational-moment method, which is similar to the classical Rayleigh-Ritz-Galerkin approximation. The equilibrium solution sought is decomposed into a spectral representation. The partial differential equations describing the equilibrium are then recast into their equivalent variational form and systematically reduced to an optimum finite set of coupled ordinary differential equations. An appropriate spectral decomposition can make the series representing the solution converge rapidly and hence substantially reduces the amount of computational time involved. The moment method was developed first to compute fixed-boundary inverse equilibria in axisymmetric toroidal geometry, and was demonstrated to be both efficient and accurate. The method since has been generalized to calculate free-boundary axisymmetric equilibria, to include toroidal plasma rotation and pressure anisotropy, and to treat three-dimensional toroidal geometry. In all these formulations, the flux surfaces are assumed to be smooth and nested so that the solutions can be decomposed in Fourier series in inverse coordinates. These recent developments and the advantages and limitations of the moment method are reviewed. The use of alternate coordinates for decomposition is discussed.

  3. Methods for Optimal Output Prediction in Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Kast, Steven Michael

    In a Computational Fluid Dynamics (CFD) simulation, not all data is of equal importance. Instead, the goal of the user is often to compute certain critical outputs - such as lift and drag - accurately. While in recent years CFD simulations have become routine, ensuring accuracy in these outputs is still surprisingly difficult. Unacceptable levels of output error arise even in industry-standard simulations, such as the steady flow around commercial aircraft. This problem is only exacerbated when simulating more complex, unsteady flows. In this thesis, we present a mesh adaptation strategy for unsteady problems that can automatically reduce errors in outputs of interest. This strategy applies to problems in which the computational domain deforms in time - such as flapping-flight simulations - and relies on an unsteady adjoint to identify regions of the mesh contributing most to the output error. This error is then driven down via refinement of the critical regions in both space and time. Here, we demonstrate this strategy on a series of flapping-wing problems in two and three dimensions, using high-order discontinuous Galerkin (DG) methods for both spatial and temporal discretizations. Compared to other methods, results indicate that this strategy can deliver a desired level of output accuracy with significant reductions in computational cost. After concluding our work on mesh adaptation, we take a step back and investigate another idea for obtaining output accuracy: adapting the numerical method itself. In particular, we show how the test space of discontinuous finite element methods can be "optimized" to achieve accuracy in certain outputs or regions. In this work, we compute test functions that ensure accuracy specifically along domain boundaries. These regions - which are vital to both scalar outputs (such as lift and drag) and distributions (such as pressure and skin friction) - are often the most important from an engineering standpoint.

  4. Computational methods for coupling microstructural and micromechanical materials response simulations

    SciTech Connect

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  5. Fast calculation method for computer-generated cylindrical holograms.

    PubMed

    Yamaguchi, Takeshi; Fujii, Tomohiko; Yoshikawa, Hiroshi

    2008-07-01

    Since a general flat hologram has a limited viewable area, we usually cannot see the other side of a reconstructed object. There are some holograms that can solve this problem. A cylindrical hologram is well known to be viewable in 360 deg. Most cylindrical holograms are optical holograms, but there are few reports of computer-generated cylindrical holograms. This is because the spatial resolution of output devices is not high enough; therefore, we have to make a large hologram or use a small object to fulfill the sampling theorem. In addition, in calculating the large fringe, the amount of calculation increases in proportion to the hologram size. Therefore, we propose what we believe to be a new method for fast calculation. Then, we print these fringes with our prototype fringe printer. As a result, we obtain a good reconstructed image from a computer-generated cylindrical hologram.

  6. Evolutionary thinking

    PubMed Central

    Hunt, Tam

    2014-01-01

    Evolution as an idea has a lengthy history, even though the idea of evolution is generally associated with Darwin today. Rebecca Stott provides an engaging and thoughtful overview of this history of evolutionary thinking in her 2013 book, Darwin's Ghosts: The Secret History of Evolution. Since Darwin, the debate over evolution—both how it takes place and, in a long war of words with religiously-oriented thinkers, whether it takes place—has been sustained and heated. A growing share of this debate is now devoted to examining how evolutionary thinking affects areas outside of biology. How do our lives change when we recognize that all is in flux? What can we learn about life more generally if we study change instead of stasis? Carter Phipps’ book, Evolutionaries: Unlocking the Spiritual and Cultural Potential of Science's Greatest Idea, delves deep into this relatively new development. Phipps generally takes as a given the validity of the Modern Synthesis of evolutionary biology. His story takes us into, as the subtitle suggests, the spiritual and cultural implications of evolutionary thinking. Can religion and evolution be reconciled? Can evolutionary thinking lead to a new type of spirituality? Is our culture already being changed in ways that we don't realize by evolutionary thinking? These are all important questions and Phipps' book is a great introduction to this discussion. Phipps is an author, journalist, and contributor to the emerging “integral” or “evolutionary” cultural movement that combines the insights of Integral Philosophy, evolutionary science, developmental psychology, and the social sciences. He has served as the Executive Editor of EnlightenNext magazine (no longer published) and more recently is the co-founder of the Institute for Cultural Evolution, a public policy think tank addressing the cultural roots of America's political challenges. What follows is an email interview with Phipps. PMID:26478766

  7. Toward an Efficient Method of Identifying Core Genes for Evolutionary and Functional Microbial Phylogenies

    PubMed Central

    Segata, Nicola; Huttenhower, Curtis

    2011-01-01

    Microbial community metagenomes and individual microbial genomes are becoming increasingly accessible by means of high-throughput sequencing. Assessing organismal membership within a community is typically performed using one or a few taxonomic marker genes such as the 16S rDNA, and these same genes are also employed to reconstruct molecular phylogenies. There is thus a growing need to bioinformatically catalog strongly conserved core genes that can serve as effective taxonomic markers, to assess the agreement among phylogenies generated from different core genes, and to characterize the biological functions enriched within core genes and thus conserved throughout large microbial clades. We present a method to recursively identify core genes (i.e. genes ubiquitous within a microbial clade) in high-throughput from a large number of complete input genomes. We analyzed over 1,100 genomes to produce core gene sets spanning 2,861 bacterial and archaeal clades, ranging in size from one to >2,000 genes in inverse correlation with the α-diversity (total phylogenetic branch length) spanned by each clade. These cores are enriched as expected for housekeeping functions including translation, transcription, and replication, in addition to significant representations of regulatory, chaperone, and conserved uncharacterized proteins. In agreement with previous manually curated core gene sets, phylogenies constructed from one or more of these core genes agree with those built using 16S rDNA sequence similarity, suggesting that systematic core gene selection can be used to optimize both comparative genomics and determination of microbial community structure. Finally, we examine functional phylogenies constructed by clustering genomes by the presence or absence of orthologous gene families and show that they provide an informative complement to standard sequence-based molecular phylogenies. PMID:21931822

  8. Methods for library-scale computational protein design.

    PubMed

    Johnson, Lucas B; Huber, Thaddaus R; Snow, Christopher D

    2014-01-01

    Faced with a protein engineering challenge, a contemporary researcher can choose from myriad design strategies. Library-scale computational protein design (LCPD) is a hybrid method suitable for the engineering of improved protein variants with diverse sequences. This chapter discusses the background and merits of several practical LCPD techniques. First, LCPD methods suitable for delocalized protein design are presented in the context of example design calculations for cellobiohydrolase II. Second, localized design methods are discussed in the context of an example design calculation intended to shift the substrate specificity of a ketol-acid reductoisomerase Rossmann domain from NADPH to NADH.

  9. A Review of Computational Intelligence Methods for Eukaryotic Promoter Prediction.

    PubMed

    Singh, Shailendra; Kaur, Sukhbir; Goel, Neelam

    2015-01-01

    In past decades, prediction of genes in DNA sequences has attracted the attention of many researchers but due to its complex structure it is extremely intricate to correctly locate its position. A large number of regulatory regions are present in DNA that helps in transcription of a gene. Promoter is one such region and to find its location is a challenging problem. Various computational methods for promoter prediction have been developed over the past few years. This paper reviews these promoter prediction methods. Several difficulties and pitfalls encountered by these methods are also detailed, along with future research directions.

  10. Investigation of Ultrasonic Wave Scattering Effects using Computational Methods

    NASA Astrophysics Data System (ADS)

    Campbell Leckey, Cara Ann

    2011-12-01

    Advances in computational power and expanded access to computing clusters have made mathematical modeling of complex wave effects possible. We have used multi-core and cluster computing to implement analytical and numerical models of ultrasonic wave scattering in fluid and solid media (acoustic and elastic waves). We begin by implementing complicated analytical equations that describe the force upon spheres immersed in inviscid and viscous fluids due to an incident plane wave. Two real-world applications of acoustic force upon spheres are investigated using the mathematical formulations: emboli removal from cardiopulmonary bypass circuits using traveling waves and the micromanipulation of algal cells with standing waves to aid in biomass processing for algae biofuels. We then move on to consider wave scattering situations where analytical models do not exist: scattering of acoustic waves from multiple scatterers in fluids and Lamb wave scattering in solids. We use a numerical method called the finite integration technique (FIT) to simulate wave behavior in three dimensions. The 3D simulations provide insight into experimental results for situations where 2D simulations would not be sufficient. The diverse set of scattering situations explored in this work shows the broad applicability of the underlying principles and the computational tools that we have developed. Overall, our work shows that the movement towards better availability of large computational resources is opening up new ways to investigate complicated physics phenomena.

  11. Practical methods to improve the development of computational software

    SciTech Connect

    Osborne, A. G.; Harding, D. W.; Deinert, M. R.

    2013-07-01

    The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960s, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best-practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)
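
    One concrete practice in the spirit of these recommendations is automated regression testing of numerical kernels. A small pytest-style sketch; the quadrature routine, file name, and tolerances are hypothetical, not from the paper:

        # test_quadrature.py -- run with: pytest test_quadrature.py
        import math

        def trapezoid(f, a, b, n):
            # Composite trapezoidal rule on [a, b] with n subintervals.
            h = (b - a) / n
            return h * (0.5*f(a) + sum(f(a + i*h) for i in range(1, n)) + 0.5*f(b))

        def test_matches_known_integral():
            # Regression test against an analytically known result.
            assert math.isclose(trapezoid(math.sin, 0.0, math.pi, 1000), 2.0,
                                rel_tol=1e-5)

        def test_second_order_convergence():
            # Halving h should reduce the error by about a factor of four.
            err = lambda n: abs(trapezoid(math.sin, 0.0, math.pi, n) - 2.0)
            assert 3.5 < err(100) / err(200) < 4.5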

  12. Computational methods to determine the structure of hydrogen storage materials

    NASA Astrophysics Data System (ADS)

    Mueller, Tim

    2009-03-01

    To understand the mechanisms and thermodynamics of material-based hydrogen storage, it is important to know the structure of the material and the positions of the hydrogen atoms within the material. Because hydrogen can be difficult to resolve experimentally, computational research has proven to be a valuable tool to address these problems. We discuss different computational methods for identifying the structure of hydrogen materials and the positions of hydrogen atoms, and we illustrate the methods with specific examples. Through the use of ab-initio molecular dynamics, we identify molecular hydrogen binding sites in the metal-organic framework commonly known as MOF-5 [1]. We present a method to identify the positions of atomic hydrogen in imide structures using a novel type of effective Hamiltonian. We apply this new method to lithium imide (Li2NH), a potentially important hydrogen storage material, and demonstrate that it predicts a new ground state structure [2]. We also present the results of a recent computational study of the room-temperature structure of lithium imide in which we suggest a new structure that reconciles the differences between previous experimental and theoretical studies. [4pt] [1] T. Mueller and G. Ceder, Journal of Physical Chemistry B 109, 17974 (2005). [0pt] [2] T. Mueller and G. Ceder, Physical Review B 74 (2006).

  13. Domain decomposition methods for the parallel computation of reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1988-01-01

    Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and also preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
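
    The subdomain-correction idea can be illustrated on a 1D Poisson model problem with two overlapping subdomains, each solved directly while iterating on the global residual (a multiplicative Schwarz sweep). This is a sketch of the domain decomposition concept only, under invented sizes; the paper's actual setting is Newton iteration with preconditioned Krylov solvers for reacting flows.

        import numpy as np

        def poisson_matrix(m, h):
            # Standard 3-point finite-difference operator for -u'' with
            # homogeneous Dirichlet conditions.
            return (np.diag(2.0*np.ones(m)) - np.diag(np.ones(m-1), 1)
                    - np.diag(np.ones(m-1), -1)) / h**2

        n, h = 99, 1.0/100                        # interior points of [0, 1]
        x = np.linspace(h, 1.0 - h, n)
        f = np.ones(n)                            # -u'' = 1, u(0) = u(1) = 0
        u = np.zeros(n)
        A = poisson_matrix(n, h)

        left, right = slice(0, 55), slice(45, n)  # overlapping subdomains
        for sweep in range(50):                   # multiplicative Schwarz sweeps
            for sub in (left, right):
                r = f - A @ u                     # current global residual
                Asub = poisson_matrix(r[sub].size, h)
                u[sub] += np.linalg.solve(Asub, r[sub])   # local correction

        print("max error:", np.abs(u - 0.5*x*(1.0 - x)).max())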

  14. Applications of meshless methods for damage computations with finite strains

    NASA Astrophysics Data System (ADS)

    Pan, Xiaofei; Yuan, Huang

    2009-06-01

    Material defects such as cavities have great effects on the damage process in ductile materials. Computations based on finite element methods (FEMs) often suffer from instability due to material failure as well as large distortions. To improve computational efficiency and robustness, the element-free Galerkin (EFG) method is applied in the micro-mechanical constitutive damage model proposed by Gurson and modified by Tvergaard and Needleman (the GTN damage model). The EFG algorithm is implemented in the general-purpose finite element code ABAQUS via the user interface UEL. With the help of the EFG method, damage processes in uniaxial tension specimens and notched specimens are analyzed and verified with experimental data. Computational results reveal that damage which takes place in the interior of specimens extends to the exterior and causes fracture of the specimens; the damage is a fast process relative to the whole tensile loading. The EFG method provides a more stable and robust numerical solution compared with the FEM analysis.

  15. Dominance, submissivity (and homosexuality) in general population: testing of evolutionary hypothesis of sadomasochism by Internet-trap-method.

    PubMed

    Jozifkova, Eva; Flegr, Jaroslav

    2006-12-01

    Dominance and submissiveness represent strong sexual arousal stimuli for a considerable part of the population. In contrast to men's sexual dominance and women's sexual submissiveness, the opposite preferences represent an evolutionary enigma. Here, we studied the prevalence and strength of particular preferences in the general population by the Internet-trap-method. Subjects who clicked the banner displayed in the web interface of e-mail boxes were allowed to choose icons with a homosexual or heterosexual partner of different hierarchical position. A dominant partner was chosen by 13.8% of men and 20.5% of women, and a submissive partner by 36.6% of men and 19.8% of women. Homosexual partners were chosen by 7.3% of men and 12.2% of women. The response times for the submissive and dominant stimuli did not differ, while those for the equal-status stimuli were significantly longer, suggesting that some subjects with equal-status preferences probably intentionally mask their natural interests. The large number of people who chose an unequal sexual partner suggests that hierarchical status plays an important role in the human mating system.

  16. Computed Optical Interferometric Imaging: Methods, Achievements, and Challenges

    PubMed Central

    South, Fredrick A.; Liu, Yuan-Zhi; Carney, P. Scott; Boppart, Stephen A.

    2016-01-01

    Three-dimensional high-resolution optical imaging systems are generally restricted by the trade-off between resolution and depth-of-field as well as imperfections in the imaging system or sample. Computed optical interferometric imaging is able to overcome these longstanding limitations using methods such as interferometric synthetic aperture microscopy (ISAM) and computational adaptive optics (CAO) which manipulate the complex interferometric data. These techniques correct for limited depth-of-field and optical aberrations without the need for additional hardware. This paper aims to outline these computational methods, making them readily available to the research community. Achievements of the techniques will be highlighted, along with past and present challenges in implementing the techniques. Challenges such as phase instability and determination of the appropriate aberration correction have been largely overcome so that imaging of living tissues using ISAM and CAO is now possible. Computed imaging in optics is becoming a mature technology poised to make a significant impact in medicine and biology. PMID:27795663

  17. Advanced Computational Aeroacoustics Methods for Fan Noise Prediction

    NASA Technical Reports Server (NTRS)

    Envia, Edmane (Technical Monitor); Tam, Christopher

    2003-01-01

    Direct computation of fan noise is presently not possible. One of the major difficulties is the geometrical complexity of the problem. In the case of fan noise, the blade geometry is critical to the loading on the blade and hence the intensity of the radiated noise. The precise geometry must be incorporated into the computation. In computational fluid dynamics (CFD), there are two general ways to handle problems with complex geometry. One way is to use unstructured grids. The other is to use body-fitted overset grids. In the overset grid method, accurate data transfer is of utmost importance. For acoustic computation, it is not clear that the currently used data transfer methods are sufficiently accurate as not to contaminate the very small amplitude acoustic disturbances. In CFD, low-order schemes are, invariably, used in conjunction with unstructured grids. However, low-order schemes are known to be numerically dispersive and dissipative. Dispersive and dissipative errors are extremely undesirable for acoustic wave problems. The objective of this project is to develop a high-order unstructured-grid Dispersion-Relation-Preserving (DRP) scheme that would minimize numerical dispersion and dissipation errors. This report contains the results of the funded portion of the project. A DRP scheme on an unstructured grid has been developed. The scheme is constructed in the wave number space. The characteristics of the scheme can be improved by the inclusion of additional constraints. Stability of the scheme has been investigated. Stability can be improved by adopting an upwinding strategy.

  18. Inductive reasoning and forecasting of population dynamics of Cylindrospermopsis raciborskii in three sub-tropical reservoirs by evolutionary computation.

    PubMed

    Recknagel, Friedrich; Orr, Philip T; Cao, Hongqing

    2014-01-01

    Seven-day-ahead forecasting models of Cylindrospermopsis raciborskii in three warm-monomictic and mesotrophic reservoirs in south-east Queensland have been developed by means of water quality data from 1999 to 2010 and the hybrid evolutionary algorithm HEA. The resulting models, using all measured variables as inputs as well as using electronically measurable variables only, accurately forecasted the timing of overgrowth of C. raciborskii and matched well the high and low magnitudes of observed bloom events, with r^2 in the ranges 0.45-0.61 and 0.4-0.57, respectively. The models also revealed relationships and thresholds triggering bloom events that provide valuable information on the synergism between water quality conditions and population dynamics of C. raciborskii. The best-performing models based on all measured variables indicated electrical conductivity (EC) within the range of 206-280 mS m^-1 as the threshold above which fast growth and high abundances of C. raciborskii have been observed in the three lakes. The best models based on electronically measurable variables for Lakes Wivenhoe and Somerset indicated a water temperature (WT) range of 25.5-32.7 °C within which fast growth and high abundances of C. raciborskii can be expected. By contrast, the model for Lake Samsonvale highlighted a turbidity (TURB) level of 4.8 NTU as an indicator for mass developments of C. raciborskii. Experiments with online-measured water quality data of Lake Wivenhoe from 2007 to 2010 resulted in predictive models with r^2 of 0.61-0.65, whereby again similar levels of EC and WT were discovered as thresholds for outgrowth of C. raciborskii. The highest validity of r^2 = 0.75 for an in situ data-based model was achieved after considering time lags of 7 days for EC and 1 day for dissolved oxygen. These time lags were discovered by a systematic screening of all possible combinations of time lags between 0 and 10 days for all electronically measurable variables.

  19. Computational characterization of HPGe detectors usable for a wide variety of source geometries by using Monte Carlo simulation and a multi-objective evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Guerra, J. G.; Rubiano, J. G.; Winter, G.; Guerra, A. G.; Alonso, H.; Arnedo, M. A.; Tejera, A.; Martel, P.; Bolivar, J. P.

    2017-06-01

    In this work, we have developed a computational methodology for characterizing HPGe detectors by implementing in parallel a multi-objective evolutionary algorithm, together with a Monte Carlo simulation code. The evolutionary algorithm is used to search the geometrical parameters of a detector model by minimizing the differences between the efficiencies calculated by Monte Carlo simulation and two reference sets of Full Energy Peak Efficiencies (FEPEs) corresponding to two given sample geometries: a beaker of small diameter laid over the detector window and a beaker of large capacity which wraps the detector. This methodology is a generalization of a previously published work, which was limited to beakers placed over the window of the detector with a diameter equal to or smaller than the crystal diameter, so that the crystal mount cap (which surrounds the lateral surface of the crystal) was not considered in the detector model. The generalization has been accomplished not only by including such a mount cap in the model, but also by using multi-objective optimization instead of mono-objective, with the aim of building a model sufficiently accurate for a wider variety of beakers commonly used for the measurement of environmental samples by gamma spectrometry, for instance Marinelli beakers, Petri dishes, or any other beaker with a diameter larger than the crystal diameter, for which part of the detected radiation has to pass through the mount cap. The proposed methodology has been applied to an HPGe XtRa detector, providing a detector model which has been successfully verified for different source-detector geometries and materials and experimentally validated using CRMs.
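
    The optimization loop of such a methodology can be sketched with an off-the-shelf evolutionary optimizer: search detector-model parameters so that simulated efficiencies match the two reference FEPE sets. Here SciPy's (single-objective) differential evolution with a weighted-sum scalarization stands in for the authors' parallel multi-objective algorithm, and a cheap analytic toy replaces the Monte Carlo efficiency calculation; all numbers below are synthetic.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Reference FEPEs for two sample geometries (synthetic stand-ins).
        E_REF_SMALL = np.array([0.12, 0.09, 0.06])   # small beaker on the window
        E_REF_LARGE = np.array([0.18, 0.14, 0.10])   # large beaker wrapping the detector

        def simulated_fepe(params, geometry):
            # Stand-in for a Monte Carlo efficiency calculation given detector
            # parameters (crystal radius, length, dead-layer thickness).
            r, L, dead = params
            base = r**2 * L / (1.0 + 50.0*dead)      # crude toy response
            scale = {"small": 0.05, "large": 0.08}[geometry]
            return scale * base * np.array([1.0, 0.75, 0.5])

        def objectives(params):
            # Two objectives: efficiency mismatch for each reference geometry.
            d1 = np.abs(simulated_fepe(params, "small") - E_REF_SMALL).sum()
            d2 = np.abs(simulated_fepe(params, "large") - E_REF_LARGE).sum()
            return d1, d2

        result = differential_evolution(lambda p: sum(objectives(p)),
                                        bounds=[(2.0, 4.0),    # radius (cm)
                                                (3.0, 8.0),    # length (cm)
                                                (0.0, 0.2)])   # dead layer (cm)
        print("fitted parameters:", result.x, "residual:", result.fun)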

  20. A new computational method for reacting hypersonic flows

    NASA Astrophysics Data System (ADS)

    Niculescu, M. L.; Cojocaru, M. G.; Pricop, M. V.; Fadgyas, M. C.; Pepelea, D.; Stoican, M. G.

    2017-07-01

    Hypersonic gas dynamics computations are challenging due to the difficulty of having reliable and robust chemistry models that are usually added to the Navier-Stokes equations. From the numerical point of view, it is very difficult to integrate the Navier-Stokes equations together with chemistry model equations because these partial differential equations have very different characteristic time scales. For these reasons, almost all known finite volume methods fail to solve this second-order partial differential system. Unfortunately, the heating of Earth reentry vehicles such as space shuttles and capsules is closely linked to endothermic chemical reactions. A better prediction of wall heat flux leads to a smaller safety coefficient for the thermal shield of a space reentry vehicle; therefore, the size of the thermal shield decreases and the payload increases. For these reasons, the present paper proposes a new computational method based on chemical equilibrium, which gives accurate predictions of hypersonic heating in order to support the Earth reentry capsule design.

  1. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
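
    The importance-sampling core of this approach can be sketched in a non-adaptive form (the paper's contribution is to adapt the sampling domain incrementally): sample from a density shifted toward the failure region and reweight by the ratio of nominal to sampling densities. The limit-state function and the shift below are illustrative, not from the paper.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)

        def g(x):
            # Limit-state function: failure when g(x) < 0 (toy example).
            return 4.0 - x[..., 0] - x[..., 1]

        # Nominal variables: two independent standard normals; the failure
        # region {x1 + x2 > 4} is rare under the nominal density.
        N = 20_000
        mu_is = np.array([2.0, 2.0])       # sampling density shifted toward failure
        x = rng.normal(mu_is, 1.0, size=(N, 2))

        # Importance weights: ratio of nominal to sampling density (log form).
        log_w = -0.5*np.sum(x**2, axis=1) + 0.5*np.sum((x - mu_is)**2, axis=1)
        pf = np.mean((g(x) < 0) * np.exp(log_w))

        beta = 4.0 / np.sqrt(2.0)          # exact reliability index for this g
        print(f"IS estimate: {pf:.3e}, exact: {norm.cdf(-beta):.3e}")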

  2. Precise computations of chemotactic collapse using moving mesh methods

    NASA Astrophysics Data System (ADS)

    Budd, C. J.; Carretero-González, R.; Russell, R. D.

    2005-01-01

    We consider the problem of computing blow-up solutions of chemotaxis systems, or the so-called chemotactic collapse. In two spatial dimensions, such solutions can have approximate self-similar behaviour, which can be very challenging to verify in numerical simulations [cf. Betterton and Brenner, Collapsing bacterial cylinders, Phys. Rev. E 64 (2001) 061904]. We analyse a dynamic (scale-invariant) remeshing method which performs spatial mesh movement based upon equidistribution. Using a suitably chosen monitor function, the numerical solution resolves the fine detail in the asymptotic solution structure, such that the computations are seen to be fully consistent with the asymptotic description of the collapse phenomenon given by Herrero and Velázquez [Singularity patterns in a chemotaxis model, Math. Ann. 306 (1996) 583-623]. We believe that the methods we construct are ideally suited to a large number of problems in mathematical biology for which collapse phenomena are expected.
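
    The equidistribution step of such a moving mesh method can be sketched as follows: nodes are relocated so that every cell carries the same integral of a monitor function (de Boor's algorithm, inverting the cumulative monitor mass). The monitor and test profile below are illustrative choices, not those of the paper.

        import numpy as np

        def equidistribute(x, monitor, n_iter=5):
            # Relocate mesh nodes so each cell carries equal monitor "mass".
            for _ in range(n_iter):
                M = monitor(x)
                cell_mass = 0.5*(M[:-1] + M[1:]) * np.diff(x)   # trapezoid per cell
                cum = np.concatenate([[0.0], np.cumsum(cell_mass)])
                targets = np.linspace(0.0, cum[-1], x.size)
                x = np.interp(targets, cum, x)                  # invert cumulative mass
            return x

        # Arc-length-type monitor concentrating points near a sharp front at
        # x = 0.5, mimicking resolution of fine structure in a collapsing solution.
        u = lambda x: np.tanh(50*(x - 0.5))
        monitor = lambda x: np.sqrt(1.0 + np.gradient(u(x), x)**2)

        x = np.linspace(0.0, 1.0, 41)
        x_new = equidistribute(x, monitor)
        print("smallest cell:", np.diff(x_new).min(), "largest:", np.diff(x_new).max())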

  3. Beyond the Melnikov method: A computer assisted approach

    NASA Astrophysics Data System (ADS)

    Capiński, Maciej J.; Zgliczyński, Piotr

    2017-01-01

    We present a Melnikov type approach for establishing transversal intersections of stable/unstable manifolds of perturbed normally hyperbolic invariant manifolds (NHIMs). The method is based on a new geometric proof of the normally hyperbolic invariant manifold theorem, which establishes the existence of a NHIM, together with its associated invariant manifolds and bounds on their first and second derivatives. We do not need to know the explicit formulas for the homoclinic orbits prior to the perturbation. We also do not need to compute any integrals along such homoclinics. All needed bounds are established using rigorous computer assisted numerics. Lastly, and most importantly, the method establishes intersections for an explicit range of parameters, and not only for perturbations that are 'small enough', as is the case in the classical Melnikov approach.

  4. Computer processing improves hydraulics optimization with new methods

    SciTech Connect

    Gavignet, A.A.; Wick, C.J.

    1987-12-01

    In current practice, pressure drops in the mud circulating system and the settling velocity of cuttings are calculated with simple rheological models and simple equations. Wellsite computers now allow more sophistication in drilling computations. In this paper, experimental results on the settling velocity of spheres in drilling fluids are reported, along with rheograms done over a wide range of shear rates. The flow curves are fitted to polynomials and general methods are developed to predict friction losses and settling velocities as functions of the polynomial coefficients. These methods were incorporated in a software package that can handle any rig configuration system, including riser booster. Graphic displays show the effect of each parameter on the performance of the circulating system.
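
    The fitting step described here is straightforward to illustrate: fit a polynomial to a measured flow curve and evaluate stress or apparent viscosity at any shear rate; friction-loss and settling-velocity predictions are then written in terms of the polynomial coefficients. The data points below are synthetic, not from the paper.

        import numpy as np

        # Example rheogram: shear stress (Pa) measured over a wide range of
        # shear rates (1/s) for a drilling mud (synthetic illustrative data).
        shear_rate = np.array([5.1, 10.2, 170.0, 340.0, 511.0, 1022.0])
        shear_stress = np.array([3.8, 5.1, 16.0, 24.5, 31.9, 48.2])

        # Fit the flow curve with a polynomial, rather than forcing a
        # two-parameter Bingham or power-law model.
        coeffs = np.polyfit(shear_rate, shear_stress, deg=3)
        tau = np.poly1d(coeffs)

        # The fitted curve then feeds friction-loss and settling-velocity
        # correlations; here we just evaluate apparent viscosity at one rate.
        gamma = 200.0                                    # 1/s
        print("stress:", tau(gamma), "Pa; apparent viscosity:",
              tau(gamma)/gamma, "Pa.s")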

  5. Computational methods. [Calculation of dynamic loading to offshore platforms

    SciTech Connect

    Maeda, H. . Inst. of Industrial Science)

    1993-02-01

    With regard to computational methods for hydrodynamic forces, the identification of marine hydrodynamics in offshore technology is discussed first. Then general computational methods, the state of the art, and uncertainties in flow problems in offshore technology are reviewed, in which developed, developing, and undeveloped problems are categorized, and future work follows. Marine hydrodynamics consists of water-surface and underwater fluid dynamics. Marine hydrodynamics covers not only hydrodynamics but also aerodynamics, such as wind load or current-wave-wind interaction; hydrodynamics such as cavitation, underwater noise, and multi-phase flow, for example two-phase flow in pipes, air bubbles in water, or surface and internal waves; and magneto-hydrodynamics, such as propulsion due to superconductivity. Among them, two key words are focused on in the identification of marine hydrodynamics in offshore technology: free surface and vortex shedding.

  6. Characterization of Meta-Materials Using Computational Electromagnetic Methods

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar; Shin, Joon

    2005-01-01

An efficient and powerful computational method is presented to synthesize a meta-material with specified electromagnetic properties. Using the periodicity of meta-materials, the Finite Element Method (FEM) is developed to estimate the reflection and transmission through the meta-material structure for normal plane-wave incidence. For efficient computation of the reflection and transmission over a wide frequency band, a Finite Difference Time Domain (FDTD) approach is also developed. Using the Nicolson-Ross method and genetic algorithms, a robust procedure to extract the electromagnetic properties of a meta-material from knowledge of its reflection and transmission coefficients is described. A few numerical examples are also presented to validate the present approach.
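
    The extraction step can be sketched with the standard slab-retrieval relations (a simplified stand-in for the full Nicolson-Ross/genetic-algorithm procedure; the slab parameters, frequency, and time convention below are assumptions, and the principal-branch logarithm presumes an electrically thin slab):

      import numpy as np

      k0 = 2 * np.pi * 10e9 / 3e8                   # free-space wavenumber at 10 GHz (rad/m)
      d = 1e-3                                      # slab thickness (m)
      n_true, z_true = 2.5 + 0.05j, 0.8 + 0.02j     # assumed slab index and impedance

      # Forward model: S-parameters of a homogeneous slab in air
      G = (z_true - 1) / (z_true + 1)
      t = np.exp(1j * n_true * k0 * d)
      den = 1 - G**2 * t**2
      S11, S21 = G * (1 - t**2) / den, t * (1 - G**2) / den

      # Retrieval: invert the slab relations for impedance and refractive index
      z = np.sqrt(((1 + S11)**2 - S21**2) / ((1 - S11)**2 - S21**2))
      if z.real < 0:
          z = -z                                    # passivity requires Re(z) >= 0
      X = S21 / (1 - S11 * (z - 1) / (z + 1))       # equals exp(i n k0 d)
      n = np.log(X) / (1j * k0 * d)                 # principal branch: valid for |Re(n) k0 d| < pi
      print(n, z, n / z, n * z)                     # recovers n and z, hence eps = n/z and mu = n*z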

  7. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
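
    A minimal sketch of the adaptive importance sampling idea follows (not the production implementation; the linear limit state and the simple recentring rule are illustrative assumptions). The sampling density is recentred on the observed failure points each iteration, mimicking the incremental growth of the sampling domain toward the failure domain:

      import numpy as np
      rng = np.random.default_rng(1)

      def g(x):                                     # hypothetical limit state: failure when g < 0
          return 5.0 - x[:, 0] - x[:, 1]            # X ~ N(0, I) in two dimensions

      def phi2(x, mu):                              # bivariate standard-normal pdf centred at mu
          return np.exp(-0.5 * np.sum((x - mu) ** 2, axis=1)) / (2 * np.pi)

      mu = np.zeros(2)                              # centre of the sampling density
      for it in range(4):                           # adapt the centre toward the failure domain
          x = rng.normal(mu, 1.0, size=(20000, 2))
          fail = g(x) < 0
          w = phi2(x, np.zeros(2)) / phi2(x, mu)    # importance weights: target pdf / sampling pdf
          pf = np.mean(fail * w)                    # unbiased failure-probability estimate
          if fail.any():
              mu = np.average(x[fail], axis=0, weights=w[fail])
      print(pf)                                     # exact answer is Phi(-5/sqrt(2)) ~ 2.0e-4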

  8. Computer Simulation Modeling: A Method for Predicting the Utilities of Alternative Computer-Aided Threat Evaluation Algorithms

    DTIC Science & Technology

    1990-09-01

Technical Report 911. Computer Simulation Modeling: A Method for Predicting the Utilities of Alternative Computer-Aided Threat Evaluation Algorithms.

  9. Computational Methods for Failure Analysis and Life Prediction

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Harris, Charles E. (Compiler); Housner, Jerrold M. (Compiler); Hopkins, Dale A. (Compiler)

    1993-01-01

    This conference publication contains the presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Failure Analysis and Life Prediction held at NASA Langley Research Center 14-15 Oct. 1992. The presentations focused on damage failure and life predictions of polymer-matrix composite structures. They covered some of the research activities at NASA Langley, NASA Lewis, Southwest Research Institute, industry, and universities. Both airframes and propulsion systems were considered.

  10. A Robust Method for Computing Truth-to-Track Assignments

    DTIC Science & Technology

    2009-07-01

A Robust Method for Computing Truth-to-Track Assignments. Mark Silbert, Air 4.5.3.3, NAVAIR, Patuxent River, MD (mark.silbert@navy.mil). ... tracked by each track. Determining which track corresponds to which target is called the truth-to-track assignment problem. In the past, this ... be used for all types of tracking systems. Keywords: multi-target tracking performance, multi-sensor tracking performance, truth-to-track

  11. Computational Methods for Sparse Solution of Linear Inverse Problems

    DTIC Science & Technology

    2009-03-01

methods from harmonic analysis [5]. For example, natural images can be approximated with relatively few wavelet coefficients. As a consequence, in many ... performed efficiently. For example, the cost of these products is O(N log N) when Φ is constructed from Fourier or wavelet bases. For algorithms that ... stream community has proposed efficient algorithms for computing near-optimal histograms and wavelet-packet approximations from compressive samples [4

  12. Evolutionary behavioral genetics.

    PubMed

    Zietsch, Brendan P; de Candia, Teresa R; Keller, Matthew C

    2015-04-01

    We describe the scientific enterprise at the intersection of evolutionary psychology and behavioral genetics-a field that could be termed Evolutionary Behavioral Genetics-and how modern genetic data is revolutionizing our ability to test questions in this field. We first explain how genetically informative data and designs can be used to investigate questions about the evolution of human behavior, and describe some of the findings arising from these approaches. Second, we explain how evolutionary theory can be applied to the investigation of behavioral genetic variation. We give examples of how new data and methods provide insight into the genetic architecture of behavioral variation and what this tells us about the evolutionary processes that acted on the underlying causal genetic variants.

  13. Evolutionary behavioral genetics

    PubMed Central

    Zietsch, Brendan P.; de Candia, Teresa R; Keller, Matthew C.

    2014-01-01

    We describe the scientific enterprise at the intersection of evolutionary psychology and behavioral genetics—a field that could be termed Evolutionary Behavioral Genetics—and how modern genetic data is revolutionizing our ability to test questions in this field. We first explain how genetically informative data and designs can be used to investigate questions about the evolution of human behavior, and describe some of the findings arising from these approaches. Second, we explain how evolutionary theory can be applied to the investigation of behavioral genetic variation. We give examples of how new data and methods provide insight into the genetic architecture of behavioral variation and what this tells us about the evolutionary processes that acted on the underlying causal genetic variants. PMID:25587556

  14. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
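
    The simulation logic is easy to reproduce in miniature (Python; the session length, event count and durations, and the 0.1-s scan used to score intervals are arbitrary assumptions):

      import numpy as np
      rng = np.random.default_rng(0)

      T, n_events, dur = 600.0, 30, 4.0             # 10-min session; thirty 4-s events
      starts = np.sort(rng.uniform(0, T - dur, n_events))

      def on(t):                                    # is the target behavior occurring at time t?
          return bool(np.any((starts <= t) & (t < starts + dur)))

      L = 10.0                                      # observation interval length (s)
      edges = np.arange(0.0, T, L)
      mts = np.mean([on(e + L) for e in edges])                                    # momentary time sampling
      pir = np.mean([any(on(t) for t in np.arange(e, e + L, 0.1)) for e in edges]) # partial interval
      wir = np.mean([all(on(t) for t in np.arange(e, e + L, 0.1)) for e in edges]) # whole interval
      print(n_events * dur / T, mts, pir, wir)      # true proportion vs. the three estimates

    Runs of this kind reproduce the familiar pattern that partial-interval recording overestimates, and whole-interval recording underestimates, the true proportion of time the behavior occurs.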

  15. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    SciTech Connect

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experimental data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results on the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels that may be needed to achieve convergence.
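
    For readers unfamiliar with stochastic (polynomial chaos) expansions, a self-contained one-dimensional sketch follows (NumPy; the test function, expansion order, and quadrature level are arbitrary choices, and DAKOTA's own machinery is far more general):

      import math
      import numpy as np
      from numpy.polynomial import hermite_e as He

      def pce_stats(f, order=8, nquad=40):
          # Mean and variance of f(X), X ~ N(0, 1), via a probabilists'-Hermite chaos expansion.
          x, w = He.hermegauss(nquad)               # Gauss nodes/weights for weight exp(-x^2/2)
          w = w / np.sqrt(2.0 * np.pi)              # normalise to the standard-normal pdf
          fact = np.array([math.factorial(k) for k in range(order + 1)], dtype=float)
          # Spectral coefficients c_k = E[f(X) He_k(X)] / k!
          c = np.array([np.sum(w * f(x) * He.hermeval(x, np.eye(order + 1)[k]))
                        for k in range(order + 1)]) / fact
          return c[0], np.sum(c[1:] ** 2 * fact[1:])   # mean = c_0; var = sum_{k>0} c_k^2 k!

      print(pce_stats(lambda x: x**2 + np.sin(x)))  # exact: mean 1, variance 2 + (1 - e^{-2})/2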

  16. Improved diffraction computation with a hybrid C-RCWA-method

    NASA Astrophysics Data System (ADS)

    Bischoff, Joerg

    2009-03-01

The Rigorous Coupled Wave Approach (RCWA) is acknowledged as a well-established diffraction simulation method in electro-magnetic computing. Its two most essential applications in the semiconductor industry are in optical scatterometry and optical lithography simulation. In scatterometry, it is the standard technique to simulate spectra or diffraction responses for gratings to be characterized. In optical lithography simulation, it is an effective alternative to supplement or even to replace the FDTD for the calculation of light diffraction from thick masks as well as from wafer topographies. Unfortunately, the RCWA has some serious disadvantages, particularly for the modelling of grating profiles with shallow slopes and of multilayer stacks with many layers, such as extreme-UV masks with a large number of quarter-wave layers. Here, the slicing can become unwieldy and the computational cost may increase dramatically. Moreover, accuracy suffers due to the inadequate staircase approximation of the slicing in conjunction with the boundary conditions in TM polarization. The Chandezon method (C-method), on the other hand, solves all these problems in a very elegant way; however, it fails for binary patterns or gratings with very steep profiles, where the RCWA works excellently. Therefore, we suggest a combination of both methods as plug-ins in the same scattering-matrix coupling frame. The improved performance and the advantages of this hybrid C-RCWA method over the individual methods are shown with some relevant examples.

  17. A hierarchical method for molecular docking using cloud computing.

    PubMed

    Kang, Ling; Guo, Quan; Wang, Xicheng

    2012-11-01

Discovering small molecules that interact with protein targets will be a key part of future drug discovery efforts. Molecular docking of drug-like molecules is likely to be valuable in this field; however, the great number of such molecules makes the potential size of this task enormous. In this paper, a method to screen small-molecule databases using cloud computing is proposed. This method, called the hierarchical method for molecular docking, can be completed in a relatively short period of time. In it, the optimization of molecular docking is divided into two subproblems based on their different effects on the protein-ligand interaction energy. An adaptive genetic algorithm is developed to solve the optimization problem, and a new docking program (FlexGAsDock) based on the hierarchical docking method has been developed. The implementation of docking on a cloud computing platform is then discussed. The docking results show that this method can be conveniently used for the efficient molecular design of drugs.
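
    The two-level decomposition can be caricatured as an outer evolutionary search over one block of variables with an inner local refinement over the rest (the score function below is a hypothetical stand-in for a protein-ligand energy; this sketch is not FlexGAsDock):

      import numpy as np
      from scipy.optimize import minimize_scalar
      rng = np.random.default_rng(3)

      def score(pose, torsion):
          # Hypothetical interaction energy: a rigid-body term plus a torsional term.
          return np.sum((pose - np.array([1.0, -2.0, 0.5])) ** 2) + np.cos(3 * torsion) + torsion**2 / 10

      def refine(pose):                             # inner subproblem: best torsion for a fixed pose
          r = minimize_scalar(lambda t: score(pose, t), bounds=(-np.pi, np.pi), method="bounded")
          return r.fun

      pop = rng.uniform(-5, 5, size=(30, 3))        # outer subproblem: rigid-body placement, by GA
      for gen in range(40):
          fit = np.array([refine(p) for p in pop])
          parents = pop[np.argsort(fit)[:10]]       # truncation selection
          children = parents[rng.integers(0, 10, 20)] + 0.3 * rng.standard_normal((20, 3))
          pop = np.vstack([parents, children])      # elitist replacement
      print(min(refine(p) for p in pop))            # best combined score found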

  18. Evolutionary and Neural Computing Based Decision Support System for Disease Diagnosis from Clinical Data Sets in Medical Practice.

    PubMed

    Sudha, M

    2017-09-27

As a recent trend, various computational intelligence and machine learning approaches have been used for mining inferences hidden in large clinical databases to assist the clinician in strategic decision making. In any target data, irrelevant information may be detrimental, confusing the mining algorithm and degrading the prediction outcome. To address this issue, this study attempts to identify an intelligent approach to assist the disease diagnostic procedure using an optimal set of attributes instead of all attributes present in the clinical data set. In this proposed Application Specific Intelligent Computing (ASIC) decision support system, a rough set based genetic algorithm is employed in the pre-processing phase and a back propagation neural network is applied in the training and testing phases. ASIC has two phases: the first phase handles outliers, noisy data, and missing values to obtain qualitative target data, and generates appropriate attribute reduct sets from the input data using a rough-computing-based genetic algorithm centred on a relative fitness function measure. The succeeding phase involves both training and testing of a back propagation neural network classifier on the selected reducts. The model performance is evaluated against widely adopted existing classifiers. The proposed ASIC system for clinical decision support has been tested with breast cancer, fertility diagnosis, and heart disease data sets from the University of California at Irvine (UCI) machine learning repository. The proposed system outperformed the existing approaches, attaining accuracy rates of 95.33%, 97.61%, and 93.04% for breast cancer, fertility, and heart disease diagnosis, respectively.

  19. A Review of Computational Methods for Predicting Drug Targets.

    PubMed

    Huang, Guohua; Yan, Fengxia; Tan, Duoduo

    2016-11-14

Drug discovery and development is not only a time-consuming and labor-intensive process but also full of risk. Identifying the targets of small molecules helps evaluate the safety of drugs and find new therapeutic applications. Biotechnology measures a wide variety of properties related to drugs and targets from different perspectives, thus generating a large body of data. This undoubtedly provides a solid foundation for exploring relationships between drugs and targets. A large number of computational techniques have recently been developed for drug target prediction. In this paper, we summarize these computational methods and classify them into structure-based, molecular activity-based, side-effect-based and multi-omics-based predictions according to the data used for inference. The multi-omics-based methods are further grouped into two types: classifier-based and network-based predictions. Furthermore, the advantages and limitations of each type of method are discussed. Finally, we point out future directions for computational prediction of drug targets.

  20. Multiobjective Multifactorial Optimization in Evolutionary Multitasking.

    PubMed

    Gupta, Abhishek; Ong, Yew-Soon; Feng, Liang; Tan, Kay Chen

    2016-05-03

In recent decades, the field of multiobjective optimization has attracted considerable interest among evolutionary computation researchers. One of the main features that makes evolutionary methods particularly appealing for multiobjective problems is the implicit parallelism offered by a population, which enables simultaneous convergence toward the entire Pareto front. While a plethora of related algorithms have been proposed to date, a common attribute among them is that they focus on efficiently solving only a single optimization problem at a time. Despite the known power of implicit parallelism, seldom has an attempt been made to multitask, i.e., to solve multiple optimization problems simultaneously. It is contended that the notion of evolutionary multitasking leads to the possibility of automated transfer of information across different optimization exercises that may share underlying similarities, thereby facilitating improved convergence characteristics. In particular, the potential for automated transfer is deemed invaluable from the standpoint of engineering design exercises where manual knowledge adaptation and reuse are routine. Accordingly, in this paper, we present a realization of the evolutionary multitasking paradigm within the domain of multiobjective optimization. The efficacy of the associated evolutionary algorithm is demonstrated on some benchmark test functions as well as on a real-world manufacturing process design problem from the composites industry.
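
    A toy, single-objective illustration of the multifactorial idea follows (the paper's contribution is its multiobjective extension); the two task functions, the random mating probability, and the operators below are all illustrative assumptions rather than the published algorithm:

      import numpy as np
      rng = np.random.default_rng(7)

      def t0(x): return np.sum((x - 0.3) ** 2)      # task 0: minimum at x = 0.3
      def t1(x): return np.sum(np.abs(x - 0.7))     # task 1: minimum at x = 0.7
      TASKS, D, N, RMP = (t0, t1), 8, 40, 0.3       # unified space [0,1]^D; random mating probability

      def evaluate(X, S):                           # each individual is scored on its own task only
          return np.array([TASKS[s](x) for x, s in zip(X, S)])

      def scalar_fitness(cost, skill):              # rank within each task; fitness = 1/rank
          fit = np.empty(len(cost))
          for t in range(len(TASKS)):
              idx = np.flatnonzero(skill == t)
              fit[idx[np.argsort(cost[idx])]] = 1.0 / np.arange(1, idx.size + 1)
          return fit

      pop, skill = rng.random((N, D)), rng.integers(0, 2, N)
      cost = evaluate(pop, skill)
      for gen in range(100):
          i, j = rng.integers(0, N, (2, N))                            # random parent pairs
          mate = (skill[i] == skill[j]) | (rng.random(N) < RMP)        # assortative mating rule
          a = rng.random((N, D))
          kids = np.where(mate[:, None], a * pop[i] + (1 - a) * pop[j],          # crossover...
                          np.clip(pop[i] + 0.1 * rng.standard_normal((N, D)), 0, 1))  # ...or mutation
          kskill = np.where(mate & (rng.random(N) < 0.5), skill[j], skill[i])    # inherit a parent's task
          pop, skill = np.vstack([pop, kids]), np.concatenate([skill, kskill])
          cost = np.concatenate([cost, evaluate(kids, kskill)])
          keep = np.argsort(-scalar_fitness(cost, skill))[:N]          # elitist survival across tasks
          pop, skill, cost = pop[keep], skill[keep], cost[keep]
      print(cost[skill == 0].min(), cost[skill == 1].min())            # best score on each task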

  1. Computer optimization techniques for NASA Langley's CSI evolutionary model's real-time control system. [Controls/Structure Interaction

    NASA Technical Reports Server (NTRS)

    Elliott, Kenny B.; Ugoletti, Roberto; Sulla, Jeff

    1992-01-01

    The evolution and optimization of a real-time digital control system is presented. The control system is part of a testbed used to perform focused technology research on the interactions of spacecraft platform and instrument controllers with the flexible-body dynamics of the platform and platform appendages. The control system consists of Computer Automated Measurement and Control (CAMAC) standard data acquisition equipment interfaced to a workstation computer. The goal of this work is to optimize the control system's performance to support controls research using controllers with up to 50 states and frame rates above 200 Hz. The original system could support a 16-state controller operating at a rate of 150 Hz. By using simple yet effective software improvements, Input/Output (I/O) latencies and contention problems are reduced or eliminated in the control system. The final configuration can support a 16-state controller operating at 475 Hz. Effectively the control system's performance was increased by a factor of 3.

  2. Video meteor detection filtering using soft computing methods

    NASA Astrophysics Data System (ADS)

    Silađi, E.; Vida, D.; Nyarko, K.

    2015-01-01

In this paper we present the current progress and results from the filtering of Croatian Meteor Network video meteor detections using soft computing methods such as neural networks and support vector machines (SVMs). The goal is to minimize the number of false positives while preserving the real meteor detections. This is achieved by pre-processing the data to extract meteor movement parameters and then recognizing patterns distinct to meteors. The input data format is fully compliant with the CAMS meteor data standard, and as such the proposed method could be utilized by other meteor networks of a similar kind.
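
    A hedged sketch of the SVM stage follows (scikit-learn; the four movement features and the synthetic meteor/clutter distributions are invented for illustration and are not CMN data):

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(5)
      # Invented per-detection features: [angular velocity, track straightness, duration, brightness slope]
      meteors = rng.normal([15.0, 0.98, 0.3, -1.0], [5.0, 0.01, 0.1, 0.5], (200, 4))
      clutter = rng.normal([3.0, 0.80, 2.0, 0.0], [3.0, 0.10, 1.0, 0.5], (200, 4))
      X = np.vstack([meteors, clutter])
      y = np.r_[np.ones(200), np.zeros(200)]

      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = make_pipeline(StandardScaler(),
                          SVC(kernel="rbf", C=10.0, class_weight={0: 1, 1: 5}))
      clf.fit(Xtr, ytr)
      print((clf.predict(Xte) == yte).mean())       # held-out accuracy on the synthetic data

    Weighting the meteor class more heavily in the loss encodes the stated goal: a missed real meteor is costlier than a surviving false positive.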

  3. Computational Catalysis Using the Artificial Force Induced Reaction Method.

    PubMed

    Sameera, W M C; Maeda, Satoshi; Morokuma, Keiji

    2016-04-19

The artificial force induced reaction (AFIR) method in the global reaction route mapping (GRRM) strategy is an automatic approach to explore all important reaction paths of complex reactions. Most traditional methods in computational catalysis require guess reaction paths. On the other hand, the AFIR approach locates local minima (LMs) and transition states (TSs) of reaction paths without a guess, and therefore finds unanticipated as well as anticipated reaction paths. The AFIR method has been applied to multicomponent organic reactions, such as the aldol reaction, Passerini reaction, Biginelli reaction, and phase-transfer catalysis. In the presence of several reactants, many equilibrium structures are possible, leading to a number of reaction pathways. The AFIR method in the GRRM strategy determines all of the important equilibrium structures and subsequent reaction paths systematically. As the AFIR search is fully automatic, exhaustive trial-and-error and guess-and-check processes by the user can be eliminated. At the same time, the AFIR search is systematic, and therefore a more accurate and comprehensive description of the reaction mechanism can be determined. The AFIR method has been used for the study of full catalytic cycles and reaction steps in transition metal catalysis, such as cobalt-catalyzed hydroformylation and iron-catalyzed carbon-carbon bond formation reactions in aqueous media. Some AFIR applications have targeted the selectivity-determining step of transition-metal-catalyzed asymmetric reactions, including stereoselective water-tolerant lanthanide Lewis acid-catalyzed Mukaiyama aldol reactions. In terms of establishing the selectivity of a reaction, systematic sampling of the transition states is critical. In this direction, AFIR is very useful for performing a systematic and automatic determination of TSs. With a comprehensive description of the transition states in hand, the selectivity of the reaction can be calculated more accurately.

  4. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1971-01-01

An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function for the system is found. To illustrate the method, some examples are given, together with results from a study of a set of data consisting of measurements of the inlet impedance of a single-tube forced-flow boiler with inserts.
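
    The flavor of the approach can be sketched with a plain complex-curve fit (SciPy; the two-pole model, the synthetic data, and the unpenalized least-squares objective are simplifying assumptions relative to the paper's penalized performance measure):

      import numpy as np
      from scipy.optimize import least_squares

      w = np.logspace(-1, 2, 40)                    # frequencies (rad/s)
      rng = np.random.default_rng(2)
      # Synthetic "measured" frequency response of a two-pole system, with 2% noise
      H_meas = 2.0 / ((1 + 1j * w * 0.5) * (1 + 1j * w * 0.05))
      H_meas = H_meas * (1 + 0.02 * rng.standard_normal(w.size))

      def model(p, w):
          K, t1, t2 = p
          return K / ((1 + 1j * w * t1) * (1 + 1j * w * t2))

      def residual(p):                              # stack real and imaginary mismatches
          r = model(p, w) - H_meas
          return np.concatenate([r.real, r.imag])

      fit = least_squares(residual, x0=[1.0, 1.0, 0.01], bounds=([0, 0, 0], np.inf))
      print(fit.x)                                  # recovered gain and time constants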

  5. Rapid methods and computer assisted diagnosis in medical microbiology.

    PubMed

    Heizmann, W R

    1991-01-01

Rapid diagnosis and reporting in medical microbiology are becoming more and more important. In recent years, the introduction of automated instruments as well as of computer assisted diagnosis has contributed to this aim. These methods, however, are very expensive. A more cost-efficient and simple-to-perform method for rapid diagnosis is the use of specific fluorogenic substrates incorporated into culture media (solid or liquid) for identification of the most important pathogens, e.g. Escherichia coli. Investigation of Fluorocult ECD agar and Columbia agar revealed a high sensitivity (85%) and an excellent specificity (greater than 99%) of fluorescence in combination with a positive indole reaction for identification of E. coli.

  6. A framework for evolutionary systems biology

    PubMed Central

    Loewe, Laurence

    2009-01-01

Background: Many difficult problems in evolutionary genomics are related to mutations that have weak effects on fitness, as the consequences of mutations with large effects are often simple to predict. Current systems biology has accumulated much data on mutations with large effects and can predict the properties of knockout mutants in some systems. However, experimental methods are too insensitive to observe small effects. Results: Here I propose a novel framework that brings together evolutionary theory and current systems biology approaches in order to quantify small effects of mutations and their epistatic interactions in silico. Central to this approach is the definition of fitness correlates that can be computed in some current systems biology models employing the rigorous algorithms that are at the core of much work in computational systems biology. The framework exploits synergies between the realism of such models and the need to understand real systems in evolutionary theory. This framework can address many longstanding topics in evolutionary biology by defining various 'levels' of the adaptive landscape. Addressed topics include the distribution of mutational effects on fitness, as well as the nature of advantageous mutations, epistasis and robustness. Combining corresponding parameter estimates with population genetics models raises the possibility of testing evolutionary hypotheses at a new level of realism. Conclusion: EvoSysBio is expected to lead to a more detailed understanding of the fundamental principles of life by combining knowledge about well-known biological systems from several disciplines. This will benefit both evolutionary theory and current systems biology. Understanding robustness by analysing distributions of mutational effects and epistasis is pivotal for drug design, cancer research, responsible genetic engineering in synthetic biology and many other practical applications. PMID:19239699

  7. A scalable method for computing quadruplet wave-wave interactions

    NASA Astrophysics Data System (ADS)

    Van Vledder, Gerbrant

    2017-04-01

Non-linear four-wave interactions are a key physical process in the evolution of wind-generated ocean waves. The present generation of operational wave models uses the Discrete Interaction Approximation (DIA), but its accuracy is poor. It is now generally acknowledged that the DIA should be replaced with a more accurate method to improve predicted spectral shapes and derived parameters. The search for such a method is challenging, as one must find a balance between accuracy and computational requirements. Such a method is presented here in the form of a scalable and adaptive method that can mimic both the time-consuming exact Snl4 approach and the fast but inaccurate DIA, and everything in between. The method provides an elegant approach to improve the DIA, not by including more arbitrarily shaped wavenumber configurations, but by a mathematically consistent reduction of an exact method, viz. the WRT method. The adaptivity lies in adapting the abscissae of the locus integrand in relation to the magnitude of the known terms. This adaptivity is extended to the highest level of the WRT method, selecting interacting wavenumber configurations hierarchically in relation to their importance, and results in a speed-up of one to three orders of magnitude depending on the measure of accuracy. This measure of accuracy should not be expressed in terms of the quality of the transfer integral for academic spectra, but rather in terms of wave-model performance in a dynamic run. This has consequences for the balance between the required accuracy and the computational workload of evaluating these interactions. The performance of the scalable method on different scales is illustrated with results ranging from academic spectra and simple growth curves to more complicated field cases using a 3G wave model.

  8. Optimization-based method for structural damage localization and quantification by means of static displacements computed by flexibility matrix

    NASA Astrophysics Data System (ADS)

    Zare Hosseinzadeh, Ali; Ghodrati Amiri, Gholamreza; Koo, Ki-Young

    2016-04-01

    This article presents an effective method for structural damage identification. The damage diagnosis problem is introduced as an optimization problem which is based on computing static displacements by the flexibility matrix. By utilizing this matrix, the complexity of the static displacement measurements in real cases can be overcome. The optimization problem is solved by a fast evolutionary optimization strategy, named the cuckoo optimization algorithm. The performance of the presented method was demonstrated by studying the benchmark problem provided by the IASC-ASCE Task Group on Structural Health Monitoring, and a numerical example of a frame. Moreover, the robustness of the presented approach was investigated in the presence of some prevalent modelling errors, and also when noisy and incomplete modal data are available. Finally, the efficiency of the proposed method was verified by an experimental study of a five-storey shear building structure. All the obtained results show the good performance of the presented method.
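
    A minimal sketch of this inverse-problem formulation follows (a five-element spring chain as the structure, SciPy's differential evolution standing in for the cuckoo optimization algorithm, and noise-free "measurements" are all simplifying assumptions):

      import numpy as np
      from scipy.optimize import differential_evolution

      k0, n = 1.0e4, 5                              # healthy element stiffness; number of elements

      def static_disp(d, f):
          # Static displacements of a fixed-free spring chain whose element
          # stiffnesses are reduced by the damage factors d (0 = undamaged).
          k = k0 * (1.0 - np.asarray(d))
          K = np.zeros((n, n))
          K[0, 0] = k[0]
          for e in range(1, n):
              K[e - 1, e - 1] += k[e]
              K[e, e] += k[e]
              K[e - 1, e] -= k[e]
              K[e, e - 1] -= k[e]
          return np.linalg.solve(K, f)

      f = np.zeros(n)
      f[-1] = 100.0                                 # static tip load
      u_meas = static_disp([0.0, 0.0, 0.3, 0.0, 0.1], f)   # "measured" damaged response

      res = differential_evolution(
          lambda d: np.sum((static_disp(d, f) - u_meas) ** 2),
          bounds=[(0.0, 0.9)] * n, seed=1, tol=1e-10)
      print(np.round(res.x, 3))                     # should recover ~[0, 0, 0.3, 0, 0.1]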

  9. Graphics processing unit acceleration of computational electromagnetic methods

    NASA Astrophysics Data System (ADS)

    Inman, Matthew

The use of Graphics Processing Units (GPUs) for scientific applications has been evolving and expanding for the past decade. GPUs provide an alternative to the CPU in the creation and execution of the numerical codes that are often relied upon to perform simulations in computational electromagnetics. While originally designed purely to display graphics on the user's monitor, GPUs today are essentially powerful floating-point co-processors that can be programmed not only to render complex graphics, but also to perform the complex mathematical calculations often encountered in scientific computing. Currently the GPUs being produced often contain hundreds of separate cores able to access large amounts of high-speed dedicated memory. By utilizing the power offered by such a specialized processor, it is possible to drastically speed up the calculations required in computational electromagnetics. This increase in speed allows GPU-based simulations to be used in a variety of situations in which computation time has heretofore been a limiting factor, such as educational courses. Teaching electromagnetics often relies upon simple example problems because of the simulation times needed to analyze more complex ones. GPU-based simulations will be shown to allow demonstrations of more advanced problems than previously possible by adapting the methods for use on the GPU. Modules will be developed for a wide variety of teaching situations, utilizing the speed of the GPU to demonstrate various techniques and ideas previously unrealizable.

  10. Comparison of Classification Strategies by Computer Simulation Methods.

    DTIC Science & Technology

Keywords: naval training, computer programming, naval personnel, classification, selection, simulation, correlation techniques, probability, costs, optimization, personnel management, decision theory, computers.

  11. Methods for transition toward computer assisted cognitive examination.

    PubMed

    Jurica, P; Valenzi, S; Struzik, Z R; Cichocki, A

    2015-01-01

We present a software framework which enables the extension of current methods for the assessment of cognitive fitness using recent technological advances. Screening for cognitive impairment is becoming more important as the world's population grows older, and current methods could be enhanced by the use of computers. Introducing new methods to clinics requires basic tools for the collection and communication of the collected data. Our objective is to develop tools that, with minimal interference, offer new opportunities for the enhancement of current interview-based cognitive examinations. We suggest methods, and discuss the process, by which established cognitive tests can be adapted for data collection through digitization by pen-enabled tablets. We discuss a number of methods for the evaluation of collected data, which promise to increase the resolution and objectivity of the common scoring strategy based on visual inspection. By involving computers in the roles of both instructing and scoring, we aim to increase the precision and reproducibility of cognitive examination. The tools provided in the Python framework CogExTools, available at http://bsp.brain.riken.jp/cogextools/, enable the design, application and evaluation of screening tests for the assessment of cognitive impairment. The toolbox is a research platform; it represents a foundation for further collaborative development by the wider research community and enthusiasts, and it is free to download, use, and modify as open source. We introduce this set of open-source tools to facilitate the design and development of new cognitive tests and to enable the adaptation of technology for cognitive examination in clinical settings. The tools provide a first step in a possible transition toward standardized mental state examination using computers.

  12. Interval Sampling Methods and Measurement Error: A Computer Simulation

    PubMed Central

    Wirth, Oliver; Slaven, James; Taylor, Matthew A.

    2015-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method’s inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. PMID:24127380

  13. On a method computing transient wave propagation in ionospheric regions

    NASA Technical Reports Server (NTRS)

    Gray, K. G.; Bowhill, S. A.

    1978-01-01

A consequence of an exoatmospheric nuclear burst is the electromagnetic pulse (EMP) radiated from it. In a region far enough away from the burst, where nonlinear effects can be ignored, the EMP can be represented by a large-amplitude, narrow-time-width plane-wave pulse. If the ionosphere intervenes between the origin and destination of the EMP, frequency dispersion can cause significant changes in the original pulse upon reception. A method of computing these dispersive effects on transient wave propagation is summarized. The method described is different from the standard transform techniques and provides physical insight into the transient wave process. The method, although exact, can be used to approximate the early-time transient response of an ionospheric region by a simple integration, with only explicit knowledge of the electron density, electron collision frequency, and electron gyrofrequency required. As an illustration, the method is applied to a simple example and contrasted with the corresponding transform solution.

  14. Computational methods for drug design and discovery: focus on China.

    PubMed

    Zheng, Mingyue; Liu, Xian; Xu, Yuan; Li, Honglin; Luo, Cheng; Jiang, Hualiang

    2013-10-01

    In the past decades, China's computational drug design and discovery research has experienced fast development through various novel methodologies. Application of these methods spans a wide range, from drug target identification to hit discovery and lead optimization. In this review, we firstly provide an overview of China's status in this field and briefly analyze the possible reasons for this rapid advancement. The methodology development is then outlined. For each selected method, a short background precedes an assessment of the method with respect to the needs of drug discovery, and, in particular, work from China is highlighted. Furthermore, several successful applications of these methods are illustrated. Finally, we conclude with a discussion of current major challenges and future directions of the field.

  15. A comparison of computation methods for leg stiffness during hopping.

    PubMed

    Hobara, Hiroaki; Inoue, Koh; Kobayashi, Yoshiyuki; Ogata, Toru

    2014-02-01

Despite the existence of several different calculations of leg stiffness (Kleg) during hopping, little is known about how the methodologies produce differences in the computed stiffness. The purpose of this study was to directly compare Kleg during hopping as calculated from three previously published computation methods. Ten male subjects hopped in place on two legs at four frequencies (2.2, 2.6, 3.0, and 3.4 Hz). In this article, leg stiffness was calculated from the natural frequency of oscillation (method A), the ratio of maximal ground reaction force (GRF) to peak center-of-mass displacement at the middle of the stance phase (method B), and an approximation based on sine-wave GRF modeling (method C). We found that leg stiffness in all methods increased with an increase in hopping frequency, but Kleg values from methods A and B were significantly higher than those from method C at all hopping frequencies. Therefore, care should be taken when comparing leg stiffness obtained by method C with values calculated by the other methods.

  16. Evolutionary awareness.

    PubMed

    Gorelik, Gregory; Shackelford, Todd K

    2014-08-27

    In this article, we advance the concept of "evolutionary awareness," a metacognitive framework that examines human thought and emotion from a naturalistic, evolutionary perspective. We begin by discussing the evolution and current functioning of the moral foundations on which our framework rests. Next, we discuss the possible applications of such an evolutionarily-informed ethical framework to several domains of human behavior, namely: sexual maturation, mate attraction, intrasexual competition, culture, and the separation between various academic disciplines. Finally, we discuss ways in which an evolutionary awareness can inform our cross-generational activities-which we refer to as "intergenerational extended phenotypes"-by helping us to construct a better future for ourselves, for other sentient beings, and for our environment.

  17. Evolutionary engineering of Saccharomyces cerevisiae for improved industrially important properties.

    PubMed

    Cakar, Z Petek; Turanli-Yildiz, Burcu; Alkim, Ceren; Yilmaz, Ulkü

    2012-03-01

This article reviews evolutionary engineering of Saccharomyces cerevisiae. Following a brief introduction to the 'rational' metabolic engineering approach and its limitations, such as the extensive genetic and metabolic information required about the organism of interest, the complexity of cellular physiological responses, and the difficulties of cloning in industrial strains, evolutionary engineering is discussed as an alternative, inverse metabolic engineering strategy. Major evolutionary engineering applications with S. cerevisiae are then discussed in two general categories: (1) evolutionary engineering of substrate utilization and product formation, and (2) evolutionary engineering of stress resistance. Recent developments in functional genomics methods allow rapid identification of the molecular basis of the desired phenotypes obtained by evolutionary engineering. To conclude, when used alone or in combination with rational metabolic engineering and/or computational methods to study and analyze processes of adaptive evolution, evolutionary engineering is a powerful strategy for the improvement of industrially important, complex properties of S. cerevisiae. © 2011 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.

  18. ALFRED: A Practical Method for Alignment-Free Distance Computation.

    PubMed

    Thankachan, Sharma V; Chockalingam, Sriram P; Liu, Yongchao; Apostolico, Alberto; Aluru, Srinivas

    2016-06-01

Alignment-free approaches are gaining persistent interest in many sequence analysis applications, such as phylogenetic inference and metagenomic classification/clustering, especially for large-scale sequence datasets. Besides the widely used k-mer methods, the average common substring (ACS) approach has emerged as one of the well-known alignment-free approaches. Two recent works further generalize this ACS approach by allowing a bounded number k of mismatches in the common substrings, relying on approximation (linear time) and exact computation, respectively. Albeit having a good worst-case time complexity [Formula: see text], the exact approach is complex and unlikely to be efficient in practice. Herein, we present ALFRED, an alignment-free distance computation method, which solves the generalized common substring search problem via exact computation. Compared to the theoretical approach, our algorithm is easier to implement and more practical to use, while still providing highly competitive theoretical performance, with an expected run-time of [Formula: see text]. By applying our program to phylogenetic inference as a case study, we find that it exactly reconstructs the topology of the reference phylogenetic tree for a set of 27 primate mitochondrial genomes at reasonably acceptable speed. ALFRED is implemented in the C++ programming language and the source code is freely available online.
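
    The underlying ACS measure, here in its exact-match (k = 0) form and with a naive quadratic scan rather than ALFRED's efficient algorithm, can be sketched as:

      import math

      def longest_match(x, i, y):
          # Length of the longest prefix of x[i:] occurring anywhere in y (naive scan).
          best = 0
          for j in range(len(y)):
              k = 0
              while i + k < len(x) and j + k < len(y) and x[i + k] == y[j + k]:
                  k += 1
              best = max(best, k)
          return best

      def acs_distance(x, y):
          # Symmetrised average-common-substring distance (exact matches only).
          def one_way(a, b):
              L = sum(longest_match(a, i, b) for i in range(len(a))) / len(a)
              return math.log(len(b)) / L - 2.0 * math.log(len(a)) / len(a)
          return 0.5 * (one_way(x, y) + one_way(y, x))

      print(acs_distance("ACGTACGTGAC", "ACGTTGACGTA"))   # toy sequences
      print(acs_distance("ACGTACGTGAC", "ACGTACGTGAC"))   # approximately 0 for identical inputs

    The second term corrects the self-distance so that d(x, x) is approximately zero; the measure assumes the sequences share at least some common characters.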

  19. Approximation method to compute domain related integrals in structural studies

    NASA Astrophysics Data System (ADS)

    Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.

    2015-11-01

Various engineering calculations use integral calculus in theoretical models, i.e. analytical and numerical models. For usual problems, integrals have exact mathematical solutions. If the domain of integration is complicated, several methods may be used to calculate the integral. The first idea is to divide the domain into smaller sub-domains for which there are direct calculus relations; i.e., in strength of materials the bending moment may be computed at some discrete points using graphical integration of the shear force diagram, which usually has a simple shape. Another example is in mathematics, where the area of a subgraph may be approximated by a set of rectangles or trapezoids used to calculate the definite integral. The goal of the work is to introduce our studies on the calculation of integrals over transverse-section domains, computer-aided solutions, and a generalizing method. The aim of our research is to create general computer-based methods to execute such calculations in structural studies. Thus, we define a Boolean algebra which operates with 'simple'-shape domains. This algebraic standpoint uses addition and subtraction, conditioned by the sign of every 'simple' shape (-1 for the shapes to be subtracted). By 'simple' or 'basic' shape we mean either shapes for which there are direct calculus relations, or domains whose frontiers are approximated by known functions, with the corresponding calculation carried out by an algorithm. The 'basic' shapes are linked to the calculation of the most significant stresses in the section, a refined aspect which needs special attention. Starting from this idea, rectangles, ellipses and domains whose frontiers are approximated by spline functions were included in the libraries of 'basic' shapes. The domain triangularization methods suggested that another 'basic' shape to be considered is the triangle. The subsequent phase was to deduce the exact relations for the
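
    A minimal sketch of the signed-shape idea follows (Python; the rectangle-only library and the restriction to area and first moments are illustrative assumptions, whereas the paper's libraries also include ellipses and spline-bounded shapes):

      def rect(b, h, cx, cy, sign=+1):
          # A 'basic' rectangular shape: width b, height h, centroid (cx, cy);
          # sign = -1 subtracts the shape (a hole), as in the Boolean algebra of shapes.
          return {"A": sign * b * h, "Sx": sign * b * h * cy, "Sy": sign * b * h * cx}

      def section_properties(shapes):
          A = sum(s["A"] for s in shapes)           # signed areas add algebraically
          cx = sum(s["Sy"] for s in shapes) / A     # centroid from signed first moments
          cy = sum(s["Sx"] for s in shapes) / A
          return A, cx, cy

      # A 100 x 60 box section with an 80 x 40 hole, both centred at the origin
      print(section_properties([rect(100, 60, 0, 0), rect(80, 40, 0, 0, sign=-1)]))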

  1. COMSAC: Computational Methods for Stability and Control. Part 2

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    The unprecedented advances being made in computational fluid dynamic (CFD) technology have demonstrated the powerful capabilities of codes in applications to civil and military aircraft. Used in conjunction with wind-tunnel and flight investigations, many codes are now routinely used by designers in diverse applications such as aerodynamic performance predictions and propulsion integration. Typically, these codes are most reliable for attached, steady, and predominantly turbulent flows. As a result of increasing reliability and confidence in CFD, wind-tunnel testing for some new configurations has been substantially reduced in key areas, such as wing trade studies for mission performance guarantees. Interest is now growing in the application of computational methods to other critical design challenges. One of the most important disciplinary elements for civil and military aircraft is prediction of stability and control characteristics. CFD offers the potential for significantly increasing the basic understanding, prediction, and control of flow phenomena associated with requirements for satisfactory aircraft handling characteristics.

  2. Unified computational method for design of fluid loop systems

    NASA Astrophysics Data System (ADS)

    Furukawa, Masao

    1991-12-01

Various kinds of empirical formulas for Nusselt numbers, Fanning friction factors, and pressure loss coefficients were collected and reviewed with the object of constructing a common basis for design calculations of pumped fluid loop systems. The practical expressions obtained after numerical modifications are listed in tables with identification numbers corresponding to configurations of the flow passages. The design procedures for a cold plate and for a space radiator are clearly shown in a series of mathematical relations coupled with a number of detailed expressions which are put in the tables in order of numerical computation. Weight estimate models and several pump characteristics are given in the tables as a result of data regression. A unified computational method based upon the above procedure is presented for preliminary design analyses of a fluid loop system consisting of cold plates, plane radiators, mechanical pumps, valves, and so on.

  3. Interpolated histogram method for area optimised median computation

    NASA Astrophysics Data System (ADS)

    Buch, Kaushal D.; Darji, Anand D.

    2013-04-01

The article describes an area-efficient algorithm for real-time approximate median computation on VLSI platforms. The improvements in performance and area optimisation are achieved through linear interpolation within a reduced number of histogram bins. To reduce the hardware utilisation further, an approximation technique for the interpolation is also proposed. This approach extends the utility of the histogram method to data sets having a large dynamic range. The performance of the proposed algorithm in terms of mean squared error (MSE) and resource utilisation is provided and compared to that of existing algorithms. This comparison indicates that more than 60% optimisation in resources is achieved with a marginal compromise in the accuracy of the median. The proposed algorithm finds applications in the areas of image processing, time series analysis and median absolute deviation (MAD) computation.
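
    The core of the interpolated-histogram computation can be sketched as follows (Python/NumPy; the bin count and data range are illustrative parameters, and a hardware implementation would replace the floating-point steps with fixed-point logic):

      import numpy as np

      def interp_median(data, bins=64, lo=0.0, hi=255.0):
          # Approximate median: find the histogram bin holding the N/2-th sample,
          # then interpolate linearly inside that bin.
          counts, edges = np.histogram(data, bins=bins, range=(lo, hi))
          csum = np.cumsum(counts)
          half = data.size / 2.0
          b = int(np.searchsorted(csum, half))      # first bin whose cumulative count reaches N/2
          below = csum[b] - counts[b]               # samples falling before the median bin
          frac = (half - below) / max(counts[b], 1)
          return edges[b] + frac * (edges[b + 1] - edges[b])

      x = np.random.default_rng(0).integers(0, 256, 10000)
      print(interp_median(x), np.median(x))         # approximate vs. exact median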

  4. Computational methods of the Advanced Fluid Dynamics Model

    SciTech Connect

    Bohl, W.R.; Wilhelm, D.; Parker, F.R.; Berthier, J.; Maudlin, P.J.; Schmuck, P.; Goutagny, L.; Ichikawa, S.; Ninokata, H.; Luck, L.B.

    1987-01-01

    To more accurately treat severe accidents in fast reactors, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development.

  5. Adaptive Mesh Refinement in Computational Astrophysics -- Methods and Applications

    NASA Astrophysics Data System (ADS)

    Balsara, D.

    2001-12-01

The advent of robust, reliable and accurate higher-order Godunov schemes for many of the systems of equations of interest in computational astrophysics has made it important to understand how to solve them in multi-scale fashion. This is so because the physics associated with astrophysical phenomena evolves in multi-scale fashion, and we wish to arrive at a multi-scale simulation capability to represent the physics. Because astrophysical systems have magnetic fields, multi-scale magnetohydrodynamics (MHD) is of special interest. In this paper we first discuss general issues in adaptive mesh refinement (AMR). We then focus on the important issues in carrying out divergence-free AMR-MHD and catalogue the progress we have made in that area. We show that AMR methods lend themselves to easy parallelization. We then discuss applications of the RIEMANN framework for AMR-MHD to problems in computational astrophysics.

  6. An analytical method for computing atomic contact areas in biomolecules.

    PubMed

    Mach, Paul; Koehl, Patrice

    2013-01-15

We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical, and its implementation in a new program, BallContact, is fast and robust. We have used BallContact to study contacts in a database of 1551 high-resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if only the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc.

  7. Computation of multi-material interactions using point method

    SciTech Connect

    Zhang, Duan Z; Ma, Xia; Giguere, Paul T

    2009-01-01

Calculations of fluid flows are often based on an Eulerian description, while calculations of solid deformations are often based on a Lagrangian description of the material. When Eulerian descriptions are used for problems of solid deformation, the state variables, such as stress and damage, need to be advected, causing significant numerical diffusion error. When Lagrangian methods are used for problems involving large solid deformations or fluid flows, mesh distortion and entanglement are significant sources of error, and often lead to failure of the calculation. There are significant difficulties for either method when applied to problems involving large deformation of solids. To address these difficulties, the particle-in-cell (PIC) method was introduced in the 1960s. In this method, Eulerian meshes stay fixed and Lagrangian particles move through the Eulerian meshes as the material deforms. Since its introduction, many improvements to the method have been made. The work of Sulsky et al. (1995, Comput. Phys. Commun., v. 87, p. 236) provides a mathematical foundation for an improved version, the material point method (MPM). The unique advantages of the MPM have led to many attempts to apply the method to problems involving the interaction of different materials, such as fluid-structure interactions. These are multiphase-flow or multimaterial-deformation problems, in which pressures, material densities and volume fractions are determined by satisfying the continuity constraint. However, due to the difference in the approximations between the material point method and the Eulerian method, erroneous results for pressure will be obtained if the scheme used in Eulerian methods for multiphase flows is applied unchanged. To resolve this issue, we introduce a numerical scheme that satisfies the continuity requirement to higher order of accuracy in the sense of weak solutions for the continuity equations.

  8. Using THz Spectroscopy, Evolutionary Network Analysis Methods, and MD Simulation to Map the Evolution of Allosteric Communication Pathways in c-Type Lysozymes.

    PubMed

    Woods, Kristina N; Pfeffer, Juergen

    2016-01-01

It is now widely accepted that protein function is intimately tied with the navigation of energy landscapes. In this framework, a protein sequence is not described by a distinct structure but rather by an ensemble of conformations. And it is through this ensemble that evolution is able to modify a protein's function by altering its landscape. Hence, the evolution of protein functions involves selective pressures that adjust the sampling of the conformational states. In this work, we focus on elucidating the evolutionary pathway that shaped the function of individual proteins that make up the mammalian c-type lysozyme subfamily. Using both experimental and computational methods, we map out specific intermolecular interactions that direct the sampling of conformational states and, accordingly, also underlie shifts in the landscape that are directly connected with the formation of novel protein functions. By contrasting three representative proteins in the family we identify molecular mechanisms that are associated with the selectivity of enhanced antimicrobial properties and consequently, divergent protein function. Namely, we link the extent of localized fluctuations involving the loop separating helices A and B with shifts in the equilibrium of the ensemble of conformational states that mediate interdomain coupling and concurrently moderate substrate binding affinity. This work reveals unique insights into the molecular level mechanisms that promote the progression of interactions that connect the immune response to infection with the nutritional properties of lactation, while also providing a deeper understanding about how evolving energy landscapes may define present-day protein function. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  9. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    NASA Astrophysics Data System (ADS)

    Battiti, Roberto

    1990-01-01

This thesis presents new algorithms for low- and intermediate-level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple-scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in 'real time.' Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium-grain distributed-memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from

  10. A novel computational method for comparing vibrational circular dichroism spectra.

    PubMed

    Shen, Jian; Zhu, Chengyue; Reiling, Stephan; Vaz, Roy

    2010-08-01

    A novel method, SimIR/VCD, for comparing experimental and calculated VCD (vibrational circular dichroism) spectra is developed, based on newly defined spectral similarities. With computationally optimized frequency scaling and shifting, a calculated spectrum can easily be identified to match an observed spectrum, which leads to an unbiased molecular chirality assignment. The time-consuming manual band-fitting work is greatly reduced. Using (1S)-(-)-alpha-pinene as an example, the study demonstrates that the calculated VCD similarity correlates with VCD spectra matching quality and is sensitive enough to identify variations in the spectra. The study also compares spectra calculated using different DFT methods and basis sets. Using this method should facilitate spectral matching, reduce human error, and provide a confidence measure in chiral assignment using VCD spectroscopy.
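
    The abstract does not give the similarity definition, so the following is only a plausible sketch of the general idea: score a calculated spectrum against an observed one while searching over a frequency scale factor and shift. The normalized-overlap similarity, the search ranges, and all names are assumptions, not the SimIR/VCD implementation.

        import numpy as np

        def similarity(obs, calc):
            # Normalized overlap; signed, so a mirror-image (enantiomer)
            # spectrum scores negatively.
            return np.dot(obs, calc) / (np.linalg.norm(obs) * np.linalg.norm(calc))

        def best_match(calc_freq, calc, obs_freq, obs):
            # Grid-search a frequency scale factor and shift, resampling the
            # calculated spectrum onto the observed frequency axis each time.
            best_score, best_params = -np.inf, None
            for scale in np.linspace(0.95, 1.05, 41):
                for shift in np.linspace(-20.0, 20.0, 41):   # cm^-1
                    resampled = np.interp(obs_freq, scale * calc_freq + shift, calc)
                    s = similarity(obs, resampled)
                    if s > best_score:
                        best_score, best_params = s, (scale, shift)
            return best_score, best_params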

  11. On implicit Runge-Kutta methods for parallel computations

    NASA Technical Reports Server (NTRS)

    Keeling, Stephen L.

    1987-01-01

    Implicit Runge-Kutta methods which are well-suited for parallel computations are characterized. It is argued that such methods are, first of all, those for which the associated rational approximation to the exponential has distinct poles; these are called multiply implicit (MIRK) methods. Also, because of the so-called order-reduction phenomenon, there is reason to require that these poles be real. It is then proved that a necessary condition for a q-stage, real MIRK to be A_0-stable with maximal order q + 1 is that q = 1, 2, 3, or 5. Nevertheless, it is shown that for every positive integer q, there exists a q-stage, real MIRK which is I-stable with order q. Finally, some useful examples of algebraically stable MIRKs are given.
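
    The connection between distinct poles and parallelism can be sketched as follows (a standard argument for the linear test problem y' = Jy, not Keeling's precise formulation). If the method's rational approximation R(z) to the exponential has distinct poles 1/beta_i, it admits a partial-fraction expansion, and the step decouples into q independent linear solves, one per processor:

        R(z) = c_0 + \sum_{i=1}^{q} \frac{c_i}{1 - \beta_i z}
        \quad\Longrightarrow\quad
        y_{n+1} = c_0\, y_n + \sum_{i=1}^{q} c_i\, x_i,
        \qquad (I - \beta_i\, h J)\, x_i = y_n .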

  12. Structure-based Methods for Computational Protein Functional Site Prediction

    PubMed Central

    Dukka, B KC

    2013-01-01

    Due to the advent of high-throughput sequencing techniques and structural genomics projects, the number of gene and protein sequences has been ever increasing, and computational methods to annotate these genes and proteins are all the more indispensable. Proteins are important macromolecules, and the study of protein function is an important problem in structural bioinformatics. This paper discusses a number of methods to predict protein functional sites, focusing especially on protein-ligand binding site prediction. Initially, a short overview is presented of recent advances in methods for the selection of homologous sequences. Furthermore, a few recent structure-based approaches and sequence-and-structure-based approaches for protein functional sites are discussed in detail. PMID:24688745

  13. A modified Henyey method for computing radiative transfer hydrodynamics

    NASA Technical Reports Server (NTRS)

    Karp, A. H.

    1975-01-01

    The implicit hydrodynamic code of Kutter and Sparks (1972), which is limited to optically thick regions and employs the diffusion approximation for radiative transfer, is modified to include radiative transfer effects in the optically thin regions of a model star. A modified Henyey method is used to incorporate the solution of the radiative transfer equation into this implicit code, and the convergence properties of this method are proven. A comparison is made between two hydrodynamic models of a classical Cepheid with a 12-day period, one computed with the diffusion approximation and the other with the modified Henyey method. It is found that the two models produce nearly identical light and velocity curves, but differ in that the former never has temperature inversions in the atmosphere while the latter does when sufficiently strong shocks are present.
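
    The Henyey method is, at its core, a Newton-type relaxation on a discretized two-point boundary value problem, with a banded Jacobian solved at each iteration. The following minimal sketch (an assumption-laden scalar illustration, far simpler than the paper's coupled radiative-transfer hydrodynamics) shows the idea for u'' = f(u) with fixed boundary values:

        import numpy as np

        def henyey_relax(f, dfdu, a, b, n=101, iters=20, tol=1e-10):
            # Newton relaxation for u'' = f(u), u(0) = a, u(1) = b, on a
            # uniform grid; the Jacobian of the interior residuals is tridiagonal.
            x = np.linspace(0.0, 1.0, n)
            h = x[1] - x[0]
            u = a + (b - a) * x                        # initial guess: straight line
            for _ in range(iters):
                r = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2 - f(u[1:-1])
                if np.max(np.abs(r)) < tol:
                    break
                J = np.zeros((n - 2, n - 2))
                np.fill_diagonal(J, -2.0 / h**2 - dfdu(u[1:-1]))
                np.fill_diagonal(J[1:], 1.0 / h**2)    # subdiagonal
                np.fill_diagonal(J[:, 1:], 1.0 / h**2) # superdiagonal
                u[1:-1] -= np.linalg.solve(J, r)       # Newton update
            return x, u

        # Example: a Bratu-type problem, u'' = -exp(u), u(0) = u(1) = 0.
        x, u = henyey_relax(lambda u: -np.exp(u), lambda u: -np.exp(u), 0.0, 0.0)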

  14. The Piecewise Cubic Method (PCM) for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Lee, Dongwook; Faller, Hugues; Reyes, Adam

    2017-07-01

    We present a new high-order finite volume reconstruction method for hyperbolic conservation laws. The method is based on a piecewise cubic polynomial and provides solutions with fifth-order accuracy in space. The spatially reconstructed solutions are evolved in time with fourth-order accuracy by tracing the characteristics of the cubic polynomials. As a result, our temporal update scheme is significantly simpler and computationally more efficient in achieving fourth-order accuracy in time than the comparable fourth-order Runge-Kutta method. We demonstrate that PCM solutions converge at fifth order when solving 1D smooth flows described by hyperbolic conservation laws. We test the new scheme on a range of numerical experiments, including both gas dynamics and magnetohydrodynamics applications in multiple spatial dimensions.
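
    The reconstruction step can be sketched in isolation (the characteristic tracing and any limiting that PCM applies are not shown, and this is an illustrative guess at the stencil, not the paper's exact formulation): fit a cubic whose cell averages match four neighboring cells and evaluate it at the shared interface.

        import numpy as np

        def cubic_interface_value(avgs, h=1.0):
            # Fit p(x) = a0 + a1*x + a2*x^2 + a3*x^3 so that its average over
            # each of the four cells [-2h,-h], [-h,0], [0,h], [h,2h] matches
            # the supplied cell averages, then evaluate p at the interface x = 0.
            edges = np.array([-2.0, -1.0, 0.0, 1.0, 2.0]) * h
            A = np.zeros((4, 4))
            for row in range(4):
                xl, xr = edges[row], edges[row + 1]
                for k in range(4):
                    # Exact average of x^k over [xl, xr].
                    A[row, k] = (xr**(k + 1) - xl**(k + 1)) / ((k + 1) * (xr - xl))
            coeffs = np.linalg.solve(A, np.asarray(avgs, dtype=float))
            return coeffs[0]                       # p(0), the interface value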

  15. Review methods for image segmentation from computed tomography images

    SciTech Connect

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

    2014-12-04

    Image segmentation is a challenging process in which accuracy, automation, and robustness are required, especially for medical images. Many segmentation methods can be applied to medical images, but not all of them are suitable. For medical purposes, the aims of image segmentation are to study anatomical structure, identify regions of interest, measure tissue volume in order to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for computed tomography (CT) images. CT images have characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring and visual noise. The details of the methods, their strengths, and the problems they incur are defined and explained, since it is necessary to know the suitable segmentation method in order to obtain accurate segmentation. This paper can serve as a guide for researchers choosing a suitable segmentation method for CT images.
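
    As a concrete example of the kind of classical technique such a review covers (this is not a method proposed by the paper), histogram-based Otsu thresholding picks the grey level that maximizes the between-class variance of a CT slice:

        import numpy as np

        def otsu_threshold(image, nbins=256):
            # Choose the grey level maximizing between-class variance.
            hist, edges = np.histogram(image.ravel(), bins=nbins)
            centers = 0.5 * (edges[:-1] + edges[1:])
            w = hist.astype(float) / hist.sum()
            omega0 = np.cumsum(w)                  # class-0 probability
            mu = np.cumsum(w * centers)            # cumulative first moment
            omega1 = 1.0 - omega0
            valid = (omega0 > 0) & (omega1 > 0)
            sigma_b = np.full(nbins, -np.inf)
            sigma_b[valid] = (mu[-1] * omega0[valid] - mu[valid])**2 \
                             / (omega0[valid] * omega1[valid])
            return centers[np.argmax(sigma_b)]

        # Usage on a CT slice: mask = ct_slice > otsu_threshold(ct_slice)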

  16. A computationally efficient method for hand-eye calibration.

    PubMed

    Zhang, Zhiqiang; Zhang, Lin; Yang, Guang-Zhong

    2017-07-19

    Surgical robots with cooperative control and semiautonomous features have shown increasing clinical potential, particularly for repetitive tasks under imaging and vision guidance. Effective performance of an autonomous task requires accurate hand-eye calibration so that the transformation between the robot coordinate frame and the camera coordinates is well defined. In practice, due to changes in surgical instruments, online hand-eye calibration must be performed regularly, and in order to ensure seamless execution of the surgical procedure without affecting the normal surgical workflow, it is important to derive fast and efficient hand-eye calibration methods. We present a computationally efficient iterative method for hand-eye calibration. In this method, a dual quaternion is introduced to represent the rigid transformation, and a two-step iterative method is proposed to recover its real and dual parts simultaneously, and thus the rotation and translation of the transformation. The proposed method was applied to determine the rigid transformation between the stereo laparoscope and the robot manipulator. Promising experimental and simulation results show a significant improvement in convergence speed, to 3 iterations from more than 30 with a standard optimization method, which illustrates the effectiveness and efficiency of the proposed method.
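
    For orientation, the rotation part of the underlying hand-eye equation AX = XB has a well-known linear quaternion solution, sketched below. Note this is the classical closed-form approach, not the authors' two-step dual-quaternion iteration, which additionally recovers the translation.

        import numpy as np

        def quat_mult_matrices(q):
            # L(q) p = q (x) p and R(q) p = p (x) q for quaternions (w, x, y, z).
            w, x, y, z = q
            L = np.array([[w, -x, -y, -z],
                          [x,  w, -z,  y],
                          [y,  z,  w, -x],
                          [z, -y,  x,  w]])
            R = np.array([[w, -x, -y, -z],
                          [x,  w,  z, -y],
                          [y, -z,  w,  x],
                          [z,  y, -x,  w]])
            return L, R

        def handeye_rotation(qa_list, qb_list):
            # Each motion pair gives qa (x) qx = qx (x) qb, i.e.
            # (L(qa) - R(qb)) qx = 0; stack all pairs and take the right
            # singular vector with the smallest singular value.
            M = np.vstack([quat_mult_matrices(qa)[0] - quat_mult_matrices(qb)[1]
                           for qa, qb in zip(qa_list, qb_list)])
            qx = np.linalg.svd(M)[2][-1]
            return qx / np.linalg.norm(qx)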

  17. Airburst height computation method of Sea-Impact Test

    NASA Astrophysics Data System (ADS)

    Kim, Jinho; Kim, Hyungsup; Chae, Sungwoo; Park, Sungho

    2017-05-01

    This paper describes ways to measure the airburst height of projectiles and rockets. In general, the airburst height can be determined by triangulation or from the images of a camera installed on the tracking radar, but these methods have limitations when missiles impact the sea surface. To apply the triangulation method, the cameras should be installed so that their lines of sight intersect at angles between 60 and 120 degrees, and there may be no observation towers positioned suitably for installing such an optical system. Moreover, when the range of the missile exceeds 50 km, the images from the radar camera can be useless. This paper proposes a method to measure the airburst height of a sea-impact projectile using a single camera. The camera is installed on an island near the impact area, and the distance is computed using the position and attitude of the camera together with the sea level. To demonstrate the proposed method, its results are compared with those from the previous method.
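
    The abstract leaves the geometry implicit; one plausible reading (a sketch under assumed flat-sea geometry, with the burst vertically above the splash point and angles already corrected for camera attitude) is that the depression angle to the splash fixes the ground range, and the elevation angle to the burst then gives its height:

        import math

        def airburst_height(camera_alt_m, depression_to_splash, elevation_to_burst):
            # Angles in radians, measured from the camera's horizontal.
            ground_range = camera_alt_m / math.tan(depression_to_splash)
            return camera_alt_m + ground_range * math.tan(elevation_to_burst)

        # Example: camera 120 m above sea level, splash 2.0 deg below the
        # horizontal, burst 0.5 deg above it -> height of roughly 150 m.
        print(airburst_height(120.0, math.radians(2.0), math.radians(0.5)))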

  18. Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems

    SciTech Connect

    Cai, Wei

    2014-05-15

    Understanding electromagnetic phenomena is key to many scientific investigations and engineering designs, such as solar cell design, the study of biological ion channels for diseases, and the creation of clean fusion energy, among other things. The objectives of the project are to develop high-order numerical methods to simulate evanescent electromagnetic waves occurring in plasmonic solar cells and biological ion channels, where local field enhancement within random media in the former and long-range electrostatic interactions in the latter pose major challenges for accurate and efficient numerical computation. We have accomplished these objectives by developing high-order numerical methods for solving Maxwell's equations, such as high-order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, a divergence-free finite element basis for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmonic solar cells. To treat long-range electrostatic interactions in ion channels, we have developed an image-charge-based method for a hybrid model combining atomistic electrostatics with continuum Poisson-Boltzmann electrostatics. Such a hybrid model speeds up molecular dynamics simulations of transport in biological ion channels.

  19. Modeling methods for merging computational and experimental aerodynamic pressure data

    NASA Astrophysics Data System (ADS)

    Haderlie, Jacob C.

    This research describes a process for modeling surface pressure data sets as a function of wing geometry from computational and wind tunnel sources and then merging them into a single predicted value. The described merging process enables engineers to integrate these data sets with the goal of exploiting the advantages of each data source while overcoming the limitations of both; this provides a single, combined data set to support analysis and design. The main challenge in this process is accurately representing each data source everywhere on the wing. Additionally, this effort demonstrates methods to model wind tunnel pressure data as a function of angle of attack as an initial step towards a merging process that uses both location on the wing and flow conditions (e.g., angle of attack, flow velocity, or Reynolds number) as independent variables. This surrogate model of pressure as a function of angle of attack can be useful for engineers who need to predict the location of zero-order discontinuities, e.g., flow separation or normal shocks. Because, to the author's best knowledge, there is no published, well-established merging method for aerodynamic pressure data (here, the coefficient of pressure Cp), this work identifies promising modeling and merging methods and then makes a critical comparison of them. Surrogate models represent the pressure data for both data sets: cubic B-spline surrogate models represent the computational simulation results, while machine learning and multi-fidelity surrogate models represent the experimental data. This research compares three surrogates for the experimental data (sequential--a.k.a. online--Gaussian processes, batch Gaussian processes, and multi-fidelity additive corrector) on the merits of accuracy and computational cost. The Gaussian process (GP) methods employ cubic B-spline CFD surrogates as a model basis function to build a surrogate model of the WT data, and this usage of the CFD surrogate in building the WT
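
    One simple way to realize the merging idea, sketched here under assumed hyperparameters and a 1-D input for brevity (the thesis itself compares sequential GP, batch GP, and additive-corrector variants), is GP regression on the wind tunnel data with the CFD surrogate supplying the prior mean, so the prediction follows the WT data near its samples and reverts to CFD elsewhere:

        import numpy as np

        def rbf(a, b, length=0.1, var=1.0):
            # Squared-exponential kernel on 1-D inputs.
            d = a[:, None] - b[None, :]
            return var * np.exp(-0.5 * (d / length)**2)

        def gp_merge(x_wt, y_wt, x_query, cfd_mean, noise=1e-3, length=0.1, var=1.0):
            # GP posterior mean with the CFD surrogate as the prior mean:
            # regress only the WT-minus-CFD residual.
            K = rbf(x_wt, x_wt, length, var) + noise * np.eye(len(x_wt))
            k_star = rbf(x_query, x_wt, length, var)
            resid = y_wt - cfd_mean(x_wt)
            return cfd_mean(x_query) + k_star @ np.linalg.solve(K, resid)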

  20. A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.

    2011-01-01

    A novel software method is presented that is applicable to analyzing cylindrical and partially cylindrical objects inspected using computed tomography. The method involves unwrapping and re-slicing the data so that CT data from a cylindrical object can be viewed as a series of 2-D sheets in the vertical direction, in addition to the volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and, under proper conditions, is fully automated, requiring no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. Interested readers may inquire with the presenting author. This software differentiates itself from other possible re-slicing software solutions through its complete automation and advanced processing and analysis capabilities.
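
    The unwrap-and-re-slice idea can be sketched for a single fixed radius (the NASA software detects the interior and exterior surfaces automatically, which this illustration does not attempt):

        import numpy as np

        def unwrap_cylinder(volume, radius, center_xy, n_theta=720):
            # Resample one cylindrical shell of a CT volume indexed [z, y, x]
            # into a 2-D (z, theta) sheet by nearest-neighbor lookup.
            nz, ny, nx = volume.shape
            theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
            ix = np.clip(np.rint(center_xy[0] + radius * np.cos(theta)).astype(int), 0, nx - 1)
            iy = np.clip(np.rint(center_xy[1] + radius * np.sin(theta)).astype(int), 0, ny - 1)
            return volume[:, iy, ix]               # shape (nz, n_theta)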