Sample records for parallel adaptation theory

  1. Parallel software for lattice N = 4 supersymmetric Yang-Mills theory

    NASA Astrophysics Data System (ADS)

    Schaich, David; DeGrand, Thomas

    2015-05-01

    We present new parallel software, SUSY LATTICE, for lattice studies of four-dimensional N = 4 supersymmetric Yang-Mills theory with gauge group SU(N). The lattice action is constructed to exactly preserve a single supersymmetry charge at non-zero lattice spacing, up to additional potential terms included to stabilize numerical simulations. The software evolved from the MILC code for lattice QCD, and retains a similar large-scale framework despite the different target theory. Many routines are adapted from an existing serial code (Catterall and Joseph, 2012), which SUSY LATTICE supersedes. This paper provides an overview of the new parallel software, summarizing the lattice system, describing the applications that are currently provided and explaining their basic workflow for non-experts in lattice gauge theory. We discuss the parallel performance of the code, and highlight some notable aspects of the documentation for those interested in contributing to its future development.

  2. Attention and apparent motion.

    PubMed

    Horowitz, T; Treisman, A

    1994-01-01

    Two dissociations between short- and long-range motion in visual search are reported. Previous research has shown parallel processing for short-range motion and apparently serial processing for long-range motion. This finding has been replicated and it has also been found that search for short-range targets can be impaired both by using bicontrast stimuli, and by prior adaptation to the target direction of motion. Neither factor impaired search in long-range motion displays. Adaptation actually facilitated search with long-range displays, which is attributed to response-level effects. A feature-integration account of apparent motion is proposed. In this theory, short-range motion depends on specialized motion feature detectors operating in parallel across the display, but subject to selective adaptation, whereas attention is needed to link successive elements when they appear at greater separations, or across opposite contrasts.

  3. The Feldenkrais Method: A Dynamic Approach to Changing Motor Behavior.

    ERIC Educational Resources Information Center

    Buchanan, Patricia A.; Ulrich, Beverly D.

    2001-01-01

    Describes the Feldenkrais Method of somatic education, noting parallels with a dynamic systems theory (DST) approach to motor behavior. Feldenkrais uses movement and perception to foster individualized improvement in function. DST explains that a human-environment system continually adapts to changing conditions and assembles behaviors…

  4. Understanding the leaky engineering pipeline: Motivation and job adaptability of female engineers

    NASA Astrophysics Data System (ADS)

    Saraswathiamma, Manjusha Thekkedathu

    This dissertation is a mixed-method study conducted using a qualitative grounded-theory approach and quantitative survey and correlation approaches. This study aims to explore the motivation and adaptability of females in the engineering profession and to develop a theoretical framework for both motivation and adaptability issues. As a result, this study endeavors to design solutions for the low enrollment and attrition of female engineers in the engineering profession, often referred to as the "leaky female engineering pipeline." Profiles of 123 female engineers were studied for the qualitative approach, and 98 completed survey responses were analyzed for the quantitative approach. The qualitative, grounded-theory approach applied the constant comparison method; open, axial, and selective coding was used to classify the information into categories, sub-categories, and themes for both motivation and adaptability. The emergent themes for decisions motivating female enrollment include cognitive, emotional, and environmental factors. The themes identified for adaptability include seven job adaptability factors: job satisfaction, risk-taking attitude, career/skill development, family, gender stereotyping, interpersonal skills, and personal benefit, as well as a self-perceived job adaptability factor. Illeris' Three-dimensional Learning Theory was modified as a model for decisions motivating female enrollment. This study suggests a firsthand conceptual parallel with McClusky's Theory of Margin for the adaptability of female engineers in the profession. The study also attempted to design a survey instrument to measure the job adaptability of female engineers, and identifies two factors significantly related to job adaptability: interpersonal skills (p < 0.01) and family (p < 0.05); gender stereotyping and personal benefit are also significantly related (p < 0.1).

  5. Following the Hand: The First Three Years of Life.

    ERIC Educational Resources Information Center

    Orion, Judy

    2001-01-01

    Discusses the development of the human hand from birth to age three as it contributes to the formation of human personality. Considers how parallels in eye, hand, brain, and motor skill development portray the evolving complexity and adaptation of the human grasp and illustrate Montessori theories about the relationship between physical experience…

  6. Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure

    NASA Astrophysics Data System (ADS)

    Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.

    2014-08-01

    Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in the neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. State-of-the-art parallel programming techniques are applied, including sophisticated object-oriented templates that parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed.
Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver hfodd that is based on the harmonic-oscillator basis expansion. Several examples are considered, including the self-consistent HFB problem for spin-polarized trapped cold fermions and the Skyrme-Hartree-Fock (+BCS) problem for triaxial deformed nuclei. Conclusions: The new madness-hfb framework has many attractive features when applied to nuclear and atomic problems involving many-particle superfluid systems. Of particular interest are weakly bound nuclear configurations close to particle drip lines, strongly elongated and dinuclear configurations such as those present in fission and heavy-ion fusion, and exotic pasta phases that appear in neutron star crust.

  7. Many-to-one form-to-function mapping weakens parallel morphological evolution.

    PubMed

    Thompson, Cole J; Ahmed, Newaz I; Veen, Thor; Peichel, Catherine L; Hendry, Andrew P; Bolnick, Daniel I; Stuart, Yoel E

    2017-11-01

    Evolutionary ecologists aim to explain and predict evolutionary change under different selective regimes. Theory suggests that such evolutionary prediction should be more difficult for biomechanical systems in which different trait combinations generate the same functional output: "many-to-one mapping." Many-to-one mapping of phenotype to function enables multiple morphological solutions to meet the same adaptive challenges. Therefore, many-to-one mapping should undermine parallel morphological evolution, and hence evolutionary predictability, even when selection pressures are shared among populations. Studying 16 replicate pairs of lake- and stream-adapted threespine stickleback (Gasterosteus aculeatus), we quantified three parts of the teleost feeding apparatus and used biomechanical models to calculate their expected functional outputs. The three feeding structures differed in their form-to-function relationship from one-to-one (lower jaw lever ratio) to increasingly many-to-one (buccal suction index, opercular 4-bar linkage). We tested for (1) weaker linear correlations between phenotype and calculated function, and (2) less parallel evolution across lake-stream pairs, in the many-to-one systems relative to the one-to-one system. We confirm both predictions, thus supporting the theoretical expectation that increasing many-to-one mapping undermines parallel evolution. Therefore, sole consideration of morphological variation within and among populations might not serve as a proxy for functional variation when multiple adaptive trait combinations exist. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  8. A cerebellar learning model of vestibulo-ocular reflex adaptation in wild-type and mutant mice.

    PubMed

    Clopath, Claudia; Badura, Aleksandra; De Zeeuw, Chris I; Brunel, Nicolas

    2014-05-21

    Mechanisms of cerebellar motor learning are still poorly understood. The standard Marr-Albus-Ito theory posits that learning involves plasticity at the parallel fiber to Purkinje cell synapses under control of the climbing fiber input, which provides an error signal as in classical supervised learning paradigms. However, a growing body of evidence challenges this theory, in that additional sites of plasticity appear to contribute to motor adaptation. Here, we consider phase-reversal training of the vestibulo-ocular reflex (VOR), a simple form of motor learning for which a large body of experimental data is available in wild-type and mutant mice, in which the excitability of granule cells or inhibition of Purkinje cells was affected in a cell-specific fashion. We present novel electrophysiological recordings of Purkinje cell activity measured in naive wild-type mice subjected to this VOR adaptation task. We then introduce a minimal model that consists of learning at the parallel fibers to Purkinje cells with the help of the climbing fibers. Although the minimal model reproduces the behavior of the wild-type animals and is analytically tractable, it fails at reproducing the behavior of mutant mice and the electrophysiology data. Therefore, we build a detailed model involving plasticity at the parallel fibers to Purkinje cells' synapse guided by climbing fibers, feedforward inhibition of Purkinje cells, and plasticity at the mossy fiber to vestibular nuclei neuron synapse. The detailed model reproduces both the behavioral and electrophysiological data of both the wild-type and mutant mice and allows for experimentally testable predictions. Copyright © 2014 the authors 0270-6474/14/347203-13$15.00/0.

  9. A third-generation density-functional-theory-based method for calculating canonical molecular orbitals of large molecules.

    PubMed

    Hirano, Toshiyuki; Sato, Fumitoshi

    2014-07-28

    We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each computer node in a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in self-consistent field iterations. Our method enables DFT and massively distributed memory parallel computers to be used in order to very efficiently calculate the CMOs of large molecules.
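The low-rank pivoted Cholesky decomposition (CD) step at the heart of this approach can be illustrated with a minimal NumPy sketch (a generic illustration, not the authors' implementation): pivots are chosen greedily from the largest remaining diagonal entry of the Schur complement, so a rank-deficient positive semi-definite matrix yields a compact factor after only rank-many columns.

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-10):
    """Greedy low-rank pivoted Cholesky of a symmetric PSD matrix.

    Returns L with A ~= L @ L.T; the number of columns of L is the
    numerical rank of A at the given tolerance.
    """
    n = A.shape[0]
    d = np.diag(A).astype(float).copy()   # remaining (Schur-complement) diagonal
    cols = []
    while True:
        j = int(np.argmax(d))             # pivot: largest remaining diagonal entry
        if d[j] <= tol:
            break
        col = A[:, j].astype(float).copy()
        for l in cols:                    # subtract contributions of earlier columns
            col -= l * l[j]
        col /= np.sqrt(d[j])              # d[j] is the current Schur diagonal at j
        cols.append(col)
        d -= col ** 2                     # downdate the remaining diagonal
        d[j] = 0.0                        # guard against round-off re-selection
    return np.column_stack(cols) if cols else np.zeros((n, 0))
```

Applied to an exactly rank-3 Gram matrix, the loop terminates after three columns, which is the analytic downscaling effect the abstract describes.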

  10. The Development of a Research Environment for Neural Networks: Instantiating Neocognitions

    DTIC Science & Technology

    1990-12-21

    interactive activation to adaptive resonance. Cognitive Science, 11:23-63. Reprinted in (Grossberg, 1988). Grossberg, S., editor (1988). Neural…higher order correlation network. Physica 22D, pages 276-306. Rosenblatt, F. (1962). Principles of Neurodynamics: Perceptrons and the Theory of Brain…and the PDP Research Group (1986b). Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1: Foundations

  11. Before hierarchy: the rise and fall of Stephen Jay Gould's first macroevolutionary synthesis.

    PubMed

    Dresow, Max W

    2017-06-01

    Few of Stephen Jay Gould's accomplishments in evolutionary biology have received more attention than his hierarchical theory of evolution, which postulates a causal discontinuity between micro- and macroevolutionary events. But Gould's hierarchical theory was his second attempt to supply a theoretical framework for macroevolutionary studies-and one he did not inaugurate until the mid-1970s. In this paper, I examine Gould's first attempt: a proposed fusion of theoretical morphology, multivariate biometry and the experimental study of adaptation in fossils. This early "macroevolutionary synthesis" was predicated on the notion that parallelism and convergence dominate the history of higher taxa, and moreover, that they can be explained in terms of adaptation leading to mechanical improvement. In this paper, I explore the origins and contents of Gould's first macroevolutionary synthesis, as well as the reasons for its downfall. In addition, I consider how various developments during the mid-1970s led Gould to identify hierarchy and constraint as the leading themes of macroevolutionary studies-and adaptation as a macroevolutionary red herring.

  12. High-resolution multi-code implementation of unsteady Navier-Stokes flow solver based on paralleled overset adaptive mesh refinement and high-order low-dissipation hybrid schemes

    NASA Astrophysics Data System (ADS)

    Li, Gaohua; Fu, Xiang; Wang, Fuxin

    2017-10-01

    The low-dissipation, high-order-accurate hybrid upwind/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow-feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows at high spatial resolution. The overset grid assembly (OGA) process, based on collision detection and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed multiple codes. The results of flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of a high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.

  13. DGDFT: A massively parallel method for large scale density functional theory calculations.

    PubMed

    Hu, Wei; Lin, Lin; Yang, Chao

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT-based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10^-4 Hartree/atom in terms of the error of energy and 6.2 × 10^-4 Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  14. An Integrative Theory of Psychotherapy: Research and Practice

    PubMed Central

    Epstein, Seymour; Epstein, Martha L.

    2016-01-01

    A dual-process personality theory and supporting research are presented. The dual processes comprise an experiential system and a rational system. The experiential system is an adaptive, associative learning system that humans share with other higher-order animals. The rational system is a uniquely human, primarily verbal, reasoning system. It is assumed that when humans developed language they did not abandon their previous ways of adapting, they simply added language to their experiential system. The two systems are assumed to operate in parallel and are bi-directionally interactive. The validity of these assumptions is supported by extensive research. Of particular relevance for psychotherapy, the experiential system, which is compatible with evolutionary theory, replaces the Freudian maladaptive unconscious system that is indefensible from an evolutionary perspective, as sub-human animals would then have only a single system that is maladaptive. The aim of psychotherapy is to produce constructive changes in the experiential system. Changes in the rational system are useful only to the extent that they contribute to constructive changes in the experiential system. PMID:27672302

  15. An Integrative Theory of Psychotherapy: Research and Practice.

    PubMed

    Epstein, Seymour; Epstein, Martha L

    2016-06-01

    A dual-process personality theory and supporting research are presented. The dual processes comprise an experiential system and a rational system. The experiential system is an adaptive, associative learning system that humans share with other higher-order animals. The rational system is a uniquely human, primarily verbal, reasoning system. It is assumed that when humans developed language they did not abandon their previous ways of adapting, they simply added language to their experiential system. The two systems are assumed to operate in parallel and are bi-directionally interactive. The validity of these assumptions is supported by extensive research. Of particular relevance for psychotherapy, the experiential system, which is compatible with evolutionary theory, replaces the Freudian maladaptive unconscious system that is indefensible from an evolutionary perspective, as sub-human animals would then have only a single system that is maladaptive. The aim of psychotherapy is to produce constructive changes in the experiential system. Changes in the rational system are useful only to the extent that they contribute to constructive changes in the experiential system.

  16. Converging Paradigms: A Reflection on Parallel Theoretical Developments in Psychoanalytic Metapsychology and Empirical Dream Research.

    PubMed

    Schmelowszky, Ágoston

    2016-08-01

    In the last decades one can perceive a striking parallelism between the shifting perspective of leading representatives of empirical dream research concerning their conceptualization of dreaming and the paradigm shift within clinically based psychoanalytic metapsychology with respect to its theory on the significance of dreaming. In metapsychology, dreaming becomes more and more a central metaphor of mental functioning in general. The theories of Klein, Bion, and Matte-Blanco can be considered as milestones of this paradigm shift. In empirical dream research, the competing theories of Hobson and of Solms respectively argued for and against the meaningfulness of the dream-work in the functioning of the mind. In the meantime, empirical data coming from various sources seemed to prove the significance of dream consciousness for the development and maintenance of adaptive waking consciousness. Metapsychological speculations and hypotheses based on empirical research data seem to point in the same direction, promising for contemporary psychoanalytic practice a more secure theoretical base. In this paper the author brings together these diverse theoretical developments and presents conclusions regarding psychoanalytic theory and technique, as well as proposing an outline of an empirical research plan for testing the specificity of psychoanalysis in developing dream formation.

  17. Broadening and collisional interference of lines in the IR spectra of ammonia. Theory

    NASA Astrophysics Data System (ADS)

    Cherkasov, M. R.

    2016-06-01

    The general theory of relaxation spectral shape parameters in the impact approximation (M. R. Cherkasov, J. Quant. Spectrosc. Radiat. Transfer 141, 73 (2014)) is adapted to the case of line broadening of infrared spectra of ammonia. Specific features of line broadening of parallel and perpendicular bands are discussed. It is shown that in both cases the spectrum consists of independently broadened singlets and doublets; however, the components of doublets can be affected by collisional interference. The paper is the first part of a cycle of studies devoted to the problems of spectral line broadening of ammonia.

  18. Parallel Adaptive Simulation of Detonation Waves Using a Weighted Essentially Non-Oscillatory Scheme

    NASA Astrophysics Data System (ADS)

    McMahon, Sean

    The purpose of this thesis was to develop a code that could be used to build a better understanding of the physics of detonation waves. First, a detonation was simulated in one dimension using ZND theory. Then, using the 1D solution as an initial condition, a detonation was simulated in two dimensions using a weighted essentially non-oscillatory scheme on an adaptive mesh whose smallest lengthscales equal 2-3 flamelet lengths. The code development linking Chemkin, for chemical kinetics, to the adaptive-mesh-refinement flow solver was completed. The detonation evolved in a way that qualitatively matched experimental observations; however, the simulation was unable to progress past the formation of the triple point.
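The weighted essentially non-oscillatory idea underlying the solver can be sketched in a few lines (a generic fifth-order Jiang-Shu reconstruction, not the thesis code): three candidate quadratic stencils are blended with nonlinear weights that collapse toward the smoothest stencil near discontinuities, which is what keeps shocks sharp without oscillations.

```python
import numpy as np

def weno5(v, eps=1e-6):
    """Fifth-order WENO reconstruction of the interface value v_{i+1/2}
    from five cell averages v = (v_{i-2}, v_{i-1}, v_i, v_{i+1}, v_{i+2})."""
    v0, v1, v2, v3, v4 = v
    # Candidate third-order reconstructions on the three sub-stencils.
    p0 = (2*v0 - 7*v1 + 11*v2) / 6.0
    p1 = (-v1 + 5*v2 + 2*v3) / 6.0
    p2 = (2*v2 + 5*v3 - v4) / 6.0
    # Jiang-Shu smoothness indicators.
    b0 = 13/12*(v0 - 2*v1 + v2)**2 + 0.25*(v0 - 4*v1 + 3*v2)**2
    b1 = 13/12*(v1 - 2*v2 + v3)**2 + 0.25*(v1 - v3)**2
    b2 = 13/12*(v2 - 2*v3 + v4)**2 + 0.25*(3*v2 - 4*v3 + v4)**2
    # Nonlinear weights biased away from non-smooth stencils.
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2
```

On smooth data every candidate stencil already reproduces the exact interface value, so the blend is exact there; only near a discontinuity do the weights depart from their optimal values.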

  19. A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.

    1999-01-01

    The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.

  20. Million city traveling salesman problem solution by divide and conquer clustering with adaptive resonance neural networks.

    PubMed

    Mulder, Samuel A; Wunsch, Donald C

    2003-01-01

    The Traveling Salesman Problem (TSP) is a very hard optimization problem in the field of operations research. It has been shown to be NP-complete, and is an often-used benchmark for new optimization techniques. One of the main challenges with this problem is that standard, non-AI heuristic approaches such as the Lin-Kernighan algorithm (LK) and the chained LK variant are currently very effective and in wide use for the common fully connected, Euclidean variant that is considered here. This paper presents an algorithm that uses adaptive resonance theory (ART) in combination with a variation of the Lin-Kernighan local optimization algorithm to solve very large instances of the TSP. The primary advantage of this algorithm over traditional LK and chained-LK approaches is the increased scalability and parallelism allowed by the divide-and-conquer clustering paradigm. Tours obtained by the algorithm are lower quality, but scaling is much better and there is a high potential for increasing performance using parallel hardware.
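The divide-and-conquer structure of the algorithm can be sketched as follows (a toy illustration: plain k-means stands in for the adaptive resonance clustering, and greedy nearest-neighbour tours stand in for the Lin-Kernighan optimiser; all names are hypothetical): cluster the cities, order the clusters by their centroids, tour each cluster independently, and concatenate. Each cluster's sub-problem is independent, which is exactly what makes the scheme scalable and parallelizable.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means; stands in for the paper's ART-based clustering."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((points[:, None, :] - centers[None, :, :])**2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

def nearest_neighbour_order(points):
    """Greedy tour from point 0; stands in for Lin-Kernighan local optimisation."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: float(((points[i] - last)**2).sum()))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def cluster_tsp(points, k=4):
    """Divide and conquer: cluster, order the clusters, tour each cluster."""
    labels, centers = kmeans(points, k)
    tour = []
    for c in nearest_neighbour_order(centers):
        idx = np.flatnonzero(labels == c)
        if idx.size:
            tour.extend(idx[nearest_neighbour_order(points[idx])].tolist())
    return tour
```

As the abstract notes, stitching locally optimised sub-tours in this way trades some tour quality for much better scaling, since each cluster can be solved on separate hardware.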

  1. Why the Rediscoverer Ended up on the Sidelines: Hugo De Vries's Theory of Inheritance and the Mendelian Laws

    NASA Astrophysics Data System (ADS)

    Stamhuis, Ida H.

    2015-01-01

    Eleven years before the 'rediscovery' in 1900 of Mendel's work, Hugo De Vries published his theory of heredity. He expected his theory to become a big success, but it was not well-received. To find supporting evidence for this theory De Vries started an extensive research program. Because of the parallels of his ideas with the Mendelian laws and because of his use of statistics, he became one of the rediscoverers. However, the Mendelian laws, which soon became the foundation of a new discipline of genetics, presented a problem. De Vries was the only one of the early Mendelians who had developed his own theory of heredity. His theory could not be brought in line with the Mendelian laws. But because his original theory was still very dear to him, something important was at stake and he was unwilling to adapt his ideas to the new situation. He belittled the importance of the Mendelian laws and ended up on the sidelines.

  2. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.
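The rule-presentation interface described above can be illustrated with a toy sketch (hypothetical names; the actual ASOCS network is a self-organizing combinational circuit, not a rule interpreter): during adaptation, if-then rules arrive as Boolean conjunctions of literals, and during data processing all rules are evaluated combinationally against the inputs.

```python
def add_rule(network, literals, output):
    """Adaptation phase: present an if-then rule as a conjunction of
    literals, e.g. add_rule(net, ["a", "~b"], 1) means
    'if a AND NOT b then output 1'."""
    network.append((tuple(literals), output))

def evaluate(network, inputs):
    """Data-processing phase: every conjunction is checked (conceptually
    in parallel, as in the hardware circuit); the first satisfied rule
    determines the output."""
    for literals, output in network:
        if all(inputs[lit.lstrip("~")] == (not lit.startswith("~"))
               for lit in literals):
            return output
    return None
```

In the real system this rule base is compiled into a network of simple asynchronous computing elements; the sketch only shows the specification interface.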

  3. Mathematical and Numerical Aspects of the Adaptive Fast Multipole Poisson-Boltzmann Solver

    DOE PAGES

    Zhang, Bo; Lu, Benzhuo; Cheng, Xiaolin; ...

    2013-01-01

    This paper summarizes the mathematical and numerical theories and computational elements of the adaptive fast multipole Poisson-Boltzmann (AFMPB) solver. We introduce and discuss the following components in order: the Poisson-Boltzmann model, boundary integral equation reformulation, surface mesh generation, the node-patch discretization approach, Krylov iterative methods, the new version of fast multipole methods (FMMs), and a dynamic prioritization technique for scheduling parallel operations. For each component, we also remark on feasible approaches for further improvements in efficiency, accuracy, and applicability of the AFMPB solver to large-scale long-time molecular dynamics simulations. Lastly, the potential of the solver is demonstrated with preliminary numerical results.
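Of the components listed, the Krylov iteration is the simplest to illustrate. The sketch below is a minimal conjugate-gradient loop, shown as a generic stand-in rather than AFMPB's actual Krylov solver: what makes such methods pair naturally with fast multipole methods is that they need only a matrix-vector product, which the FMM supplies in near-linear time.

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=200):
    """Solve A x = b for symmetric positive definite A, given only the
    matrix-vector product A*v (the operation an FMM accelerates)."""
    x = np.zeros_like(b)
    r = b - matvec(x)          # initial residual
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)  # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate update of the direction
        rs = rs_new
    return x
```

Replacing the dense `matvec` with an FMM evaluation turns each iteration from O(N^2) into roughly O(N), which is the design choice behind the solver's scalability.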

  4. Neural controller for adaptive movements with unforeseen payloads.

    PubMed

    Kuperstein, M; Wang, J

    1990-01-01

    A theory and computer simulation of a neural controller that learns to move and position a link carrying an unforeseen payload accurately are presented. The neural controller learns adaptive dynamic control from its own experience. It does not use information about link mass, link length, or direction of gravity, and it uses only indirect uncalibrated information about payload and actuator limits. Its average positioning accuracy across a large range of payloads after learning is 3% of the positioning range. This neural controller can be used as a basis for coordinating any number of sensory inputs with limbs of any number of joints. The feedforward nature of control allows parallel implementation in real time across multiple joints.

  5. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.
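The incremental path from a uniform mesh to an adaptive one can be sketched with a toy block structure (hypothetical classes; PARAMESH itself manages Fortran 90 block trees with guard cells and parallel distribution): each logically cartesian block either stays as-is or splits into four children when a refinement criterion flags it.

```python
class Block:
    """A square, logically cartesian block of the domain."""
    def __init__(self, x0, y0, size, level=0):
        self.x0, self.y0, self.size, self.level = x0, y0, size, level

    def children(self):
        """Split into four half-size child blocks (2D quadtree refinement)."""
        h = self.size / 2
        return [Block(self.x0 + i * h, self.y0 + j * h, h, self.level + 1)
                for j in (0, 1) for i in (0, 1)]

def adapt(blocks, flagged, max_level):
    """One refinement pass: split every flagged block below max_level."""
    out = []
    for b in blocks:
        if b.level < max_level and flagged(b):
            out.extend(b.children())
        else:
            out.append(b)
    return out
```

Repeated passes with a criterion that flags blocks containing a feature concentrate resolution around that feature while the rest of the domain stays coarse, which is the essence of the adaptive capability the toolkit provides.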

  6. An object-oriented approach for parallel self adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library to permit efficient development of architecture independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmers' work is greatly simplified to primarily specifying the serial single grid application and obtaining the parallel and self-adaptive mesh refinement code with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), being implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.

  7. Parallel adaptive wavelet collocation method for PDEs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com; Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu; Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu

    2015-10-01

A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048^3 using as many as 2048 CPU cores.
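The reassignment step described above (trees as the minimum quanta of migration, balanced by grid-point count) can be sketched with a simple greedy heuristic. This is an illustrative stand-in, not the authors' partitioner:

```python
def repartition(tree_sizes, n_procs):
    """Assign trees (weighted by their grid-point counts) to processes so
    that every process holds roughly the same number of grid points.
    Greedy heuristic: hand out trees largest-first, always to the
    currently lightest process."""
    order = sorted(range(len(tree_sizes)), key=lambda t: -tree_sizes[t])
    loads = [0] * n_procs            # grid points per process
    owner = [0] * len(tree_sizes)    # process owning each tree
    for t in order:
        p = loads.index(min(loads))  # lightest process so far
        owner[t] = p
        loads[p] += tree_sizes[t]
    return owner, loads
```

Largest-first greedy balancing is the classic LPT heuristic; in the actual method whole trees migrate between processes at each grid-adaptation step.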

  8. The Feasibility of Adaptive Unstructured Computations On Petaflops Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Heber, Gerd; Gao, Guang; Saini, Subhash (Technical Monitor)

    1999-01-01

    This viewgraph presentation covers the advantages of mesh adaptation, unstructured grids, and dynamic load balancing. It illustrates parallel adaptive communications, and explains PLUM (Parallel dynamic load balancing for adaptive unstructured meshes), and PSAW (Proper Self Avoiding Walks).

  9. Robustness and management adaptability in tropical rangelands: a viability-based assessment under the non-equilibrium paradigm.

    PubMed

    Accatino, F; Sabatier, R; De Michele, C; Ward, D; Wiegand, K; Meyer, K M

    2014-08-01

    Rangelands provide the main forage resource for livestock in many parts of the world, but maintaining long-term productivity and providing sufficient income for the rancher remains a challenge. One key issue is to maintain the rangeland in conditions where the rancher has the greatest possibility to adapt his/her management choices to a highly fluctuating and uncertain environment. In this study, we address management robustness and adaptability, which increase the resilience of a rangeland. After reviewing how the concept of resilience evolved in parallel to modelling views on rangelands, we present a dynamic model of rangelands to which we applied the mathematical framework of viability theory to quantify the management adaptability of the system in a stochastic environment. This quantification is based on an index that combines the robustness of the system to rainfall variability and the ability of the rancher to adjust his/her management through time. We evaluated the adaptability for four possible scenarios combining two rainfall regimes (high or low) with two herding strategies (grazers only or mixed herd). Results show that pure grazing is viable only for high-rainfall regimes, and that the use of mixed-feeder herds increases the adaptability of the management. The management is the most adaptive with mixed herds and in rangelands composed of an intermediate density of trees and grasses. In such situations, grass provides high quantities of biomass and woody plants ensure robustness to droughts. Beyond the implications for management, our results illustrate the relevance of viability theory for addressing the issue of robustness and adaptability in non-equilibrium environments.

  10. Metascalable molecular dynamics simulation of nano-mechano-chemistry

    NASA Astrophysics Data System (ADS)

    Shimojo, F.; Kalia, R. K.; Nakano, A.; Nomura, K.; Vashishta, P.

    2008-07-01

    We have developed a metascalable (or 'design once, scale on new architectures') parallel application-development framework for first-principles based simulations of nano-mechano-chemical processes on emerging petaflops architectures based on spatiotemporal data locality principles. The framework consists of (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms, (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these scalable algorithms onto hardware. The EDC-STEP-HCD framework exposes and expresses maximal concurrency and data locality, thereby achieving parallel efficiency as high as 0.99 for 1.59-billion-atom reactive force field molecular dynamics (MD) and 17.7-million-atom (1.56 trillion electronic degrees of freedom) quantum mechanical (QM) MD in the framework of the density functional theory (DFT) on adaptive multigrids, in addition to 201-billion-atom nonreactive MD, on 196 608 IBM BlueGene/L processors. We have also used the framework for automated execution of adaptive hybrid DFT/MD simulation on a grid of six supercomputers in the US and Japan, in which the number of processors changed dynamically on demand and tasks were migrated according to unexpected faults. The paper presents the application of the framework to the study of nanoenergetic materials: (1) combustion of an Al/Fe2O3 thermite and (2) shock initiation and reactive nanojets at a void in an energetic crystal.

  11. Genomics of parallel adaptation at two timescales in Drosophila

    PubMed Central

    Begun, David J.

    2017-01-01

Two interesting unanswered questions are the extent to which both the broad patterns and genetic details of adaptive divergence are repeatable across species, and the timescales over which parallel adaptation may be observed. Drosophila melanogaster is a key model system for population and evolutionary genomics. Findings from genetics and genomics suggest that recent adaptation to latitudinal environmental variation (on the timescale of hundreds or thousands of years) associated with Out-of-Africa colonization plays an important role in maintaining biological variation in the species. Additionally, studies of interspecific differences between D. melanogaster and its sister species D. simulans have revealed that a substantial proportion of proteins and amino acid residues exhibit adaptive divergence on a timescale of roughly a few million years. Here we use population genomic approaches to attack the problem of parallelism between D. melanogaster and a highly diverged congener, D. hydei, on two timescales. D. hydei, a member of the repleta group of Drosophila, is similar to D. melanogaster, in that it too appears to be a recently cosmopolitan species and recent colonizer of high latitude environments. We observed parallelism both for genes exhibiting latitudinal allele frequency differentiation within species and for genes exhibiting recurrent adaptive protein divergence between species. Greater parallelism was observed for long-term adaptive protein evolution, and this parallelism includes not only the specific genes/proteins that exhibit adaptive evolution, but extends even to the magnitudes of the selective effects on interspecific protein differences. Thus, despite the roughly 50 million years of time separating D. melanogaster and D. hydei, and despite their considerably divergent biology, they exhibit substantial parallelism, suggesting the existence of a fundamental predictability of adaptive evolution in the genus. PMID:28968391

  12. Teaching adaptive leadership to family medicine residents: what? why? how?

    PubMed

    Eubank, Daniel; Geffken, Dominic; Orzano, John; Ricci, Rocco

    2012-09-01

    Health care reform calls for patient-centered medical homes built around whole person care and healing relationships. Efforts to transform primary care practices and deliver these qualities have been challenging. This study describes one Family Medicine residency's efforts to develop an adaptive leadership curriculum and use coaching as a teaching method to address this challenge. We review literature that describes a parallel between the skills underlying such care and those required for adaptive leadership. We address two questions: What is leadership? Why focus on adaptive leadership? We then present a synthesis of leadership theories as a set of process skills that lead to organization learning through effective work relationships and adaptive leadership. Four models of the learning process needed to acquire such skills are explored. Coaching is proposed as a teaching method useful for going beyond information transfer to create the experiential learning necessary to acquire the process skills. Evaluations of our efforts to date are summarized. We discuss key challenges to implementing such a curriculum and propose that teaching adaptive leadership is feasible but difficult in the current medical education and practice contexts.

  13. Nature of the water/aromatic parallel alignment interactions.

    PubMed

    Mitoraj, Mariusz P; Janjić, Goran V; Medaković, Vesna B; Veljković, Dušan Ž; Michalak, Artur; Zarić, Snežana D; Milčić, Miloš K

    2015-01-30

The water/aromatic parallel alignment interactions are interactions where the water molecule or one of its O-H bonds is parallel to the aromatic ring plane. The calculated energies of the interactions are significant, up to ΔE_CCSD(T)(limit) = -2.45 kcal mol⁻¹ at large horizontal displacement, out of the benzene ring and CH bond region. These interactions are stronger than CH···O water/benzene interactions, but weaker than OH···π interactions. To investigate the nature of water/aromatic parallel alignment interactions, energy decomposition methods were used: symmetry-adapted perturbation theory and the extended transition state-natural orbitals for chemical valence (NOCV) approach. The calculations have shown that, for the complexes at large horizontal displacements, the major contribution to the interaction energy comes from electrostatic interactions between monomers, and for the complexes at small horizontal displacements, dispersion interactions are the dominant binding force. The NOCV-based analysis has shown that in structures with strong interaction energies a charge transfer of the type π → σ*(O-H) between the monomers also exists. © 2014 Wiley Periodicals, Inc.

  14. Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Sohn, Andrew

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.
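The heuristic remapper can be illustrated with a greedy matching (a simplified stand-in for the algorithm in the paper, which the authors compare against optimal assignments): given how much of each new partition's data already resides on each processor, pick the largest overlaps first so that as little data as possible has to move.

```python
def remap(similarity):
    """Greedy remap.  similarity[i][j] = amount of new partition i's data
    already resident on processor j.  Matching the largest overlaps first
    keeps as much data in place as possible, reducing redistribution cost."""
    n = len(similarity)
    pairs = sorted(((similarity[i][j], i, j)
                    for i in range(n) for j in range(n)), reverse=True)
    mapping, used_parts, used_procs = {}, set(), set()
    for s, i, j in pairs:
        if i not in used_parts and j not in used_procs:
            mapping[i] = j           # partition i stays on / moves to proc j
            used_parts.add(i)
            used_procs.add(j)
    return mapping
```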

  15. Covariance Matrix Adaptation Evolutionary Strategy for Drift Correction of Electronic Nose Data

    NASA Astrophysics Data System (ADS)

    Di Carlo, S.; Falasconi, M.; Sanchez, E.; Sberveglieri, G.; Scionti, A.; Squillero, G.; Tonda, A.

    2011-09-01

Electronic Noses (ENs) might represent a simple, fast, high-sample-throughput and economic alternative to conventional analytical instruments [1]. However, gas sensor drift still limits EN adoption in real industrial setups due to high recalibration effort and cost [2]. In fact, pattern recognition (PaRC) models built in the training phase become useless after a period of time, in some cases a few weeks. Although algorithms to mitigate the drift date back to the early '90s, this is still a challenging issue for the chemical sensor community [3]. Among other approaches, adaptive drift correction methods adjust the PaRC model in parallel with data acquisition without need of periodic calibration. Self-Organizing Maps (SOMs) [4] and Adaptive Resonance Theory (ART) networks [5] have already been tested in the past with fair success. This paper presents and discusses an original methodology based on a Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [6], suited for stochastic optimization of complex problems.
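A full CMA-ES maintains and adapts a complete covariance matrix for its mutation distribution and is best taken from a reference implementation. As a minimal sketch of the evolution-strategy principle it builds on, here is a (1+1)-ES with the classical 1/5th-success step-size rule; all constants are illustrative assumptions, not values from the paper.

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=500, seed=0):
    """Minimal (1+1) evolution strategy minimising f.  A toy stand-in for
    CMA-ES, which additionally adapts a full covariance matrix of the
    mutation distribution rather than a single step size."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:            # offspring replaces parent on improvement
            x, fx = y, fy
            sigma *= 1.5        # success: widen the search
        else:
            sigma *= 0.82       # failure: contract the step size
    return x, fx
```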

  16. Adaptive independent joint control of manipulators - Theory and experiment

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1988-01-01

    The author presents a simple decentralized adaptive control scheme for multijoint robot manipulators based on the independent joint control concept. The proposed control scheme for each joint consists of a PID (proportional integral and differential) feedback controller and a position-velocity-acceleration feedforward controller, both with adjustable gains. The static and dynamic couplings that exist between the joint motions are compensated by the adaptive independent joint controllers while ensuring trajectory tracking. The proposed scheme is implemented on a MicroVAX II computer for motion control of the first three joints of a PUMA 560 arm. Experimental results are presented to demonstrate that trajectory tracking is achieved despite strongly coupled, highly nonlinear joint dynamics. The results confirm that the proposed decentralized adaptive control of manipulators is feasible, in spite of strong interactions between joint motions. The control scheme presented is computationally very fast and is amenable to parallel processing implementation within a distributed computing architecture, where each joint is controlled independently by a simple algorithm on a dedicated microprocessor.
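The per-joint control law described above (PID feedback plus position-velocity-acceleration feedforward, both with adjustable gains) can be sketched as follows; the adaptation laws that adjust the gains online are omitted, and all names are illustrative.

```python
class JointController:
    """One joint's controller: PID feedback on the position error plus a
    position-velocity-acceleration feedforward term, all gains adjustable.
    (A sketch of the independent-joint scheme; the gain-adaptation laws
    are omitted.)"""
    def __init__(self, kp, ki, kd, f0, f1, f2, dt):
        self.kp, self.ki, self.kd = kp, ki, kd      # PID feedback gains
        self.f0, self.f1, self.f2 = f0, f1, f2      # feedforward gains
        self.dt = dt
        self.integral = 0.0

    def torque(self, q_des, qd_des, qdd_des, q, qd):
        e = q_des - q                                # position error
        de = qd_des - qd                             # velocity error
        self.integral += e * self.dt
        feedback = self.kp * e + self.ki * self.integral + self.kd * de
        feedforward = self.f0 * q_des + self.f1 * qd_des + self.f2 * qdd_des
        return feedback + feedforward
```

Because each joint runs this law independently, the scheme parallelizes naturally, one controller per processor, as the abstract notes.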

  17. A new RISE-based adaptive control of PKMs: design, stability analysis and experiments

    NASA Astrophysics Data System (ADS)

    Bennehar, M.; Chemori, A.; Bouri, M.; Jenni, L. F.; Pierrot, F.

    2018-03-01

This paper deals with the development of a new adaptive control scheme for parallel kinematic manipulators (PKMs) based on robust integral of the sign of the error (RISE) control theory. The original RISE control law is based only on state feedback and does not take advantage of the modelled dynamics of the manipulator. Consequently, the overall performance of the resulting closed-loop system may be poor compared to modern advanced model-based control strategies. We propose in this work to extend RISE by including the nonlinear dynamics of the PKM in the control loop to improve its overall performance. More precisely, we augment the original RISE control scheme with a model-based adaptive control term to account for the inherent nonlinearities in the closed-loop system. To demonstrate the relevance of the proposed controller, real-time experiments are conducted on the Delta robot, a three-degree-of-freedom (3-DOF) PKM.

  18. Computational aspects of helicopter trim analysis and damping levels from Floquet theory

    NASA Technical Reports Server (NTRS)

    Gaonkar, Gopal H.; Achar, N. S.

    1992-01-01

    Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
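The damped Newton iteration at the heart of the trim solver can be illustrated on a scalar residual. In this sketch the damping parameter is simply halved until the residual decreases, a crude stand-in for the optimally selected damping described in the abstract:

```python
def damped_newton(f, df, x0, damping=0.5, tol=1e-10, max_iter=100):
    """Damped Newton iteration for f(x) = 0: scale the Newton step by a
    factor lam, shrinking it until the residual actually decreases.
    This virtually eliminates the divergence plain Newton can suffer."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = fx / df(x)                # full Newton step
        lam = 1.0
        while abs(f(x - lam * step)) >= abs(fx) and lam > 1e-8:
            lam *= damping               # back off when residual grows
        x -= lam * step
    return x
```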

  19. Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations

    NASA Technical Reports Server (NTRS)

    Chrisochoides, Nikos

    1995-01-01

    We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDE's) on multiprocessors. Multithreading is used as a means of exploring concurrency in the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used an a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDE's. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data parallel adaptive PDE computations. Unfortunately, multithreading does not always simplify program complexity, often makes code re-usability not an easy task, and increases software complexity.

  20. Fast adaptive composite grid methods on distributed parallel architectures

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Quinlan, Daniel

    1992-01-01

    The fast adaptive composite (FAC) grid method is compared with the adaptive composite method (AFAC) under variety of conditions including vectorization and parallelization. Results are given for distributed memory multiprocessor architectures (SUPRENUM, Intel iPSC/2 and iPSC/860). It is shown that the good performance of AFAC and its superiority over FAC in a parallel environment is a property of the algorithm and not dependent on peculiarities of any machine.

  1. Synchronization Of Parallel Discrete Event Simulations

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S.

    1992-01-01

Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.
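A toy version of one such time cycle (an illustrative sketch, not SPEEDES code): events are processed optimistically in timestamp order until the next pending event would cross the event horizon, i.e. the earliest timestamp of any message generated so far in the cycle. Everything processed before the horizon is then safe to commit, and the cycle length adapts, or "breathes", with the traffic.

```python
import heapq

def btb_cycle(pending, handler):
    """One Breathing Time Buckets cycle over a heap of (timestamp, event)
    pairs.  handler(t, ev) returns the (timestamp, event) messages that
    processing ev generates; message timestamps never precede t."""
    horizon = float('inf')
    committed = []
    while pending and pending[0][0] < horizon:
        t, ev = heapq.heappop(pending)
        committed.append((t, ev))          # safe: t is below the horizon
        for msg in handler(t, ev):         # optimistic processing
            horizon = min(horizon, msg[0]) # horizon shrinks adaptively
            heapq.heappush(pending, msg)
    return committed, pending              # leftovers seed the next cycle
```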

  2. Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.

    1999-01-01

The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.

  3. Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Sohn, Andrew

    1996-01-01

Dynamic mesh adaptation on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalances among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35 percent of the mesh is randomly adapted. For large scale scientific computations, our load balancing strategy gives an almost sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3 percent off the optimal solutions, but requires only 1 percent of the computational time.

  4. Dynamic grid refinement for partial differential equations on parallel computers

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.

  5. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
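The direct interpolation-error idea is easiest to see in 1-D (a sketch only; the paper works with anisotropic tetrahedral grids in 3-D): split any interval whose midpoint linear-interpolation error exceeds a tolerance, driving the grid toward equidistributed error without forming a metric.

```python
def refine_to_equidistribute(xs, f, tol):
    """One sweep of interpolation-error-driven 1-D refinement: insert the
    midpoint of any interval where linear interpolation of f errs by more
    than tol.  Repeated sweeps equidistribute the error."""
    out = [xs[0]]
    for a, b in zip(xs, xs[1:]):
        mid = 0.5 * (a + b)
        err = abs(f(mid) - 0.5 * (f(a) + f(b)))   # linear interp. error
        if err > tol:
            out.append(mid)
        out.append(b)
    return out
```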

  6. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  7. Self-organization in neural networks - Applications in structural optimization

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat; Fu, B.; Berke, Laszlo

    1993-01-01

    The present paper discusses the applicability of ART (Adaptive Resonance Theory) networks, and the Hopfield and Elastic networks, in problems of structural analysis and design. A characteristic of these network architectures is the ability to classify patterns presented as inputs into specific categories. The categories may themselves represent distinct procedural solution strategies. The paper shows how this property can be adapted in the structural analysis and design problem. A second application is the use of Hopfield and Elastic networks in optimization problems. Of particular interest are problems characterized by the presence of discrete and integer design variables. The parallel computing architecture that is typical of neural networks is shown to be effective in such problems. Results of preliminary implementations in structural design problems are also included in the paper.

  8. The effect of selection environment on the probability of parallel evolution.

    PubMed

    Bailey, Susan F; Rodrigue, Nicolas; Kassen, Rees

    2015-06-01

    Across the great diversity of life, there are many compelling examples of parallel and convergent evolution-similar evolutionary changes arising in independently evolving populations. Parallel evolution is often taken to be strong evidence of adaptation occurring in populations that are highly constrained in their genetic variation. Theoretical models suggest a few potential factors driving the probability of parallel evolution, but experimental tests are needed. In this study, we quantify the degree of parallel evolution in 15 replicate populations of Pseudomonas fluorescens evolved in five different environments that varied in resource type and arrangement. We identified repeat changes across multiple levels of biological organization from phenotype, to gene, to nucleotide, and tested the impact of 1) selection environment, 2) the degree of adaptation, and 3) the degree of heterogeneity in the environment on the degree of parallel evolution at the gene-level. We saw, as expected, that parallel evolution occurred more often between populations evolved in the same environment; however, the extent of parallel evolution varied widely. The degree of adaptation did not significantly explain variation in the extent of parallelism in our system but number of available beneficial mutations correlated negatively with parallel evolution. In addition, degree of parallel evolution was significantly higher in populations evolved in a spatially structured, multiresource environment, suggesting that environmental heterogeneity may be an important factor constraining adaptation. Overall, our results stress the importance of environment in driving parallel evolutionary changes and point to a number of avenues for future work for understanding when evolution is predictable. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
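Gene-level parallelism between two replicate populations is commonly summarized by the overlap of their sets of mutated genes; one standard statistic (not necessarily the exact measure used in the study) is the Jaccard index:

```python
def jaccard(genes_a, genes_b):
    """Degree of gene-level parallelism between two evolved populations,
    measured as the Jaccard index of their sets of mutated genes:
    |A ∩ B| / |A ∪ B|, i.e. 1.0 for identical gene sets, 0.0 for disjoint."""
    a, b = set(genes_a), set(genes_b)
    return len(a & b) / len(a | b) if a | b else 0.0
```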

  9. Solar Wind Proton Temperature Anisotropy: Linear Theory and WIND/SWE Observations

    NASA Technical Reports Server (NTRS)

    Hellinger, P.; Travnicek, P.; Kasper, J. C.; Lazarus, A. J.

    2006-01-01

We present a comparison between WIND/SWE observations (Kasper et al., 2006) of beta parallel to p and T perpendicular to p/T parallel to p (where beta parallel to p is the proton parallel beta, and T perpendicular to p and T parallel to p are the perpendicular and parallel proton temperatures, respectively; here parallel and perpendicular indicate directions with respect to the ambient magnetic field) and predictions of the Vlasov linear theory. In the slow solar wind, the observed proton temperature anisotropy seems to be constrained by oblique instabilities, the mirror instability and the oblique fire hose, contrary to the results of the linear theory, which predicts a dominance of the proton cyclotron instability and the parallel fire hose. The fast solar wind core protons exhibit an anticorrelation between beta parallel to c and T perpendicular to c/T parallel to c (where beta parallel to c is the core proton parallel beta, and T perpendicular to c and T parallel to c are the perpendicular and parallel core proton temperatures, respectively), similar to that observed in the HELIOS data (Marsch et al., 2004).

  10. An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1994-01-01

This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version, finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracies in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than many conventional approaches. The use of discontinuous Galerkin methods is uncommon in applications, but the methods rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. The plan of this work is to first and primarily focus on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and to then briefly explore some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.

  11. Optimal Design of Passive Power Filters Based on Pseudo-parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Pei; Li, Hongbo; Gao, Nannan; Niu, Lin; Guo, Liangfeng; Pei, Ying; Zhang, Yanyan; Xu, Minmin; Chen, Kerui

    2017-05-01

    The economic cost and filter efficiency are taken together as the targets for optimizing the parameters of passive filters. To this end, a method combining the pseudo-parallel genetic algorithm with the adaptive genetic algorithm is adopted in this paper. In the early stages, the pseudo-parallel genetic algorithm is introduced to increase population diversity, and in the late stages the adaptive genetic algorithm is used to reduce the workload. At the same time, the migration rate of the pseudo-parallel genetic algorithm is improved so that it changes adaptively with population diversity. Simulation results show that the filter designed by the proposed method achieves a better filtering effect at a lower economic cost, and can be used in engineering practice.
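    The staged scheme described in this record can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation): several island populations evolve a simple one-max objective, and the number of migrants each island accepts is tied to its current diversity, so migration adapts as diversity falls. The objective, parameter values, and the specific migration rule are all assumptions made for illustration.

```python
import random

def diversity(pop):
    # Fraction of gene positions disagreeing with the population consensus.
    n, length = len(pop), len(pop[0])
    disagree = sum(min(s, n - s) for s in (sum(ind[j] for ind in pop) for j in range(length)))
    return disagree / (n * length)

def one_generation(pop, fitness, mut=0.02):
    # Tournament selection, uniform crossover, bit-flip mutation.
    nxt = []
    for _ in range(len(pop)):
        a = max(random.sample(pop, 2), key=fitness)
        b = max(random.sample(pop, 2), key=fitness)
        child = [random.choice(pair) for pair in zip(a, b)]
        nxt.append([1 - g if random.random() < mut else g for g in child])
    return nxt

def pseudo_parallel_ga(n_islands=4, pop_size=20, length=32, gens=60, seed=1):
    random.seed(seed)
    fitness = sum  # one-max: maximise the number of 1 bits (stand-in objective)
    islands = [[[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
               for _ in range(n_islands)]
    for _ in range(gens):
        islands = [one_generation(pop, fitness) for pop in islands]
        for i, pop in enumerate(islands):
            # Adaptive migration: the lower an island's diversity, the more
            # immigrants it accepts from its neighbour (hypothetical rule).
            n_migrants = max(1, int(pop_size * (0.2 - diversity(pop))))
            donors = sorted(islands[(i + 1) % n_islands], key=fitness)[-n_migrants:]
            pop.sort(key=fitness)
            pop[:n_migrants] = [d[:] for d in donors]
    return max(fitness(ind) for pop in islands for ind in pop)

best = pseudo_parallel_ga()
```

The ring topology (each island receives from its right-hand neighbour) is one common choice; the abstract does not specify which topology the authors used.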

  12. Fast Numerical Solution of the Plasma Response Matrix for Real-time Ideal MHD Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glasser, Alexander; Kolemen, Egemen; Glasser, Alan H.

    To help effectuate near real-time feedback control of ideal MHD instabilities in tokamak geometries, a parallelized version of A.H. Glasser’s DCON (Direct Criterion of Newcomb) code is developed. To motivate the numerical implementation, we first solve DCON’s δW formulation with a Hamilton-Jacobi theory, elucidating analytical and numerical features of the ideal MHD stability problem. The plasma response matrix is demonstrated to be the solution of an ideal MHD Riccati equation. We then describe our adaptation of DCON with numerical methods natural to solutions of the Riccati equation, parallelizing it to enable its operation in near real-time. We replace DCON’s serial integration of perturbed modes—which satisfy a singular Euler-Lagrange equation—with a domain-decomposed integration of state transition matrices. Output is shown to match results from DCON with high accuracy, and with computation time < 1 s. Such computational speed may enable active feedback ideal MHD stability control, especially in plasmas whose ideal MHD equilibria evolve with inductive timescale τ ≳ 1 s—as in ITER. Further potential applications of this theory are discussed.

  13. Fast Numerical Solution of the Plasma Response Matrix for Real-time Ideal MHD Control

    DOE PAGES

    Glasser, Alexander; Kolemen, Egemen; Glasser, Alan H.

    2018-03-26

    To help effectuate near real-time feedback control of ideal MHD instabilities in tokamak geometries, a parallelized version of A.H. Glasser’s DCON (Direct Criterion of Newcomb) code is developed. To motivate the numerical implementation, we first solve DCON’s δW formulation with a Hamilton-Jacobi theory, elucidating analytical and numerical features of the ideal MHD stability problem. The plasma response matrix is demonstrated to be the solution of an ideal MHD Riccati equation. We then describe our adaptation of DCON with numerical methods natural to solutions of the Riccati equation, parallelizing it to enable its operation in near real-time. We replace DCON’s serial integration of perturbed modes—which satisfy a singular Euler-Lagrange equation—with a domain-decomposed integration of state transition matrices. Output is shown to match results from DCON with high accuracy, and with computation time < 1 s. Such computational speed may enable active feedback ideal MHD stability control, especially in plasmas whose ideal MHD equilibria evolve with inductive timescale τ ≳ 1 s—as in ITER. Further potential applications of this theory are discussed.

  14. Divergent adaptation promotes reproductive isolation among experimental populations of the filamentous fungus Neurospora

    PubMed Central

    2008-01-01

    Background An open, focal issue in evolutionary biology is how reproductive isolation and speciation are initiated; elucidation of mechanisms with empirical evidence has lagged behind theory. Under ecological speciation, reproductive isolation between populations is predicted to evolve incidentally as a by-product of adaptation to divergent environments. The increased genetic diversity associated with interspecific hybridization has also been theorized to promote the development of reproductive isolation among independent populations. Using the fungal model Neurospora, we founded experimental lineages from both intra- and interspecific crosses, and evolved them in one of two sub-optimal, selective environments. We then measured the influence that initial genetic diversity and the direction of selection (parallel versus divergent) had on the evolution of reproductive isolation. Results When assayed in the selective environment in which they were evolved, lineages typically had greater asexual fitness than the progenitors and the lineages that were evolved in the alternate, selective environment. Assays for reproductive isolation showed that matings between lineages that were adapted to the same environment had greater sexual reproductive success than matings between lineages that were adapted to different environments. Evidence of this differential reproductive success was observed at two stages of the sexual cycle. For one of the two observed incompatibility phenotypes, results from genetic analyses were consistent with a two-locus, two-allele model with asymmetric (gender-specific), antagonistic epistasis. The effects of divergent adaptation on reproductive isolation were more pronounced for populations with greater initial genetic variation. Conclusion Divergent selection resulted in divergent adaptation and environmental specialization, consistent with fixation of different alleles in different environments. 
When brought together by mating, these alleles interacted negatively and had detrimental effects on sexual reproductive success, in agreement with the Dobzhansky-Muller model of genetic incompatibilities. As predicted by ecological speciation, greater reproductive isolation was observed among divergent-adapted lineages than among parallel-adapted lineages. These results support that, given adequate standing genetic variation, divergent adaptation can indirectly cause the evolution of reproductive isolation, and eventually lead to speciation. PMID:18237415

  15. Adapting high-level language programs for parallel processing using data flow

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1988-01-01

    EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.
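    The idea of deducing a data flow graph from a program of subprogram calls and then running independent calls in parallel can be sketched as follows. This is a hypothetical mini-model, not EASY-FLOW syntax: each "subprogram" is a named function, `deps` records which results it consumes, and calls with no mutual dependencies run together in one level, giving the large-grained parallelism the abstract describes.

```python
from concurrent.futures import ThreadPoolExecutor

def schedule_levels(deps):
    """Group nodes into levels; nodes within a level have no mutual
    dependencies and may therefore execute in parallel."""
    levels, done = [], set()
    while len(done) < len(deps):
        ready = [n for n in deps if n not in done and all(d in done for d in deps[n])]
        if not ready:
            raise ValueError("cycle in data flow graph")
        levels.append(ready)
        done.update(ready)
    return levels

# Hypothetical program: four subprogram calls with data dependencies.
deps = {"load": [], "filter": ["load"], "stats": ["load"], "report": ["filter", "stats"]}
funcs = {"load": lambda: 10, "filter": lambda: 1, "stats": lambda: 2, "report": lambda: 3}

results = {}
for level in schedule_levels(deps):
    with ThreadPoolExecutor() as pool:
        for name, val in zip(level, pool.map(lambda n: funcs[n](), level)):
            results[name] = val
```

Here `filter` and `stats` both depend only on `load`, so they form one level and run concurrently, while `report` waits for both.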

  16. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
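    The block hierarchy the abstract describes, a quad-tree of logically Cartesian sub-grid blocks refined where the application demands resolution, can be sketched compactly. This is an illustrative Python toy, not PARAMESH's Fortran 90 interface; the refinement criterion is a made-up example.

```python
class Block:
    """One logically Cartesian sub-grid block in a quad-tree hierarchy."""
    def __init__(self, x0, y0, size, level=0):
        self.x0, self.y0, self.size, self.level = x0, y0, size, level
        self.children = []

    def refine_where(self, needs_refinement, max_level=4):
        # Split into 2x2 child blocks wherever the application demands resolution.
        if self.level < max_level and needs_refinement(self):
            h = self.size / 2
            self.children = [Block(self.x0 + i * h, self.y0 + j * h, h, self.level + 1)
                             for i in (0, 1) for j in (0, 1)]
            for c in self.children:
                c.refine_where(needs_refinement, max_level)

    def leaves(self):
        # Leaf blocks carry the actual solution data.
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Refine toward a feature near the origin (hypothetical criterion).
root = Block(0.0, 0.0, 1.0)
root.refine_where(lambda b: (b.x0 ** 2 + b.y0 ** 2) ** 0.5 < 0.1, max_level=3)
leaves = root.leaves()
```

In three dimensions the same structure becomes an oct-tree with 2x2x2 children per refined block.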

  17. Parallel computations and control of adaptive structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alvin, Kenneth F.; Belvin, W. Keith; Chong, K. P. (Editor); Liu, S. C. (Editor); Li, J. C. (Editor)

    1991-01-01

    The equations of motion for structures with adaptive elements for vibration control are presented for parallel computations to be used as a software package for real-time control of flexible space structures. A brief introduction to the state-of-the-art parallel computational capability is also presented. Time marching strategies are developed for an effective use of massive parallel mapping, partitioning, and the necessary arithmetic operations. An example is offered for the simulation of control-structure interaction on a parallel computer, and the impact of the presented approach on applications in disciplines other than the aerospace industry is assessed.

  18. Visual reaction times during prolonged angular acceleration parallel the subjective perception of rotation

    NASA Technical Reports Server (NTRS)

    Mattson, D. L.

    1975-01-01

    The effect of prolonged angular acceleration on choice reaction time to an accelerating visual stimulus was investigated, with 10 commercial airline pilots serving as subjects. The pattern of reaction times during and following acceleration was compared with the pattern of velocity estimates reported during identical trials. Both reaction times and velocity estimates increased at the onset of acceleration, declined prior to the termination of acceleration, and showed an aftereffect. These results are inconsistent with the torsion-pendulum theory of semicircular canal function and suggest that the vestibular adaptation is of central origin.

  19. A Parallel Implementation of Multilevel Recursive Spectral Bisection for Application to Adaptive Unstructured Meshes. Chapter 1

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen T.; Simon, Horst; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    The design of a parallel implementation of multilevel recursive spectral bisection is described. The goal is to implement a code that is fast enough to enable dynamic repartitioning of adaptive meshes.
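    The core of recursive spectral bisection is a single spectral split: partition the vertices by the Fiedler vector (the eigenvector of the second-smallest eigenvalue of the graph Laplacian) and recurse on the halves. The sketch below shows one such split, not the multilevel, parallel algorithm of the paper; the dense eigensolve is for illustration only.

```python
import numpy as np

def spectral_bisect(adj):
    """Split vertices into two halves using the Fiedler vector of the
    graph Laplacian (eigenvector of the second-smallest eigenvalue)."""
    degree = np.diag(adj.sum(axis=1))
    laplacian = degree - adj
    _, eigvecs = np.linalg.eigh(laplacian)   # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]
    median = np.median(fiedler)
    left = np.where(fiedler <= median)[0]
    right = np.where(fiedler > median)[0]
    return left, right

# A 6-vertex path graph 0-1-2-3-4-5: the bisection should cut one edge.
n = 6
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
left, right = spectral_bisect(adj)
```

Splitting at the median keeps the halves balanced; the sign of the eigenvector is arbitrary, so either half may come out as `left`.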

  20. Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming

    NASA Technical Reports Server (NTRS)

    Dorband, John E.; Aburdene, Maurice F.

    2002-01-01

    Recently, networked and cluster computation has become very popular. This paper is an introduction to a new C-based parallel language for architecture-adaptive programming, aCe C. The primary purpose of aCe (Architecture-adaptive Computing Environment) is to encourage programmers to implement applications on parallel architectures by providing them the assurance that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications. This new language should be an ideal tool for teaching parallel programming. In this paper, we focus on some fundamental features of aCe C.

  1. Parallel evolution of a type IV secretion system in radiating lineages of the host-restricted bacterial pathogen Bartonella.

    PubMed

    Engel, Philipp; Salzburger, Walter; Liesch, Marius; Chang, Chao-Chin; Maruyama, Soichi; Lanz, Christa; Calteau, Alexandra; Lajus, Aurélie; Médigue, Claudine; Schuster, Stephan C; Dehio, Christoph

    2011-02-10

    Adaptive radiation is the rapid origination of multiple species from a single ancestor as the result of concurrent adaptation to disparate environments. This fundamental evolutionary process is considered to be responsible for the genesis of a great portion of the diversity of life. Bacteria have evolved enormous biological diversity by exploiting an exceptional range of environments, yet diversification of bacteria via adaptive radiation has been documented in a few cases only and the underlying molecular mechanisms are largely unknown. Here we show a compelling example of adaptive radiation in pathogenic bacteria and reveal their genetic basis. Our evolutionary genomic analyses of the α-proteobacterial genus Bartonella uncover two parallel adaptive radiations within these host-restricted mammalian pathogens. We identify a horizontally-acquired protein secretion system, which has evolved to target specific bacterial effector proteins into host cells as the evolutionary key innovation triggering these parallel adaptive radiations. We show that the functional versatility and adaptive potential of the VirB type IV secretion system (T4SS), and thereby translocated Bartonella effector proteins (Beps), evolved in parallel in the two lineages prior to their radiations. Independent chromosomal fixation of the virB operon and consecutive rounds of lineage-specific bep gene duplications followed by their functional diversification characterize these parallel evolutionary trajectories. Whereas most Beps maintained their ancestral domain constitution, strikingly, a novel type of effector protein emerged convergently in both lineages. This resulted in similar arrays of host cell-targeted effector proteins in the two lineages of Bartonella as the basis of their independent radiation. The parallel molecular evolution of the VirB/Bep system displays a striking example of a key innovation involved in independent adaptive processes and the emergence of bacterial pathogens. 
Furthermore, our study highlights the remarkable evolvability of T4SSs and their effector proteins, explaining their broad application in bacterial interactions with the environment.

  2. Parallel Evolution of a Type IV Secretion System in Radiating Lineages of the Host-Restricted Bacterial Pathogen Bartonella

    PubMed Central

    Engel, Philipp; Salzburger, Walter; Liesch, Marius; Chang, Chao-Chin; Maruyama, Soichi; Lanz, Christa; Calteau, Alexandra; Lajus, Aurélie; Médigue, Claudine; Schuster, Stephan C.; Dehio, Christoph

    2011-01-01

    Adaptive radiation is the rapid origination of multiple species from a single ancestor as the result of concurrent adaptation to disparate environments. This fundamental evolutionary process is considered to be responsible for the genesis of a great portion of the diversity of life. Bacteria have evolved enormous biological diversity by exploiting an exceptional range of environments, yet diversification of bacteria via adaptive radiation has been documented in a few cases only and the underlying molecular mechanisms are largely unknown. Here we show a compelling example of adaptive radiation in pathogenic bacteria and reveal their genetic basis. Our evolutionary genomic analyses of the α-proteobacterial genus Bartonella uncover two parallel adaptive radiations within these host-restricted mammalian pathogens. We identify a horizontally-acquired protein secretion system, which has evolved to target specific bacterial effector proteins into host cells as the evolutionary key innovation triggering these parallel adaptive radiations. We show that the functional versatility and adaptive potential of the VirB type IV secretion system (T4SS), and thereby translocated Bartonella effector proteins (Beps), evolved in parallel in the two lineages prior to their radiations. Independent chromosomal fixation of the virB operon and consecutive rounds of lineage-specific bep gene duplications followed by their functional diversification characterize these parallel evolutionary trajectories. Whereas most Beps maintained their ancestral domain constitution, strikingly, a novel type of effector protein emerged convergently in both lineages. This resulted in similar arrays of host cell-targeted effector proteins in the two lineages of Bartonella as the basis of their independent radiation. The parallel molecular evolution of the VirB/Bep system displays a striking example of a key innovation involved in independent adaptive processes and the emergence of bacterial pathogens. 
Furthermore, our study highlights the remarkable evolvability of T4SSs and their effector proteins, explaining their broad application in bacterial interactions with the environment. PMID:21347280

  3. Haptic adaptation to slant: No transfer between exploration modes

    PubMed Central

    van Dam, Loes C. J.; Plaisier, Myrthe A.; Glowania, Catharina; Ernst, Marc O.

    2016-01-01

    Human touch is an inherently active sense: to estimate an object’s shape humans often move their hand across its surface. This way the object is sampled both in a serial (sampling different parts of the object across time) and parallel fashion (sampling using different parts of the hand simultaneously). Both the serial (moving a single finger) and parallel (static contact with the entire hand) exploration modes provide reliable and similar global shape information, suggesting the possibility that this information is shared early in the sensory cortex. In contrast, we here show the opposite. Using an adaptation-and-transfer paradigm, a change in haptic perception was induced by slant-adaptation using either the serial or parallel exploration mode. A unified shape-based coding would predict that this would equally affect perception using other exploration modes. However, we found that adaptation-induced perceptual changes did not transfer between exploration modes. Instead, serial and parallel exploration components adapted simultaneously, but to different kinaesthetic aspects of exploration behaviour rather than object-shape per se. These results indicate that a potential combination of information from different exploration modes can only occur at down-stream cortical processing stages, at which adaptation is no longer effective. PMID:27698392

  4. New supervised learning theory applied to cerebellar modeling for suppression of variability of saccade end points.

    PubMed

    Fujita, Masahiko

    2013-06-01

    A new supervised learning theory is proposed for a hierarchical neural network with a single hidden layer of threshold units, which can approximate any continuous transformation, and applied to a cerebellar function to suppress the end-point variability of saccades. In motor systems, feedback control can reduce noise effects if the noise is added in a pathway from a motor center to a peripheral effector; however, it cannot reduce noise effects if the noise is generated in the motor center itself: a new control scheme is necessary for such noise. The cerebellar cortex is well known as a supervised learning system, and a novel theory of cerebellar cortical function developed in this study can explain the capability of the cerebellum to feedforwardly reduce noise effects, such as end-point variability of saccades. This theory assumes that a Golgi-granule cell system can encode the strength of a mossy fiber input as the state of neuronal activity of parallel fibers. By combining these parallel fiber signals with appropriate connection weights to produce a Purkinje cell output, an arbitrary continuous input-output relationship can be obtained. By incorporating such flexible computation and learning ability in a process of saccadic gain adaptation, a new control scheme in which the cerebellar cortex feedforwardly suppresses the end-point variability when it detects a variation in saccadic commands can be devised. Computer simulation confirmed the efficiency of such learning and showed a reduction in the variability of saccadic end points, similar to results obtained from experimental data.
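    The computational claim in this record, that a single hidden layer of threshold units with learned output weights can approximate an arbitrary continuous input-output relationship, can be illustrated with a small numerical sketch. The threshold units loosely mimic the Golgi-granule encoding of mossy fiber input strength, and ordinary least squares stands in for the paper's supervised learning rule; all sizes and the target function are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mossy-fiber input strengths sampled on [0, 1].
x = np.linspace(0.0, 1.0, 50)

# "Granule cell" layer: fixed random threshold (step) units of both polarities.
n_units = 400
thresholds = rng.uniform(0.0, 1.0, n_units)
signs = rng.choice([-1.0, 1.0], n_units)
hidden = (signs * (x[:, None] - thresholds) > 0).astype(float)  # shape (50, 400)

# Target: an arbitrary continuous transformation of the input.
target = np.sin(2 * np.pi * x)

# "Purkinje cell" output weights fitted by least squares (a stand-in for
# the supervised learning rule proposed in the paper).
weights, *_ = np.linalg.lstsq(hidden, target, rcond=None)
prediction = hidden @ weights
```

With enough randomly placed thresholds, the span of the step functions is rich enough to reproduce the smooth target closely at the sampled points, which is the approximation property the theory relies on.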

  5. Widespread parallel population adaptation to climate variation across a radiation: implications for adaptation to climate change.

    PubMed

    Thorpe, Roger S; Barlow, Axel; Malhotra, Anita; Surget-Groba, Yann

    2015-03-01

    Global warming will impact species in a number of ways, and it is important to know the extent to which natural populations can adapt to anthropogenic climate change by natural selection. Parallel microevolution within separate species can demonstrate natural selection, but several studies of homoplasy have not yet revealed examples of widespread parallel evolution in a generic radiation. Taking into account primary phylogeographic divisions, we investigate numerous quantitative traits (size, shape, scalation, colour pattern and hue) in anole radiations from the mountainous Lesser Antillean islands. Adaptation to climatic differences can lead to very pronounced differences between spatially close populations with all studied traits showing some evidence of parallel evolution. Traits from shape, scalation, pattern and hue (particularly the latter) show widespread evolutionary parallels within these species in response to altitudinal climate variation greater than extreme anthropogenic climate change predicted for 2080. This gives strong evidence of the ability to adapt to climate variation by natural selection throughout this radiation. As anoles can evolve very rapidly, it suggests anthropogenic climate change is likely to be less of a conservation threat than other factors, such as habitat loss and invasive species, in this, Lesser Antillean, biodiversity hot spot. © 2015 John Wiley & Sons Ltd.

  6. On the dimensionally correct kinetic theory of turbulence for parallel propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaelzer, R., E-mail: rudi.gaelzer@ufrgs.br; Ziebell, L. F., E-mail: luiz.ziebell@ufrgs.br; Yoon, P. H., E-mail: yoonp@umd.edu

    2015-03-15

    Yoon and Fang [Phys. Plasmas 15, 122312 (2008)] formulated a second-order nonlinear kinetic theory that describes the turbulence propagating in directions parallel/anti-parallel to the ambient magnetic field. Their theory also includes discrete-particle effects, or the effects due to spontaneously emitted thermal fluctuations. However, terms associated with the spontaneous fluctuations in particle and wave kinetic equations in their theory contain proper dimensionality only for an artificial one-dimensional situation. The present paper extends the analysis and re-derives the dimensionally correct kinetic equations for the three-dimensional case. The new formalism properly describes the effects of spontaneous fluctuations emitted in three-dimensional space, while the collectively emitted turbulence propagates predominantly in directions parallel/anti-parallel to the ambient magnetic field. As a first step, the present investigation focuses on linear wave-particle interaction terms only. A subsequent paper will include the dimensionally correct nonlinear wave-particle interaction terms.

  7. A New Approach to Parallel Dynamic Partitioning for Adaptive Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Gao, Guang R.

    1999-01-01

    Classical mesh partitioning algorithms were designed for rather static situations, and their straightforward application in a dynamical framework may lead to unsatisfactory results, e.g., excessive data migration among processors. Furthermore, special attention should be paid to their amenability to parallelization. In this paper, a novel parallel method for the dynamic partitioning of adaptive unstructured meshes is described. It is based on a linear representation of the mesh using self-avoiding walks.
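    The paper's key idea, linearize the mesh with a self-avoiding walk and then cut the linear order into contiguous chunks, can be illustrated with a simplified stand-in. Here a boustrophedon (serpentine) traversal of a structured grid plays the role of the self-avoiding walk: consecutive cells in the order are mesh neighbours, so contiguous chunks stay spatially coherent. The real method constructs such walks over unstructured, adaptive meshes.

```python
def serpentine_walk(nx, ny):
    """A simple self-avoiding walk over an nx-by-ny grid: traverse rows,
    alternating direction, so consecutive cells are mesh neighbours."""
    walk = []
    for j in range(ny):
        row = [(i, j) for i in range(nx)]
        walk.extend(row if j % 2 == 0 else row[::-1])
    return walk

def partition(walk, nparts):
    """Cut the linear order into nparts contiguous, near-equal chunks,
    one per processor."""
    n = len(walk)
    bounds = [round(k * n / nparts) for k in range(nparts + 1)]
    return [walk[bounds[k]:bounds[k + 1]] for k in range(nparts)]

parts = partition(serpentine_walk(8, 8), 4)
```

When the mesh adapts, only the walk segments near refined regions change, which limits data migration compared with repartitioning from scratch.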

  8. What is adaptive about adaptive decision making? A parallel constraint satisfaction account.

    PubMed

    Glöckner, Andreas; Hilbig, Benjamin E; Jekel, Marc

    2014-12-01

    There is broad consensus that human cognition is adaptive. However, the vital question of how exactly this adaptivity is achieved has remained largely open. Herein, we contrast two frameworks which account for adaptive decision making, namely broad and general single-mechanism accounts vs. multi-strategy accounts. We propose and fully specify a single-mechanism model for decision making based on parallel constraint satisfaction processes (PCS-DM) and contrast it theoretically and empirically against a multi-strategy account. To achieve sufficiently sensitive tests, we rely on a multiple-measure methodology including choice, reaction time, and confidence data as well as eye-tracking. Results show that manipulating the environmental structure produces clear adaptive shifts in choice patterns - as both frameworks would predict. However, results on the process level (reaction time, confidence), in information acquisition (eye-tracking), and from cross-predicting choice consistently corroborate single-mechanism accounts in general, and the proposed parallel constraint satisfaction model for decision making in particular. Copyright © 2014 Elsevier B.V. All rights reserved.
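    The general mechanics of a parallel constraint satisfaction network, nodes for cues and options linked by excitatory and inhibitory weights, with activations iteratively updated until the network settles, can be sketched as follows. The update rule is a common interactive-activation form and the weight matrix is entirely hypothetical; this is not the authors' PCS-DM parameterization.

```python
import numpy as np

def pcs_settle(weights, n_iter=200, decay=0.1, floor=-1.0, ceil=1.0):
    """Iteratively spread activation until the network settles.
    weights[i, j] is the (symmetric) link between nodes i and j."""
    act = np.zeros(len(weights))
    act[0] = 1.0                     # node 0: clamped source driving the network
    for _ in range(n_iter):
        net = weights @ act
        # Interactive-activation update: excitation pushes toward the ceiling,
        # inhibition toward the floor, with passive decay toward rest.
        growth = np.where(net > 0, (ceil - act) * net, (act - floor) * net)
        act = act * (1 - decay) + growth
        act[0] = 1.0                 # keep the source clamped
        act = np.clip(act, floor, ceil)
    return act

# Hypothetical 2-option choice: the source validates cue 1 strongly and
# cue 2 weakly; cue 1 favours option A, cue 2 favours option B; the two
# options inhibit each other.
#             src  cue1  cue2  optA  optB
w = np.array([[0.0, 0.5, 0.3, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.4, -0.4],
              [0.3, 0.0, 0.0, -0.4, 0.4],
              [0.0, 0.4, -0.4, 0.0, -0.2],
              [0.0, -0.4, 0.4, -0.2, 0.0]])
act = pcs_settle(w)
```

After settling, the option consistent with the more valid cue carries the higher activation, which is how such networks produce adaptive shifts when the cue environment changes.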

  9. Computation of free energy profiles with parallel adaptive dynamics

    NASA Astrophysics Data System (ADS)

    Lelièvre, Tony; Rousset, Mathias; Stoltz, Gabriel

    2007-04-01

    We propose a formulation of an adaptive computation of free energy differences, in the adaptive biasing force or nonequilibrium metadynamics spirit, using conditional distributions of samples of configurations which evolve in time. This allows us to present a truly unifying framework for these methods, and to prove convergence results for certain classes of algorithms. From a numerical viewpoint, a parallel implementation of these methods is very natural, the replicas interacting through the reconstructed free energy. We demonstrate how to improve this parallel implementation by resorting to some selection mechanism on the replicas. This is illustrated by computations on a model system of conformational changes.
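    A stripped-down 1D version of the adaptive dynamics described here can be sketched directly: several replicas evolve under overdamped Langevin dynamics, interacting only through a shared running estimate of the mean force, which is applied as a bias (the adaptive biasing force spirit the abstract mentions). The potential, parameters, and discretization are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def dV(x):                      # force along the reaction coordinate
    return 4 * x * (x**2 - 1)   # double-well potential V(x) = (x^2 - 1)^2

lo, hi, nbins = -1.6, 1.6, 32
edges = np.linspace(lo, hi, nbins + 1)
force_sum = np.zeros(nbins)     # accumulators shared by all replicas
count = np.zeros(nbins)

n_rep, n_steps, dt, beta = 8, 20000, 1e-3, 1.0
x = rng.uniform(lo, hi, n_rep)  # replicas interact only via the shared bias

for _ in range(n_steps):
    b = np.clip(np.digitize(x, edges) - 1, 0, nbins - 1)
    np.add.at(force_sum, b, dV(x))   # unbuffered accumulation (repeated bins)
    np.add.at(count, b, 1)
    mean_force = force_sum[b] / count[b]          # running estimate of dA/dx
    # Biased overdamped Langevin step: the adaptive bias cancels the mean force,
    # flattening the landscape so replicas diffuse across the barrier.
    x += (-dV(x) + mean_force) * dt + np.sqrt(2 * dt / beta) * rng.normal(size=n_rep)
    x = np.clip(x, lo, hi)

# Reconstruct the free energy profile by integrating the mean force.
free_energy = np.cumsum(force_sum / np.maximum(count, 1)) * (hi - lo) / nbins
free_energy -= free_energy.min()
```

In this 1D toy the free energy coincides with the potential itself, so the reconstructed profile should recover the two wells near x = ±1 and the barrier at x = 0.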

  10. Dual-thread parallel control strategy for ophthalmic adaptive optics.

    PubMed

    Yu, Yongxin; Zhang, Yuhua

    To improve ophthalmic adaptive optics speed and compensate for ocular wavefront aberration of high temporal frequency, the adaptive optics wavefront correction has been implemented with a control scheme including 2 parallel threads; one is dedicated to wavefront detection and the other conducts wavefront reconstruction and compensation. With a custom Shack-Hartmann wavefront sensor that measures the ocular wave aberration with 193 subapertures across the pupil, adaptive optics has achieved a closed loop updating frequency up to 110 Hz, and demonstrated robust compensation for ocular wave aberration up to 50 Hz in an adaptive optics scanning laser ophthalmoscope.

  11. Dual-thread parallel control strategy for ophthalmic adaptive optics

    PubMed Central

    Yu, Yongxin; Zhang, Yuhua

    2015-01-01

    To improve ophthalmic adaptive optics speed and compensate for ocular wavefront aberration of high temporal frequency, the adaptive optics wavefront correction has been implemented with a control scheme including 2 parallel threads; one is dedicated to wavefront detection and the other conducts wavefront reconstruction and compensation. With a custom Shack-Hartmann wavefront sensor that measures the ocular wave aberration with 193 subapertures across the pupil, adaptive optics has achieved a closed loop updating frequency up to 110 Hz, and demonstrated robust compensation for ocular wave aberration up to 50 Hz in an adaptive optics scanning laser ophthalmoscope. PMID:25866498
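    The two-thread control scheme described in these records, one thread dedicated to wavefront detection, the other to reconstruction and compensation, is essentially a producer-consumer pipeline. The sketch below shows the threading pattern only, with trivial stand-ins for the Shack-Hartmann readout and the reconstructor; the slope vector, rates, and "drop stale frames" policy are illustrative assumptions, not details from the paper.

```python
import threading, queue, time

measurements = queue.Queue(maxsize=1)   # hold only the latest slope frame
corrections = []
stop = threading.Event()

def detection_thread():
    """Thread 1: wavefront sensing at its own rate (stand-in for a
    Shack-Hartmann readout producing subaperture slopes)."""
    frame = 0
    while not stop.is_set():
        slopes = [0.1 * frame] * 4           # hypothetical slope vector
        try:
            measurements.put_nowait(slopes)  # drop the frame if consumer is busy
        except queue.Full:
            pass
        frame += 1
        time.sleep(0.001)

def reconstruction_thread():
    """Thread 2: wavefront reconstruction and corrector command update."""
    while not stop.is_set():
        try:
            slopes = measurements.get(timeout=0.01)
        except queue.Empty:
            continue
        command = [-s for s in slopes]       # trivial stand-in reconstructor
        corrections.append(command)

t1 = threading.Thread(target=detection_thread)
t2 = threading.Thread(target=reconstruction_thread)
t1.start(); t2.start()
time.sleep(0.1)
stop.set()
t1.join(); t2.join()
```

Because the two stages overlap in time instead of alternating, the closed-loop update rate is limited by the slower stage rather than by the sum of both, which is the speed advantage the abstract reports.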

  12. Predator-induced phenotypic plasticity of shape and behavior: parallel and unique patterns across sexes and species

    PubMed Central

    Kinnison, Michael T.

    2017-01-01

    Abstract Phenotypic plasticity is often an adaptation of organisms to cope with temporally or spatially heterogeneous landscapes. Like other adaptations, one would predict that different species, populations, or sexes might thus show some degree of parallel evolution of plasticity, in the form of parallel reaction norms, when exposed to analogous environmental gradients. Indeed, one might even expect parallelism of plasticity to repeatedly evolve in multiple traits responding to the same gradient, resulting in integrated parallelism of plasticity. In this study, we experimentally tested for parallel patterns of predator-mediated plasticity of size, shape, and behavior of 2 species and sexes of mosquitofish. Examination of behavioral trials indicated that the 2 species showed unique patterns of behavioral plasticity, whereas the 2 sexes in each species showed parallel responses. Fish shape showed parallel patterns of plasticity for both sexes and species, albeit males showed evidence of unique plasticity related to reproductive anatomy. Moreover, patterns of shape plasticity due to predator exposure were broadly parallel to what has been depicted for predator-mediated population divergence in other studies (slender bodies, expanded caudal regions, ventrally located eyes, and reduced male gonopodia). We did not find evidence of phenotypic plasticity in fish size for either species or sex. Hence, our findings support broadly integrated parallelism of plasticity for sexes within species and less integrated parallelism for species. We interpret these findings with respect to their potential broader implications for the interacting roles of adaptation and constraint in the evolutionary origins of parallelism of plasticity in general. PMID:29491997

  13. Clinical quality needs complex adaptive systems and machine learning.

    PubMed

    Marsland, Stephen; Buchan, Iain

    2004-01-01

    The vast increase in clinical data has the potential to bring about large improvements in clinical quality and other aspects of healthcare delivery. However, such benefits do not come without cost. The analysis of such large datasets, particularly where the data may have to be merged from several sources and may be noisy and incomplete, is a challenging task. Furthermore, the introduction of clinical changes is a cyclical task, meaning that the processes under examination operate in an environment that is not static. We suggest that traditional methods of analysis are unsuitable for the task, and identify complexity theory and machine learning as areas that have the potential to facilitate the examination of clinical quality. By its nature the field of complex adaptive systems deals with environments that change because of the interactions that have occurred in the past. We draw parallels between health informatics and bioinformatics, which has already started to successfully use machine learning methods.

  14. Tile-based Level of Detail for the Parallel Age

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niski, K; Cohen, J D

    Today's PCs incorporate multiple CPUs and GPUs and are easily arranged in clusters for high-performance, interactive graphics. We present an approach to parallelizing rendering with level of detail, based on hierarchical, screen-space tiles. Adapt tiles, render tiles, and machine tiles are associated with CPUs, GPUs, and PCs, respectively, to efficiently parallelize the workload with good resource utilization. Adaptive tile sizes provide load balancing, while our level-of-detail system allows total and independent management of the load on CPUs and GPUs. We demonstrate our approach on parallel configurations consisting of both single PCs and a cluster of PCs.

  15. AdiosStMan: Parallelizing Casacore Table Data System using Adaptive IO System

    NASA Astrophysics Data System (ADS)

    Wang, R.; Harris, C.; Wicenec, A.

    2016-07-01

    In this paper, we investigate the Casacore Table Data System (CTDS) used in the casacore and CASA libraries, and methods to parallelize it. CTDS provides a storage manager plugin mechanism for third-party developers to design and implement their own CTDS storage managers. With this in mind, we looked into various storage backend techniques that could enable parallel I/O for CTDS by implementing new storage managers. After carrying out benchmarks showing the excellent parallel I/O throughput of the Adaptive IO System (ADIOS), we implemented an ADIOS-based parallel CTDS storage manager. We then applied the CASA MSTransform frequency split task to verify the ADIOS storage manager. We also ran a series of performance tests to examine the I/O throughput in a massively parallel scenario.

  16. Non-adaptive plasticity potentiates rapid adaptive evolution of gene expression in nature.

    PubMed

    Ghalambor, Cameron K; Hoke, Kim L; Ruell, Emily W; Fischer, Eva K; Reznick, David N; Hughes, Kimberly A

    2015-09-17

    Phenotypic plasticity is the capacity for an individual genotype to produce different phenotypes in response to environmental variation. Most traits are plastic, but the degree to which plasticity is adaptive or non-adaptive depends on whether environmentally induced phenotypes are closer or further away from the local optimum. Existing theories make conflicting predictions about whether plasticity constrains or facilitates adaptive evolution. Debate persists because few empirical studies have tested the relationship between initial plasticity and subsequent adaptive evolution in natural populations. Here we show that the direction of plasticity in gene expression is generally opposite to the direction of adaptive evolution. We experimentally transplanted Trinidadian guppies (Poecilia reticulata) adapted to living with cichlid predators to cichlid-free streams, and tested for evolutionary divergence in brain gene expression patterns after three to four generations. We find 135 transcripts that evolved parallel changes in expression within the replicated introduction populations. These changes are in the same direction exhibited in a native cichlid-free population, suggesting rapid adaptive evolution. We find 89% of these transcripts exhibited non-adaptive plastic changes in expression when the source population was reared in the absence of predators, as they are in the opposite direction to the evolved changes. By contrast, the remaining transcripts exhibiting adaptive plasticity show reduced population divergence. Furthermore, the most plastic transcripts in the source population evolved reduced plasticity in the introduction populations, suggesting strong selection against non-adaptive plasticity. These results support models predicting that adaptive plasticity constrains evolution, whereas non-adaptive plasticity potentiates evolution by increasing the strength of directional selection. 
The role of non-adaptive plasticity in evolution has received relatively little attention; however, our results suggest that it may be an important mechanism that predicts evolutionary responses to new environments.

  17. Career Preparation: A Longitudinal, Process-Oriented Examination

    PubMed Central

    Stringer, Kate; Kerpelman, Jennifer; Skorikov, Vladimir

    2011-01-01

    Preparing for an adult career through careful planning, choosing a career, and gaining confidence to achieve career goals is a primary task during adolescence and early adulthood. The current study bridged identity process literature and career construction theory (Savickas, 2005) by examining the commitment component of career adaptability, career preparation (i.e., career planning, career decision-making, and career confidence), from an identity process perspective (Luyckx, Goossens, & Soenens, 2006). Research has suggested that career preparation dimensions are interrelated during adolescence and early adulthood; however, what remains to be known is how each dimension changes over time and the interrelationships among the dimensions during the transition from high school. Drawing parallels between career preparation and identity development dimensions, the current study addressed these questions by examining the patterns of change in each career preparation dimension and parallel process models that tested associations among the slopes and intercepts of the career preparation dimensions. Results showed that the career preparation dimensions were not developing similarly over time, although each dimension was associated cross-sectionally and longitudinally with the other dimensions. Results also suggested that career planning and decision-making precede career confidence. The results of the current study supported career construction theory and showed similarities between the processes of career preparation and identity development. PMID:21804641

  19. A Comparison of Parallelism in Interface Designs for Computer-Based Learning Environments

    ERIC Educational Resources Information Center

    Min, Rik; Yu, Tao; Spenkelink, Gerd; Vos, Hans

    2004-01-01

    In this paper we discuss an experiment that was carried out with a prototype, designed in conformity with the concept of parallelism and the Parallel Instruction theory (the PI theory). We designed this prototype with five different interfaces, and ran an empirical study in which 18 participants completed an abstract task. The five basic designs…

  20. The Importance of Considering Differences in Study Design in Network Meta-analysis: An Application Using Anti-Tumor Necrosis Factor Drugs for Ulcerative Colitis.

    PubMed

    Cameron, Chris; Ewara, Emmanuel; Wilson, Florence R; Varu, Abhishek; Dyrda, Peter; Hutton, Brian; Ingham, Michael

    2017-11-01

    Adaptive trial designs present a methodological challenge when performing network meta-analysis (NMA), as data from such adaptive trial designs differ from conventional parallel design randomized controlled trials (RCTs). We aim to illustrate the importance of considering study design when conducting an NMA. Three NMAs comparing anti-tumor necrosis factor drugs for ulcerative colitis were compared and the analyses replicated using Bayesian NMA. The NMA comprised 3 RCTs comparing 4 treatments (adalimumab 40 mg, golimumab 50 mg, golimumab 100 mg, infliximab 5 mg/kg) and placebo. We investigated the impact of incorporating differences in the study design among the 3 RCTs and presented 3 alternative methods for converting outcome data derived from one form of adaptive design to a format consistent with conventional parallel RCTs. Combining RCT results without considering variations in study design resulted in effect estimates that were biased against golimumab. In contrast, using the 3 alternative methods to convert the outcome data facilitated more transparent consideration of differences in study design. This approach is more likely to yield appropriate estimates of comparative efficacy when conducting an NMA that includes treatments evaluated under an alternative study design. RCTs based on adaptive study designs should not be combined with traditional parallel RCT designs in NMA. We have presented potential approaches for converting data from an adaptive design to a format consistent with conventional parallel RCTs, facilitating transparent and less-biased comparisons.

  1. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE)

    PubMed Central

    Sharif, Behzad; Derbyshire, J. Andrew; Faranesh, Anthony Z.; Bresler, Yoram

    2010-01-01

    MR imaging of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional non-gated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly-accelerated non-gated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically-driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient-adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., subject’s heart-rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in-vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high resolution non-gated cardiac MRI during a short breath-hold. PMID:20665794

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Weizhou; Zhang, Yu; Sun, Tao

    High-level coupled cluster singles, doubles, and perturbative triples [CCSD(T)] computations with up to the aug-cc-pVQZ basis set (1924 basis functions) and various extrapolations toward the complete basis set (CBS) limit are presented for the sandwich, T-shaped, and parallel-displaced benzene⋯naphthalene complex. Using the CCSD(T)/CBS interaction energies as a benchmark, the performance of some newly developed wave function and density functional theory methods has been evaluated. The best performing methods were found to be the dispersion-corrected PBE0 functional (PBE0-D3) and spin-component scaled zeroth-order symmetry-adapted perturbation theory (SCS-SAPT0). The success of SCS-SAPT0 is very encouraging because it provides one method for energy component analysis of π-stacked complexes with 200 atoms or more. Most newly developed methods do, however, overestimate the interaction energies. The results of energy component analysis show that interaction energies are overestimated mainly due to the overestimation of dispersion energy.

  3. A new parallelization scheme for adaptive mesh refinement

    DOE PAGES

    Loffler, Frank; Cao, Zhoujian; Brandt, Steven R.; ...

    2016-05-06

    Here, we present a new method for parallelization of adaptive mesh refinement called Concurrent Structured Adaptive Mesh Refinement (CSAMR). This new method offers the lower computational cost (i.e. wall time x processor count) of subcycling in time, but with the runtime performance (i.e. smaller wall time) of evolving all levels at once using the time step of the finest level (which does more work than subcycling but has less parallelism). We demonstrate our algorithm's effectiveness using an adaptive mesh refinement code, AMSS-NCKU, and show performance on Blue Waters and other high performance clusters. For the class of problem considered in this paper, our algorithm achieves a speedup of 1.7-1.9 when the processor count for a given AMR run is doubled, consistent with our theoretical predictions.
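
    To make the subcycling-versus-unified-time-step trade-off concrete, here is a back-of-the-envelope sketch (ours, not the authors') for a hypothetical refinement-factor-2 AMR hierarchy: subcycling does less total work per coarse step, while stepping every level at the finest time step does more work but exposes more parallelism.

```python
def work_subcycled(cells):
    """Cell-updates per coarse step with subcycling: level l advances 2**l times."""
    return sum(c * 2 ** l for l, c in enumerate(cells))

def work_unified(cells):
    """Cell-updates when every level advances with the finest level's time step."""
    steps = 2 ** (len(cells) - 1)
    return steps * sum(cells)

cells = [1000, 1000, 1000]  # coarse -> fine, equal cell counts per level
print(work_subcycled(cells), work_unified(cells))  # → 7000 12000
```

    CSAMR, as described above, aims at the smaller work count of the first formula while retaining the concurrency of the second.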

  4. A new parallelization scheme for adaptive mesh refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loffler, Frank; Cao, Zhoujian; Brandt, Steven R.

    Here, we present a new method for parallelization of adaptive mesh refinement called Concurrent Structured Adaptive Mesh Refinement (CSAMR). This new method offers the lower computational cost (i.e. wall time x processor count) of subcycling in time, but with the runtime performance (i.e. smaller wall time) of evolving all levels at once using the time step of the finest level (which does more work than subcycling but has less parallelism). We demonstrate our algorithm's effectiveness using an adaptive mesh refinement code, AMSS-NCKU, and show performance on Blue Waters and other high performance clusters. For the class of problem considered inmore » this paper, our algorithm achieves a speedup of 1.7-1.9 when the processor count for a given AMR run is doubled, consistent with our theoretical predictions.« less

  5. IOPA: I/O-aware parallelism adaption for parallel programs

    PubMed Central

    Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei

    2017-01-01

    With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads. PMID:28278236
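
    The parallelism-control loop described above can be sketched as a throughput-guided search for the I/O thread count. This is a hedged sketch of the general idea only, not the IOPA programming interface; `throughput` stands in for a measurement the real system would take at run time.

```python
def tune_threads(throughput, lo=1, hi=64):
    """Double the thread count while measured throughput keeps improving."""
    best_n, best_t = lo, throughput(lo)
    n = lo
    while n < hi:
        n *= 2
        t = throughput(n)
        if t <= best_t:
            break  # I/O bandwidth saturated: more threads no longer help
        best_n, best_t = n, t
    return best_n

# Toy model: throughput (MB/s) peaks at 8 threads, then contention hurts it.
model = {1: 100, 2: 190, 4: 340, 8: 500, 16: 450, 32: 300, 64: 200}
print(tune_threads(model.__getitem__))  # → 8
```

    A production mechanism would re-run this probe periodically, since the balance between computing resources and I/O bandwidth shifts with the workload and the storage device (SSD vs. HDD).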

  7. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.

  8. Modeling the role of parallel processing in visual search.

    PubMed

    Cave, K R; Wolfe, J M

    1990-04-01

    Treisman's Feature Integration Theory and Julesz's Texton Theory explain many aspects of visual search. However, these theories require that parallel processing mechanisms not be used in many visual searches for which they would be useful, and they imply that visual processing should be much slower than it is. Most importantly, they cannot account for recent data showing that some subjects can perform some conjunction searches very efficiently. Feature Integration Theory can be modified so that it accounts for these data and helps to answer these questions. In this new theory, which we call Guided Search, the parallel stage guides the serial stage as it chooses display elements to process. A computer simulation of Guided Search produces the same general patterns as human subjects in a number of different types of visual search.
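
    Purely as an illustration of the Guided Search idea (not the authors' simulation): the parallel stage can be modeled as a per-item activation map, which then orders the serial stage's inspections. All names and the feature encoding below are assumptions made for the sketch.

```python
def guided_search(display, target):
    """Return how many items the serial stage inspects before finding the target."""
    # Parallel stage: one activation per item = number of target features shared.
    activation = [sum(f in target for f in item) for item in display]
    # Serial stage: visit items in decreasing activation until the target is found.
    order = sorted(range(len(display)), key=lambda i: -activation[i])
    for inspected, i in enumerate(order, start=1):
        if set(display[i]) == set(target):
            return inspected
    return None

# Conjunction search: the red vertical target among partially matching distractors.
display = [("red", "horizontal"), ("green", "vertical"), ("red", "vertical")]
print(guided_search(display, ("red", "vertical")))  # → 1: guidance finds it first
```

    Because the target shares the most features with itself, it tops the activation ranking and is inspected early, which is how the model accounts for efficient conjunction searches.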

  9. Candidate genes and adaptive radiation: insights from transcriptional adaptation to the limnetic niche among coregonine fishes (Coregonus spp., Salmonidae).

    PubMed

    Jeukens, Julie; Bittner, David; Knudsen, Rune; Bernatchez, Louis

    2009-01-01

    In the past 40 years, there has been increasing acceptance that variation in levels of gene expression represents a major source of evolutionary novelty. Gene expression divergence is therefore likely to be involved in the emergence of incipient species, namely, in a context of adaptive radiation. In the lake whitefish species complex (Coregonus clupeaformis), previous microarray experiments have led to the identification of candidate genes potentially implicated in the parallel evolution of the limnetic dwarf lake whitefish, which is highly distinct from the benthic normal lake whitefish in life history, morphology, metabolism, and behavior, and yet diverged from it only approximately 15,000 years before present. The aim of the present study was to address transcriptional divergence for six candidate genes among lake whitefish and European whitefish (Coregonus lavaretus) species pairs, as well as lake cisco (Coregonus artedi) and vendace (Coregonus albula). The main goal was to test the hypothesis that parallel phenotypic adaptation toward the use of the limnetic niche in coregonine fishes is accompanied by parallelism in candidate gene transcription as measured by quantitative real-time polymerase chain reaction. Results obtained for three candidate genes, whereby parallelism in expression was observed across all whitefish species pairs, provide strong support for the hypothesis that divergent natural selection plays an important role in the adaptive radiation of whitefish species. However, this parallelism in expression did not extend to cisco and vendace, thereby infirming transcriptional convergence between limnetic whitefish species and their limnetic congeners for these genes. As recently proposed (Lynch 2007a. The evolution of genetic networks by non-adaptive processes. Nat Rev Genet. 
8:803-813), these results may suggest that convergent phenotypic evolution can result from nonadaptive shaping of genome architecture in independently evolved coregonine lineages.

  10. Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine

    2004-01-01

    We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.

  11. Wavelet Transforms in Parallel Image Processing

    DTIC Science & Technology

    1994-01-27

    [Garbled report-documentation page; recoverable keywords: object segmentation, texture segmentation, image compression, image halftoning, neural networks, parallel algorithms, 2D and 3D vector quantization of wavelet transform coefficients, adaptive image halftoning based on wavelet transforms.] One application has been directed to adaptive image halftoning, in which the gray information at a pixel, including its gray value and gradient, is represented by…

  12. Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    NASA Astrophysics Data System (ADS)

    Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin

    2016-08-01

    This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation as optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used in order to fill the occupancy of each GPU with many replicas, providing a performance boost that is more pronounced at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance is highly favorable in a weak scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended the reachable simulation sizes to L = 32, 64 on a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
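
    The mid-point insertion strategy can be sketched as follows. This is an illustrative reconstruction from the abstract: the exchange-rate threshold and the flat list representation are our assumptions, not the criterion or data structures used in the paper.

```python
def insert_midpoints(temps, rates, threshold=0.2):
    """Insert a temperature mid-point wherever the neighbour exchange rate is too low.

    temps: sorted replica temperatures; rates[i] is the measured swap rate
    between temps[i] and temps[i+1].
    """
    out = [temps[0]]
    for t0, t1, r in zip(temps, temps[1:], rates):
        if r < threshold:
            out.append(0.5 * (t0 + t1))  # bottleneck gap: add a replica here
        out.append(t1)
    return out

temps = [1.0, 1.5, 2.5, 3.0]
rates = [0.4, 0.05, 0.5]  # the 1.5-2.5 gap exchanges too rarely
print(insert_midpoints(temps, rates))  # → [1.0, 1.5, 2.0, 2.5, 3.0]
```

    Iterating this until every gap's exchange rate clears the threshold yields a temperature set free of exchange bottlenecks; the extra replicas then have to be rebalanced across the GPUs, as the abstract notes.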

  13. Parallel Evolution of Cold Tolerance within Drosophila melanogaster

    PubMed Central

    Braun, Dylan T.; Lack, Justin B.

    2017-01-01

    Drosophila melanogaster originated in tropical Africa before expanding into strikingly different temperate climates in Eurasia and beyond. Here, we find elevated cold tolerance in three distinct geographic regions: beyond the well-studied non-African case, we show that populations from the highlands of Ethiopia and South Africa have significantly increased cold tolerance as well. We observe greater cold tolerance in outbred versus inbred flies, but only in populations with higher inversion frequencies. Each cold-adapted population shows lower inversion frequencies than a closely-related warm-adapted population, suggesting that inversion frequencies may decrease with altitude in addition to latitude. Using the FST-based “Population Branch Excess” statistic (PBE), we found only limited evidence for parallel genetic differentiation at the scale of ∼4 kb windows, specifically between Ethiopian and South African cold-adapted populations. And yet, when we looked for single nucleotide polymorphisms (SNPs) with codirectional frequency change in two or three cold-adapted populations, strong genomic enrichments were observed from all comparisons. These findings could reflect an important role for selection on standing genetic variation leading to “soft sweeps”. One SNP showed sufficient codirectional frequency change in all cold-adapted populations to achieve experiment-wide significance: an intronic variant in the synaptic gene Prosap. Another codirectional outlier SNP, at senseless-2, had a strong association with our cold trait measurements, but in the opposite direction as predicted. More generally, proteins involved in neurotransmission were enriched as potential targets of parallel adaptation. The ability to study cold tolerance evolution in a parallel framework will enhance this classic study system for climate adaptation. PMID:27777283

  14. Conceptual change and preschoolers' theory of mind: evidence from load-force adaptation.

    PubMed

    Sabbagh, Mark A; Hopkins, Sydney F R; Benson, Jeannette E; Flanagan, J Randall

    2010-01-01

    Prominent theories of preschoolers' theory of mind development have included a central role for changing or adapting existing conceptual structures in response to experiences. Because of the relatively protracted timetable of theory of mind development, it has been difficult to test this assumption about the role of adaptation directly. To gain evidence that cognitive adaptation is particularly important for theory of mind development, we sought to determine whether individual differences in cognitive adaptation in a non-social domain predicted preschoolers' theory of mind development. Twenty-five preschoolers were tested on batteries of theory of mind tasks, executive functioning tasks, and on their ability to adapt their lifting behavior to smoothly lift an unexpectedly heavy object. Results showed that children who adapted their lifting behavior more rapidly performed better on theory of mind tasks than those who adapted more slowly. These findings held up when age and performance on the executive functioning battery were statistically controlled. Although preliminary, we argue that this relation is attributable to individual differences in children's domain general abilities to efficiently change existing conceptual structures in response to experience. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. Parallel, adaptive finite element methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.

    1994-01-01

    We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.

  16. Load Balancing Unstructured Adaptive Grids for CFD Problems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid

    1996-01-01

    Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. A dynamic load balancing method is presented that balances the workload across all processors with a global view. After each parallel tetrahedral mesh adaption, the method first determines if the new mesh is sufficiently unbalanced to warrant a repartitioning. If so, the adapted mesh is repartitioned, with new partitions assigned to processors so that the redistribution cost is minimized. The new partitions are accepted only if the remapping cost is compensated by the improved load balance. Results indicate that this strategy is effective for large-scale scientific computations on distributed-memory multiprocessors.
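
    The accept-only-if-compensated rule can be sketched as a simple cost comparison. The function below is our illustrative reconstruction, assuming imbalance and remapping cost are expressed in the same time units; the paper's actual cost model and metrics differ.

```python
def should_remap(old_loads, new_loads, remap_cost, steps_until_next_adapt):
    """Accept the new partitioning only if the per-step gain repays remap_cost."""
    # Imbalance = how long the busiest processor waits past the average load.
    old_imbalance = max(old_loads) - sum(old_loads) / len(old_loads)
    new_imbalance = max(new_loads) - sum(new_loads) / len(new_loads)
    gain_per_step = old_imbalance - new_imbalance
    return gain_per_step * steps_until_next_adapt > remap_cost

# Four processors, badly skewed before and nearly even after repartitioning:
# a gain of 6 time units per step over 10 steps repays a remap cost of 20.
print(should_remap([10, 2, 2, 2], [4, 4, 4, 4],
                   remap_cost=20, steps_until_next_adapt=10))  # → True
```

    With a higher redistribution cost (say 100 in the same toy units) the same repartitioning would be rejected, which is exactly the "accepted only if the remapping cost is compensated" behavior described above.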

  17. Tensor contraction engine: Abstraction and automated parallel implementation of configuration-interaction, coupled-cluster, and many-body perturbation theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirata, So

    2003-11-20

    We develop a symbolic manipulation program and program generator (Tensor Contraction Engine or TCE) that automatically derives the working equations of a well-defined model of second-quantized many-electron theories and synthesizes efficient parallel computer programs on the basis of these equations. Provided an ansatz of a many-electron theory model, TCE performs valid contractions of creation and annihilation operators according to Wick's theorem, consolidates identical terms, and reduces the expressions into the form of multiple tensor contractions acted by permutation operators. Subsequently, it determines the binary contraction order for each multiple tensor contraction with the minimal operation and memory cost, factorizes common binary contractions (defines intermediate tensors), and identifies reusable intermediates. The resulting ordered list of binary tensor contractions, additions, and index permutations is translated into an optimized program that is combined with the NWChem and UTChem computational chemistry software packages. The programs synthesized by TCE take advantage of spin symmetry, Abelian point-group symmetry, and index permutation symmetry at every stage of calculations to minimize the number of arithmetic operations and storage requirement, adjust the peak local memory usage by index range tiling, and support parallel I/O interfaces and dynamic load balancing for parallel executions. We demonstrate the utility of TCE through automatic derivation and implementation of parallel programs for various models of configuration-interaction theory (CISD, CISDT, CISDTQ), many-body perturbation theory [MBPT(2), MBPT(3), MBPT(4)], and coupled-cluster theory (LCCD, CCD, LCCSD, CCSD, QCISD, CCSDT, and CCSDTQ).
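
    The "minimal operation cost binary contraction order" step has a classical special case: choosing the cheapest parenthesization of a matrix chain. The sketch below illustrates only that special case via standard dynamic programming, not TCE's general machinery for permutation symmetry and memory cost.

```python
def best_order_cost(dims):
    """Minimal scalar multiplications to contract a chain of matrices.

    dims has length n+1 for n matrices; matrix i has shape dims[i] x dims[i+1].
    """
    n = len(dims) - 1
    cost = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            # Try every binary split point k and keep the cheapest.
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j))
    return cost[0][n - 1]

# (A·B)·C costs 7500 multiplications vs 75000 for A·(B·C)
# when A is 10x100, B is 100x5, and C is 5x50.
print(best_order_cost([10, 100, 5, 50]))  # → 7500
```

    In the many-index tensor setting the same idea applies, but the search must also weigh intermediate-tensor storage, which is why TCE optimizes operation and memory cost jointly.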

  18. Extent of QTL Reuse During Repeated Phenotypic Divergence of Sympatric Threespine Stickleback.

    PubMed

    Conte, Gina L; Arnegard, Matthew E; Best, Jacob; Chan, Yingguang Frank; Jones, Felicity C; Kingsley, David M; Schluter, Dolph; Peichel, Catherine L

    2015-11-01

    How predictable is the genetic basis of phenotypic adaptation? Answering this question begins by estimating the repeatability of adaptation at the genetic level. Here, we provide a comprehensive estimate of the repeatability of the genetic basis of adaptive phenotypic evolution in a natural system. We used quantitative trait locus (QTL) mapping to discover genomic regions controlling a large number of morphological traits that have diverged in parallel between pairs of threespine stickleback (Gasterosteus aculeatus species complex) in Paxton and Priest lakes, British Columbia. We found that nearly half of QTL affected the same traits in the same direction in both species pairs. Another 40% influenced a parallel phenotypic trait in one lake but not the other. The remaining 10% of QTL had phenotypic effects in opposite directions in the two species pairs. Similarity in the proportional contributions of all QTL to parallel trait differences was about 0.4. Surprisingly, QTL reuse was unrelated to phenotypic effect size. Our results indicate that repeated use of the same genomic regions is a pervasive feature of parallel phenotypic adaptation, at least in sticklebacks. Identifying the causes of this pattern would aid prediction of the genetic basis of phenotypic evolution. Copyright © 2015 by the Genetics Society of America.

  19. Advances and trends in structures and dynamics; Proceedings of the Symposium, Washington, DC, October 22-25, 1984

    NASA Technical Reports Server (NTRS)

    Noor, A. K. (Editor); Hayduk, R. J. (Editor)

    1985-01-01

    Among the topics discussed are developments in structural engineering hardware and software, computation for fracture mechanics, trends in numerical analysis and parallel algorithms, mechanics of materials, advances in finite element methods, composite materials and structures, determinations of random motion and dynamic response, optimization theory, automotive tire modeling methods and contact problems, the damping and control of aircraft structures, and advanced structural applications. Specific topics covered include structural design expert systems, the evaluation of finite element system architectures, systolic arrays for finite element analyses, nonlinear finite element computations, hierarchical boundary elements, adaptive substructuring techniques in elastoplastic finite element analyses, automatic tracking of crack propagation, a theory of rate-dependent plasticity, the torsional stability of nonlinear eccentric structures, a computation method for fluid-structure interaction, the seismic analysis of three-dimensional soil-structure interaction, a stress analysis for a composite sandwich panel, toughness criterion identification for unidirectional composite laminates, the modeling of submerged cable dynamics, and damping synthesis for flexible spacecraft structures.

  20. apGA: An adaptive parallel genetic algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liepins, G.E.; Baluja, S.

    1991-01-01

    We develop apGA, a parallel variant of the standard generational GA, that combines aggressive search with perpetual novelty, yet is able to preserve enough genetic structure to optimally solve variably scaled, non-uniform block deceptive and hierarchical deceptive problems. apGA combines elitism, adaptive mutation, adaptive exponential scaling, and temporal memory. We present empirical results for six classes of problems, including the DeJong test suite. Although we have not investigated hybrids, we note that apGA could be incorporated into other recent GA variants such as GENITOR, CHC, and the recombination stage of mGA. 12 refs., 2 figs., 2 tabs.
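
    The elitism and adaptive mutation named above can be sketched on a simple bit-string problem. This is an illustrative toy, not the published apGA: the stall-triggered mutation rule, truncation selection, and OneMax fitness are all assumptions, and apGA's adaptive scaling and temporal memory are omitted:

```python
import random

def one_max(bits):
    """Toy fitness: number of 1-bits."""
    return sum(bits)

def apga_sketch(n_bits=32, pop_size=40, generations=60, seed=1):
    """Generational GA with elitism and a guessed adaptive-mutation rule:
    the mutation rate rises while the best fitness stalls, and resets to
    1/n_bits after an improvement."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    mut = 1.0 / n_bits
    best_fit = max(one_max(ind) for ind in pop)
    for _ in range(generations):
        pop.sort(key=one_max, reverse=True)
        elite = pop[0][:]                      # elitism: keep the best intact
        new_best = one_max(elite)
        mut = min(0.5, mut * 1.5) if new_best <= best_fit else 1.0 / n_bits
        best_fit = max(best_fit, new_best)
        nxt = [elite]
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[: pop_size // 2], 2)  # truncation selection
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                   # one-point crossover
            child = [g ^ (rng.random() < mut) for g in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return best_fit
```

    Because the elite individual survives every generation, the best fitness is non-decreasing, which is the property elitism buys in apGA as well.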

  1. In-memory integration of existing software components for parallel adaptive unstructured mesh workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett

    Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general three-dimensional domains. The most reliable and efficient methods apply adaptive procedures with a-posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the meshes required are so large that they can only be solved on massively parallel systems.

  2. In-memory integration of existing software components for parallel adaptive unstructured mesh workflows

    DOE PAGES

    Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett; ...

    2017-01-01

    Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general three-dimensional domains. The most reliable and efficient methods apply adaptive procedures with a-posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the meshes required are so large that they can only be solved on massively parallel systems.

  3. Genetic adaptations of the plateau zokor in high-elevation burrows.

    PubMed

    Shao, Yong; Li, Jin-Xiu; Ge, Ri-Li; Zhong, Li; Irwin, David M; Murphy, Robert W; Zhang, Ya-Ping

    2015-11-25

    The plateau zokor (Myospalax baileyi) spends its entire life underground in sealed burrows. Confronting limited oxygen, high carbon dioxide concentrations, and complete darkness, it epitomizes a successful physiological adaptation. Here, we employ transcriptome sequencing to explore the genetic underpinnings of its adaptations to this unique habitat. Compared to Rattus norvegicus, genes belonging to GO categories related to energy metabolism (e.g. mitochondrion and fatty acid beta-oxidation) underwent accelerated evolution in the plateau zokor. Furthermore, positively selected genes were significantly enriched in the gene categories involved in ATPase activity, blood vessel development and respiratory gaseous exchange, functional categories that are relevant to adaptation to high altitudes. Among the 787 genes with evidence of parallel evolution, and thus identified as candidate genes, several GO categories (e.g. response to hypoxia, oxygen homeostasis and erythrocyte homeostasis) are significantly enriched. Among these candidates are two genes, EPAS1 and AJUBA, involved in the response to hypoxia, whose parallel evolved sites lie at positions that are highly conserved in sequence alignments from multiple species. Thus, accelerated evolution of GO categories, positive selection and parallel evolution at the molecular level together provide evidence for the genetic adaptations of the plateau zokor to life in high-elevation burrows.

  4. The dynamics of diverse segmental amplifications in populations of Saccharomyces cerevisiae adapting to strong selection.

    PubMed

    Payen, Celia; Di Rienzi, Sara C; Ong, Giang T; Pogachar, Jamie L; Sanchez, Joseph C; Sunshine, Anna B; Raghuraman, M K; Brewer, Bonita J; Dunham, Maitreya J

    2014-03-20

    Population adaptation to strong selection can occur through the sequential or parallel accumulation of competing beneficial mutations. The dynamics, diversity, and rate of fixation of beneficial mutations within and between populations are still poorly understood. To study how the mutational landscape varies across populations during adaptation, we performed experimental evolution on seven parallel populations of Saccharomyces cerevisiae continuously cultured in limiting sulfate medium. By combining quantitative polymerase chain reaction, array comparative genomic hybridization, restriction digestion and contour-clamped homogeneous electric field gel electrophoresis, and whole-genome sequencing, we followed the trajectory of evolution to determine the identity and fate of beneficial mutations. During a period of 200 generations, the yeast populations displayed parallel evolutionary dynamics that were driven by the coexistence of independent beneficial mutations. Selective amplifications rapidly evolved under this selection pressure, in particular common inverted amplifications containing the sulfate transporter gene SUL1. Compared with single clones, detailed analysis of the populations uncovers a greater complexity whereby multiple subpopulations arise and compete despite a strong selection. The most common evolutionary adaptation to strong selection in these populations grown in sulfate limitation is determined by clonal interference, with adaptive variants both persisting and replacing one another.

  5. The Dynamics of Diverse Segmental Amplifications in Populations of Saccharomyces cerevisiae Adapting to Strong Selection

    PubMed Central

    Payen, Celia; Di Rienzi, Sara C.; Ong, Giang T.; Pogachar, Jamie L.; Sanchez, Joseph C.; Sunshine, Anna B.; Raghuraman, M. K.; Brewer, Bonita J.; Dunham, Maitreya J.

    2014-01-01

    Population adaptation to strong selection can occur through the sequential or parallel accumulation of competing beneficial mutations. The dynamics, diversity, and rate of fixation of beneficial mutations within and between populations are still poorly understood. To study how the mutational landscape varies across populations during adaptation, we performed experimental evolution on seven parallel populations of Saccharomyces cerevisiae continuously cultured in limiting sulfate medium. By combining quantitative polymerase chain reaction, array comparative genomic hybridization, restriction digestion and contour-clamped homogeneous electric field gel electrophoresis, and whole-genome sequencing, we followed the trajectory of evolution to determine the identity and fate of beneficial mutations. During a period of 200 generations, the yeast populations displayed parallel evolutionary dynamics that were driven by the coexistence of independent beneficial mutations. Selective amplifications rapidly evolved under this selection pressure, in particular common inverted amplifications containing the sulfate transporter gene SUL1. Compared with single clones, detailed analysis of the populations uncovers a greater complexity whereby multiple subpopulations arise and compete despite a strong selection. The most common evolutionary adaptation to strong selection in these populations grown in sulfate limitation is determined by clonal interference, with adaptive variants both persisting and replacing one another. PMID:24368781

  6. Parallel trait adaptation across opposing thermal environments in experimental Drosophila melanogaster populations

    PubMed Central

    Tobler, Ray; Hermisson, Joachim; Schlötterer, Christian

    2015-01-01

    Thermal stress is a pervasive selective agent in natural populations that impacts organismal growth, survival, and reproduction. Drosophila melanogaster exhibits a variety of putatively adaptive phenotypic responses to thermal stress in natural and experimental settings; however, accompanying assessments of fitness are typically lacking. Here, we quantify changes in fitness and known thermal tolerance traits in replicated experimental D. melanogaster populations following more than 40 generations of evolution to either cyclic cold or hot temperatures. By evaluating fitness for both evolved populations alongside a reconstituted starting population, we show that the evolved populations were the best adapted within their respective thermal environments. More strikingly, the evolved populations exhibited increased fitness in both environments and improved resistance to both acute heat and cold stress. This unexpected parallel response appeared to be an adaptation to the rapid temperature changes that drove the cycling thermal regimes, as parallel fitness changes were not observed when tested in a constant thermal environment. Our results add to a small, but growing group of studies that demonstrate the importance of fluctuating temperature changes for thermal adaptation and highlight the need for additional work in this area. PMID:26080903

  7. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple-instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.

  8. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one to one mapping of grids to systolic style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  9. Multitasking as a choice: a perspective.

    PubMed

    Broeker, Laura; Liepelt, Roman; Poljac, Edita; Künzell, Stefan; Ewolds, Harald; de Oliveira, Rita F; Raab, Markus

    2018-01-01

    Performance decrements in multitasking have been explained by limitations in cognitive capacity, either modelled as static structural bottlenecks or as the scarcity of overall cognitive resources that prevent humans, or at least restrict them, from processing two tasks at the same time. However, recent research has shown that individual differences, flexible resource allocation, and prioritization of tasks cannot be fully explained by these accounts. We argue that understanding human multitasking as a choice and examining multitasking performance from the perspective of judgment and decision-making (JDM), may complement current dual-task theories. We outline two prominent theories from the area of JDM, namely Simple Heuristics and the Decision Field Theory, and adapt these theories to multitasking research. Here, we explain how computational modelling techniques and decision-making parameters used in JDM may provide a benefit to understanding multitasking costs and argue that these techniques and parameters have the potential to predict multitasking behavior in general, and also individual differences in behavior. Finally, we present the one-reason choice metaphor to explain a flexible use of limited capacity as well as changes in serial and parallel task processing. Based on this newly combined approach, we outline a concrete interdisciplinary future research program that we think will help to further develop multitasking research.

  10. Progress in the Simulation of Steady and Time-Dependent Flows with 3D Parallel Unstructured Cartesian Methods

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Murman, S. M.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    The proposed paper will present recent extensions in the development of an efficient Euler solver for adaptively-refined Cartesian meshes with embedded boundaries. The paper will focus on extensions of the basic method to include solution adaptation, time-dependent flow simulation, and arbitrary rigid domain motion. The parallel multilevel method makes use of on-the-fly parallel domain decomposition to achieve extremely good scalability on large numbers of processors, and is coupled with an automatic coarse mesh generation algorithm for efficient processing by a multigrid smoother. Numerical results are presented demonstrating parallel speed-ups of up to 435 on 512 processors. Solution-based adaptation may be keyed off truncation error estimates using tau-extrapolation or a variety of feature detection based refinement parameters. The multigrid method is extended to time-dependent flows through the use of a dual-time approach. The extension to rigid domain motion uses an Arbitrary Lagrangian-Eulerian (ALE) formulation, and results will be presented for a variety of two- and three-dimensional example problems with both simple and complex geometry.
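
    The reported speed-up of up to 435 on 512 processors corresponds to a parallel efficiency of roughly 85%, using the standard definition (speedup divided by processor count):

```python
def parallel_efficiency(speedup, processors):
    """Fraction of ideal linear speedup actually achieved."""
    return speedup / processors

# The figures reported in the abstract above: speedup 435 on 512 processors.
eff = parallel_efficiency(435, 512)
```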

  11. Parallels between New Paradigms in Science and in Reading and Literary Theories.

    ERIC Educational Resources Information Center

    Weaver, Constance

    Drawing upon research from a number of fields, this paper explores parallels between new paradigms in the sciences--particularly physics, chemistry, and biology--and new paradigms in reading and literary theory--particularly a socio- or psycholinguistic, semiotic, transactional view of reading and a transactional view of the literary experience.…

  12. Parallel implementation of an adaptive and parameter-free N-body integrator

    NASA Astrophysics Data System (ADS)

    Pruett, C. David; Ingham, William H.; Herman, Ralph D.

    2011-05-01

    Previously, Pruett et al. (2003) [3] described an N-body integrator of arbitrarily high order M with an asymptotic operation count of O(MN). The algorithm's structure lends itself readily to data parallelization, which we document and demonstrate here in the integration of point-mass systems subject to Newtonian gravitation. High order is shown to benefit parallel efficiency. The resulting N-body integrator is robust, parameter-free, highly accurate, and adaptive in both time-step and order. Moreover, it exhibits linear speedup on distributed parallel processors, provided that each processor is assigned at least a handful of bodies.
    Program summary
    Program title: PNB.f90
    Catalogue identifier: AEIK_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIK_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 3052
    No. of bytes in distributed program, including test data, etc.: 68 600
    Distribution format: tar.gz
    Programming language: Fortran 90 and OpenMPI
    Computer: All shared or distributed memory parallel processors
    Operating system: Unix/Linux
    Has the code been vectorized or parallelized?: The code has been parallelized but has not been explicitly vectorized.
    RAM: Dependent upon N
    Classification: 4.3, 4.12, 6.5
    Nature of problem: High accuracy numerical evaluation of trajectories of N point masses each subject to Newtonian gravitation.
    Solution method: Parallel and adaptive extrapolation in time via power series of arbitrary degree.
    Running time: 5.1 s for the demo program supplied with the package.
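
    The distributed program itself is Fortran 90 with OpenMPI and uses high-order power-series extrapolation; as a language-neutral sketch, the fragment below shows only the O(N^2) direct-sum force evaluation whose outer loop is the part that parallelizes trivially across processors. The softening parameter is an illustrative guard, not part of the published algorithm:

```python
import math

def accelerations(pos, mass, G=1.0, eps=1e-12):
    """Direct-sum Newtonian accelerations for N point masses.
    pos: sequence of (x, y, z); mass: sequence of masses.
    The loop over i is independent per body, hence data-parallel."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + eps  # tiny softening guards r -> 0
            inv_r3 = 1.0 / (math.sqrt(r2) * r2)
            for k in range(3):
                acc[i][k] += G * mass[j] * dx[k] * inv_r3
    return acc
```

    Two unit masses separated by distance 2 attract each other with acceleration G/4 along the line joining them, a quick sanity check on the sign conventions.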

  13. Numerical Test of Analytical Theories for Perpendicular Diffusion in Small Kubo Number Turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heusen, M.; Shalchi, A., E-mail: husseinm@myumanitoba.ca, E-mail: andreasm4@yahoo.com

    In the literature, one can find various analytical theories for perpendicular diffusion of energetic particles interacting with magnetic turbulence. Besides quasi-linear theory, there are different versions of the nonlinear guiding center (NLGC) theory and the unified nonlinear transport (UNLT) theory. For turbulence with high Kubo numbers, such as two-dimensional turbulence or noisy reduced magnetohydrodynamic turbulence, the aforementioned nonlinear theories provide similar results. For slab and small Kubo number turbulence, however, this is not the case. In the current paper, we compare different linear and nonlinear theories with each other and with test-particle simulations for a noisy slab model corresponding to small Kubo number turbulence. We show that UNLT theory agrees very well with all performed test-particle simulations. In the limit of long parallel mean free paths, the perpendicular mean free path approaches asymptotically the quasi-linear limit as predicted by the UNLT theory. For short parallel mean free paths we find a Rechester and Rosenbluth type of scaling, as predicted by UNLT theory as well. The original NLGC theory disagrees with all performed simulations regardless of the parallel mean free path. The random ballistic interpretation of the NLGC theory agrees much better with the simulations, but compared to UNLT theory the agreement is inferior. We conclude that for this type of small Kubo number turbulence, only the latter theory allows for an accurate description of perpendicular diffusion.

  14. Complete description of the optical path difference of a novel spectral zooming imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Li, Jie; Wu, Haiying; Qi, Chun

    2018-03-01

    A complete description of the optical path difference of a novel spectral zooming imaging spectrometer (SZIS) is presented. SZIS is designed around two identical Wollaston prisms with an adjustable air gap. Thus, an interferogram with arbitrary spectral resolution and a greatly reduced spectral image size can be conveniently formed to adapt to different application requirements. Ray tracing modeling at arbitrary incidence with a quasi-parallel-plate approximation scheme is proposed to analyze the optical path difference of SZIS. To characterize the apparatus, exact calculations of the corresponding spectral resolution and field of view are both derived and analyzed in detail. We also present a comparison of calculation and experiment to prove the validity of the theory.

  15. A Parallel, Multi-Scale Watershed-Hydrologic-Inundation Model with Adaptively Switching Mesh for Capturing Flooding and Lake Dynamics

    NASA Astrophysics Data System (ADS)

    Ji, X.; Shen, C.

    2017-12-01

    Flood inundation presents substantial societal hazards and also changes biogeochemistry for systems like the Amazon. It is often expensive to simulate high-resolution flood inundation and propagation in a long-term watershed-scale model. Due to the Courant-Friedrichs-Lewy (CFL) restriction, high resolution and large local flow velocities both demand prohibitively small time steps, even for parallel codes. Here we develop a parallel surface-subsurface process-based model enhanced by multi-resolution meshes that are adaptively switched on or off. The high-resolution overland flow meshes are enabled only when the flood wave invades the floodplains. This model applies a semi-implicit, semi-Lagrangian (SISL) scheme in solving the dynamic wave equations and, with the assistance of the multi-mesh method, it also adaptively applies the dynamic wave equation only in areas of deep inundation. Therefore, the model achieves a balance between accuracy and computational cost.
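
    The CFL restriction mentioned above can be made concrete with a one-line sketch: the largest stable explicit time step scales with mesh spacing and inversely with the fastest local flow, which is why fine meshes plus fast flood waves force prohibitively small steps. The Courant-number value here is an illustrative choice, not the paper's:

```python
def cfl_timestep(dx, velocities, courant=0.9):
    """Largest stable explicit time step under the CFL restriction
    dt <= C * dx / max|u|: small dx (high resolution) or large |u|
    (fast flood waves) both shrink the admissible step."""
    umax = max(abs(u) for u in velocities)
    return courant * dx / umax
```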

  16. Wireless Adaptive Therapeutic TeleGaming in a Pervasive Computing Environment

    NASA Astrophysics Data System (ADS)

    Peters, James F.; Szturm, Tony; Borkowski, Maciej; Lockery, Dan; Ramanna, Sheela; Shay, Barbara

    This chapter introduces a wireless, pervasive computing approach to adaptive therapeutic telegaming considered in the context of near set theory. Near set theory provides a formal basis for observation, comparison and classification of perceptual granules. A perceptual granule is defined by a collection of objects that are graspable by the senses or by the mind. In the proposed pervasive computing approach to telegaming, a handicapped person (e.g., stroke patient with limited hand, finger, arm function) plays a video game by interacting with familiar instrumented objects such as cups, cutlery, soccer balls, nozzles, screw top-lids, spoons, so that the technology that makes therapeutic exercise game-playing possible is largely invisible (Archives of Physical Medicine and Rehabilitation 89:2213-2217, 2008). The basic approach to adaptive learning (AL) in the proposed telegaming environment is ethology-inspired and is quite different from the traditional approach to reinforcement learning. In biologically-inspired learning, organisms learn to achieve some goal by durable modification of behaviours in response to signals from the environment resulting from specific experiences (Animal Behavior, 1995). The term adaptive is used here in an ethological sense, where learning by an organism results from modifying behaviour in response to perceived changes in the environment. To instill adaptivity in a video game, it is assumed that learning by a video game is episodic. During an episode, the behaviour of a player is measured indirectly by tracking the occurrence of gaming events such as a hit or a miss of a target (e.g., hitting a moving ball with a game paddle). An ethogram provides a record of behaviour feature values that serves as the basis for a functional registry of handicapped players for gaming adaptivity. An important practical application of adaptive gaming is therapeutic rehabilitation exercise carried out in parallel with playing action video games.
Enjoyable and engaging interactive gaming will motivate patients to complete the rehabilitation process. Adaptivity is seen as a way to make action games more accessible to those who have physical and cognitive impairments. The telegaming system connects to the internet and implements a feed-and-forward mechanism that transmits gaming session tables after each gaming session to a remote registry accessible to therapists and researchers. The contribution of this chapter is the introduction of a framework for wireless telegaming useful in therapeutic rehabilitation.

  17. Durham extremely large telescope adaptive optics simulation platform.

    PubMed

    Basden, Alastair; Butterley, Timothy; Myers, Richard; Wilson, Richard

    2007-03-01

    Adaptive optics systems are essential on all large telescopes for which image quality is important. These are complex systems with many design parameters requiring optimization before good performance can be achieved. The simulation of adaptive optics systems is therefore necessary to categorize the expected performance. We describe an adaptive optics simulation platform, developed at Durham University, which can be used to simulate adaptive optics systems on the largest proposed future extremely large telescopes as well as on current systems. This platform is modular, object oriented, and has the benefit of hardware application acceleration that can be used to improve the simulation performance, essential for ensuring that the run time of a given simulation is acceptable. The simulation platform described here can be highly parallelized using parallelization techniques suited for adaptive optics simulation, while still offering the user complete control while the simulation is running. The results from the simulation of a ground layer adaptive optics system are provided as an example to demonstrate the flexibility of this simulation platform.

  18. Kinetic theory of turbulence for parallel propagation revisited: Formal results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Peter H., E-mail: yoonp@umd.edu

    2015-08-15

    In a recent paper, Gaelzer et al. [Phys. Plasmas 22, 032310 (2015)] revisited the second-order nonlinear kinetic theory for turbulence propagating in directions parallel/anti-parallel to the ambient magnetic field. The original work was that of Yoon and Fang [Phys. Plasmas 15, 122312 (2008)], but Gaelzer et al. noted that the terms pertaining to discrete-particle effects in Yoon and Fang's theory did not enjoy proper dimensionality. The purpose of Gaelzer et al. was to restore the dimensional consistency associated with such terms. However, Gaelzer et al. were concerned only with linear wave-particle interaction terms. The present paper completes the analysis by considering the dimensional correction to nonlinear wave-particle interaction terms in the wave kinetic equation.

  19. Mechanisms for Rapid Adaptive Control of Motion Processing in Macaque Visual Cortex.

    PubMed

    McLelland, Douglas; Baker, Pamela M; Ahmed, Bashir; Kohn, Adam; Bair, Wyeth

    2015-07-15

    A key feature of neural networks is their ability to rapidly adjust their function, including signal gain and temporal dynamics, in response to changes in sensory inputs. These adjustments are thought to be important for optimizing the sensitivity of the system, yet their mechanisms remain poorly understood. We studied adaptive changes in temporal integration in direction-selective cells in macaque primary visual cortex, where specific hypotheses have been proposed to account for rapid adaptation. By independently stimulating direction-specific channels, we found that the control of temporal integration of motion at one direction was independent of motion signals driven at the orthogonal direction. We also found that individual neurons can simultaneously support two different profiles of temporal integration for motion in orthogonal directions. These findings rule out a broad range of adaptive mechanisms as being key to the control of temporal integration, including untuned normalization and nonlinearities of spike generation and somatic adaptation in the recorded direction-selective cells. Such mechanisms are too broadly tuned, or occur too far downstream, to explain the channel-specific and multiplexed temporal integration that we observe in single neurons. Instead, we are compelled to conclude that parallel processing pathways are involved, and we demonstrate one such circuit using a computer model. This solution allows processing in different direction/orientation channels to be separately optimized and is sensible given that, under typical motion conditions (e.g., translation or looming), speed on the retina is a function of the orientation of image components. Many neurons in visual cortex are understood in terms of their spatial and temporal receptive fields. It is now known that the spatiotemporal integration underlying visual responses is not fixed but depends on the visual input. 
For example, neurons that respond selectively to motion direction integrate signals over a shorter time window when visual motion is fast and a longer window when motion is slow. We investigated the mechanisms underlying this useful adaptation by recording from neurons as they responded to stimuli moving in two different directions at different speeds. Computer simulations of our results enabled us to rule out several candidate theories in favor of a model that integrates across multiple parallel channels that operate at different time scales. Copyright © 2015 the authors 0270-6474/15/3510268-13$15.00/0.

  20. Parallel evolutionary computation in bioinformatics applications.

    PubMed

    Pinho, Jorge; Sobral, João Luis; Rocha, Miguel

    2013-05-01

    A large number of optimization problems within the field of Bioinformatics require methods able to handle its inherent complexity (e.g. NP-hard problems) and also demand increased computational efforts. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of its efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making the adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation for several architectures, making use of Aspect-Oriented Programming. The pluggable nature of parallelism related modules allows the user to easily configure its environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  1. A parallel row-based algorithm with error control for standard-cell placement on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Sargent, Jeff Scott

    1988-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than a previous program developed for the Hypercube, while producing equivalent-quality placement. An integrated place and route program for the Intel iPSC/2 Hypercube is currently being developed.
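
    The cell-coloring idea above can be illustrated with a small sketch: build an interaction graph (cells sharing a net interact) and color it so that same-colored cells never interact, and can therefore be moved in parallel without accumulating placement-cost error. The greedy coloring and the netlist below are assumptions for illustration; the paper's actual heuristic may differ.

```python
def build_interaction_graph(nets):
    """nets: list of lists of cell ids sharing a net; returns adjacency sets."""
    adj = {}
    for net in nets:
        for c in net:
            adj.setdefault(c, set())
        for i, a in enumerate(net):
            for b in net[i + 1:]:
                adj[a].add(b)
                adj[b].add(a)
    return adj

def greedy_color(adj):
    """Greedy coloring: give each cell the smallest color unused by its neighbors."""
    color = {}
    for cell in sorted(adj):            # deterministic visiting order
        used = {color[n] for n in adj[cell] if n in color}
        c = 0
        while c in used:
            c += 1
        color[cell] = c
    return color

nets = [[0, 1, 2], [2, 3], [3, 4], [1, 4]]   # hypothetical netlist
colors = greedy_color(build_interaction_graph(nets))
```

    Each color class is an independent set, so a parallel annealer could move all cells of one color simultaneously and exchange position updates only between classes.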

  2. Modeling Cooperative Threads to Project GPU Performance for Adaptive Parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Jiayuan; Uram, Thomas; Morozov, Vitali A.

    Most accelerators, such as graphics processing units (GPUs) and vector processors, are particularly suitable for accelerating massively parallel workloads. On the other hand, conventional workloads are developed for multi-core parallelism, which often scale to only a few dozen OpenMP threads. When hardware threads significantly outnumber the degree of parallelism in the outer loop, programmers are challenged with efficient hardware utilization. A common solution is to further exploit the parallelism hidden deep in the code structure. Such parallelism is less structured: parallel and sequential loops may be imperfectly nested within each other, neighboring inner loops may exhibit different concurrency patterns (e.g., Reduction vs. Forall), yet have to be parallelized in the same parallel section. Many input-dependent transformations have to be explored. A programmer often employs a larger group of hardware threads to cooperatively walk through a smaller outer loop partition and adaptively exploit any encountered parallelism. This process is time-consuming and error-prone, yet the risk of gaining little or no performance remains high for such workloads. To reduce risk and guide implementation, we propose a technique to model workloads with limited parallelism that can automatically explore and evaluate transformations involving cooperative threads. Eventually, our framework projects the best achievable performance and the most promising transformations without implementing GPU code or using physical hardware. We envision our technique to be integrated into future compilers or optimization frameworks for autotuning.

  3. Divide-and-conquer density functional theory on hierarchical real-space grids: Parallel implementation and applications

    NASA Astrophysics Data System (ADS)

    Shimojo, Fuyuki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2008-02-01

    A linear-scaling algorithm based on a divide-and-conquer (DC) scheme has been designed to perform large-scale molecular-dynamics (MD) simulations, in which interatomic forces are computed quantum mechanically in the framework of the density functional theory (DFT). Electronic wave functions are represented on a real-space grid, which is augmented with a coarse multigrid to accelerate the convergence of iterative solutions and with adaptive fine grids around atoms to accurately calculate ionic pseudopotentials. Spatial decomposition is employed to implement the hierarchical-grid DC-DFT algorithm on massively parallel computers. The largest benchmark tests include an 11.8×10^6-atom (1.04×10^12 electronic degrees of freedom) calculation on 131 072 IBM BlueGene/L processors. The DC-DFT algorithm has well-defined parameters to control the data locality, with which the solutions converge rapidly. Also, the total energy is well conserved during the MD simulation. We perform first-principles MD simulations based on the DC-DFT algorithm, in which large system sizes bring in excellent agreement with x-ray scattering measurements for the pair-distribution function of liquid Rb and allow the description of low-frequency vibrational modes of graphene. The band gap of a CdSe nanorod calculated by the DC-DFT algorithm agrees well with the available conventional DFT results. With the DC-DFT algorithm, the band gap is calculated for larger system sizes until the result reaches the asymptotic value.

  4. The Glass Menagerie as heuristic for explicating nursing theory.

    PubMed

    Pilkington, F Beryl; Frederickson, Keville; Velasco-Whetsell, Martha

    2006-07-01

    Tennessee Williams' play, The Glass Menagerie, is interpreted through the lens of two different nursing theories, the Roy adaptation model and the human becoming theory. In the Roy adaptation model interpretation, adaptive levels of reality testing and stimuli that instigate withdrawal are explored, while in the human becoming theory interpretation, the themes of meaning, rhythmicity, and cotranscendence are explicated.

  5. Distributed parameter system coupled ARMA expansion identification and adaptive parallel IIR filtering - A unified problem statement. [Auto Regressive Moving-Average

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.; Balas, M. J.

    1980-01-01

    A novel interconnection of distributed parameter system (DPS) identification and adaptive filtering is presented, which culminates in a common statement of coupled autoregressive, moving-average expansion or parallel infinite impulse response configuration adaptive parameterization. The common restricted complexity filter objectives are seen as similar to the reduced-order requirements of the DPS expansion description. The interconnection presents the possibility of an exchange of problem formulations and solution approaches not yet easily addressed in the common finite dimensional lumped-parameter system context. It is concluded that the shared problems raised are nevertheless many and difficult.

  6. Parallelization of Unsteady Adaptive Mesh Refinement for Unstructured Navier-Stokes Solvers

    NASA Technical Reports Server (NTRS)

    Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.

    2014-01-01

    This paper explores the implementation of MPI parallelization in a Navier-Stokes solver using adaptive mesh refinement. Viscous and inviscid test problems are considered for the purpose of benchmarking, as are implicit and explicit time advancement methods. The main test problem for comparison includes effects from boundary layers and other viscous features and requires a large number of grid points for accurate computation. Experimental validation against double cone experiments in hypersonic flow is shown. The adaptive mesh refinement shows promise for a staple test problem in the hypersonic community. Extension to more advanced techniques for more complicated flows is described.

  7. On adaptive learning rate that guarantees convergence in feedforward networks.

    PubMed

    Behera, Laxmidhar; Kumar, Swagat; Patnaik, Awhan

    2006-09-01

    This paper investigates new learning algorithms (LF I and LF II) based on Lyapunov function for the training of feedforward neural networks. It is observed that such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, where the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, has been introduced with an aim to avoid local minima. This modification also helps in improving the convergence speed in some cases. Conditions for achieving a global minimum for this class of algorithms have been studied in detail. The performances of the proposed algorithms are compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. It is found that the proposed algorithms (LF I and II) converge much faster than the other two algorithms in attaining the same accuracy. Finally, the comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, the investigations made in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
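
    The core idea of replacing a fixed learning rate with one derived from a Lyapunov-style decrease condition can be sketched as follows. This is an illustrative exact line-search rate on a synthetic linear least-squares model (X, d, and w_true are assumptions), not the paper's LF I/LF II algorithms: the rate is chosen each step so that the Lyapunov function V = 0.5*||e||^2 is guaranteed not to increase.

```python
import numpy as np

# Synthetic least-squares problem (illustrative assumption).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
d = X @ w_true

w = np.zeros(3)
for _ in range(40):
    e = d - X @ w                        # error vector
    g = X.T @ e                          # descent direction for V = 0.5*||e||^2
    Xg = X @ g
    # Adaptive rate: minimizes V along g (exact line search), so
    # V(w_new) <= V(w) holds at every step instead of relying on a fixed rate.
    eta = (e @ Xg) / (Xg @ Xg + 1e-12)
    w = w + eta * g
```

    Because eta is recomputed from the current error each step, no hand-tuned fixed learning rate is needed, which is the spirit of the Lyapunov-based schemes described above.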

  8. Large-Scale Parallel Simulations of Turbulent Combustion using Combined Dimension Reduction and Tabulation of Chemistry

    DTIC Science & Technology

    2012-05-22

    Dimension reduction of the chemistry is performed using the Rate-Controlled Constrained-Equilibrium (RCCE) method, and tabulation of the reduced space is performed using the In Situ Adaptive Tabulation (ISAT) algorithm. In addition, we use x2f mpi, a Fortran library for parallel vector-valued function evaluation (used with ISAT in this context), to efficiently redistribute the chemistry workload among the processors.

  9. Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fijany, A.; Milman, M.; Redding, D.

    1994-12-31

    In this paper, massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near-optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling-rate requirement, the implementation of this control algorithm poses a computationally challenging problem, since it demands a sustained computational throughput of the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other fast Poisson solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
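
    The Fast Invariant Imbedding algorithm itself is not reproduced in the abstract. As a point of reference for the problem it solves, a standard FFT-diagonalization solver for the 5-point discrete Poisson equation (periodic boundary conditions assumed here for simplicity) looks like this:

```python
import numpy as np

def solve_poisson_periodic(f):
    """Solve the 5-point discrete Poisson equation lap(u) = f on a periodic
    N x M unit-spacing grid by diagonalizing the Laplacian with the FFT.
    f must have zero mean; the free constant is fixed by returning the
    zero-mean solution."""
    n, m = f.shape
    kx = 2 * np.pi * np.fft.fftfreq(n)
    ky = 2 * np.pi * np.fft.fftfreq(m)
    # Eigenvalues of the periodic 5-point Laplacian: 2cos(kx)+2cos(ky)-4.
    eig = 2 * np.cos(kx)[:, None] + 2 * np.cos(ky)[None, :] - 4
    f_hat = np.fft.fft2(f)
    eig[0, 0] = 1.0            # avoid divide-by-zero for the mean mode
    u_hat = f_hat / eig
    u_hat[0, 0] = 0.0          # pin the free constant (zero-mean solution)
    return np.real(np.fft.ifft2(u_hat))
```

    This is the class of "fast Poisson solver" the paper compares against; the point of the Fast Invariant Imbedding algorithm is to achieve the same solve with communication patterns better suited to massively parallel hardware.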

  10. Parallel implementation of a Lagrangian-based model on an adaptive mesh in C++: Application to sea-ice

    NASA Astrophysics Data System (ADS)

    Samaké, Abdoulaye; Rampal, Pierre; Bouillon, Sylvain; Ólason, Einar

    2017-12-01

    We present a parallel implementation framework for a new dynamic/thermodynamic sea-ice model, called neXtSIM, based on the Elasto-Brittle rheology and using an adaptive mesh. The spatial discretisation of the model is done using the finite-element method. The temporal discretisation is semi-implicit and the advection is achieved using either a pure Lagrangian scheme or an Arbitrary Lagrangian Eulerian (ALE) scheme. The parallel implementation presented here focuses on the distributed-memory approach using the message-passing library MPI. The efficiency and the scalability of the parallel algorithms are illustrated by numerical experiments performed using up to 500 processor cores of a cluster computing system. The performance obtained by the proposed parallel implementation of the neXtSIM code is shown to be sufficient to perform simulations for state-of-the-art sea-ice forecasting and geophysical process studies over geographical domains of several million square kilometers, such as the Arctic region.

  11. Phylogeographic differentiation versus transcriptomic adaptation to warm temperatures in Zostera marina, a globally important seagrass.

    PubMed

    Jueterbock, A; Franssen, S U; Bergmann, N; Gu, J; Coyer, J A; Reusch, T B H; Bornberg-Bauer, E; Olsen, J L

    2016-11-01

    Populations distributed across a broad thermal cline are instrumental in addressing adaptation to increasing temperatures under global warming. Using a space-for-time substitution design, we tested for parallel adaptation to warm temperatures along two independent thermal clines in Zostera marina, the most widely distributed seagrass in the temperate Northern Hemisphere. A North-South pair of populations was sampled along the European and North American coasts and exposed to a simulated heatwave in a common-garden mesocosm. Transcriptomic responses under control, heat stress and recovery were recorded in 99 RNAseq libraries with ~13 000 uniquely annotated, expressed genes. We corrected for phylogenetic differentiation among populations to discriminate neutral from adaptive differentiation. The two southern populations recovered faster from heat stress and showed parallel transcriptomic differentiation, as compared with northern populations. Among 2389 differentially expressed genes, 21 exceeded neutral expectations and were likely involved in parallel adaptation to warm temperatures. However, the strongest differentiation following phylogenetic correction was between the three Atlantic populations and the Mediterranean population with 128 of 4711 differentially expressed genes exceeding neutral expectations. Although adaptation to warm temperatures is expected to reduce sensitivity to heatwaves, the continued resistance of seagrass to further anthropogenic stresses may be impaired by heat-induced downregulation of genes related to photosynthesis, pathogen defence and stress tolerance. © 2016 John Wiley & Sons Ltd.

  12. Kinetic theory of turbulence for parallel propagation revisited: Low-to-intermediate frequency regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Peter H., E-mail: yoonp@umd.edu; School of Space Research, Kyung Hee University, Yongin, Gyeonggi 446-701

    2015-09-15

    A previous paper [P. H. Yoon, “Kinetic theory of turbulence for parallel propagation revisited: Formal results,” Phys. Plasmas 22, 082309 (2015)] revisited the second-order nonlinear kinetic theory for turbulence propagating in directions parallel/anti-parallel to the ambient magnetic field, in which the original work according to Yoon and Fang [Phys. Plasmas 15, 122312 (2008)] was refined, following the paper by Gaelzer et al. [Phys. Plasmas 22, 032310 (2015)]. The main finding involved the dimensional correction pertaining to discrete-particle effects in Yoon and Fang's theory. However, the final result was presented in terms of formal linear and nonlinear susceptibility response functions. In the present paper, the formal equations are explicitly written down for the case of the low-to-intermediate frequency regime by making use of approximate forms for the response functions. The resulting equations are sufficiently concrete that they can readily be solved by numerical means or analyzed by theoretical means. The derived set of equations describes nonlinear interactions of quasi-parallel modes whose frequency range covers the Alfvén wave range to the ion-cyclotron mode, but is sufficiently lower than the electron cyclotron mode. The application of the present formalism may range from the nonlinear evolution of the whistler anisotropy instability in the high-beta regime to the nonlinear interaction of electrons with whistler-range turbulence.

  13. Unstructured Adaptive Grid Computations on an Array of SMPs

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Pramanick, Ira; Sohn, Andrew; Simon, Horst D.

    1996-01-01

    Dynamic load balancing is necessary for parallel adaptive methods to solve unsteady CFD problems on unstructured grids. We have presented such a dynamic load balancing framework, called JOVE, in this paper. Results on a four-POWERnode POWER CHALLENGEarray demonstrated that load balancing gives significant performance improvements over no load balancing for such adaptive computations. The parallel speedup of JOVE, implemented using MPI on the POWER CHALLENGEarray, was significant, being as high as 31 for 32 processors. An implementation of JOVE that exploits the 'array of SMPs' architecture was also studied; this hybrid JOVE outperformed flat JOVE by up to 28% on the meshes and adaption models tested. With large, realistic meshes and actual flow-solver and adaption phases incorporated into JOVE, hybrid JOVE can be expected to yield significant advantage over flat JOVE, especially as the number of processors is increased, thus demonstrating the scalability of the 'array of SMPs' architecture.

  14. Parallel implementation of an adaptive scheme for 3D unstructured grids on the SP2

    NASA Technical Reports Server (NTRS)

    Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10 percent of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all the mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.

  15. Parallel Implementation of an Adaptive Scheme for 3D Unstructured Grids on the SP2

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Strawn, Roger C.

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10% of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.

  16. Parallel trait adaptation across opposing thermal environments in experimental Drosophila melanogaster populations.

    PubMed

    Tobler, Ray; Hermisson, Joachim; Schlötterer, Christian

    2015-07-01

    Thermal stress is a pervasive selective agent in natural populations that impacts organismal growth, survival, and reproduction. Drosophila melanogaster exhibits a variety of putatively adaptive phenotypic responses to thermal stress in natural and experimental settings; however, accompanying assessments of fitness are typically lacking. Here, we quantify changes in fitness and known thermal tolerance traits in replicated experimental D. melanogaster populations following more than 40 generations of evolution to either cyclic cold or hot temperatures. By evaluating fitness for both evolved populations alongside a reconstituted starting population, we show that the evolved populations were the best adapted within their respective thermal environments. More strikingly, the evolved populations exhibited increased fitness in both environments and improved resistance to both acute heat and cold stress. This unexpected parallel response appeared to be an adaptation to the rapid temperature changes that drove the cycling thermal regimes, as parallel fitness changes were not observed when tested in a constant thermal environment. Our results add to a small, but growing group of studies that demonstrate the importance of fluctuating temperature changes for thermal adaptation and highlight the need for additional work in this area. © 2015 The Author(s). Evolution published by Wiley Periodicals, Inc. on behalf of The Society for the Study of Evolution.

  17. Mechanisms mediating parallel action monitoring in fronto-striatal circuits.

    PubMed

    Beste, Christian; Ness, Vanessa; Lukas, Carsten; Hoffmann, Rainer; Stüwe, Sven; Falkenstein, Michael; Saft, Carsten

    2012-08-01

    Flexible response adaptation and the control of conflicting information play a pivotal role in daily life. Yet, little is known about the neuronal mechanisms mediating parallel control of these processes. We examined these mechanisms using a multi-methodological approach that integrated data from event-related potentials (ERPs) with structural MRI data and source localisation using sLORETA. Moreover, we calculated evoked wavelet oscillations. We applied this multi-methodological approach in healthy subjects and patients in a prodromal phase of a major basal ganglia disorder (i.e., Huntington's disease), to directly focus on fronto-striatal networks. Behavioural data indicated that especially the parallel execution of conflict monitoring and flexible response adaptation was modulated across the examined cohorts. When both processes do not coincide, a high integrity of fronto-striatal loops seems to be dispensable. The neurophysiological data suggest that conflict monitoring (reflected by the N2 ERP) and working memory processes (reflected by the P3 ERP) differentially contribute to this pattern of results. Flexible response adaptation under the constraint of high conflict processing affected the N2 and P3 ERPs, as well as their delta frequency band oscillations. Yet, modulatory effects were strongest for the N2 ERP and evoked wavelet oscillations in this time range. The N2 ERPs were localized in the anterior cingulate cortex (BA32, BA24). Modulations of the P3 ERP were localized in parietal areas (BA7). In addition, MRI-determined caudate head volume predicted modulations in conflict monitoring, but not working memory processes. The results show how parallel conflict monitoring and flexible adaptation of action are mediated via fronto-striatal networks. While both response monitoring and working memory processes seem to play a role, response selection processes and ACC-basal ganglia networks in particular seem to be the driving force in mediating parallel conflict monitoring and flexible adaptation of actions. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. "Parallel Leadership in an "Unparallel" World"--Cultural Constraints on the Transferability of Western Educational Leadership Theories across Cultures

    ERIC Educational Resources Information Center

    Goh, Jonathan Wee Pin

    2009-01-01

    With the global economy becoming more integrated, the issues of cross-cultural relevance and transferability of leadership theories and practices have become increasingly urgent. Drawing upon the concept of parallel leadership in schools proposed by Crowther, Kaagan, Ferguson, and Hann as an example, the purpose of this paper is to examine the…

  19. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    PubMed

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
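
    A minimal sketch of an MEE-trained linear filter: gradient ascent on the quadratic information potential, estimated with a Gaussian Parzen kernel over pairwise error differences (the highly parallelizable pairwise computation is what the paper maps to an FPGA). The synthetic zero-mean data, kernel width, and step size are assumptions; the FPGA mapping itself is not shown.

```python
import numpy as np

# Synthetic zero-mean data (an assumption): MEE is insensitive to a constant
# error offset, so no bias term is modeled here.
rng = np.random.default_rng(2)
N, sigma, mu = 40, 1.0, 0.2
X = rng.normal(size=(N, 2))
w_true = np.array([1.0, -1.0])
d = X @ w_true

w = np.zeros(2)
for _ in range(100):
    e = d - X @ w
    E = e[:, None] - e[None, :]             # pairwise error differences e_i - e_j
    G = np.exp(-E**2 / (2 * sigma**2))      # Gaussian (Parzen) kernel values
    DX = X[:, None, :] - X[None, :, :]      # pairwise input differences x_i - x_j
    # Gradient of the quadratic information potential V = mean_ij G(e_i - e_j);
    # ascending V minimizes the Renyi error entropy.
    grad = np.einsum('ij,ij,ijk->k', G, E, DX) / (N**2 * sigma**2)
    w = w + mu * grad
```

    The N-by-N pairwise kernel evaluations are mutually independent, which is why this cost function, though more expensive than MSE, decomposes naturally onto parallel hardware.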

  20. Modified Method of Adaptive Artificial Viscosity for Solution of Gas Dynamics Problems on Parallel Computer Systems

    NASA Astrophysics Data System (ADS)

    Popov, Igor; Sukov, Sergey

    2018-02-01

    A modification of the adaptive artificial viscosity (AAV) method is considered. This modification is based on a one-stage time approximation and is adapted to the calculation of gas-dynamics problems on unstructured grids with an arbitrary type of grid elements. The proposed numerical method has simplified logic, better performance, and better parallel efficiency compared to the implementation of the original AAV method. Computer experiments demonstrate the robustness of the method and its convergence to the difference solution.

  1. Adaptation as organism design

    PubMed Central

    Gardner, Andy

    2009-01-01

    The problem of adaptation is to explain the apparent design of organisms. Darwin solved this problem with the theory of natural selection. However, population geneticists, whose responsibility it is to formalize evolutionary theory, have long neglected the link between natural selection and organismal design. Here, I review the major historical developments in theory of organismal adaptation, clarifying what adaptation is and what it is not, and I point out future avenues for research. PMID:19793739

  2. Humorous Literature: A Doorway to Literacy.

    ERIC Educational Resources Information Center

    Fernandez, Melanie

    Many theories have been developed to try to explain humor, among them, the social theory; psychoanalytic theories based on Freud; cognitive theories which identify stages corresponding to those of Piaget; and eclectic theories which combine elements of all the theories. The developmental stages of humor parallel the intellectual and emotional…

  3. Massively parallel GPU-accelerated minimization of classical density functional theory

    NASA Astrophysics Data System (ADS)

    Stopper, Daniel; Roth, Roland

    2017-08-01

    In this paper, we discuss the ability to numerically minimize the grand potential of hard disks in two-dimensional and of hard spheres in three-dimensional space within the framework of classical density functional and fundamental measure theory on modern graphics cards. Our main finding is that a massively parallel minimization leads to an enormous performance gain in comparison to standard sequential minimization schemes. Furthermore, the results indicate that in complex multi-dimensional situations, a heavy parallel minimization of the grand potential seems to be mandatory in order to reach a reasonable balance between accuracy and computational cost.

  4. Computerized Adaptive Test (CAT) Applications and Item Response Theory Models for Polytomous Items

    ERIC Educational Resources Information Center

    Aybek, Eren Can; Demirtasli, R. Nukhet

    2017-01-01

    This article aims to provide a theoretical framework for computerized adaptive tests (CAT) and item response theory models for polytomous items. Besides that, it aims to introduce the simulation and live CAT software to the related researchers. Computerized adaptive test algorithm, assumptions of item response theory models, nominal response…

  5. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads

    NASA Astrophysics Data System (ADS)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.

    2017-01-01

    As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial-application to a large-scale parallel-application and the performance that was achieved.

  6. Massively parallel sparse matrix function calculations with NTPoly

    NASA Astrophysics Data System (ADS)

    Dawson, William; Nakajima, Takahito

    2018-04-01

    We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large scale calculations on the K computer.
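
    The polynomial-expansion idea that such libraries build on can be sketched in dense form: expand f in Chebyshev polynomials and evaluate f(A) with the T_k matrix recurrence. This is a generic illustration (NTPoly's sparse, distributed implementation and its actual API are not shown), assuming a symmetric A whose spectrum lies in [-1, 1].

```python
import numpy as np

def cheb_matrix_function(A, f, degree=30):
    """Approximate f(A) for symmetric A with spectrum in [-1, 1] using a
    Chebyshev expansion evaluated via the T_k three-term recurrence.
    Dense here for clarity; with sparse A every step is a sparse multiply."""
    n = A.shape[0]
    m = degree + 1
    # Chebyshev coefficients from values of f at the Chebyshev nodes.
    theta = (np.arange(m) + 0.5) * np.pi / m
    fx = f(np.cos(theta))
    c = np.array([2.0 / m * np.sum(fx * np.cos(k * theta)) for k in range(m)])
    c[0] /= 2.0
    T_prev, T_curr = np.eye(n), A.copy()    # T_0(A), T_1(A)
    result = c[0] * T_prev + c[1] * T_curr
    for k in range(2, m):
        T_next = 2.0 * A @ T_curr - T_prev  # T_k = 2 A T_{k-1} - T_{k-2}
        result += c[k] * T_next
        T_prev, T_curr = T_curr, T_next
    return result
```

    When f(A) is approximately sparse, truncating small entries after each multiply keeps every term sparse, which is what makes the overall cost linear in system size.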

  7. SIERRA Low Mach Module: Fuego Theory Manual Version 4.44

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal /Fluid Team

    2017-04-01

    The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.

  8. SIERRA Low Mach Module: Fuego Theory Manual Version 4.46.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal/Fluid Team

The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.

  9. Chebyshev polynomial filtered subspace iteration in the discontinuous Galerkin method for large-scale electronic structure calculations

    DOE PAGES

    Banerjee, Amartya S.; Lin, Lin; Hu, Wei; ...

    2016-10-21

The Discontinuous Galerkin (DG) electronic structure method employs an adaptive local basis (ALB) set to solve the Kohn-Sham equations of density functional theory in a discontinuous Galerkin framework. The adaptive local basis is generated on-the-fly to capture the local material physics and can systematically attain chemical accuracy with only a few tens of degrees of freedom per atom. A central issue for large-scale calculations, however, is the computation of the electron density (and subsequently, ground state properties) from the discretized Hamiltonian in an efficient and scalable manner. We show in this work how Chebyshev polynomial filtered subspace iteration (CheFSI) can be used to address this issue and push the envelope in large-scale materials simulations in a discontinuous Galerkin framework. We describe how the subspace filtering steps can be performed in an efficient and scalable manner using a two-dimensional parallelization scheme, thanks to the orthogonality of the DG basis set and the block-sparse structure of the DG Hamiltonian matrix. The on-the-fly nature of the ALB functions requires additional care in carrying out the subspace iterations. We demonstrate the parallel scalability of the DG-CheFSI approach in calculations of large-scale two-dimensional graphene sheets and bulk three-dimensional lithium-ion electrolyte systems. Employing 55 296 computational cores, the time per self-consistent field iteration for a sample of the bulk 3D electrolyte containing 8586 atoms is 90 s, and the time for a graphene sheet containing 11 520 atoms is 75 s.
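The core of CheFSI is easy to state: repeatedly apply a Chebyshev polynomial in the Hamiltonian that amplifies the occupied (low) end of the spectrum and damps the rest, then orthonormalize and solve the small projected eigenproblem. A generic NumPy sketch of this textbook building block (dense H, illustrative filter bounds and degree; the DG-specific block-sparse parallelization is not shown):

```python
import numpy as np

def chebyshev_filter(H, X, m, lb, ub):
    """Degree-m Chebyshev filter that damps the spectrum of H in
    [lb, ub] while magnifying eigenvalues below lb (the wanted states)."""
    e = (ub - lb) / 2.0          # half-width of the damped interval
    c = (ub + lb) / 2.0          # its center
    Y = (H @ X - c * X) / e      # degree-1 term of the recurrence
    for _ in range(2, m + 1):
        Ynew = 2.0 * (H @ Y - c * Y) / e - X   # three-term recurrence
        X, Y = Y, Ynew
    return Y

def chefsi_step(H, X, m=10, lb=0.0, ub=1.0):
    """One subspace iteration: filter, orthonormalize, Rayleigh-Ritz."""
    Q, _ = np.linalg.qr(chebyshev_filter(H, X, m, lb, ub))
    w, V = np.linalg.eigh(Q.T @ H @ Q)   # small projected eigenproblem
    return Q @ V, w
```

In practice lb and ub are estimated so that [lb, ub] covers the unwanted upper spectrum; a few filtered iterations then converge the occupied subspace without ever diagonalizing the full Hamiltonian.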

  10. Type synthesis for 4-DOF parallel press mechanism using GF set theory

    NASA Astrophysics Data System (ADS)

    He, Jun; Gao, Feng; Meng, Xiangdun; Guo, Weizhong

    2015-07-01

Parallel mechanisms are used in large-capacity servo presses to avoid the over-constraint of traditional redundant actuation. Current research mainly focuses on performance analysis of specific parallel press mechanisms; the type synthesis and evaluation of parallel press mechanisms are seldom studied, especially for four-degrees-of-freedom (DOF) press mechanisms. Here, the type synthesis of 4-DOF parallel press mechanisms is carried out based on generalized function (GF) set theory. Five design criteria for 4-DOF parallel press mechanisms are first proposed. A general procedure for the type synthesis of parallel press mechanisms is obtained, which includes number synthesis, symmetrical synthesis of constraint GF sets, decomposition of motion GF sets, and design of limbs. Nine combinations of constraint GF sets of 4-DOF parallel press mechanisms, ten combinations of GF sets of active limbs, and eleven combinations of GF sets of passive limbs are synthesized. Thirty-eight kinds of press mechanisms are presented, and different structures of kinematic limbs are then designed. Finally, the geometrical constraint complexity (GCC), kinematic pair complexity (KPC), and type complexity (TC) are proposed to evaluate the press types, and the optimal press type is identified. General methodologies of type synthesis and evaluation for parallel press mechanisms are thus suggested.

  11. Comment on Gallistel: behavior theory and information theory: some parallels.

    PubMed

    Nevin, John A

    2012-05-01

    In this article, Gallistel proposes information theory as an approach to some enduring problems in the study of operant and classical conditioning. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Identification of Behavioral Indicators in Political Protest Music

    DTIC Science & Technology

    2015-12-01

... to ways to influence that behavior. Political protest songs are one such source. Protest music is goal-oriented, and lyrics often parallel movement goals of potential TAs. This thesis examines how political protest music can help identify ... movement theory in order to bridge the MISO doctrine with music theories and understand what influences people to change their behavior and act ...

  13. Sparse distributed memory overview

    NASA Technical Reports Server (NTRS)

    Raugh, Mike

    1990-01-01

The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
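Kanerva's scheme is compact enough to sketch: random binary "hard locations", activation of every location within a Hamming radius of the cue address, and bipolar counters that superpose stored patterns. All sizes and parameters below are illustrative, not the project's:

```python
import numpy as np

def make_sdm(n_bits=256, n_locations=1000, radius=118, seed=0):
    """A tiny Kanerva-style SDM: random hard-location addresses plus
    per-bit counters (sizes are illustrative)."""
    rng = np.random.default_rng(seed)
    return {"addresses": rng.integers(0, 2, size=(n_locations, n_bits)),
            "counters": np.zeros((n_locations, n_bits), dtype=int),
            "radius": radius}

def _active(mem, address):
    # activate every hard location within the Hamming radius of the cue
    return (mem["addresses"] != address).sum(axis=1) <= mem["radius"]

def sdm_write(mem, address, data):
    # bipolar counter update: +1 where the data bit is 1, -1 where it is 0
    mem["counters"][_active(mem, address)] += np.where(data == 1, 1, -1)

def sdm_read(mem, address):
    sums = mem["counters"][_active(mem, address)].sum(axis=0)
    return (sums > 0).astype(int)    # per-bit majority vote
```

Because each pattern is smeared over many locations, a read with a noisy cue still activates enough of the locations touched at write time to recover the stored pattern, which is the partial-match retrieval described above.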

  14. Large-aperture Tunable Plasma Meta-material to Interact with Electromagnetic Waves

    NASA Astrophysics Data System (ADS)

    Corke, Thomas; Matlis, Eric

    2016-11-01

The formation of spatially periodic arrangements of glow discharge plasma resulting from charge instabilities was investigated as a tunable plasma meta-material. The plasma was formed between two 2-D parallel dielectric-covered electrodes: one consisting of an Indium-Tin-Oxide coated glass sheet, and the other consisting of a glass-covered circular electrode. The dielectric-covered electrodes were separated by a gap that formed a 2-D channel. The gap spacing was adjustable. The electrodes were powered by a variable-amplitude AC generator. The parallel electrode arrangement was placed in a variable-pressure vacuum chamber. Various combinations of gap spacing, pressure, and voltage resulted in the formation of spatially periodic arrangements (lattices) of glow discharge plasma. The lattice spacing perfectly followed 2-D packing theory, and was fully adjustable through the three governing parameters. Lattice arrangements were designed to interact with electromagnetic (EM) waves in the frequency range between 10 GHz and 80 GHz. Their feasibility was investigated through an EM wave simulation that we adapted to allow for plasma permittivity. The results showed a clear suppression of the EM wave amplitude through the plasma gratings. Supported by AFOSR.

  15. Quantum information, cognition, and music.

    PubMed

    Dalla Chiara, Maria L; Giuntini, Roberto; Leporini, Roberto; Negri, Eleonora; Sergioli, Giuseppe

    2015-01-01

Parallelism represents an essential aspect of human mind/brain activities. One can recognize some common features between psychological parallelism and the characteristic parallel structures that arise in quantum theory and in quantum computation. The article is devoted to a discussion of the following questions: (1) a comparison between classical probabilistic Turing machines and quantum Turing machines; (2) possible applications of the quantum computational semantics to cognitive problems; (3) parallelism in music.

  16. Quantum information, cognition, and music

    PubMed Central

    Dalla Chiara, Maria L.; Giuntini, Roberto; Leporini, Roberto; Negri, Eleonora; Sergioli, Giuseppe

    2015-01-01

Parallelism represents an essential aspect of human mind/brain activities. One can recognize some common features between psychological parallelism and the characteristic parallel structures that arise in quantum theory and in quantum computation. The article is devoted to a discussion of the following questions: (1) a comparison between classical probabilistic Turing machines and quantum Turing machines; (2) possible applications of the quantum computational semantics to cognitive problems; (3) parallelism in music. PMID:26539139

  17. The neurobiology of syntax: beyond string sets.

    PubMed

    Petersson, Karl Magnus; Hagoort, Peter

    2012-07-19

    The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty.

  18. The neurobiology of syntax: beyond string sets

    PubMed Central

    Petersson, Karl Magnus; Hagoort, Peter

    2012-01-01

    The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty. PMID:22688633

  19. Automated three-component synthesis of a library of γ-lactams

    PubMed Central

    Fenster, Erik; Hill, David; Reiser, Oliver

    2012-01-01

A three-component method for the synthesis of γ-lactams from commercially available maleimides, aldehydes, and amines was adapted to parallel library synthesis. Improvements to the chemistry over previous efforts include the optimization of the method to a one-pot process, the management of by-products and excess reagents, the development of an automated parallel sequence, and the adaptation of the method to permit the preparation of enantiomerically enriched products. These efforts culminated in the preparation of a library of 169 γ-lactams. PMID:23209515

  20. LMC: Logarithmantic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Mantz, Adam B.

    2017-06-01

    LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
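The adaptive Metropolis idea at the heart of such samplers fits in a few lines: propose with a random-walk step, accept by the Metropolis rule, and periodically rescale the proposal toward a target acceptance rate. A generic single-process sketch (the target rate and schedule are illustrative, not LMC's actual implementation; its MPI-level parallelism is not shown, and in production the adaptation is typically frozen before the final samples are kept):

```python
import numpy as np

def adaptive_metropolis(logpost, x0, n_steps=5000, adapt_every=100,
                        target=0.35, seed=0):
    """Random-walk Metropolis with a crude step-size adaptation toward
    a target acceptance rate."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = logpost(x)
    scale = 1.0
    chain, accepted = [], 0
    for i in range(1, n_steps + 1):
        prop = x + scale * rng.standard_normal(x.shape)
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis rule
            x, lp = prop, lp_prop
            accepted += 1
        chain.append(x.copy())
        if i % adapt_every == 0:                  # rescale the proposal
            scale *= np.exp(accepted / i - target)
    return np.array(chain)
```

The expensive part in the use case the abstract describes is each `logpost` call, which is why distributing those evaluations across nodes pays off.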

  1. Parallel adaptive discontinuous Galerkin approximation for thin layer avalanche modeling

    NASA Astrophysics Data System (ADS)

    Patra, A. K.; Nichita, C. C.; Bauer, A. C.; Pitman, E. B.; Bursik, M.; Sheridan, M. F.

    2006-08-01

    This paper describes the development of highly accurate adaptive discontinuous Galerkin schemes for the solution of the equations arising from a thin layer type model of debris flows. Such flows have wide applicability in the analysis of avalanches induced by many natural calamities, e.g. volcanoes, earthquakes, etc. These schemes are coupled with special parallel solution methodologies to produce a simulation tool capable of very high-order numerical accuracy. The methodology successfully replicates cold rock avalanches at Mount Rainier, Washington and hot volcanic particulate flows at Colima Volcano, Mexico.

  2. Unstructured grids on SIMD torus machines

    NASA Technical Reports Server (NTRS)

    Bjorstad, Petter E.; Schreiber, Robert

    1994-01-01

    Unstructured grids lead to unstructured communication on distributed memory parallel computers, a problem that has been considered difficult. Here, we consider adaptive, offline communication routing for a SIMD processor grid. Our approach is empirical. We use large data sets drawn from supercomputing applications instead of an analytic model of communication load. The chief contribution of this paper is an experimental demonstration of the effectiveness of certain routing heuristics. Our routing algorithm is adaptive, nonminimal, and is generally designed to exploit locality. We have a parallel implementation of the router, and we report on its performance.

  3. Independent Axes of Genetic Variation and Parallel Evolutionary Divergence Of Opercle Bone Shape in Threespine Stickleback

    PubMed Central

    Kimmel, Charles B.; Cresko, William A.; Phillips, Patrick C.; Ullmann, Bonnie; Currey, Mark; von Hippel, Frank; Kristjánsson, Bjarni K.; Gelmond, Ofer; McGuigan, Katrina

    2014-01-01

Evolution of similar phenotypes in independent populations is often taken as evidence of adaptation to the same fitness optimum. However, the genetic architecture of traits might cause evolution to proceed more often toward particular phenotypes, and less often toward others, independently of the adaptive value of the traits. Freshwater populations of Alaskan threespine stickleback have repeatedly evolved the same distinctive opercle shape after divergence from an oceanic ancestor. Here we demonstrate that this pattern of parallel evolution is widespread, distinguishing oceanic and freshwater populations across the Pacific Coast of North America and Iceland. We test whether this parallel evolution reflects genetic bias by estimating the additive genetic variance–covariance matrix (G) of opercle shape in an Alaskan oceanic (putative ancestral) population. We find significant additive genetic variance for opercle shape and that G has the potential to be biasing, because of the existence of regions of phenotypic space with low additive genetic variation. However, evolution did not occur along major eigenvectors of G, rather it occurred repeatedly in the same directions of high evolvability. We conclude that the parallel opercle evolution is most likely due to selection during adaptation to freshwater habitats, rather than due to biasing effects of opercle genetic architecture. PMID:22276538
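The comparison the authors describe, between the direction of divergence and the major eigenvectors of G, can be made concrete: a small angle between the divergence vector and g_max (the leading eigenvector of G) would indicate evolution along the genetic "line of least resistance". A hypothetical numerical sketch (matrices and vectors below are illustrative, not the study's data):

```python
import numpy as np

def angle_to_gmax(G, dz):
    """Angle in degrees between an observed divergence vector dz and
    g_max, the leading eigenvector of the additive genetic covariance
    matrix G. Sign of the eigenvector is irrelevant, hence abs()."""
    w, V = np.linalg.eigh(G)            # eigenvalues in ascending order
    gmax = V[:, np.argmax(w)]
    cos = abs(gmax @ dz) / (np.linalg.norm(gmax) * np.linalg.norm(dz))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Large angles for repeated, independent freshwater divergences, as reported here, argue against a biasing role for the genetic architecture.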

  4. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  5. Learning control system design based on 2-D theory - An application to parallel link manipulator

    NASA Technical Reports Server (NTRS)

    Geng, Z.; Carroll, R. L.; Lee, J. D.; Haynes, L. H.

    1990-01-01

    An approach to iterative learning control system design based on two-dimensional system theory is presented. A two-dimensional model for the iterative learning control system which reveals the connections between learning control systems and two-dimensional system theory is established. A learning control algorithm is proposed, and the convergence of learning using this algorithm is guaranteed by two-dimensional stability. The learning algorithm is applied successfully to the trajectory tracking control problem for a parallel link robot manipulator. The excellent performance of this learning algorithm is demonstrated by the computer simulation results.
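The learning law itself is simple; what the two-dimensional theory adds is convergence analysis over the (time, iteration) plane. A minimal P-type sketch with a hypothetical first-order plant standing in for the parallel link manipulator (the gain, plant dynamics, and trial counts are illustrative):

```python
import numpy as np

def plant(u, a=0.4):
    """Toy first-order discrete plant y[t+1] = a*y[t] + u[t]
    (a hypothetical stand-in for the manipulator dynamics)."""
    y = np.zeros(len(u))
    for t in range(len(u) - 1):
        y[t + 1] = a * y[t] + u[t]
    return y

def ilc_update(u, e, gain=1.0):
    """P-type learning law u_{k+1}[t] = u_k[t] + gain * e_k[t+1];
    the error is shifted one step to match the plant's input delay."""
    u_new = u.copy()
    u_new[:-1] += gain * e[1:]
    return u_new

def run_ilc(y_ref, n_trials=30):
    """Repeat the trial, updating the input from the tracking error."""
    u = np.zeros(len(y_ref))
    for _ in range(n_trials):
        e = y_ref - plant(u)
        u = ilc_update(u, e)
    return u, y_ref - plant(u)
```

For this plant and gain the tracking error contracts in the sup norm by a factor of a/(1-a) = 2/3 per trial, the kind of guarantee that the 2-D stability analysis delivers for the manipulator.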

  6. Time-dependent Perpendicular Transport of Energetic Particles for Different Turbulence Configurations and Parallel Transport Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lasuik, J.; Shalchi, A., E-mail: andreasm4@yahoo.com

    Recently, a new theory for the transport of energetic particles across a mean magnetic field was presented. Compared to other nonlinear theories the new approach has the advantage that it provides a full time-dependent description of the transport. Furthermore, a diffusion approximation is no longer part of that theory. The purpose of this paper is to combine this new approach with a time-dependent model for parallel transport and different turbulence configurations in order to explore the parameter regimes for which we get ballistic transport, compound subdiffusion, and normal Markovian diffusion.

  7. Design of Unstructured Adaptive (UA) NAS Parallel Benchmark Featuring Irregular, Dynamic Memory Accesses

    NASA Technical Reports Server (NTRS)

    Feng, Hui-Yu; VanderWijngaart, Rob; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We describe the design of a new method for the measurement of the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. The method involves the solution of a stylized heat transfer problem on an unstructured, adaptive grid. A Spectral Element Method (SEM) with an adaptive, nonconforming mesh is selected to discretize the transport equation. The relatively high order of the SEM lowers the fraction of wall clock time spent on inter-processor communication, which eases the load balancing task and allows us to concentrate on the memory accesses. The benchmark is designed to be three-dimensional. Parallelization and load balance issues of a reference implementation will be described in detail in future reports.

  8. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.

As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.

  9. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads

    DOE PAGES

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.; ...

    2016-09-29

As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.

  10. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.

As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.

  11. Carpet: Adaptive Mesh Refinement for the Cactus Framework

    NASA Astrophysics Data System (ADS)

    Schnetter, Erik; Hawley, Scott; Hawke, Ian

    2016-11-01

    Carpet is an adaptive mesh refinement and multi-patch driver for the Cactus Framework (ascl:1102.013). Cactus is a software framework for solving time-dependent partial differential equations on block-structured grids, and Carpet acts as driver layer providing adaptive mesh refinement, multi-patch capability, as well as parallelization and efficient I/O.

  12. Exploring types of play in an adapted robotics program for children with disabilities.

    PubMed

    Lindsay, Sally; Lam, Ashley

    2018-04-01

Play is an important occupation in a child's development. Children with disabilities often have fewer opportunities to engage in meaningful play than typically developing children. The purpose of this study was to explore the types of play (i.e., solitary, parallel and co-operative) within an adapted robotics program for children with disabilities aged 6-8 years. This study draws on detailed observations of each of the six robotics workshops and interviews with 53 participants (21 children, 21 parents and 11 programme staff). Our findings showed that four children engaged in solitary play, where all but one showed signs of moving towards parallel play. Six children demonstrated parallel play during all workshops. The remainder of the children had mixed play types (solitary, parallel and/or co-operative) throughout the robotics workshops. We observed more parallel and co-operative, and less solitary play as the programme progressed. Ten different children displayed co-operative behaviours throughout the workshops. The interviews highlighted how staff supported children's engagement in the programme. Meanwhile, parents reported on their child's development of play skills. An adapted LEGO® robotics program has potential to develop the play skills of children with disabilities in moving from solitary towards more parallel and co-operative play. Implications for rehabilitation Educators and clinicians working with children who have disabilities should consider the potential of LEGO® robotics programs for developing their play skills. Clinicians should consider how the extent of their involvement in prompting and facilitating children's engagement and play within a robotics program may influence their ability to interact with their peers. Educators and clinicians should incorporate both structured and unstructured free-play elements within a robotics program to facilitate children's social development.

  13. Adaptive Nulling for the Terrestrial Planet Finder Interferometer

    NASA Technical Reports Server (NTRS)

    Peters, Robert D.; Lay, Oliver P.; Jeganathan, Muthu; Hirai, Akiko

    2006-01-01

A description of adaptive nulling for the Terrestrial Planet Finder Interferometer (TPF-I) is presented. The topics include: 1) Nulling in TPF-I; 2) Why Do Adaptive Nulling; 3) Parallel High-Order Compensator Design; 4) Phase and Amplitude Control; 5) Development Activities; 6) Requirements; 7) Simplified Experimental Setup; 8) Intensity Correction; and 9) Intensity Dispersion Stability. A short summary is also given on adaptive nulling for TPF-I.

  14. Contrast adaptation in cat visual cortex is not mediated by GABA.

    PubMed

    DeBruyn, E J; Bonds, A B

    1986-09-24

    The possible involvement of gamma-aminobutyric acid (GABA) in contrast adaptation in single cells in area 17 of the cat was investigated. Iontophoretic application of N-methyl bicuculline increased cell responses, but had no effect on the magnitude of adaptation. These results suggest that contrast adaptation is the result of inhibition through a parallel pathway, but that GABA does not mediate this process.

  15. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.

  16. Phase reconstruction using compressive two-step parallel phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Ramachandran, Prakash; Alex, Zachariah C.; Nelleri, Anith

    2018-04-01

The linear relationship between the sample complex object wave and its approximated complex Fresnel field obtained using single-shot parallel phase-shifting digital holograms (PPSDH) is used in a compressive sensing framework, and an accurate phase reconstruction is demonstrated. It is shown that the accuracy of phase reconstruction of this method is better than that of the compressive sensing adapted single exposure in-line holography (SEOL) method. It is derived that the measurement model of the PPSDH method retains both the real and imaginary parts of the Fresnel field, though with an approximation noise, whereas the measurement model of SEOL retains only the real part of the complex Fresnel field and its imaginary part is not available at all. Numerical simulations performed for CS-adapted PPSDH and CS-adapted SEOL demonstrate that the phase reconstruction is accurate for CS-adapted PPSDH and can be used for single-shot digital holographic reconstruction.

  17. PLUM: Parallel Load Balancing for Adaptive Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Saini, Subhash (Technical Monitor)

    1998-01-01

    Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method called PLUM to dynamically balance the processor workloads with a global view. This paper presents the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. A data redistribution model is also presented that predicts the remapping cost on the SP2. This model is required to determine whether the gain from a balanced workload distribution offsets the cost of data movement. Results presented in this paper demonstrate that PLUM is an effective dynamic load balancing strategy which remains viable on a large number of processors.

  18. Asynchronous multilevel adaptive methods for solving partial differential equations on multiprocessors - Performance results

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.

  19. A Roy model study of adapting to being HIV positive.

    PubMed

    Perrett, Stephanie E; Biley, Francis C

    2013-10-01

    Roy's adaptation model outlines a generic process of adaptation useful to nurses in any situation where a patient is facing change. To advance nursing practice, nursing theories and frameworks must be constantly tested and developed through research. This article describes how the results of a qualitative grounded theory study have been used to test components of the Roy adaptation model. A framework for "negotiating uncertainty" was the result of a grounded theory study exploring adaptation to HIV. This framework has been compared to the Roy adaptation model, strengthening concepts such as focal and contextual stimuli, Roy's definition of adaptation and her description of adaptive modes, while suggesting areas for further development including the role of perception. The comparison described in this article demonstrates the usefulness of qualitative research in developing nursing models, specifically highlighting opportunities to continue refining Roy's work.

  20. Two Contrasting Theories.

    ERIC Educational Resources Information Center

    Harris, Kevin

    1984-01-01

    The author compares the liberal idealist theory of education with a Marxist theory. He suggests that in the developing nations, changes in schooling will parallel stages in development, and specific norms, values, and habits will be fostered by the schools. (CT)

  1. A Model for Speedup of Parallel Programs

    DTIC Science & Technology

    1997-01-01

    Sanjeev K. Setia. The interaction between memory allocation and adaptive partitioning in message-passing multicomputers. In IPPS '95 Workshop on Job Scheduling Strategies for Parallel Processing, pages 89-99, 1995. [15] Sanjeev K. Setia and Satish K. Tripathi. A comparative analysis of static

  2. Developing parallel GeoFEST(P) using the PYRAMID AMR library

    NASA Technical Reports Server (NTRS)

    Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Tisdale, Robert E.

    2004-01-01

    The PYRAMID parallel unstructured adaptive mesh refinement (AMR) library has been coupled with the GeoFEST geophysical finite element simulation tool to support parallel active tectonics simulations. Specifically, we have demonstrated modeling of coseismic and postseismic surface displacement due to a simulated earthquake for the Landers system of interacting faults in Southern California. The new software demonstrated a 25-times resolution improvement and a 4-times reduction in time to solution over the sequential baseline milestone case. Simulations on workstations using a few tens of thousands of stress displacement finite elements can now be expanded to multiple millions of elements with greater than 98% scaled efficiency on various parallel platforms over many hundreds of processors. Our most recent work has demonstrated that we can dynamically adapt the computational grid as stress grows on a fault. In this paper, we will describe the major issues and challenges associated with coupling these two programs to create GeoFEST(P). Performance and visualization results will also be described.

  3. Parallel goal-oriented adaptive finite element modeling for 3D electromagnetic exploration

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Key, K.; Ovall, J.; Holst, M.

    2014-12-01

    We present a parallel goal-oriented adaptive finite element method for accurate and efficient electromagnetic (EM) modeling of complex 3D structures. An unstructured tetrahedral mesh allows this approach to accommodate arbitrarily complex 3D conductivity variations and a priori known boundaries. The total electric field is approximated by the lowest order linear curl-conforming shape functions and the discretized finite element equations are solved by a sparse LU factorization. Accuracy of the finite element solution is achieved through adaptive mesh refinement that is performed iteratively until the solution converges to the desired accuracy tolerance. Refinement is guided by a goal-oriented error estimator that uses a dual-weighted residual method to optimize the mesh for accurate EM responses at the locations of the EM receivers. As a result, the mesh refinement is highly efficient since it only targets the elements where the inaccuracy of the solution corrupts the response at the possibly distant locations of the EM receivers. We compare the accuracy and efficiency of two approaches for estimating the primary residual error required at the core of this method: one uses local element and inter-element residuals and the other relies on solving a global residual system using a hierarchical basis. For computational efficiency our method follows the Bank-Holst algorithm for parallelization, where solutions are computed in subdomains of the original model. To resolve the load-balancing problem, this approach applies a spectral bisection method to divide the entire model into subdomains that have approximately equal error and the same number of receivers. The finite element solutions are then computed in parallel with each subdomain carrying out goal-oriented adaptive mesh refinement independently. 
We validate the newly developed algorithm by comparison with controlled-source EM solutions for 1D layered models and with 2D results from our earlier 2D goal-oriented adaptive refinement code MARE2DEM. We demonstrate the performance and parallel scaling of this algorithm on a medium-scale computing cluster with a marine controlled-source EM example that includes a 3D array of receivers located over a 3D model that includes significant seafloor bathymetry variations and a heterogeneous subsurface.
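
    The estimate-mark-refine loop at the heart of such adaptive methods can be illustrated on a toy 1D quadrature problem (a sketch only; the paper's indicators are dual-weighted residuals on tetrahedral meshes):

```python
def adaptive_integrate(f, a, b, tol):
    """Adaptively refine 1D cells until the summed error indicator
    drops below tol, then integrate with Simpson's rule per cell."""
    cells = [(a, b)]
    while True:
        # per-cell indicator: trapezoid-vs-midpoint discrepancy
        errs = [abs((right - left) * ((f(left) + f(right)) / 2
                                      - f((left + right) / 2)))
                for left, right in cells]
        if sum(errs) < tol:
            break
        worst = max(range(len(cells)), key=errs.__getitem__)  # mark
        left, right = cells.pop(worst)
        mid = (left + right) / 2
        cells += [(left, mid), (mid, right)]                  # refine
    return sum((right - left) * (f(left) + 4 * f((left + right) / 2)
                                 + f(right)) / 6
               for left, right in cells)

print(adaptive_integrate(lambda x: x ** 2, 0.0, 1.0, 1e-6))
```

    Goal-oriented refinement replaces the plain indicator above with one weighted by a dual (adjoint) solution, so that only cells affecting the receiver responses are refined.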

  4. A conceptual model of children's cognitive adaptation to physical disability.

    PubMed

    Bernardo, M L

    1982-11-01

    Increasing numbers of children are being required to adapt to lifelong illness and disability. While numerous studies exist on theories of adaptation, reaction to illness, and children's concepts of self and of illness, an integrated view of children's ability to conceptualize themselves, their disabilities and possible adaptations has not been formulated. In this article an attempt has been made to integrate models of adaptation to disability and knowledge about children's cognitive development using Piagetian theory of cognitive development and Crate's stages of adaptation to chronic illness. This conceptually integrated model can be used as a departure point for studies to validate the applicability of Piaget's theory to the development of the physically disabled child and to clinically assess the adaptational stages available to the child at various developmental stages.

  5. Flexbar 3.0 - SIMD and multicore parallelization.

    PubMed

    Roehr, Johannes T; Dieterich, Christoph; Reinert, Knut

    2017-09-15

    High-throughput sequencing machines can process many samples in a single run. For Illumina systems, sequencing reads are barcoded with an additional DNA tag that is contained in the respective sequencing adapters. The recognition of barcode and adapter sequences is hence commonly needed for the analysis of next-generation sequencing data. Flexbar performs demultiplexing based on barcodes and adapter trimming for such data. The massive amounts of data generated on modern sequencing machines demand that this preprocessing is done as efficiently as possible. We present Flexbar 3.0, the successor of the popular program Flexbar. It now employs twofold parallelism: multi-threading and, additionally, SIMD vectorization. Both types of parallelism are used to speed up the computation of pair-wise sequence alignments, which are used for the detection of barcodes and adapters. Furthermore, new features were included to cover a wide range of applications. We evaluated the performance of Flexbar on a simulated sequencing dataset. Our program outcompetes other tools in terms of speed and is among the best tools in the presented quality benchmark. Availability: https://github.com/seqan/flexbar. Contact: johannes.roehr@fu-berlin.de or knut.reinert@fu-berlin.de.
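
    Adapter trimming ultimately reduces to finding an overlap between the read's 3' end and the adapter's 5' end; a mismatch-free toy version of that step (Flexbar itself scores full pairwise alignments via SeqAn, which this does not reproduce):

```python
def trim_adapter(read, adapter, min_overlap=3):
    """Remove an adapter whose prefix matches a suffix of the read,
    preferring the longest exact overlap."""
    for k in range(min(len(read), len(adapter)), min_overlap - 1, -1):
        if read[-k:] == adapter[:k]:
            return read[:-k]
    return read

print(trim_adapter("ACGTACGTAGATCGG", "AGATCGGAAGAGC"))  # adapter found, trimmed
print(trim_adapter("ACGTACGT", "AGATCGGAAGAGC"))         # no overlap, unchanged
```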

  6. Magnitude of parallel pseudo potential in a magnetosonic shock wave

    NASA Astrophysics Data System (ADS)

    Ohsawa, Yukiharu

    2018-05-01

    The parallel pseudo potential F, defined as the integral of the parallel electric field along the magnetic field, in a large-amplitude magnetosonic pulse (shock wave) is theoretically studied. Particle simulations revealed in the late 1990s that the product of the elementary charge and F can be much larger than the electron temperature in shock waves, i.e., the parallel electric field can be quite strong. However, no theory was presented for this unexpected result. This paper first revisits the small-amplitude theory for F and then investigates F in large-amplitude pulses based on the two-fluid model with finite thermal pressures. It is found that the magnitude of F in a shock wave is determined by the wave amplitude, the electron temperature, and the kinetic energy of an ion moving at the Alfvén speed. This theoretically obtained expression for F is nearly identical to the empirical relation discovered in the previous simulation work.
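
    The definition the abstract revisits can be stated compactly as a field-line integral (notation and sign convention chosen here for illustration, not copied from the paper):

```latex
F(s) = -\int_{s_0}^{s} \boldsymbol{E}\cdot\hat{\boldsymbol{b}}\,\mathrm{d}s',
\qquad \hat{\boldsymbol{b}} = \frac{\boldsymbol{B}}{|\boldsymbol{B}|},
```

    and the simulation result being explained is that $eF \gg k_B T_e$ is possible in large-amplitude shocks, which the two-fluid analysis ties to the wave amplitude, the electron temperature, and the ion kinetic energy at the Alfvén speed.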

  7. Implementation of a flexible and scalable particle-in-cell method for massively parallel computations in the mantle convection code ASPECT

    NASA Astrophysics Data System (ADS)

    Gassmöller, Rene; Bangerth, Wolfgang

    2016-04-01

    Particle-in-cell methods have a long history and many applications in geodynamic modelling of mantle convection, lithospheric deformation and crustal dynamics. They are primarily used to track material information, the strain a material has undergone, the pressure-temperature history a certain material region has experienced, or the amount of volatiles or partial melt present in a region. However, their efficient parallel implementation - in particular combined with adaptive finite-element meshes - is complicated due to the complex communication patterns and frequent reassignment of particles to cells. Consequently, many current scientific software packages accomplish this efficient implementation by specifically designing particle methods for a single purpose, like the advection of scalar material properties that do not evolve over time (e.g., for chemical heterogeneities). Design choices for particle integration, data storage, and parallel communication are then optimized for this single purpose, making the code relatively rigid to changing requirements. Here, we present the implementation of a flexible, scalable and efficient particle-in-cell method for massively parallel finite-element codes with adaptively changing meshes. Using a modular plugin structure, we allow maximum flexibility of the generation of particles, the carried tracer properties, the advection and output algorithms, and the projection of properties to the finite-element mesh. We present scaling tests ranging up to tens of thousands of cores and tens of billions of particles. Additionally, we discuss efficient load-balancing strategies for particles in adaptive meshes with their strengths and weaknesses, local particle-transfer between parallel subdomains utilizing existing communication patterns from the finite element mesh, and the use of established parallel output algorithms like the HDF5 library. 
Finally, we show some relevant particle application cases, compare our implementation to a modern advection-field approach, and demonstrate under which conditions which method is more efficient. We implemented the presented methods in ASPECT (aspect.dealii.org), a freely available open-source community code for geodynamic simulations. The structure of the particle code is highly modular, and segregated from the PDE solver, and can thus be easily transferred to other programs, or adapted for various application cases.
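
    The per-step bookkeeping that makes such methods communication-heavy (advect particles, then reassign each one to a cell) looks like this on a uniform 1D stand-in grid; ASPECT's real implementation does the same on adaptively refined 3D meshes with MPI transfers, and every name here is illustrative:

```python
import numpy as np

n_cells, dt = 4, 0.1
edges = np.linspace(0.0, 1.0, n_cells + 1)   # uniform cell boundaries
pos = np.array([0.05, 0.30, 0.55, 0.80])     # particle positions
vel = np.array([1.0, 1.0, -1.0, 1.0])        # advection velocities

pos = np.clip(pos + vel * dt, 0.0, 1.0 - 1e-12)  # advect (clamped to domain)
cell_of = np.digitize(pos, edges) - 1            # reassign particles to cells
print(cell_of.tolist())
```

    On an adaptive parallel mesh, particles whose new cell lives on another processor must then be shipped across subdomain boundaries, which is where the existing finite-element communication patterns are reused.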

  8. A Domain-Decomposed Multilevel Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2000-01-01

    Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries shows a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
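
    The space-filling-curve decomposition can be sketched with Morton (Z-order) indices: sort cells along the curve, then cut the sequence into equal contiguous chunks, one per processor (a 2D toy; the solver works on adaptively refined 3D Cartesian meshes):

```python
def morton(x, y, bits=8):
    """Interleave the bits of (x, y) to get the cell's Z-order index."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

cells = [(x, y) for x in range(4) for y in range(4)]
ordered = sorted(cells, key=lambda c: morton(*c))
nproc = 4
parts = [ordered[len(ordered) * p // nproc: len(ordered) * (p + 1) // nproc]
         for p in range(nproc)]
print(parts[0])  # first chunk is a spatially compact 2x2 quadrant
```

    The curve's locality is what makes the cut-into-chunks step a reasonable partitioner: consecutive cells along the curve are spatially near each other.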

  9. Understanding Students' Adaptation to Graduate School: An Integration of Social Support Theory and Social Learning Theory

    ERIC Educational Resources Information Center

    Tsay, Crystal Han-Huei

    2012-01-01

    The contemporary business world demands adaptive individuals (Friedman & Wyman, 2005). Adaptation is essential for any life transition. It often involves developing coping mechanisms and strategies and seeking social support. Adaptation occurs in many settings, from moving to a new culture, taking a new job, starting or finishing an…

  10. Detecting free-mass common-mode motion induced by incident gravitational waves

    NASA Astrophysics Data System (ADS)

    Tobar, Michael Edmund; Suzuki, Toshikazu; Kuroda, Kazuaki

    1999-05-01

    In this paper we show that information on both the differential and common mode free-mass response to a gravitational wave can provide important information on discriminating the direction of the gravitational wave source and between different theories of gravitation. The conventional Michelson interferometer scheme only measures the differential free-mass response. By changing the orientation of the beam splitter, it is possible to configure the detector so it is sensitive to the common-mode of the free-mass motion. The proposed interferometer is an adaptation of the Fox-Smith interferometer. A major limitation to the new scheme is its enhanced sensitivity to laser frequency fluctuations over the conventional, and we propose a method of cancelling these fluctuations. The configuration could be used in parallel to the conventional differential detection scheme with a significant sensitivity and bandwidth.

  11. Nothing in the History of Spanish Anís Makes Sense, Except in the Light of Evolution

    NASA Astrophysics Data System (ADS)

    Delgado, Juan Antonio; Palma, Ricardo Luis

    2011-02-01

    We describe, discuss and illustrate a metaphoric parallel between the history of the most famous Spanish liqueur, " Anís del Mono" ( Anís of the Monkey), and the evolution of living organisms in the light of Darwinian theory and other biological hypotheses published subsequent to Charles Darwin's Origin of Species. Also, we report the use of a caricature of a simian Darwin with a positive connotation, perhaps the only one ever produced. We conclude that, like some species in the natural world, Anís of the Monkey has evolved, adapted, survived and become the fittest and most successful anís in the Spanish market and possibly the world. We hope this paper will contribute a new useful metaphor for the teaching of biological evolution.

  12. Mimesis: Linking Postmodern Theory to Human Behavior

    ERIC Educational Resources Information Center

    Dybicz, Phillip

    2010-01-01

    This article elaborates mimesis as a theory of causality used to explain human behavior. Drawing parallels to social constructionism's critique of positivism and naturalism, mimesis is offered as a theory of causality explaining human behavior that contests the current dominance of Newton's theory of causality as cause and effect. The contestation…

  13. Emergent "Quantum" Theory in Complex Adaptive Systems.

    PubMed

    Minic, Djordje; Pajevic, Sinisa

    2016-04-30

    Motivated by the question of stability, in this letter we argue that an effective quantum-like theory can emerge in complex adaptive systems. In the concrete example of stochastic Lotka-Volterra dynamics, the relevant effective "Planck constant" associated with such emergent "quantum" theory has the dimensions of the square of the unit of time. Such an emergent quantum-like theory has inherently non-classical stability as well as coherent properties that are not, in principle, endangered by thermal fluctuations and therefore might be of crucial importance in complex adaptive systems.

  14. Emergent “Quantum” Theory in Complex Adaptive Systems

    PubMed Central

    Minic, Djordje; Pajevic, Sinisa

    2017-01-01

    Motivated by the question of stability, in this letter we argue that an effective quantum-like theory can emerge in complex adaptive systems. In the concrete example of stochastic Lotka-Volterra dynamics, the relevant effective “Planck constant” associated with such emergent “quantum” theory has the dimensions of the square of the unit of time. Such an emergent quantum-like theory has inherently non-classical stability as well as coherent properties that are not, in principle, endangered by thermal fluctuations and therefore might be of crucial importance in complex adaptive systems. PMID:28890591

  15. Adaptation Research in Rehabilitation Counseling

    ERIC Educational Resources Information Center

    Parker, Randall M.

    2007-01-01

    This paper reviews current research concerning psychosocial adaptation to chronic illness and disability and presents recommendations for future development of theories in this area. First, those who craft or adapt theories must use nondisabling, respectful, and empowering language. Rehabilitation professionals must avoid terms that connote…

  16. Parallel implementation of Hartree-Fock and density functional theory analytical second derivatives

    NASA Astrophysics Data System (ADS)

    Baker, Jon; Wolinski, Krzysztof; Malagoli, Massimo; Pulay, Peter

    2004-01-01

    We present an efficient, parallel implementation for the calculation of Hartree-Fock and density functional theory analytical Hessian (force constant, nuclear second derivative) matrices. These are important for the determination of harmonic vibrational frequencies, and to classify stationary points on potential energy surfaces. Our program is designed for modest parallelism (4-16 CPUs) as exemplified by our standard eight-processor QuantumCube™. We can routinely handle systems with up to 100+ atoms and 1000+ basis functions using under 0.5 GB of RAM memory per CPU. Timings are presented for several systems, ranging in size from aspirin (C9H8O4) to nickel octaethylporphyrin (C36H44N4Ni).
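
    The Hessian's role in vibrational analysis can be shown in a few lines: mass-weight it and diagonalize, with the eigenvalues giving squared harmonic frequencies. The two-mass toy Hessian below is illustrative, not output of the program described:

```python
import numpy as np

# two unit masses coupled by one spring k: modes are omega^2 = 0 (translation)
# and omega^2 = 2k (stretch)
k = 4.0
H = np.array([[k, -k],
              [-k, k]])          # Cartesian Hessian (second derivatives)
masses = np.array([1.0, 1.0])

Minv_sqrt = np.diag(1.0 / np.sqrt(masses))
w2 = np.linalg.eigvalsh(Minv_sqrt @ H @ Minv_sqrt)  # mass-weighted eigenvalues
freqs = np.sqrt(np.clip(w2, 0.0, None))             # harmonic frequencies
print(freqs.tolist())
```

    Zero eigenvalues correspond to rigid-body motions; negative ones (clipped here) would flag a saddle point rather than a minimum, which is exactly how Hessians classify stationary points.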

  17. Plasma and energetic particle structure of a collisionless quasi-parallel shock

    NASA Technical Reports Server (NTRS)

    Kennel, C. F.; Scarf, F. L.; Coroniti, F. V.; Russell, C. T.; Smith, E. J.; Wenzel, K. P.; Reinhard, R.; Sanderson, T. R.; Feldman, W. C.; Parks, G. K.

    1983-01-01

    The quasi-parallel interplanetary shock of November 11-12, 1978 was studied from both the collisionless-shock and energetic-particle points of view using measurements of the interplanetary magnetic and electric fields, solar wind electrons, plasma and MHD waves, and intermediate- and high-energy ions obtained on ISEE-1, -2, and -3. The interplanetary environment through which the shock was propagating when it encountered the three spacecraft is characterized; the observations of this shock are documented and current theories of quasi-parallel shock structure and particle acceleration are tested. These observations tend to confirm present self-consistent theories of first-order Fermi acceleration by shocks and of collisionless shock dissipation involving the firehose instability.

  18. Memory Retrieval Given Two Independent Cues: Cue Selection or Parallel Access?

    ERIC Educational Resources Information Center

    Rickard, Timothy C.; Bajic, Daniel

    2004-01-01

    A basic but unresolved issue in the study of memory retrieval is whether multiple independent cues can be used concurrently (i.e., in parallel) to recall a single, common response. A number of empirical results, as well as potentially applicable theories, suggest that retrieval can proceed in parallel, though Rickard (1997) set forth a model that…

  19. Parallel and Serial Grouping of Image Elements in Visual Perception

    ERIC Educational Resources Information Center

    Houtkamp, Roos; Roelfsema, Pieter R.

    2010-01-01

    The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that Gestalt grouping can indeed occur in parallel in some…

  20. Massively parallel and linear-scaling algorithm for second-order Møller-Plesset perturbation theory applied to the study of supramolecular wires

    NASA Astrophysics Data System (ADS)

    Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro; Eriksen, Janus Juul; Ettenhuber, Patrick; Kristensen, Kasper; Larkin, Jeff; Liakh, Dmitry; Pawłowski, Filip; Vose, Aaron; Wang, Yang Min; Jørgensen, Poul

    2017-03-01

    We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide-Expand-Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide-Expand-Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, combining a loosely coupled task-based parallelization approach with the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller-Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.

  1. Adaptive Environment for Supercompiling with Optimized Parallelism (AESOP)

    DTIC Science & Technology

    2011-09-01

    Final report covering 09 March 2009 through 31 July 2011. Contents include the system characterization loop, integration points for AESOP, and the LLVM-based AESOP compiler.

  2. Adaptive implicit-explicit and parallel element-by-element iteration schemes

    NASA Technical Reports Server (NTRS)

    Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.

    1989-01-01

    Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation more clear, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests performed demonstrate the savings in the CPU time and memory.
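
    The GEBE idea of statically arranging elements into groups with no inter-element coupling is, in effect, a graph-coloring pass over the mesh; a greedy sketch (the element connectivity and grouping rule here are illustrative):

```python
def group_elements(elements):
    """Place each element (a tuple of node indices) into the first group
    containing no element that shares a node with it; elements within a
    group can then be processed in parallel."""
    groups = []
    for elem in elements:
        for group in groups:
            if all(not set(elem) & set(other) for other in group):
                group.append(elem)
                break
        else:
            groups.append([elem])
    return groups

mesh = [(0, 1), (1, 2), (2, 3), (3, 4)]  # 1D chain of four 2-node elements
print(group_elements(mesh))
```

    For the chain above, alternating elements land in two groups, so each group's element-level operations can run concurrently with no shared nodes.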

  3. Intelligent flight control systems

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.

    1993-01-01

    The capabilities of flight control systems can be enhanced by designing them to emulate functions of natural intelligence. Intelligent control functions fall in three categories. Declarative actions involve decision-making, providing models for system monitoring, goal planning, and system/scenario identification. Procedural actions concern skilled behavior and have parallels in guidance, navigation, and adaptation. Reflexive actions are spontaneous, inner-loop responses for control and estimation. Intelligent flight control systems learn knowledge of the aircraft and its mission and adapt to changes in the flight environment. Cognitive models form an efficient basis for integrating 'outer-loop/inner-loop' control functions and for developing robust parallel-processing algorithms.

  4. Evolving institutional and policy frameworks to support adaptation strategies

    Treesearch

    Dave Cleaves

    2014-01-01

    Given the consequences and opportunities of the Anthropocene, what is our underlying theory or vision of successful adaptation? This essay discusses the building blocks of such a theory and how we will translate it into guiding principles for management and policy.

  5. A Theory of Complex Adaptive Inquiring Organizations: Application to Continuous Assurance of Corporate Financial Information

    ERIC Educational Resources Information Center

    Kuhn, John R., Jr.

    2009-01-01

    Drawing upon the theories of complexity and complex adaptive systems and the Singerian Inquiring System from C. West Churchman's seminal work "The Design of Inquiring Systems" the dissertation herein develops a systems design theory for continuous auditing systems. The dissertation consists of discussion of the two foundational theories,…

  6. Similar traits, different genes? Examining convergent evolution in related weedy rice populations

    USDA-ARS?s Scientific Manuscript database

    Convergent phenotypic evolution may or may not be associated with parallel genotypic evolution. Agricultural weeds have repeatedly been selected for weed-adaptive traits such as rapid growth, increased seed dispersal and dormancy, thus providing an ideal system for the study of parallel evolution. H...

  7. Distributed Learning, Recognition, and Prediction by ART and ARTMAP Neural Networks.

    PubMed

    Carpenter, Gail A.

    1997-11-01

    A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low frequency test inputs but can resemble long-term depression (LTD) for higher frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network.
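
    The "dynamic weight" rule described above is simple to state in isolation; a sketch of just that rule, outside any full ART network (function name and values are illustrative):

```python
def dynamic_weight(activation, threshold):
    """dART dynamic weight: the rectified difference between a coding
    node's activation and its adaptive threshold."""
    return max(activation - threshold, 0.0)

# thresholds only increase during learning ("atrophy due to disuse"),
# so the same activation yields a smaller effective weight later on
print(dynamic_weight(0.75, 0.25))
print(dynamic_weight(0.75, 0.90))
```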

  8. The effects of training group exercise class instructors to adopt a motivationally adaptive communication style.

    PubMed

    Ntoumanis, N; Thøgersen-Ntoumani, C; Quested, E; Hancox, J

    2017-09-01

    Drawing from self-determination theory (Deci & Ryan, 2002), we developed and tested an intervention to train fitness instructors to adopt a motivationally adaptive communication style when interacting with exercisers. This was a parallel group, two-arm quasi-experimental design. Participants in the intervention arm were 29 indoor cycling instructors (n = 10 for the control arm) and 246 class members (n = 75 for the control arm). The intervention consisted of face-to-face workshops, education/information video clips, group discussions and activities, brainstorming, individual planning, and practical tasks in the cycling studio. Instructors and exercisers responded to validated questionnaires about instructors' use of motivational strategies and other motivation-related variables before the first workshop and at the end of the third and final workshop (4 months later). Time × arm interactions revealed no significant effects, possibly due to the large attrition of instructors and exercisers in the control arm. Within-group analyses in the intervention arm showed that exercisers' perceptions of instructor motivationally adaptive strategies, psychological need satisfaction, and intentions to remain in the class increased over time. Similarly, instructors in the intervention arm reported being less controlling and experiencing more need satisfaction over time. These results offer initial promising evidence for the positive impact of the training.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liakh, Dmitry I

    While the formalism of multiresolution analysis (MRA), based on wavelets and adaptive integral representations of operators, is actively progressing in electronic structure theory (mostly on the independent-particle level and, recently, second-order perturbation theory), the concepts of multiresolution and adaptivity can also be utilized within the traditional formulation of correlated (many-particle) theory which is based on second quantization and the corresponding (generally nonorthogonal) tensor algebra. In this paper, we present a formalism called scale-adaptive tensor algebra (SATA) which exploits an adaptive representation of tensors of many-body operators via the local adjustment of the basis set quality. Given a series of locally supported fragment bases of a progressively lower quality, we formulate the explicit rules for tensor algebra operations dealing with adaptively resolved tensor operands. The formalism suggested is expected to enhance the applicability and reliability of local correlated many-body methods of electronic structure theory, especially those directly based on atomic orbitals (or any other localized basis functions).

  10. Qualitative Differences between Naive and Scientific Theories of Evolution

    ERIC Educational Resources Information Center

    Shtulman, Andrew

    2006-01-01

    Philosophers of biology have long argued that Darwin's theory of evolution was qualitatively different from all earlier theories of evolution. Whereas Darwin's predecessors and contemporaries explained adaptation as the transformation of a species' ''essence,'' Darwin explained adaptation as the selective propagation of randomly occurring…

  11. Synthetic consciousness: the distributed adaptive control perspective

    PubMed Central

    2016-01-01

    Understanding the nature of consciousness is one of the grand outstanding scientific challenges. The fundamental methodological problem is how phenomenal first person experience can be accounted for in a third person verifiable form, while the conceptual challenge is to both define its function and physical realization. The distributed adaptive control theory of consciousness (DACtoc) proposes answers to these three challenges. The methodological challenge is answered relative to the hard problem and DACtoc proposes that it can be addressed using a convergent synthetic methodology using the analysis of synthetic biologically grounded agents, or quale parsing. DACtoc hypothesizes that consciousness in both its primary and secondary forms serves the ability to deal with the hidden states of the world and emerged during the Cambrian period, affording stable multi-agent environments to emerge. The process of consciousness is an autonomous virtualization memory, which serializes and unifies the parallel and subconscious simulations of the hidden states of the world that are largely due to other agents and the self with the objective to extract norms. These norms are in turn projected as value onto the parallel simulation and control systems that are driving action. This functional hypothesis is mapped onto the brainstem, midbrain and the thalamo-cortical and cortico-cortical systems and analysed with respect to our understanding of deficits of consciousness. Subsequently, some of the implications and predictions of DACtoc are outlined, in particular, the prediction that normative bootstrapping of conscious agents is predicated on an intentionality prior. In the view advanced here, human consciousness constitutes the ultimate evolutionary transition by allowing agents to become autonomous with respect to their evolutionary priors leading to a post-biological Anthropocene. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. 
PMID:27431526

  12. Synthetic consciousness: the distributed adaptive control perspective.

    PubMed

    Verschure, Paul F M J

    2016-08-19

    Understanding the nature of consciousness is one of the grand outstanding scientific challenges. The fundamental methodological problem is how phenomenal first person experience can be accounted for in a third person verifiable form, while the conceptual challenge is to both define its function and physical realization. The distributed adaptive control theory of consciousness (DACtoc) proposes answers to these three challenges. The methodological challenge is answered relative to the hard problem and DACtoc proposes that it can be addressed using a convergent synthetic methodology using the analysis of synthetic biologically grounded agents, or quale parsing. DACtoc hypothesizes that consciousness in both its primary and secondary forms serves the ability to deal with the hidden states of the world and emerged during the Cambrian period, affording stable multi-agent environments to emerge. The process of consciousness is an autonomous virtualization memory, which serializes and unifies the parallel and subconscious simulations of the hidden states of the world that are largely due to other agents and the self with the objective to extract norms. These norms are in turn projected as value onto the parallel simulation and control systems that are driving action. This functional hypothesis is mapped onto the brainstem, midbrain and the thalamo-cortical and cortico-cortical systems and analysed with respect to our understanding of deficits of consciousness. Subsequently, some of the implications and predictions of DACtoc are outlined, in particular, the prediction that normative bootstrapping of conscious agents is predicated on an intentionality prior. In the view advanced here, human consciousness constitutes the ultimate evolutionary transition by allowing agents to become autonomous with respect to their evolutionary priors leading to a post-biological Anthropocene. This article is part of the themed issue 'The major synthetic evolutionary transitions'.
© 2016 The Author(s).

  13. Towards a large-scale scalable adaptive heart model using shallow tree meshes

    NASA Astrophysics Data System (ADS)

    Krause, Dorian; Dickopf, Thomas; Potse, Mark; Krause, Rolf

    2015-10-01

    Electrophysiological heart models are sophisticated computational tools that place high demands on the computing hardware due to the high spatial resolution required to capture the steep depolarization front. To address this challenge, we present a novel adaptive scheme for resolving the depolarization front accurately using adaptivity in space. Our adaptive scheme is based on locally structured meshes. These tensor meshes in space are organized in a parallel forest of trees, which allows us to resolve complicated geometries and to realize high variations in the local mesh sizes with a minimal memory footprint in the adaptive scheme. We discuss both a non-conforming mortar element approximation and a conforming finite element space and present an efficient technique for the assembly of the respective stiffness matrices using matrix representations of the inclusion operators into the product space on the so-called shallow tree meshes. We analyzed the parallel performance and scalability for a two-dimensional ventricle slice as well as for a full large-scale heart model. Our results demonstrate that the method has good performance and high accuracy.
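    The idea of concentrating resolution near a steep front can be illustrated with a 1-D caricature; this sketch assumes a simple jump-based bisection criterion and is not the paper's method, which uses forests of locally structured tensor meshes in three dimensions.

```python
import numpy as np

def adapt_intervals(cells, f, tol, min_width=1e-3):
    """Bisect any interval over which f jumps by more than tol,
    concentrating cells near steep fronts (toy refinement criterion)."""
    out, stack = [], list(cells)
    while stack:
        a, b = stack.pop()
        if abs(f(b) - f(a)) > tol and (b - a) > min_width:
            m = 0.5 * (a + b)
            stack += [(a, m), (m, b)]
        else:
            out.append((a, b))
    return sorted(out)

# A steep depolarization-like front at x = 0.5
front = lambda x: np.tanh(50.0 * (x - 0.5))
mesh = adapt_intervals([(0.0, 1.0)], front, tol=0.2)
```

The resulting cells shrink near x = 0.5 and stay coarse elsewhere; the memory saving this buys in 1-D is what the shallow-tree construction scales up to realistic heart geometries.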

  14. Linearized potential solution for an airfoil in nonuniform parallel streams

    NASA Technical Reports Server (NTRS)

    Prabhu, R. K.; Tiwari, S. N.

    1983-01-01

    A small perturbation potential flow theory is applied to the problem of determining the chordwise pressure distribution, lift and pitching moment of a thin airfoil in the middle of five parallel streams. This theory is then extended to the case of an undisturbed stream having a given smooth velocity profile. Two typical examples are considered and the results obtained are compared with available solutions of Euler's equations. The agreement between these two results is not quite satisfactory. Possible reasons for the differences are indicated.

  15. Transitioning NWChem to the Next Generation of Manycore Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bylaska, Eric J.; Apra, Edoardo; Kowalski, Karol

    The NorthWest Chemistry (NWChem) modeling software is a popular molecular chemistry simulation software that was designed from the start to work on massively parallel processing supercomputers[6, 28, 49]. It contains an umbrella of modules that today includes Self Consistent Field (SCF), second order Møller-Plesset perturbation theory (MP2), Coupled Cluster, multi-configuration self-consistent field (MCSCF), selected configuration interaction (CI), tensor contraction engine (TCE) many body methods, density functional theory (DFT), time-dependent density functional theory (TDDFT), real time time-dependent density functional theory, pseudopotential plane-wave density functional theory (PSPW), band structure (BAND), ab initio molecular dynamics, Car-Parrinello molecular dynamics, classical molecular dynamics (MD), QM/MM, AIMD/MM, GIAO NMR, COSMO, COSMO-SMD, and RISM solvation models, free energy simulations, reaction path optimization, parallel in time, among other capabilities[22]. Moreover, new capabilities continue to be added with each new release.

  16. Early-Life Stressors, Personality Development, and Fast Life Strategies: An Evolutionary Perspective on Malevolent Personality Features.

    PubMed

    Csathó, Árpád; Birkás, Béla

    2018-01-01

    Life history theory posits that behavioral adaptation to various environmental (ecological and/or social) conditions encountered during childhood is regulated by a wide variety of different traits resulting in various behavioral strategies. Unpredictable and harsh conditions tend to produce fast life history strategies, characterized by early maturation, a higher number of sexual partners to whom one is less attached, and less parenting of offspring. Unpredictability and harshness not only affect dispositional social and emotional functioning, but may also promote the development of personality traits linked to higher rates of instability in social relationships or more self-interested behavior. Similarly, detrimental childhood experiences, such as poor parental care or high parent-child conflict, affect personality development and may create a more distrustful, malicious interpersonal style. The aim of this brief review is to survey and summarize findings on the impact of negative early-life experiences on the development of personality and fast life history strategies. By demonstrating that there are parallels in adaptations to adversity in these two domains, we hope to lend weight to current and future attempts to provide comprehensive insight into personality traits and functions at the ultimate and proximate levels.

  17. Time-dependent density-functional theory in massively parallel computer architectures: the octopus project

    NASA Astrophysics Data System (ADS)

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A.; Oliveira, Micael J. T.; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G.; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A. L.

    2012-06-01

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.

  18. Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.

    PubMed

    Saccenti, Edoardo; Timmerman, Marieke E

    2017-03-01

    Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
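    The basic procedure the authors analyze can be sketched as follows. This is a minimal illustrative version working on the correlation matrix, with function names, the number of simulations, and the 95th-percentile threshold chosen for exposition rather than taken from the paper.

```python
import numpy as np

def parallel_analysis(X, n_sim=200, quantile=95, seed=0):
    """Horn's parallel analysis (sketch): retain components whose
    observed correlation-matrix eigenvalues exceed the chosen
    quantile of eigenvalues from random normal data of equal shape."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sim = np.empty((n_sim, p))
    for i in range(n_sim):
        R = rng.standard_normal((n, p))
        sim[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
    thresholds = np.percentile(sim, quantile, axis=0)
    return int(np.sum(obs > thresholds))
```

In the abstract's terms, the comparison for the first (largest) eigenvalue behaves like a Tracy-Widom test, whereas applying per-position quantiles to the higher-order eigenvalues is the step the authors argue should be discouraged.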

  19. Time-dependent density-functional theory in massively parallel computer architectures: the OCTOPUS project.

    PubMed

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A; Oliveira, Micael J T; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A L

    2012-06-13

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.

  20. Adaptive Disturbance Tracking Theory with State Estimation and State Feedback for Region II Control of Large Wind Turbines

    NASA Technical Reports Server (NTRS)

    Balas, Mark J.; Thapa Magar, Kaman S.; Frost, Susan A.

    2013-01-01

    A theory called Adaptive Disturbance Tracking Control (ADTC) is introduced and used to track the Tip Speed Ratio (TSR) of a 5 MW Horizontal Axis Wind Turbine (HAWT). Since ADTC theory requires wind speed information, a wind disturbance generator model is combined with a lower order plant model to estimate the wind speed as well as partial states of the wind turbine. In this paper, we present a proof of stability and convergence of ADTC theory with a lower order estimator and show that the state feedback can be adaptive.

  1. Performance evaluation of GPU parallelization, space-time adaptive algorithms, and their combination for simulating cardiac electrophysiology.

    PubMed

    Sachetto Oliveira, Rafael; Martins Rocha, Bernardo; Burgarelli, Denise; Meira, Wagner; Constantinides, Christakis; Weber Dos Santos, Rodrigo

    2018-02-01

    The use of computer models as a tool for the study and understanding of the complex phenomena of cardiac electrophysiology has attained increased importance nowadays. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. To speed up cardiac simulations and to allow more precise and realistic uses, 2 different techniques have been traditionally exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore and graphics processing units (GPUs) and a sophisticated numerical method based on a new space-time adaptive algorithm. We evaluate each technique alone and in different combinations: multicore and GPU, multicore and GPU and space adaptivity, multicore and GPU and space adaptivity and time adaptivity. All the techniques and combinations were evaluated under different scenarios: 3D simulations on slabs, 3D simulations on a ventricular mouse mesh, i.e., complex geometry, sinus-rhythm, and arrhythmic conditions. Our results suggest that multicore and GPU accelerate the simulations by an approximate factor of 33×, whereas the speedups attained by the space-time adaptive algorithms were approximately 48×. Nevertheless, by combining all the techniques, we obtained speedups that ranged between 165× and 498×. The tested methods were able to reduce the execution time of a simulation by more than 498× for a complex cellular model in a slab geometry and by 165× in a realistic heart geometry simulating spiral waves. The proposed methods will allow faster and more realistic simulations in a feasible time with no significant loss of accuracy. Copyright © 2017 John Wiley & Sons, Ltd.

  2. Implementation of an Improved Adaptive Testing Theory

    ERIC Educational Resources Information Center

    Al-A'ali, Mansoor

    2007-01-01

    Computer adaptive testing is the study of scoring tests and questions based on assumptions concerning the mathematical relationship between examinees' ability and the examinees' responses. Adaptive student tests, which are based on item response theory (IRT), have many advantages over conventional tests. We use the least square method, a…

  3. A Theory of Secondary Teachers' Adaptations When Implementing a Reading Intervention Program

    ERIC Educational Resources Information Center

    Leko, Melinda M.; Roberts, Carly A.; Pek, Yvonne

    2015-01-01

    This study examined the causes and consequences of secondary teachers' adaptations when implementing a research-based reading intervention program. Interview, observation, and artifact data were collected on five middle school intervention teachers, leading to a grounded theory composed of the core component, reconciliation through adaptation, and…

  4. Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.

    2012-09-01

    Computer simulations are important in current cosmological research. Those simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs dedicated software to be visualized, as generic visualization tools work on Cartesian grid data types. This is why the PYMSES software has also been developed by our team. It relies on the Python scripting language to ensure modular and easy access for exploring these specific data. In order to take advantage of the high-performance computer which runs the RAMSES simulation, it also uses MPI and multiprocessing to run some parallel code. We present our PYMSES software in more detail, with some performance benchmarks. PYMSES currently has two visualization techniques which work directly on the AMR. The first is a splatting technique, and the second is a custom ray-tracing technique. Both have their own advantages and drawbacks. We have also compared two parallel programming techniques: the Python multiprocessing library versus MPI runs. The load balancing strategy has to be smartly defined in order to achieve a good speedup in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.

  5. Development and Standardization of the Diagnostic Adaptive Behavior Scale: Application of Item Response Theory to the Assessment of Adaptive Behavior

    ERIC Educational Resources Information Center

    Tassé, Marc J.; Schalock, Robert L.; Thissen, David; Balboni, Giulia; Bersani, Henry, Jr.; Borthwick-Duffy, Sharon A.; Spreat, Scott; Widaman, Keith F.; Zhang, Dalun; Navas, Patricia

    2016-01-01

    The Diagnostic Adaptive Behavior Scale (DABS) was developed using item response theory (IRT) methods and was constructed to provide the most precise and valid adaptive behavior information at or near the cutoff point of making a decision regarding a diagnosis of intellectual disability. The DABS initial item pool consisted of 260 items. Using IRT…

  6. Darwin's "Gigantic Blunder."

    ERIC Educational Resources Information Center

    Barrett, Paul H.

    1973-01-01

    Darwin's attempt at unraveling the Glen Roy parallel road mystery is discussed. He admitted that Louis Agassiz's glacier theory seemed reasonable but he was reluctant to give up on his own marine theory. (DF)

  7. Ultrastrong coupling in supersymmetric gauge theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buchel, Alex

    1999-10-04

    We study 'ultrastrong' coupling points in scale-invariant N=2 gauge theories. These are theories where, naively, the coupling becomes infinite, and is not related by S-duality to a weak coupling point. These theories have been somewhat of a mystery, since in the M-theory description they correspond to points where parallel M5-branes coincide. Using low-energy effective field theory arguments, we relate these theories to other known N=2 CFTs.

  8. Gyrokinetic theory of turbulent acceleration and momentum conservation in tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Peng, Shuitao; Diamond, P. H.

    2018-07-01

    Understanding the generation of intrinsic rotation in tokamak plasmas is crucial for future fusion reactors such as ITER. We proposed a new mechanism named turbulent acceleration for the origin of the intrinsic parallel rotation based on gyrokinetic theory. The turbulent acceleration acts as a local source or sink of parallel rotation, i.e., volume force, which is different from the divergence of residual stress, i.e., surface force. However, the order of magnitude of turbulent acceleration can be comparable to that of the divergence of residual stress for electrostatic ion temperature gradient (ITG) turbulence. A possible theoretical explanation for the experimental observation of electron cyclotron heating induced decrease of co-current rotation was also proposed via comparison between the turbulent acceleration driven by ITG turbulence and that driven by collisionless trapped electron mode turbulence. We also extended this theory to electromagnetic ITG turbulence and investigated the electromagnetic effects on intrinsic parallel rotation drive. Finally, we demonstrated that the presence of turbulent acceleration does not conflict with momentum conservation.

  9. Testing the adaptive radiation hypothesis for the lemurs of Madagascar.

    PubMed

    Herrera, James P

    2017-01-01

    Lemurs, the diverse, endemic primates of Madagascar, are thought to represent a classic example of adaptive radiation. Based on the most complete phylogeny of living and extinct lemurs yet assembled, I tested predictions of adaptive radiation theory by estimating rates of speciation, extinction and adaptive phenotypic evolution. As predicted, lemur speciation rate exceeded that of their sister clade by nearly twofold, indicating the diversification dynamics of lemurs and mainland relatives may have been decoupled. Lemur diversification rates did not decline over time, however, as predicted by adaptive radiation theory. Optimal body masses diverged among dietary and activity pattern niches as lineages diversified into unique multidimensional ecospace. Based on these results, lemurs only partially fulfil the predictions of adaptive radiation theory, with phenotypic evolution corresponding to an 'early burst' of adaptive differentiation. The results must be interpreted with caution, however, because over the long evolutionary history of lemurs (approx. 50 million years), the 'early burst' signal of adaptive radiation may have been eroded by extinction.

  10. Testing the adaptive radiation hypothesis for the lemurs of Madagascar

    PubMed Central

    2017-01-01

    Lemurs, the diverse, endemic primates of Madagascar, are thought to represent a classic example of adaptive radiation. Based on the most complete phylogeny of living and extinct lemurs yet assembled, I tested predictions of adaptive radiation theory by estimating rates of speciation, extinction and adaptive phenotypic evolution. As predicted, lemur speciation rate exceeded that of their sister clade by nearly twofold, indicating the diversification dynamics of lemurs and mainland relatives may have been decoupled. Lemur diversification rates did not decline over time, however, as predicted by adaptive radiation theory. Optimal body masses diverged among dietary and activity pattern niches as lineages diversified into unique multidimensional ecospace. Based on these results, lemurs only partially fulfil the predictions of adaptive radiation theory, with phenotypic evolution corresponding to an ‘early burst’ of adaptive differentiation. The results must be interpreted with caution, however, because over the long evolutionary history of lemurs (approx. 50 million years), the ‘early burst’ signal of adaptive radiation may have been eroded by extinction. PMID:28280597

  11. Knowledge representation into Ada parallel processing

    NASA Technical Reports Server (NTRS)

    Masotto, Tom; Babikyan, Carol; Harper, Richard

    1990-01-01

    The Knowledge Representation into Ada Parallel Processing project is a joint NASA and Air Force funded project to demonstrate the execution of intelligent systems in Ada on the Charles Stark Draper Laboratory fault-tolerant parallel processor (FTPP). Two applications were demonstrated - a portion of the adaptive tactical navigator and a real time controller. Both systems are implemented as Activation Framework Objects on the Activation Framework intelligent scheduling mechanism developed by Worcester Polytechnic Institute. The implementations, results of performance analyses showing speedup due to parallelism and initial efficiency improvements are detailed and further areas for performance improvements are suggested.

  12. Robust adaptive antiswing control of underactuated crane systems with two parallel payloads and rail length constraint.

    PubMed

    Zhang, Zhongcai; Wu, Yuqiang; Huang, Jinming

    2016-11-01

    The antiswing control and accurate positioning are simultaneously investigated for underactuated crane systems in the presence of two parallel payloads on the trolley and rail length limitation. The equations of motion for the crane system in question are established via the Euler-Lagrange equation. An adaptive control strategy is proposed with the help of system energy function and energy shaping technique. Stability analysis shows that under the designed adaptive controller, the payload swings can be suppressed ultimately and the trolley can be regulated to the destination while not exceeding the pre-specified boundaries. Simulation results are provided to show the satisfactory control performances of the presented control method in terms of working efficiency as well as robustness with respect to external disturbances. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Performance Analysis and Portability of the PLUM Load Balancing System

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.

    1998-01-01

    The ability to dynamically adapt an unstructured mesh is a powerful tool for solving computational problems with evolving physical features; however, an efficient parallel implementation is rather difficult. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive numerical computations in a message-passing environment. PLUM requires that all data be globally redistributed after each mesh adaption to achieve load balance. We present an algorithm for minimizing this remapping overhead by guaranteeing an optimal processor reassignment. We also show that the data redistribution cost can be significantly reduced by applying our heuristic processor reassignment algorithm to the default mapping of the parallel partitioner. Portability is examined by comparing performance on an SP2, an Origin2000, and a T3E. Results show that PLUM can be successfully ported to different platforms without any code modifications.

  14. Physics Structure Analysis of Parallel Waves Concept of Physics Teacher Candidate

    NASA Astrophysics Data System (ADS)

    Sarwi, S.; Supardi, K. I.; Linuwih, S.

    2017-04-01

    The aim of this research was to find the parallel structure of wave physics concepts and the factors that influence the formation of parallel conceptions among physics teacher candidates. The method was qualitative research with a cross-sectional design. The subjects were five third-semester basic physics students and six fifth-semester wave course students. Data collection techniques were think-aloud protocols and written tests. Quantitative data were analysed with a descriptive percentage technique, while belief and awareness of answers were examined with an explanatory analysis. Results of the research include: 1) the structure of the concept can be displayed through the illustration of a map containing the theoretical core, supplements to the theory and phenomena that occur daily; 2) a trend of parallel conceptions of wave physics was identified for stationary waves, resonance of sound and the propagation of transverse electromagnetic waves; 3) the parallel conceptions are influenced by reading of textbooks that is less comprehensive and by partial understanding of the knowledge forming the structure of the theory.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamblin, T; de Supinski, B R; Schulz, M

    Good load balance is crucial on very large parallel systems, but the most sophisticated algorithms introduce dynamic imbalances through adaptation in domain decomposition or use of adaptive solvers. To observe and diagnose imbalance, developers need system-wide, temporally-ordered measurements from full-scale runs. This potentially requires data collection from multiple code regions on all processors over the entire execution. Doing this instrumentation naively can, in combination with the application itself, exceed available I/O bandwidth and storage capacity, and can induce severe behavioral perturbations. We present and evaluate a novel technique for scalable, low-error load balance measurement. This uses a parallel wavelet transform and other parallel encoding methods. We show that our technique collects and reconstructs system-wide measurements with low error. Compression time scales sublinearly with system size and data volume is several orders of magnitude smaller than the raw data. The overhead is low enough for online use in a production environment.
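    The compression idea, transforming a load trace and keeping only the few largest coefficients, can be illustrated with a pure-Python Haar wavelet transform; this is a toy of the general approach, not the paper's parallel implementation:

```python
def haar_fwd(x):
    """Full orthonormal Haar transform of a length-2^k list."""
    x = list(x); n = len(x); out = []
    while n > 1:
        s = [(x[2*i] + x[2*i+1]) / 2**0.5 for i in range(n // 2)]
        d = [(x[2*i] - x[2*i+1]) / 2**0.5 for i in range(n // 2)]
        out = d + out   # coarser details go in front
        x = s; n //= 2
    return x + out      # [approximation] + detail coefficients

def haar_inv(c):
    """Invert haar_fwd level by level."""
    c = list(c); n = 1
    while n < len(c):
        s, d = c[:n], c[n:2*n]
        x = []
        for a, b in zip(s, d):
            x += [(a + b) / 2**0.5, (a - b) / 2**0.5]
        c = x + c[2*n:]
        n *= 2
    return c

def compress(x, k):
    """Keep only the k largest-magnitude coefficients (lossy compression)."""
    c = haar_fwd(x)
    keep = set(sorted(range(len(c)), key=lambda i: abs(c[i]), reverse=True)[:k])
    return haar_inv([c[i] if i in keep else 0.0 for i in range(len(c))])

sig = [4.0, 2.0, 5.0, 5.0, 1.0, 0.0, 3.0, 7.0]   # toy per-timestep load trace
rec = compress(sig, 3)  # reconstruction from the 3 largest of 8 coefficients
```

    Smooth load traces concentrate their energy in few coefficients, which is why data volume can drop by orders of magnitude while reconstruction error stays low.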

  16. The purpose of adaptation

    PubMed Central

    2017-01-01

    A central feature of Darwin's theory of natural selection is that it explains the purpose of biological adaptation. Here, I: emphasize the scientific importance of understanding what adaptations are for, in terms of facilitating the derivation of empirically testable predictions; discuss the population genetical basis for Darwin's theory of the purpose of adaptation, with reference to Fisher's ‘fundamental theorem of natural selection'; and show that a deeper understanding of the purpose of adaptation is achieved in the context of social evolution, with reference to inclusive fitness and superorganisms. PMID:28839927

  17. The purpose of adaptation.

    PubMed

    Gardner, Andy

    2017-10-06

    A central feature of Darwin's theory of natural selection is that it explains the purpose of biological adaptation. Here, I: emphasize the scientific importance of understanding what adaptations are for, in terms of facilitating the derivation of empirically testable predictions; discuss the population genetical basis for Darwin's theory of the purpose of adaptation, with reference to Fisher's 'fundamental theorem of natural selection'; and show that a deeper understanding of the purpose of adaptation is achieved in the context of social evolution, with reference to inclusive fitness and superorganisms.

  18. Parallel Signal Processing and System Simulation using aCe

    NASA Technical Reports Server (NTRS)

    Dorband, John E.; Aburdene, Maurice F.

    2003-01-01

    Recently, networked and cluster computation have become very popular for both signal processing and system simulation. The aCe language is ideally suited for parallel signal processing applications and system simulation since it allows the programmer to explicitly express the computations that can be performed concurrently. In addition, this new C-based parallel language for architecture-adaptive programming allows programmers to implement algorithms and system simulation applications on parallel architectures with the assurance that future parallel architectures will be able to run their applications with a minimum of modification. In this paper, we focus on some fundamental features of aCe and present a signal processing application (FFT).

  19. Using Item Response Theory and Adaptive Testing in Online Career Assessment

    ERIC Educational Resources Information Center

    Betz, Nancy E.; Turner, Brandon M.

    2011-01-01

    The present article describes the potential utility of item response theory (IRT) and adaptive testing for scale evaluation and for web-based career assessment. The article describes the principles of both IRT and adaptive testing and then illustrates these with reference to data analyses and simulation studies of the Career Confidence Inventory…
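    The IRT machinery behind adaptive testing can be sketched with the two-parameter logistic (2PL) model: each item has a discrimination `a` and a difficulty `b`, and the adaptive rule administers the item with maximum Fisher information at the current ability estimate. The item pool below is invented for illustration:

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, items):
    """Adaptive-testing rule: give the item with maximum information."""
    return max(items, key=lambda ab: item_information(theta, *ab))

# hypothetical pool of (discrimination, difficulty) pairs
items = [(1.0, -1.0), (1.5, 0.0), (0.8, 1.0)]
print(next_item(0.0, items))  # → (1.5, 0.0)
```

    At ability 0, the highly discriminating item of matching difficulty carries the most information, which is what makes adaptive tests shorter than fixed forms at equal precision.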

  20. Method for six-legged robot stepping on obstacles by indirect force estimation

    NASA Astrophysics Data System (ADS)

    Xu, Yilin; Gao, Feng; Pan, Yang; Chai, Xun

    2016-07-01

    Adaptive gaits for legged robots often require force sensors installed on the foot-tips; however, impact, temperature, or humidity can affect or even damage those sensors. Efforts have been made to realize indirect force estimation on legged robots whose leg structures are based on planar mechanisms. Robot Octopus III is a six-legged robot using spatial parallel mechanism (UP-2UPS) legs. This paper proposes a novel method to realize indirect force estimation on a walking robot based on a spatial parallel mechanism. The direct and inverse kinematics models are established, and the force Jacobian matrix is derived from the kinematics model, yielding the indirect force estimation model. The relation between the output torques of the three motors installed on one leg and the external force exerted on the foot tip is then described. Furthermore, an adaptive tripod static gait is designed: the robot alters its leg trajectory to step on obstacles using the proposed adaptive gait. Both the indirect force estimation model and the adaptive gait are implemented and optimized in a real-time control system. One experiment validates the indirect force estimation model, and a second tests the adaptive gait. Experimental results show that the robot can successfully step on a 0.2 m-high obstacle. The proposed method allows the six-legged robot with spatial parallel mechanism legs to overcome obstacles while avoiding the installation of electric force sensors in the harsh environment at the robot's foot tips.
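    The core of indirect force estimation is the static relation tau = J^T F between the three motor torques and the foot-tip force, so F is recovered by solving the transposed-Jacobian system. A minimal sketch with a hypothetical 3x3 Jacobian (the real UP-2UPS Jacobian comes from the leg kinematics):

```python
def det3(M):
    """Determinant of a 3x3 matrix (cofactor expansion)."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def estimate_foot_force(J, tau):
    """Solve J^T F = tau for the foot-tip force F via Cramer's rule.
    J is a hypothetical 3x3 force Jacobian of one leg; tau holds the
    three measured motor torques."""
    Jt = [[J[r][c] for r in range(3)] for c in range(3)]  # transpose
    D = det3(Jt)
    F = []
    for col in range(3):
        M = [row[:] for row in Jt]
        for r in range(3):
            M[r][col] = tau[r]     # replace one column with tau
        F.append(det3(M) / D)
    return F

# With a diagonal Jacobian the mapping is just element-wise division.
print(estimate_foot_force([[2.0, 0, 0], [0, 4.0, 0], [0, 0, 5.0]],
                          [1.0, 2.0, 10.0]))  # → [0.5, 0.5, 2.0]
```

    In the real system the Jacobian varies with leg configuration, so it must be re-evaluated from the kinematics at every control cycle.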

  1. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    PubMed

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow stopping early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
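    The early-stopping mechanism that drives the sample size savings can be illustrated with a small Monte Carlo sketch; the interim bounds and effect sizes below are invented for illustration and are not the paper's scenarios:

```python
import math
import random

def expected_sample_size(delta, sigma=1.0, n1=50, n2=50,
                         eff=2.8, fut=0.0, reps=20000, seed=1):
    """Monte Carlo sketch of the expected per-arm sample size of a
    two-stage group sequential design: stop at the interim for efficacy
    (z >= eff) or futility (z <= fut), otherwise continue to n1 + n2."""
    random.seed(seed)
    total = 0
    for _ in range(reps):
        # stage-1 z statistic for a two-arm comparison of means
        z1 = random.gauss(delta * math.sqrt(n1 / 2.0) / sigma, 1.0)
        total += n1 if (z1 >= eff or z1 <= fut) else n1 + n2
    return total / reps

# expected_sample_size(0.0) lies between n1 and n1 + n2, and a large
# true effect drives it toward n1 via frequent early efficacy stops.
```

    Averaging over stopping outcomes is what makes the sequential design cheaper on average than a fixed design of the same maximum size.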

  2. Unstructured Adaptive Meshes: Bad for Your Memory?

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Feng, Hui-Yu; VanderWijngaart, Rob

    2003-01-01

    This viewgraph presentation explores the need for a NASA Advanced Supercomputing (NAS) parallel benchmark for problems with irregular dynamical memory access. This benchmark is important and necessary because: 1) Problems with localized error source benefit from adaptive nonuniform meshes; 2) Certain machines perform poorly on such problems; 3) Parallel implementation may provide further performance improvement but is difficult. Some examples of problems which use irregular dynamical memory access include: 1) Heat transfer problem; 2) Heat source term; 3) Spectral element method; 4) Base functions; 5) Elemental discrete equations; 6) Global discrete equations. Nonconforming Mesh and Mortar Element Method are covered in greater detail in this presentation.

  3. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.

    1989-01-01

    The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the CONNECTION Machine is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.
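    The exchange algorithm's idea, giving every element its own copies of nodal variables and reconciling shared nodes by summing force contributions each step, can be sketched on a 1D two-element mesh; structure and names are illustrative:

```python
# 1D bar: element e owns nodes (e, e+1); nodal copies live with elements.
def exchange_step(elem_forces, n_nodes):
    """Sum per-element nodal force contributions at shared nodes,
    then hand the assembled values back to every owning element."""
    nodal = [0.0] * n_nodes
    for e, (f_left, f_right) in enumerate(elem_forces):
        nodal[e] += f_left
        nodal[e + 1] += f_right
    # redistribute assembled forces to the element-local copies
    return [(nodal[e], nodal[e + 1]) for e in range(len(elem_forces))]

# two elements sharing node 1: contributions at the shared node add up
print(exchange_step([(1.0, -1.0), (2.0, -2.0)], 3))
# → [(1.0, 1.0), (1.0, -2.0)]
```

    On a SIMD machine every element executes this identical pattern in lockstep, which is why allocating variables per element rather than per node suits the architecture.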

  4. Job Satisfaction: A Possible Integration of Two Theories

    ERIC Educational Resources Information Center

    Hazer, John T.

    1976-01-01

    The author proposes an integration of Herzberg's two-factor theory of job satisfaction (job satisfaction/dissatisfaction as two separate, parallel continua) and traditional theory (job satisfaction/dissatisfaction sharing the same continuum) and a rationale for deciding which motivation methods to use for employees with differing levels of…

  5. Host shifts result in parallel genetic changes when viruses evolve in closely related species

    PubMed Central

    Day, Jonathan P.; Smith, Sophia C. L.; Houslay, Thomas M.; Tagliaferri, Lucia

    2018-01-01

    Host shifts, where a pathogen invades and establishes in a new host species, are a major source of emerging infectious diseases. They frequently occur between related host species and often rely on the pathogen evolving adaptations that increase their fitness in the novel host species. To investigate genetic changes in novel hosts, we experimentally evolved replicate lineages of an RNA virus (Drosophila C Virus) in 19 different species of Drosophilidae and deep sequenced the viral genomes. We found a strong pattern of parallel evolution, where viral lineages from the same host were genetically more similar to each other than to lineages from other host species. When we compared viruses that had evolved in different host species, we found that parallel genetic changes were more likely to occur if the two host species were closely related. This suggests that when a virus adapts to one host it might also become better adapted to closely related host species. This may explain in part why host shifts tend to occur between related species, and may mean that when a new pathogen appears in a given species, closely related species may become vulnerable to the new disease. PMID:29649296

  6. Global analysis of genes involved in freshwater adaptation in threespine sticklebacks (Gasterosteus aculeatus).

    PubMed

    DeFaveri, Jacquelin; Shikano, Takahito; Shimada, Yukinori; Goto, Akira; Merilä, Juha

    2011-06-01

    Examples of parallel evolution of phenotypic traits have been repeatedly demonstrated in threespine sticklebacks (Gasterosteus aculeatus) across their global distribution. Using these as a model, we performed a targeted genome scan--focusing on physiologically important genes potentially related to freshwater adaptation--to identify genetic signatures of parallel physiological evolution on a global scale. To this end, 50 microsatellite loci, including 26 loci within or close to (<6 kb) physiologically important genes, were screened in paired marine and freshwater populations from six locations across the Northern Hemisphere. Signatures of directional selection were detected in 24 loci, including 17 physiologically important genes, in at least one location. Although no loci showed consistent signatures of selection in all divergent population pairs, several outliers were common in multiple locations. In particular, seven physiologically important genes, as well as reference ectodysplasin gene (EDA), showed signatures of selection in three or more locations. Hence, although these results give some evidence for consistent parallel molecular evolution in response to freshwater colonization, they suggest that different evolutionary pathways may underlie physiological adaptation to freshwater habitats within the global distribution of the threespine stickleback. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.

  7. Edge reconstruction in armchair phosphorene nanoribbons revealed by discontinuous Galerkin density functional theory.

    PubMed

    Hu, Wei; Lin, Lin; Yang, Chao

    2015-12-21

    With the help of our recently developed massively parallel DGDFT (Discontinuous Galerkin Density Functional Theory) methodology, we perform large-scale Kohn-Sham density functional theory calculations on phosphorene nanoribbons with armchair edges (ACPNRs) containing a few thousands to ten thousand atoms. The use of DGDFT allows us to systematically achieve a conventional plane wave basis set type of accuracy, but with a much smaller number (about 15) of adaptive local basis (ALB) functions per atom for this system. The relatively small number of degrees of freedom required to represent the Kohn-Sham Hamiltonian, together with the use of the pole expansion and selected inversion (PEXSI) technique that circumvents the need to diagonalize the Hamiltonian, results in a highly efficient and scalable computational scheme for analyzing the electronic structures of ACPNRs as well as their dynamics. The total wall clock time for calculating the electronic structures of large-scale ACPNRs containing 1080-10,800 atoms is only 10-25 s per self-consistent field (SCF) iteration, with accuracy fully comparable to that obtained from conventional plane wave DFT calculations. For the ACPNR system, we observe that the DGDFT methodology can scale to 5000-50,000 processors. We use DGDFT based ab initio molecular dynamics (AIMD) calculations to study the thermodynamic stability of ACPNRs. Our calculations reveal that a 2 × 1 edge reconstruction appears in ACPNRs at room temperature.

  8. Adaptive Control of Linear Modal Systems Using Residual Mode Filters and a Simple Disturbance Estimator

    NASA Technical Reports Server (NTRS)

    Balas, Mark; Frost, Susan

    2012-01-01

    Flexible structures containing a large number of modes can benefit from adaptive control techniques, which are well suited to applications that have unknown modeling parameters and poorly known operating conditions. In this paper, we focus on a direct adaptive control approach that has been extended to handle adaptive rejection of persistent disturbances. We extend our adaptive control theory to accommodate troublesome modal subsystems of a plant that might inhibit the adaptive controller. In some cases the plant does not satisfy the requirements of Almost Strict Positive Realness; instead, there may be a modal subsystem that inhibits this property. Here we present new results for our adaptive control theory: we modify the adaptive controller with a Residual Mode Filter (RMF) to compensate for the troublesome modal subsystem, or the Q modes. We present the theory for adaptive controllers modified by RMFs, with attention to the issue of disturbances propagating through the Q modes, and apply the theoretical results to a flexible structure example to illustrate the behavior with and without the residual mode filter.

  9. Exploring the utility of institutional theory in analysing international health agency stasis and change.

    PubMed

    Gómez, Eduardo J

    2013-10-01

    Of recent interest is the capacity of international health agencies to adapt to changes in the global health environment and country needs. Yet, little is known about the potential benefits of using social science institutional theory, such as path dependency and institutional change theory, to explain why some international agencies, such as the WHO and the Global Fund to Fight AIDS, Tuberculosis and Malaria, fail to adapt, whereas others, such as the World Bank and UNAIDS, have. This article suggests that these institutional theories can help to better understand these differences in international agency adaptive capacity, while highlighting new areas of policy research and analysis.

  10. The Adaptive Basis of Psychosocial Acceleration: Comment on beyond Mental Health, Life History Strategies Articles

    ERIC Educational Resources Information Center

    Nettle, Daniel; Frankenhuis, Willem E.; Rickard, Ian J.

    2012-01-01

    Four of the articles published in this special section of "Developmental Psychology" build on and refine psychosocial acceleration theory. In this short commentary, we discuss some of the adaptive assumptions of psychosocial acceleration theory that have not received much attention. Psychosocial acceleration theory relies on the behavior of…

  11. Complexity, Chaos, and Nonlinear Dynamics: A New Perspective on Career Development Theory

    ERIC Educational Resources Information Center

    Bloch, Deborah P.

    2005-01-01

    The author presents a theory of career development drawing on nonlinear dynamics and chaos and complexity theories. Career is presented as a complex adaptive entity, a fractal of the human entity. Characteristics of complex adaptive entities, including (a) autopoiesis, or self-regeneration; (b) open exchange; (c) participation in networks; (d)…

  12. A parallel finite element simulator for ion transport through three-dimensional ion channel systems.

    PubMed

    Tu, Bin; Chen, Minxin; Xie, Yan; Zhang, Linbo; Eisenberg, Bob; Lu, Benzhuo

    2013-09-15

    A parallel finite element simulator, ichannel, is developed for ion transport through three-dimensional ion channel systems that consist of protein and membrane. The coordinates of heavy atoms of the protein are taken from the Protein Data Bank and the membrane is represented as a slab. The simulator contains two components: a parallel adaptive finite element solver for a set of Poisson-Nernst-Planck (PNP) equations that describe the electrodiffusion process of ion transport, and a mesh generation tool chain for ion channel systems, which is an essential component for the finite element computations. The finite element method has advantages in modeling irregular geometries and complex boundary conditions. We have built a tool chain to get the surface and volume mesh for ion channel systems, which consists of a set of mesh generation tools. The adaptive finite element solver in our simulator is implemented using the parallel adaptive finite element package Parallel Hierarchical Grid (PHG) developed by one of the authors, which provides the capability of doing large scale parallel computations with high parallel efficiency and the flexibility of choosing high order elements to achieve high order accuracy. The simulator is applied to a real transmembrane protein, the gramicidin A (gA) channel protein, to calculate the electrostatic potential, ion concentrations and I-V curve, with which both primitive and transformed PNP equations are studied and their numerical performances are compared. To further validate the method, we also apply the simulator to two other ion channel systems, the voltage dependent anion channel (VDAC) and α-Hemolysin (α-HL). The simulation results agree well with Brownian dynamics (BD) simulation results and experimental results. Moreover, because ionic finite size effects can be included in the PNP model now, we also perform simulations using a size-modified PNP (SMPNP) model on VDAC and α-HL. It is shown that the size effects in SMPNP can effectively lead to reduced current in the channel, and the results are closer to BD simulation results. Copyright © 2013 Wiley Periodicals, Inc.
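    The electrodiffusion model the solver discretizes is the Nernst-Planck flux J = -D (dc/dx + z c dphi/dx). A 1D finite-difference evaluation of this flux (thermal voltage normalized to 1) illustrates the physics; the paper's simulator solves the full coupled 3D PNP system with finite elements:

```python
def np_flux(c, phi, D, z, h):
    """Discrete Nernst-Planck flux J = -D (dc/dx + z c dphi/dx) at each
    cell face, with the thermal voltage normalized to 1. Toy 1D sketch,
    not the paper's 3D finite element scheme."""
    return [-D * ((c[i+1] - c[i]) / h
                  + z * 0.5 * (c[i] + c[i+1]) * (phi[i+1] - phi[i]) / h)
            for i in range(len(c) - 1)]

# zero field, linear concentration drop: pure diffusion, J = D*(c_in - c_out)/L
n, L, D = 10, 1.0, 2.0
h = L / n
c = [1.0 - i / n for i in range(n + 1)]      # linear profile, 1 -> 0
phi = [0.0] * (n + 1)                         # no electric field
print(round(np_flux(c, phi, D, 1, h)[0], 12))  # → 2.0
```

    With a nonzero potential gradient the drift term adds to or opposes diffusion, which is how an applied voltage produces the I-V curve the simulator computes.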

  13. Behavioral and neural Darwinism: selectionist function and mechanism in adaptive behavior dynamics.

    PubMed

    McDowell, J J

    2010-05-01

    An evolutionary theory of behavior dynamics and a theory of neuronal group selection share a common selectionist framework. The theory of behavior dynamics instantiates abstractly the idea that behavior is selected by its consequences. It implements Darwinian principles of selection, reproduction, and mutation to generate adaptive behavior in virtual organisms. The behavior generated by the theory has been shown to be quantitatively indistinguishable from that of live organisms. The theory of neuronal group selection suggests a mechanism whereby the abstract principles of the evolutionary theory may be implemented in the nervous systems of biological organisms. According to this theory, groups of neurons subserving behavior may be selected by synaptic modifications that occur when the consequences of behavior activate value systems in the brain. Together, these theories constitute a framework for a comprehensive account of adaptive behavior that extends from brain function to the behavior of whole organisms in quantitative detail. Copyright (c) 2009 Elsevier B.V. All rights reserved.

  14. Parallel Evolution in Science: The Historical Roots and Central Concepts of General Systems Theory; and "General Systems Theory,""Modern Organizational Theory," and Organizational Communication.

    ERIC Educational Resources Information Center

    Lederman, Linda Costigan; Rogers, Don

    The two papers in this document focus on general systems theory. In her paper, Linda Lederman discusses the emergence and evolution of general systems theory, defines its central concepts, and draws some conclusions regarding the nature of the theory and its value as an epistemology. Don Rogers, in his paper, relates some of the important features…

  15. Enacting Glasser's (1998) Choice Theory in a Grade 3 Classroom: A Case Study

    ERIC Educational Resources Information Center

    Irvine, Jeff

    2015-01-01

    Choice theory identifies five psychological needs: survival, freedom, power, belonging, and fun (Glasser, 1998). There are close parallels with self-determination theory (SDT), which specifies autonomy, competence, and relatedness as essential needs (Deci & Ryan, 2000). This case study examines a very successful example of choice theory…

  16. Magnetic Field Effects and Electromagnetic Wave Propagation in Highly Collisional Plasmas.

    NASA Astrophysics Data System (ADS)

    Bozeman, Steven Paul

    The homogeneity and size of radio frequency (RF) and microwave driven plasmas are often limited by insufficient penetration of the electromagnetic radiation. To investigate increasing the skin depth of the radiation, we consider the propagation of electromagnetic waves in a weakly ionized plasma immersed in a steady magnetic field where the dominant collision processes are electron-neutral and ion-neutral collisions. Retaining both the electron and ion dynamics, we have adapted the theory for cold collisionless plasmas to include the effects of these collisions and obtained the dispersion relation at arbitrary frequency omega for plane waves propagating at arbitrary angles with respect to the magnetic field. We discuss in particular the cases of magnetic field enhanced wave penetration for parallel and perpendicular propagation, examining the experimental parameters which lead to electromagnetic wave propagation beyond the collisional skin depth. Our theory predicts that the most favorable scaling of skin depth with magnetic field occurs for waves propagating nearly parallel to B and for omega << Omega_e, where Omega_e is the electron cyclotron frequency. The scaling is less favorable for propagation perpendicular to B, but the skin depth does increase for this case as well. Still, to achieve optimal wave penetration, we find that one must design the plasma configuration and antenna geometry so that one generates primarily the appropriate angles of propagation. We have measured plasma wave amplitudes and phases using an RF magnetic probe and densities using Stark line broadening. These measurements were performed in inductively coupled plasmas (ICP's) driven with a standard helical coil, a reverse turn (Stix) coil, and a flat spiral coil. Density measurements were also made in a microwave generated plasma.
The RF magnetic probe measurements of wave propagation in a conventional ICP with wave propagation approximately perpendicular to B show an increase in skin depth with magnetic field and a damping of the effect of B with pressure. The flat coil geometry which launches waves more nearly parallel to B allows enhanced wave penetration at higher pressures than the standard helical coil.

  17. Numerical simulation of h-adaptive immersed boundary method for freely falling disks

    NASA Astrophysics Data System (ADS)

    Zhang, Pan; Xia, Zhenhua; Cai, Qingdong

    2018-05-01

    In this work, a freely falling disk with aspect ratio 1/10 is directly simulated by using an adaptive numerical model implemented on a parallel computation framework JASMIN. The adaptive numerical model is a combination of the h-adaptive mesh refinement technique and the implicit immersed boundary method (IBM). Our numerical results agree well with the experimental results in all of the six degrees of freedom of the disk. Furthermore, very similar vortex structures observed in the experiment were also obtained.

  18. Self-Avoiding Walks Over Adaptive Triangular Grids

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)

    1999-01-01

    Space-filling curves are a popular approach, based on a geometric embedding, for linearizing computational meshes. We present a new O(n log n) combinatorial algorithm for constructing a self-avoiding walk through a two-dimensional mesh containing n triangles. We show that for hierarchical adaptive meshes, the algorithm can be locally adapted and easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the runtime partitioning and load balancing of adaptive unstructured grids.
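    For square grids the best-known linearization is the Hilbert curve; the classic index-to-coordinate routine below illustrates the space-filling-curve idea that the paper's self-avoiding walk generalizes to triangular meshes:

```python
def d2xy(order, d):
    """Map a Hilbert-curve index d to (x, y) on a 2^order x 2^order grid.
    Classic iterative algorithm; a sketch of the linearization idea that
    the self-avoiding walk extends to triangles."""
    x = y = 0
    t = d
    s = 1
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:               # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x           # swap x and y
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# 4x4 grid: consecutive indices always map to edge-adjacent cells
path = [d2xy(2, d) for d in range(16)]
# first cells: (0, 0), (1, 0), (1, 1), (0, 1), (0, 2), ...
```

    Because consecutive indices are spatially adjacent, cutting the linear order into equal chunks yields compact, load-balanced partitions, the same property the triangular-mesh walk provides.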

  19. Parallel Algorithm Solves Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Hayashi, A.

    1987-01-01

    Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.
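    The decomposition idea, each processor integrating its own block of equations and exchanging boundary values every step, can be emulated sequentially for a coupled heat-transfer-like ODE system; the blocking scheme here is illustrative, not the hypercube algorithm itself:

```python
def euler_heat_split(u, dt, steps, nblocks=2):
    """Explicit Euler for du_i/dt = u_{i-1} - 2*u_i + u_{i+1}, integrating
    block subproblems 'concurrently' and exchanging ghost values each step
    (a sequential emulation of the domain-decomposition idea)."""
    u = list(u)
    n = len(u)
    bounds = [n * b // nblocks for b in range(nblocks + 1)]
    for _ in range(steps):
        ghost = list(u)   # neighbor values exchanged between blocks
        new = list(u)
        for b in range(nblocks):          # each block could run on its own node
            for i in range(bounds[b], bounds[b + 1]):
                left = ghost[i - 1] if i > 0 else 0.0
                right = ghost[i + 1] if i < n - 1 else 0.0
                new[i] = u[i] + dt * (left - 2 * u[i] + right)
        u = new
    return u
```

    Because each block only needs its neighbors' boundary values per step, communication stays local, which is what makes the concurrent speedup grow with problem size.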

  20. Adaptive Identification by Systolic Arrays.

    DTIC Science & Technology

    1987-12-01

    BIBLIOGRAPHY: Anton, Howard, Elementary Linear Algebra, John Wiley & Sons, 1984; Cristi, Roberto, A Parallel Structure for Adaptive Pole Placement... Contents include: SYSTEM IDENTIFICATION METHODS: A. LINEAR SYSTEM MODELING; B. SOLUTION OF SYSTEMS OF LINEAR EQUATIONS; C. QR DECOMPOSITION; D. RECURSIVE LEAST SQUARES; E. BLOCK...

  1. Innovative Language-Based & Object-Oriented Structured AMR Using Fortran 90 and OpenMP

    NASA Technical Reports Server (NTRS)

    Norton, C.; Balsara, D.

    1999-01-01

    Parallel adaptive mesh refinement (AMR) is an important numerical technique that leads to the efficient solution of many physical and engineering problems. In this paper, we describe how AMR programming can be performed in an object-oriented way using the modern aspects of Fortran 90 combined with the parallelization features of OpenMP.

  2. Adaptive capacity of geographical clusters: Complexity science and network theory approach

    NASA Astrophysics Data System (ADS)

    Albino, Vito; Carbonara, Nunzia; Giannoccaro, Ilaria

    This paper deals with the adaptive capacity of geographical clusters (GCs), a relevant topic in the literature. To address this topic, a GC is considered as a complex adaptive system (CAS). Three theoretical propositions concerning GC adaptive capacity are formulated using complexity theory. First, we identify three main properties of CASs that affect adaptive capacity, namely interconnectivity, heterogeneity, and the level of control, and define how the values of these properties influence adaptive capacity. Then, we associate these properties with specific GC characteristics, obtaining the key conditions of GCs that give them adaptive capacity and thereby assure their competitive advantage. To test these theoretical propositions, a case study on two real GCs is carried out. The GCs are modeled as networks where firms are nodes and inter-firm relationships are links. Heterogeneity, interconnectivity, and level of control are treated as network properties and measured using the methods of network theory.
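    The three network properties can be given simple quantitative proxies: density for interconnectivity, the coefficient of variation of degree for heterogeneity, and Freeman degree centralization for the level of control. These particular measures are illustrative choices, not necessarily the authors' exact operationalizations:

```python
def network_properties(edges, n):
    """Proxies for the three CAS properties on an undirected firm network:
    interconnectivity (density), heterogeneity (degree coefficient of
    variation), and level of control (Freeman degree centralization)."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    m = len(edges)
    density = 2 * m / (n * (n - 1))
    mean = sum(deg) / n
    var = sum((d - mean) ** 2 for d in deg) / n
    heterogeneity = (var ** 0.5) / mean if mean else 0.0
    centralization = sum(max(deg) - d for d in deg) / ((n - 1) * (n - 2))
    return density, heterogeneity, centralization

# star network: one hub firm controls all ties -> centralization is maximal
star = [(0, i) for i in range(1, 5)]
print(network_properties(star, 5))
```

    A star of five firms gives density 0.4, a high degree coefficient of variation, and centralization 1.0, the signature of a single dominant firm.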

  3. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  4. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  5. The effect of anisotropic heat transport on magnetic islands in 3-D configurations

    NASA Astrophysics Data System (ADS)

    Schlutt, M. G.; Hegna, C. C.

    2012-08-01

    An analytic theory of nonlinear pressure-induced magnetic island formation using a boundary layer analysis is presented. This theory extends previous work by including the effects of finite parallel heat transport and is applicable to general three dimensional magnetic configurations. In this work, particular attention is paid to the role of finite parallel heat conduction in the context of pressure-induced island physics. It is found that localized currents that require self-consistent deformation of the pressure profile, such as resistive interchange and bootstrap currents, are attenuated by finite parallel heat conduction when the magnetic islands are sufficiently small. However, these anisotropic effects do not change saturated island widths caused by Pfirsch-Schlüter current effects. Implications for finite pressure-induced island healing are discussed.

  6. The Theory of a Free Jet of a Compressible Gas

    NASA Technical Reports Server (NTRS)

    Abramovich, G. N.

    1944-01-01

    In the present report the theory of free turbulence propagation and the boundary layer theory are developed for a plane-parallel free stream of a compressible fluid. In constructing the theory use was made of the turbulence hypothesis by Taylor (transport of vorticity) which gives best agreement with test results for problems involving heat transfer in free jets.

  7. Parallel and serial grouping of image elements in visual perception.

    PubMed

    Houtkamp, Roos; Roelfsema, Pieter R

    2010-12-01

    The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that Gestalt grouping can indeed occur in parallel in some situations, but we demonstrate that there are also situations where Gestalt grouping becomes serial. We observe substantial time delays when image elements have to be grouped indirectly through a chain of local groupings. We call this chaining process incremental grouping and demonstrate that it can occur for only a single object at a time. We suggest that incremental grouping requires the gradual spread of object-based attention so that eventually all the object's parts become grouped explicitly by an attentional labeling process. Our findings inspire a new incremental grouping theory that relates the parallel, local grouping process to feedforward processing and the serial, incremental grouping process to recurrent processing in the visual cortex.

  8. A general parallel sparse-blocked matrix multiply for linear scaling SCF theory

    NASA Astrophysics Data System (ADS)

    Challacombe, Matt

    2000-06-01

A general approach to the parallel sparse-blocked matrix-matrix multiply is developed in the context of linear scaling self-consistent-field (SCF) theory. The data-parallel message passing method uses non-blocking communication to overlap computation and communication. The space filling curve heuristic is used to achieve data locality for sparse matrix elements that decay with “separation”. Load balance is achieved by solving the bin packing problem for blocks with variable size. With this new method as the kernel, parallel performance of the simplified density matrix minimization (SDMM) for solution of the SCF equations is investigated for RHF/6-31G** water clusters and RHF/3-21G estane globules. Sustained rates above 5.7 GFLOPS for the SDMM have been achieved for (H2O)200 with 95 Origin 2000 processors. Scalability is found to be limited by load imbalance, which increases with decreasing granularity, due primarily to the inhomogeneous distribution of variable block sizes.
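As a toy illustration of the bin-packing style load balance the abstract describes (this sketch is mine, not the paper's kernel), variable-size blocks can be assigned largest-first to whichever processor currently carries the least load:

```python
import heapq

def balance_blocks(block_sizes, n_procs):
    """Assign each block to the currently least-loaded processor
    (longest-processing-time-first greedy); return assignment and loads."""
    loads = [(0, p) for p in range(n_procs)]   # (load, proc) min-heap
    heapq.heapify(loads)
    assignment = {}
    # Place big blocks first: they are hardest to balance late.
    for blk, size in sorted(enumerate(block_sizes), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(loads)
        assignment[blk] = p
        heapq.heappush(loads, (load + size, p))
    return assignment, sorted(load for load, _ in loads)
```

With blocks of sizes [7, 5, 4, 4, 3, 2, 1] on 3 processors, the greedy assignment yields per-processor loads of [8, 9, 9], close to the ideal 26/3; as the abstract notes, imbalance grows when block sizes are very inhomogeneous.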

  9. Parallel implementation of geometrical shock dynamics for two dimensional converging shock waves

    NASA Astrophysics Data System (ADS)

    Qiu, Shi; Liu, Kuang; Eliasson, Veronica

    2016-10-01

Geometrical shock dynamics (GSD) theory is an appealing method to predict the shock motion in the sense that it is more computationally efficient than solving the traditional Euler equations, especially for converging shock waves. However, to solve and optimize large scale configurations, the main bottleneck is the computational cost. Among the existing numerical GSD schemes, there is only one that has been implemented on parallel computers, with the purpose of analyzing detonation waves. To extend the computational advantage of the GSD theory to more general applications such as converging shock waves, a numerical implementation using a spatial decomposition method has been coupled with a front tracking approach on parallel computers. In addition, an efficient tridiagonal system solver for massively parallel computers has been applied to resolve the most expensive function in this implementation, resulting in an efficiency of 0.93 while using 32 HPCC cores. Moreover, symmetric boundary conditions have been developed to further reduce the computational cost, achieving a speedup of 19.26 for a 12-sided polygonal converging shock.
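The expensive kernel mentioned above is a tridiagonal solve. As a serial point of reference (not the paper's massively parallel solver), the Thomas algorithm solves such systems in O(n); parallel variants such as cyclic reduction reorganize the same recurrences:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (a[0] unused),
    b = main diagonal, c = super-diagonal (c[-1] unused), d = right-hand
    side. Returns the solution x. Assumes no zero pivots arise."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # pivot after elimination
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For the symmetric system with diagonals (1, 2, 1) and right-hand side [4, 8, 8], the solver returns [1, 2, 3].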

  10. Adaptive mesh refinement and load balancing based on multi-level block-structured Cartesian mesh

    NASA Astrophysics Data System (ADS)

    Misaka, Takashi; Sasaki, Daisuke; Obayashi, Shigeru

    2017-11-01

We developed a framework for a distributed-memory parallel computer that enables dynamic data management for adaptive mesh refinement and load balancing. We employed the simple data structure of the building cube method (BCM), where a computational domain is divided into multi-level cubic domains and each cube has the same number of grid points inside, realising a multi-level block-structured Cartesian mesh. Solution adaptive mesh refinement, which works efficiently with the help of the dynamic load balancing, was implemented by dividing cubes based on mesh refinement criteria. The framework was investigated with the Laplace equation in terms of adaptive mesh refinement, load balancing and parallel efficiency. It was then applied to the incompressible Navier-Stokes equations to simulate a turbulent flow around a sphere. We considered wall-adaptive cube refinement where a non-dimensional wall distance y+ near the sphere is used as a criterion for mesh refinement. The result showed that the load imbalance due to y+ adaptive mesh refinement was corrected by the present approach. To utilise the BCM framework more effectively, we also tested a cube-wise algorithm switching where explicit and implicit time integration schemes are switched depending on the local Courant-Friedrichs-Lewy (CFL) condition in each cube.
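The cube-wise algorithm switching can be sketched as a per-cube decision on the local CFL number (the function and parameter names below are illustrative assumptions, not the BCM framework's API):

```python
def choose_scheme(u_max, dx, dt, cfl_limit=1.0):
    """Pick a time integrator for one cube: 'explicit' when the local
    CFL number u*dt/dx stays within the stability limit, 'implicit'
    otherwise (an implicit step is unconditionally stable but costlier)."""
    cfl = u_max * dt / dx
    return "explicit" if cfl <= cfl_limit else "implicit"

# Coarse cube far from the wall: large dx keeps CFL small -> cheap explicit step.
# Wall-refined cube: tiny dx drives CFL up -> switch to an implicit step.
```

This is the trade the abstract describes: refined cubes near the sphere would otherwise force a globally tiny explicit time step.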

  11. Adapted RF pulse design for SAR reduction in parallel excitation with experimental verification at 9.4 T.

    PubMed

    Wu, Xiaoping; Akgün, Can; Vaughan, J Thomas; Andersen, Peter; Strupp, John; Uğurbil, Kâmil; Van de Moortele, Pierre-François

    2010-07-01

Parallel excitation holds strong promise to mitigate the impact of large transmit B1 (B1+) distortion at very high magnetic field. Accelerated RF pulses, however, inherently tend to require larger values of RF peak power, which may result in a substantial increase in Specific Absorption Rate (SAR) in tissues, a constant concern for patient safety at very high field. In this study, we demonstrate adapted rate RF pulse design allowing for SAR reduction while preserving excitation target accuracy. Compared with other proposed implementations of adapted rate RF pulses, our approach is compatible with any k-space trajectory, does not require an analytical expression of the gradient waveform, and can be used for large flip angle excitation. We demonstrate our method with numerical simulations based on electromagnetic modeling, and we include an experimental verification of transmit pattern accuracy on an 8-transmit-channel 9.4 T system.

  12. Parallel Processing of Adaptive Meshes with Load Balancing

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than those under PLUM by overlapping processing and data migration.
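The advantage of a global view of system loads can be illustrated with a toy planner (my own sketch, not SBN or PLUM): knowing every processor's load, one can compute only the migrations needed to bring each overloaded processor down to the mean, keeping redistribution cost low.

```python
import math

def plan_migrations(loads):
    """Plan moves of work units from over- to under-loaded processors,
    targeting the ceiling of the mean load; return (moves, total cost)."""
    target = math.ceil(sum(loads) / len(loads))
    surplus = [[p, l - target] for p, l in enumerate(loads) if l > target]
    deficit = [[p, target - l] for p, l in enumerate(loads) if l < target]
    moves, cost = [], 0
    while surplus and deficit:
        s, d = surplus[0], deficit[0]
        amt = min(s[1], d[1])            # move only what is needed
        moves.append((s[0], d[0], amt))  # (from_proc, to_proc, units)
        cost += amt
        s[1] -= amt
        d[1] -= amt
        if s[1] == 0:
            surplus.pop(0)
        if d[1] == 0:
            deficit.pop(0)
    return moves, cost
```

For loads [10, 2, 6] the planner moves 4 units from processor 0 to processor 1 and nothing else; a balancer without the global view might shuffle far more data.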

  13. Brief Report: Translation and Adaptation of the Theory of Mind Inventory to Spanish

    ERIC Educational Resources Information Center

    Pujals, Elena; Batlle, Santiago; Camprodon, Ester; Pujals, Sílvia; Estrada, Xavier; Aceña, Marta; Petrizan, Araitz; Duñó, Lurdes; Martí, Josep; Martin, Luis Miguel; Pérez-Solá, Víctor

    2016-01-01

    The Theory of Mind Inventory is an informant measure designed to evaluate children's theory of mind competence. We describe the translation and cultural adaptation of the inventory by the following process: (1) translation from English to Spanish by two independent certified translators; (2) production of an agreed version by a multidisciplinary…

  14. A framework for grand scale parallelization of the combined finite discrete element method in 2d

    NASA Astrophysics Data System (ADS)

    Lei, Z.; Rougier, E.; Knight, E. E.; Munjiza, A.

    2014-09-01

    Within the context of rock mechanics, the Combined Finite-Discrete Element Method (FDEM) has been applied to many complex industrial problems such as block caving, deep mining techniques (tunneling, pillar strength, etc.), rock blasting, seismic wave propagation, packing problems, dam stability, rock slope stability, rock mass strength characterization problems, etc. The reality is that most of these were accomplished in a 2D and/or single processor realm. In this work a hardware independent FDEM parallelization framework has been developed using the Virtual Parallel Machine for FDEM, (V-FDEM). With V-FDEM, a parallel FDEM software can be adapted to different parallel architecture systems ranging from just a few to thousands of cores.

  15. Validation of neoclassical bootstrap current models in the edge of an H-mode plasma.

    PubMed

    Wade, M R; Murakami, M; Politzer, P A

    2004-06-11

    Analysis of the parallel electric field E(parallel) evolution following an L-H transition in the DIII-D tokamak indicates the generation of a large negative pulse near the edge which propagates inward, indicative of the generation of a noninductive edge current. Modeling indicates that the observed E(parallel) evolution is consistent with a narrow current density peak generated in the plasma edge. Very good quantitative agreement is found between the measured E(parallel) evolution and that expected from neoclassical theory predictions of the bootstrap current.

  16. Introduction to Fuzzy Set Theory

    NASA Technical Reports Server (NTRS)

    Kosko, Bart

    1990-01-01

    An introduction to fuzzy set theory is described. Topics covered include: neural networks and fuzzy systems; the dynamical systems approach to machine intelligence; intelligent behavior as adaptive model-free estimation; fuzziness versus probability; fuzzy sets; the entropy-subsethood theorem; adaptive fuzzy systems for backing up a truck-and-trailer; product-space clustering with differential competitive learning; and adaptive fuzzy system for target tracking.

  17. Adaptive management of natural resources: theory, concepts, and management institutions.

    Treesearch

    George H. Stankey; Roger N. Clark; Bernard T. Bormann

    2005-01-01

    This report reviews the extensive and growing literature on the concept and application of adaptive management. Adaptive management is a central element of the Northwest Forest Plan and there is a need for an informed understanding of the key theories, concepts, and frameworks upon which it is founded. Literature from a diverse range of fields including social learning...

  18. [How to properly use the fear in AIDS intervention-the history and further of fear appeal development].

    PubMed

    Zhang, Ke; Du, Xiufang; Tao, Xiaorun; Zhang, Yuanyuan; Kang, Dianmin

    2015-08-01

The AIDS epidemic among men who have sex with men (MSM) has shown a sharp upward trend in recent years, making the search for effective behavioral intervention strategies imperative. Fear appeals, which use aroused fear to prompt recipients to act on intervention information, offer a new way to achieve better education and intervention effects. Over more than 70 years of development, fear appeal theory has evolved from the drive model, which held that fear determines the effectiveness of behavioral intervention, to the extended parallel process model, which integrates protection motivation theory and the parallel process model and treats fear as one of several appraisal factors rather than the single key factor. Fear appeal theory is thus becoming more comprehensive and accurate. As an important theoretical foundation, it is still developing and needs further work to be perfected.

  19. Ellipsis and discourse coherence

    PubMed Central

    Frazier, Lyn; Clifton, Charles

    2006-01-01

VP-ellipsis generally requires a syntactically matching antecedent. However, many documented examples exist where the antecedent is not appropriate. Kehler (2000, 2002) proposed an elegant theory which predicts that a syntactic antecedent for an elided VP is required only for a certain discourse coherence relation (resemblance), not for cause-effect relations. Most of the data Kehler used to motivate his theory come from corpus studies and thus do not consist of true minimal pairs. We report five experiments testing predictions of the coherence theory, using standard minimal pair materials. The results raise questions about the empirical basis for coherence theory because parallelism is preferred for all coherence relations, not just resemblance relations. Further, strict identity readings, which should not be available when a syntactic antecedent is required, are influenced by parallelism per se, holding the discourse coherence relation constant. This calls into question the causal role of coherence relations in processing VP ellipsis. PMID:16896367

  20. Using Game Theory and the Bible to Build Critical Thinking Skills

    ERIC Educational Resources Information Center

    McCannon, Bryan C.

    2007-01-01

    The author describes a course designed to build the critical thinking skills of undergraduate economics students. The course introduces and uses game theory to study the Bible. Students gain experience using game theory to formalize events and, by drawing parallels between the Bible and common economic concepts, illustrate the pervasiveness of…

  1. Building a Middle-Range Theory of Adaptive Spirituality.

    PubMed

    Dobratz, Marjorie C

    2016-04-01

The purpose of this article is to describe a Roy adaptation model-based research abstraction, the findings of which were synthesized into a middle-range theory (MRT) of adaptive spirituality. The published literature yielded 21 empirical studies that investigated religion/spirituality. Quantitative results supported the influence of spirituality on quality of life, psychosocial adjustment, well-being, adaptive coping, and the self-concept mode. Qualitative findings showed the importance of spiritual expressions, values, and beliefs in adapting to chronic illness, bereavement, death, and other life transitions. These findings were abstracted into six theoretical statements, a conceptual definition of adaptive spirituality, and three hypotheses for future testing. © The Author(s) 2016.

  2. Why do parallel cortical systems exist for the perception of static form and moving form?

    PubMed

    Grossberg, S

    1991-02-01

This article analyzes computational properties that clarify why the parallel cortical systems V1→V2, V1→MT, and V1→V2→MT exist for the perceptual processing of static visual forms and moving visual forms. The article describes a symmetry principle, called FM symmetry, that is predicted to govern the development of these parallel cortical systems by computing all possible ways of symmetrically gating sustained cells with transient cells and organizing these sustained-transient cells into opponent pairs of on-cells and off-cells whose output signals are insensitive to direction of contrast. This symmetric organization explains how the static form system (static BCS) generates emergent boundary segmentations whose outputs are insensitive to direction of contrast and insensitive to direction of motion, whereas the motion form system (motion BCS) generates emergent boundary segmentations whose outputs are insensitive to direction of contrast but sensitive to direction of motion. FM symmetry clarifies why the geometries of static and motion form perception differ--for example, why the opposite orientation of vertical is horizontal (90 degrees), but the opposite direction of up is down (180 degrees). Opposite orientations and directions are embedded in gated dipole opponent processes that are capable of antagonistic rebound. Negative afterimages, such as the MacKay and waterfall illusions, are hereby explained, as are aftereffects of long-range apparent motion. These antagonistic rebounds help to control a dynamic balance between complementary perceptual states of resonance and reset. Resonance cooperatively links features into emergent boundary segmentations via positive feedback in a CC loop, and reset terminates a resonance when the image changes, thereby preventing massive smearing of percepts. These complementary preattentive states of resonance and reset are related to analogous states that govern attentive feature integration, learning, and memory search in adaptive resonance theory. The mechanism used in the V1→MT system to generate a wave of apparent motion between discrete flashes may also be used in other cortical systems to generate spatial shifts of attention. The theory suggests how the V1→V2→MT cortical stream helps to compute moving form in depth and how long-range apparent motion of illusory contours occurs. These results collectively argue against vision theories that espouse independent processing modules. Instead, specialized subsystems interact to overcome computational uncertainties and complementary deficiencies, to cooperatively bind features into context-sensitive resonances, and to realize symmetry principles that are predicted to govern the development of the visual cortex.

  3. Nonlinear adaptive networks: A little theory, a few applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, R.D.; Qian, S.; Barnes, C.W.

    1990-01-01

We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice Lagoon, sonar transient detection, control of nonlinear processes, balancing a double inverted pendulum, and design advice for free electron lasers. 26 refs., 23 figs.

  4. Prolegomena to the field

    NASA Astrophysics Data System (ADS)

    Chen, Su Shing; Caulfield, H. John

    1994-03-01

Adaptive Computing, vs. Classical Computing, is emerging as a field that represents the culmination of more than 40 years of work in various scientific and technological areas, including cybernetics, neural networks, pattern recognition networks, learning machines, self-reproducing automata, genetic algorithms, fuzzy logics, probabilistic logics, chaos, electronics, optics, and quantum devices. This volume of "Critical Reviews on Adaptive Computing: Mathematics, Electronics, and Optics" is intended as a synergistic approach to this emerging field. There are many researchers in these areas working on important results. However, we have not seen a general effort to summarize and synthesize these results in theory as well as implementation. In order to reach a higher level of synergism, we propose Adaptive Computing as the field which comprises the above-mentioned computational paradigms and their various realizations. The field should include both the Theory (or Mathematics) and the Implementation. Our emphasis is on the interplay of Theory and Implementation. This interplay, an adaptive process itself, is the only "holistic" way to advance our understanding and realization of brain-like computation. We feel that a theory without implementation has the tendency to become unrealistic and "out of touch" with reality, while an implementation without theory runs the risk of being superficial and obsolete.

  5. A Knowledge-Based Approach for Item Exposure Control in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Doong, Shing H.

    2009-01-01

    The purpose of this study is to investigate a functional relation between item exposure parameters (IEPs) and item parameters (IPs) over parallel pools. This functional relation is approximated by a well-known tool in machine learning. Let P and Q be parallel item pools and suppose IEPs for P have been obtained via a Sympson and Hetter-type…

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koniges, A.E.

The author describes the new T3D parallel computer at NERSC. The adaptive mesh ICF3D code is one of the current applications being ported and developed for use on the T3D. It has been stressed in other papers in these proceedings that the development environment and tools available on the parallel computer are similar to any planned for the future, including networks of workstations.

  7. The island dynamics model on parallel quadtree grids

    NASA Astrophysics Data System (ADS)

    Mistani, Pouria; Guittet, Arthur; Bochkov, Daniil; Schneider, Joshua; Margetis, Dionisios; Ratsch, Christian; Gibou, Frederic

    2018-05-01

    We introduce an approach for simulating epitaxial growth by use of an island dynamics model on a forest of quadtree grids, and in a parallel environment. To this end, we use a parallel framework introduced in the context of the level-set method. This framework utilizes: discretizations that achieve a second-order accurate level-set method on non-graded adaptive Cartesian grids for solving the associated free boundary value problem for surface diffusion; and an established library for the partitioning of the grid. We consider the cases with: irreversible aggregation, which amounts to applying Dirichlet boundary conditions at the island boundary; and an asymmetric (Ehrlich-Schwoebel) energy barrier for attachment/detachment of atoms at the island boundary, which entails the use of a Robin boundary condition. We provide the scaling analyses performed on the Stampede supercomputer and numerical examples that illustrate the capability of our methodology to efficiently simulate different aspects of epitaxial growth. The combination of adaptivity and parallelism in our approach enables simulations that are several orders of magnitude faster than those reported in the recent literature and, thus, provides a viable framework for the systematic study of mound formation on crystal surfaces.
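A minimal sketch of adaptive quadtree refinement in the spirit of the approach above (a toy data structure of my own, not the grid-partitioning library the authors build on): cells split recursively wherever a refinement criterion fires, e.g. near an island boundary.

```python
class Cell:
    """One quadtree cell: lower-left corner (x, y), edge size, and level."""
    def __init__(self, x, y, size, level):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def refine(self, criterion, max_level):
        """Split this cell into four quadrants while the criterion holds
        and the maximum refinement level has not been reached."""
        if self.level < max_level and criterion(self):
            h = self.size / 2
            self.children = [Cell(self.x + dx, self.y + dy, h, self.level + 1)
                             for dx in (0, h) for dy in (0, h)]
            for child in self.children:
                child.refine(criterion, max_level)

def leaves(cell):
    """Collect the leaf cells, i.e. the active computational grid."""
    if not cell.children:
        return [cell]
    return [leaf for child in cell.children for leaf in leaves(child)]
```

Refining a unit cell to level 3 around a single point produces 10 leaf cells instead of the 64 a uniform level-3 grid would need; this locality is what makes the adaptive approach cheap, and in the parallel setting the forest of such trees is partitioned across processes.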

  8. Parallel design of JPEG-LS encoder on graphics processing units

    NASA Astrophysics Data System (ADS)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth, and many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve the compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels, and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance with a 26.3x speedup over the original CPU code.
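The parallel prefix sum mentioned above typically follows the work-efficient (Blelloch) scan pattern. The sketch below runs the same up-sweep/down-sweep steps serially (illustrative Python, not the paper's CUDA kernel; it assumes the input length is a power of two):

```python
def exclusive_scan(data):
    """Work-efficient exclusive prefix sum. Each inner loop iteration is
    independent, which is what a GPU executes as one parallel step."""
    a = list(data)
    n = len(a)
    # Up-sweep (reduce): build a balanced tree of partial sums in place.
    d = 1
    while d < n:
        for i in range(0, n, 2 * d):
            a[i + 2 * d - 1] += a[i + d - 1]
        d *= 2
    # Down-sweep: convert the tree of sums into an exclusive scan.
    a[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(0, n, 2 * d):
            t = a[i + d - 1]
            a[i + d - 1] = a[i + 2 * d - 1]
            a[i + 2 * d - 1] += t
        d //= 2
    return a
```

For [1, 2, 3, 4] this returns [0, 1, 3, 6]; in an encoder, such scans turn per-block output lengths into output offsets so blocks can be written concurrently.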

  9. Some Considerations in Maintaining Adaptive Test Item Pools.

    ERIC Educational Resources Information Center

    Stocking, Martha L.

    The construction of parallel editions of conventional tests for purposes of test security while maintaining score comparability has always been a recognized and difficult problem in psychometrics and test construction. The introduction of new modes of test construction, e.g., adaptive testing, changes the nature of the problem, but does not make…

  10. Master surgeons' operative teaching philosophies: a qualitative analysis of parallels to learning theory.

    PubMed

    Pernar, Luise I M; Ashley, Stanley W; Smink, Douglas S; Zinner, Michael J; Peyre, Sarah E

    2012-01-01

    Practicing within the Halstedian model of surgical education, academic surgeons serve dual roles as physicians to their patients and educators of their trainees. Despite this significant responsibility, few surgeons receive formal training in educational theory to inform their practice. The goal of this work was to gain an understanding of how master surgeons approach teaching uncommon and highly complex operations and to determine the educational constructs that frame their teaching philosophies and approaches. Individuals included in the study were queried using electronically distributed open-ended, structured surveys. Responses to the surveys were analyzed and grouped using grounded theory and were examined for parallels to concepts of learning theory. Academic teaching hospital. Twenty-two individuals identified as master surgeons. Twenty-one (95.5%) individuals responded to the survey. Two primary thematic clusters were identified: global approach to teaching (90.5% of respondents) and approach to intraoperative teaching (76.2%). Many of the emergent themes paralleled principles of transfer learning theory outlined in the psychology and education literature. Key elements included: conferring graduated responsibility (57.1%), encouraging development of a mental set (47.6%), fostering or expecting deliberate practice (42.9%), deconstructing complex tasks (38.1%), vertical transfer of information (33.3%), and identifying general principles to structure knowledge (9.5%). Master surgeons employ many of the principles of learning theory when teaching uncommon and highly complex operations. The findings may hold significant implications for faculty development in surgical education. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  11. Electrooptical adaptive switching network for the hypercube computer

    NASA Technical Reports Server (NTRS)

    Chow, E.; Peterson, J.

    1988-01-01

    An all-optical network design for the hyperswitch network using regular free-space interconnects between electronic processor nodes is presented. The adaptive routing model used is described, and an adaptive routing control example is presented. The design demonstrates that existing electrooptical techniques are sufficient for implementing efficient parallel architectures without the need for more complex means of implementing arbitrary interconnection schemes. The electrooptical hyperswitch network significantly improves the communication performance of the hypercube computer.

  12. Tracking the Continuity of Language Comprehension: Computer Mouse Trajectories Suggest Parallel Syntactic Processing

    ERIC Educational Resources Information Center

    Farmer, Thomas A.; Cargill, Sarah A.; Hindy, Nicholas C.; Dale, Rick; Spivey, Michael J.

    2007-01-01

    Although several theories of online syntactic processing assume the parallel activation of multiple syntactic representations, evidence supporting simultaneous activation has been inconclusive. Here, the continuous and non-ballistic properties of computer mouse movements are exploited, by recording their streaming x, y coordinates to procure…

  13. Approximation algorithms for scheduling unrelated parallel machines with release dates

    NASA Astrophysics Data System (ADS)

    Avdeenko, T. V.; Mesentsev, Y. A.; Estraykh, I. V.

    2017-01-01

In this paper we propose approaches to the optimal scheduling of unrelated parallel machines with release dates. One approach is based on a dynamic programming scheme modified with adaptive narrowing of the search domain to ensure computational effectiveness. We discuss the complexity of exact schedule synthesis and compare exact schedules with approximate, close-to-optimal solutions. We also explain how the algorithm works for an example of two unrelated parallel machines and five jobs with release dates. Performance results that show the efficiency of the proposed approach are given.
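For contrast with the exact dynamic programming scheme, here is a simple greedy heuristic for the same problem class (my illustration, not the paper's algorithm): at each step, start the job/machine pair with the earliest achievable completion time, honoring release dates. On unrelated machines, p[j][k] is the processing time of job j on machine k.

```python
def greedy_schedule(p, release):
    """Earliest-completion-time greedy for unrelated parallel machines
    with release dates; returns per-job finish times and the makespan."""
    n, m = len(p), len(p[0])
    free = [0] * m                     # time each machine next becomes idle
    done, finish = set(), {}
    while len(done) < n:
        best = None
        for j in range(n):
            if j in done:
                continue
            for k in range(m):
                start = max(free[k], release[j])   # wait for release date
                c = start + p[j][k]
                if best is None or c < best[0]:
                    best = (c, j, k)
        c, j, k = best
        finish[j], free[k] = c, c
        done.add(j)
    return finish, max(finish.values())
```

For p = [[2, 3], [2, 1], [4, 2]] with releases [0, 0, 1], the greedy puts job 1 on machine 1 (done at t=1), job 0 on machine 0 (t=2), and job 2 on machine 1 (t=3), for a makespan of 3; the exact DP would certify whether this is optimal.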

  14. The relativistic theory of the chemical shift

    NASA Astrophysics Data System (ADS)

    Pyper, N. C.

    1983-04-01

    A relativistic theory of the NMR chemical shift for a closed-shell system is presented. The final expression for the shielding, derived by, applying two Gordon decompositions to the Dirac current operator, closely parallels the Ramsey non-relativistic result.

  15. Longitudinal trends in climate drive flowering time clines in North American Arabidopsis thaliana.

    PubMed

    Samis, Karen E; Murren, Courtney J; Bossdorf, Oliver; Donohue, Kathleen; Fenster, Charles B; Malmberg, Russell L; Purugganan, Michael D; Stinchcombe, John R

    2012-06-01

    Introduced species frequently show geographic differentiation, and when differentiation mirrors the ancestral range, it is often taken as evidence of adaptive evolution. The mouse-ear cress (Arabidopsis thaliana) was introduced to North America from Eurasia 150-200 years ago, providing an opportunity to study parallel adaptation in a genetic model organism. Here, we test for clinal variation in flowering time using 199 North American (NA) accessions of A. thaliana, and evaluate the contributions of major flowering time genes FRI, FLC, and PHYC as well as potential ecological mechanisms underlying differentiation. We find evidence for substantial within population genetic variation in quantitative traits and flowering time, and putatively adaptive longitudinal differentiation, despite low levels of variation at FRI, FLC, and PHYC and genome-wide reductions in population structure relative to Eurasian (EA) samples. The observed longitudinal cline in flowering time in North America is parallel to an EA cline, robust to the effects of population structure, and associated with geographic variation in winter precipitation and temperature. We detected major effects of FRI on quantitative traits associated with reproductive fitness, although the haplotype associated with higher fitness remains rare in North America. Collectively, our results suggest the evolution of parallel flowering time clines through novel genetic mechanisms.

  16. Increased performance in the short-term water demand forecasting through the use of a parallel adaptive weighting strategy

    NASA Astrophysics Data System (ADS)

    Sardinha-Lourenço, A.; Andrade-Campos, A.; Antunes, A.; Oliveira, M. S.

    2018-03-01

    Recent research on short-term water demand forecasting has shown that models using univariate time series based on historical data are useful and can be combined with other prediction methods to reduce errors. Water demand in drinking water distribution networks is largely repetitive in nature, and, under similar meteorological conditions and consumer profiles, this repetitiveness allows the development of a heuristic forecast model that, combined with other autoregressive models, can provide reliable forecasts. In this study, a parallel adaptive weighting strategy for forecasting water consumption over the next 24-48 h, using univariate time series of potable water consumption, is proposed. Two Portuguese potable water distribution networks are used as case studies, where the only input data are water consumption and the national calendar. For the development of the strategy, the Autoregressive Integrated Moving Average (ARIMA) method and a short-term forecast heuristic algorithm are used. Simulations with the model showed that, when using the parallel adaptive weighting strategy, the prediction error can be reduced by 15.96% and the average error by 9.20%. This reduction is important in the control and management of water supply systems. The proposed methodology can be extended to other forecast methods, especially when it comes to the availability of multiple forecast models.
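    The abstract does not spell out the weighting rule itself, so the following is only a minimal sketch of one common choice: weight each forecaster inversely to its recent mean absolute error, then combine the predictions. All names and numbers here are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: combine two forecasters (e.g. an ARIMA model and a
# calendar-based heuristic) with weights inversely proportional to each
# model's recent absolute error. Illustrative only; not the paper's code.

def adaptive_weights(recent_errors_a, recent_errors_b, eps=1e-9):
    """Weights inversely proportional to mean absolute recent error."""
    mae_a = sum(abs(e) for e in recent_errors_a) / len(recent_errors_a)
    mae_b = sum(abs(e) for e in recent_errors_b) / len(recent_errors_b)
    inv_a, inv_b = 1.0 / (mae_a + eps), 1.0 / (mae_b + eps)
    total = inv_a + inv_b
    return inv_a / total, inv_b / total

def combined_forecast(pred_a, pred_b, w_a, w_b):
    return [w_a * a + w_b * b for a, b in zip(pred_a, pred_b)]

# Model A has been twice as accurate recently, so it gets the larger weight.
w_a, w_b = adaptive_weights([1.0, 1.0], [2.0, 2.0])
print(round(w_a, 3), round(w_b, 3))  # 0.667 0.333
forecast = combined_forecast([100.0, 110.0], [90.0, 120.0], w_a, w_b)
```

    A production system would recompute the weights over a sliding window, so whichever model has been more accurate lately dominates the next 24-48 h combination.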

  17. Analysis on Influence Factors of Adaptive Filter Acting on ANC

    NASA Astrophysics Data System (ADS)

    Zhang, Xiuqun; Zou, Liang; Ni, Guangkui; Wang, Xiaojun; Han, Tao; Zhao, Quanfu

    The noise problem has become increasingly serious in recent years, and the adaptive filter theory applied in ANC [1] (active noise control) has attracted more and more attention. In this article, the basic principle and algorithm of adaptive filter theory are both researched, and the influence factors that affect its convergence rate and noise reduction are simulated.
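    The abstract does not name a specific algorithm, so as a hedged illustration here is the LMS (least-mean-squares) filter, the textbook adaptive algorithm behind many ANC systems; the step size mu is the main knob governing the convergence rate the article studies. The signals and parameters below are toy assumptions.

```python
# Hedged sketch of an LMS adaptive filter. The filter learns weights w so
# that its output y[n] cancels the target signal d[n]; the residual e[n] is
# the uncancelled noise, and the step size mu sets the convergence rate.
import math

def lms(x, d, num_taps=4, mu=0.05):
    """Return the error signal e[n] = d[n] - y[n] during LMS adaptation."""
    w = [0.0] * num_taps
    errors = []
    for n in range(num_taps, len(x)):
        window = x[n - num_taps:n][::-1]          # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = d[n] - y                              # residual (uncancelled) noise
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]  # gradient step
        errors.append(e)
    return errors

# Toy demo: the "noise" d is a delayed copy of the reference x, an easy
# target; the residual error shrinks as the filter converges.
x = [math.sin(0.3 * n) for n in range(400)]
d = [0.0] + x[:-1]                                # d[n] = x[n-1]
e = lms(x, d)
print(abs(e[0]) > abs(e[-1]))  # True: error decays as the filter adapts
```

    Increasing mu speeds convergence up to a stability limit, beyond which the filter diverges — exactly the trade-off between convergence rate and noise reduction the article simulates.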

  18. On the Use of Adaptive Instructional Images Based on the Sequential-Global Dimension of the Felder-Silverman Learning Style Theory

    ERIC Educational Resources Information Center

    Filippidis, Stavros K.; Tsoukalas, Ioannis A.

    2009-01-01

    An adaptive educational system that uses adaptive presentation is presented. In this system fragments of different images present the same content and the system can choose the one most relevant to the user based on the sequential-global dimension of Felder-Silverman's learning style theory. In order to retrieve the learning style of each student…

  19. An embodied biologically constrained model of foraging: from classical and operant conditioning to adaptive real-world behavior in DAC-X.

    PubMed

    Maffei, Giovanni; Santos-Pata, Diogo; Marcos, Encarni; Sánchez-Fibla, Marti; Verschure, Paul F M J

    2015-12-01

    Animals successfully forage within new environments by learning, simulating and adapting to their surroundings. The functions behind such goal-oriented behavior can be decomposed into 5 top-level objectives: 'how', 'why', 'what', 'where', 'when' (H4W). The paradigms of classical and operant conditioning describe some of the behavioral aspects found in foraging. However, it remains unclear how the organization of their underlying neural principles account for these complex behaviors. We address this problem from the perspective of the Distributed Adaptive Control theory of mind and brain (DAC) that interprets these two paradigms as expressing properties of core functional subsystems of a layered architecture. In particular, we propose DAC-X, a novel cognitive architecture that unifies the theoretical principles of DAC with biologically constrained computational models of several areas of the mammalian brain. DAC-X supports complex foraging strategies through the progressive acquisition, retention and expression of task-dependent information and associated shaping of action, from exploration to goal-oriented deliberation. We benchmark DAC-X using a robot-based hoarding task including the main perceptual and cognitive aspects of animal foraging. We show that efficient goal-oriented behavior results from the interaction of parallel learning mechanisms accounting for motor adaptation, spatial encoding and decision-making. Together, our results suggest that the H4W problem can be solved by DAC-X building on the insights from the study of classical and operant conditioning. Finally, we discuss the advantages and limitations of the proposed biologically constrained and embodied approach towards the study of cognition and the relation of DAC-X to other cognitive architectures. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Towards a cell-based mechanostat theory of bone: the need to account for osteocyte desensitisation and osteocyte replacement.

    PubMed

    Lerebours, Chloé; Buenzli, Pascal R

    2016-09-06

    Bone's mechanostat theory describes the adaptation of bone tissues to their mechanical environment. Many experiments have investigated and observed such structural adaptation. However, there is still much uncertainty about how to define the reference mechanical state at which bone structure is adapted and stable. Clinical and experimental observations show that this reference state varies both in space and in time, over a wide range of timescales. We propose here an osteocyte-based mechanostat theory that encodes the mechanical reference state in osteocyte properties. This theory assumes that osteocytes are initially formed adapted to their current local mechanical environment through modulation of their properties. We distinguish two main types of physiological processes by which osteocytes subsequently modify the reference mechanical state at different timescales. One is cell desensitisation, which occurs rapidly and reversibly during an osteocyte's lifetime. The other is the replacement of osteocytes during bone remodelling, which occurs over the long timescales of bone turnover. The novelty of this theory is to propose that long-lasting morphological and genotypic osteocyte properties provide a material basis for a long-term mechanical memory of bone that is gradually reset by bone remodelling. We test this theory by simulating long-term mechanical disuse (modelling spinal cord injury), and short-term mechanical loadings (modelling daily exercises) with a mathematical model. The consideration of osteocyte desensitisation and of osteocyte replacement by remodelling is able to capture a number of phenomena and timescales observed during the mechanical adaptation of bone tissues, lending support to this theory. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Cellular Automata

    NASA Astrophysics Data System (ADS)

    Gutowitz, Howard

    1991-08-01

    Cellular automata, dynamic systems in which space and time are discrete, are yielding interesting applications in both the physical and natural sciences. The thirty-four contributions in this book cover many aspects of contemporary studies on cellular automata and include reviews, research reports, and guides to recent literature and available software. Chapters cover mathematical analysis; the structure of the space of cellular automata; learning rules with specified properties; cellular automata in biology, physics, chemistry, and computation theory; and generalizations of cellular automata in neural nets, Boolean nets, and coupled map lattices. Current work on cellular automata may be viewed as revolving around two central and closely related problems: the forward problem and the inverse problem. The forward problem concerns the description of properties of given cellular automata. Properties considered include reversibility, invariants, criticality, fractal dimension, and computational power. The role of cellular automata in computation theory is seen as a particularly exciting venue for exploring parallel computers as theoretical and practical tools in mathematical physics. The inverse problem, an area of study gaining prominence particularly in the natural sciences, involves designing rules that possess specified properties or perform specified tasks. A long-term goal is to develop a set of techniques that can find a rule or set of rules that can reproduce quantitative observations of a physical system. Studies of the inverse problem take up the organization and structure of the set of automata, in particular the parameterization of the space of cellular automata. Optimization and learning techniques, like the genetic algorithm and adaptive stochastic cellular automata, are applied to find cellular automaton rules that model such physical phenomena as crystal growth or perform such adaptive-learning tasks as balancing an inverted pole.
Howard Gutowitz is Collaborateur in the Service de Physique du Solide et Résonance Magnetique, Commissariat à l'Energie Atomique, Saclay, France.
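    The "forward problem" described above can be illustrated with a few lines of code: fix a rule, evolve it, and inspect the resulting behaviour. This sketch uses an elementary one-dimensional automaton with Wolfram rule number 90 (chosen for illustration, not taken from the book), which grows a Sierpinski-triangle pattern from a single seed.

```python
# Illustrative sketch of the CA "forward problem": evolve a given 1-D rule.
# Rule 90 updates each cell to the XOR of its two neighbours.

def step(cells, rule=90):
    """One synchronous update of an elementary CA with periodic boundaries."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # 0..7
        out.append((rule >> neighbourhood) & 1)              # look up rule bit
    return out

cells = [0] * 31
cells[15] = 1                      # single seed cell
for _ in range(4):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

    The inverse problem runs the other way: given a desired pattern or measurement, search (e.g. with a genetic algorithm) over the 256 elementary rule numbers, or far larger rule spaces, for one that reproduces it.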

  2. An overview of adaptive model theory: solving the problems of redundancy, resources, and nonlinear interactions in human movement control.

    PubMed

    Neilson, Peter D; Neilson, Megan D

    2005-09-01

    Adaptive model theory (AMT) is a computational theory that addresses the difficult control problem posed by the musculoskeletal system in interaction with the environment. It proposes that the nervous system creates motor maps and task-dependent synergies to solve the problems of redundancy and limited central resources. These lead to the adaptive formation of task-dependent feedback/feedforward controllers able to generate stable, noninteractive control and render nonlinear interactions unobservable in sensory-motor relationships. AMT offers a unified account of how the nervous system might achieve these solutions by forming internal models. This is presented as the design of a simulator consisting of neural adaptive filters based on cerebellar circuitry. It incorporates a new network module that adaptively models (in real time) nonlinear relationships between inputs with changing and uncertain spectral and amplitude probability density functions as is the case for sensory and motor signals.

  3. Capabilities of Fully Parallelized MHD Stability Code MARS

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2016-10-01

    Results of the full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. A parallel version of MARS, named PMARS, has recently been developed at FAR-TECH. Parallelized MARS is an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models implemented in MARS. Parallelization of the code included parallelization of the construction of the matrix for the eigenvalue problem and of the inverse vector iteration algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the MARS algorithm using parallel libraries and procedures. Parallelized MARS is capable of calculating eigenmodes with significantly increased spatial resolution: up to 5,000 adapted radial grid points with up to 500 poloidal harmonics. Such resolution is sufficient for simulation of kink, tearing and peeling-ballooning instabilities with physically relevant parameters. Work is supported by the U.S. DOE SBIR program.
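    The inverse vector iteration the abstract mentions is a standard eigenvalue algorithm: repeatedly solve a shifted linear system and normalise, converging to the eigenpair nearest the shift. As a hedged sketch (a real symmetric 2x2 toy system, whereas MARS works with large complex matrices distributed over magnetic surfaces):

```python
# Hedged sketch of inverse vector iteration on a tiny symmetric system.

def solve2(a, b, c, d, r1, r2):
    """Solve the 2x2 system [[a, b], [c, d]] x = r by Cramer's rule."""
    det = a * d - b * c
    return ((r1 * d - b * r2) / det, (a * r2 - r1 * c) / det)

def inverse_iteration(A, sigma, iters=50):
    """Find the eigenpair of A closest to the shift sigma."""
    (a, b), (c, d) = A
    x = (1.0, 0.0)   # any start not orthogonal to the target eigenvector
    for _ in range(iters):
        # One step: solve (A - sigma*I) y = x, then normalise y.
        y = solve2(a - sigma, b, c, d - sigma, x[0], x[1])
        norm = (y[0] ** 2 + y[1] ** 2) ** 0.5
        x = (y[0] / norm, y[1] / norm)
    # Rayleigh quotient gives the eigenvalue estimate.
    Ax = (a * x[0] + b * x[1], c * x[0] + d * x[1])
    return Ax[0] * x[0] + Ax[1] * x[1], x

A = ((2.0, 1.0), (1.0, 2.0))       # eigenvalues 1 and 3
lam, vec = inverse_iteration(A, sigma=0.9)
print(round(lam, 6))               # 1.0: the eigenvalue nearest the shift
```

    In a parallel setting, the expensive part is each shifted solve; distributing the matrix rows (here, by magnetic surface) parallelises exactly that step.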

  4. Fully Parallel MHD Stability Analysis Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2015-11-01

    Progress on the full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and is widely used by the fusion community. The parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and of the inverse iteration algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the present MARS algorithm using parallel libraries and procedures. Results of MARS parallelization and of the development of a new fixed-boundary equilibrium code adapted for MARS input will be reported. Work is supported by the U.S. DOE SBIR program.

  5. Adaptive-Wall Wind-Tunnel Investigations

    DTIC Science & Technology

    1981-02-01

    boundary condition for unconfined flow. In this way, theory and experiment are combined to minimize wall interference. The concept of an adaptive wall...should be noted that although shock waves extend to the walls, the exterior-flow calculation was based on subcritical-flow theory. Goodyer’s configuration...and v by aerodynamic probes. Both subsonic and transonic small-disturbance theory were used, as appropriate, to evaluate the functional rela

  6. Self-consistent field theory simulations of polymers on arbitrary domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouaknin, Gaddiel, E-mail: gaddielouaknin@umail.ucsb.edu; Laachi, Nabil; Delaney, Kris

    2016-12-15

    We introduce a framework for simulating the mesoscale self-assembly of block copolymers in arbitrary confined geometries subject to Neumann boundary conditions. We employ a hybrid finite difference/volume approach to discretize the mean-field equations on an irregular domain represented implicitly by a level-set function. The numerical treatment of the Neumann boundary conditions is sharp, i.e. it avoids an artificial smearing in the irregular domain boundary. This strategy enables the study of self-assembly in confined domains and enables the computation of physically meaningful quantities at the domain interface. In addition, we employ adaptive grids encoded with Quad-/Oc-trees in parallel to automatically refine the grid where the statistical fields vary rapidly as well as at the boundary of the confined domain. This approach results in a significant reduction in the number of degrees of freedom and makes the simulations in arbitrary domains using effective boundary conditions computationally efficient in terms of both speed and memory requirement. Finally, in the case of regular periodic domains, where pseudo-spectral approaches are superior to finite differences in terms of CPU time and accuracy, we use the adaptive strategy to store chain propagators, reducing the memory footprint without loss of accuracy in computed physical observables.
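    The adaptive refinement criterion described above — split cells where the fields vary rapidly, keep smooth regions coarse — can be sketched with a toy quadtree. The field, thresholds, and depth limit below are illustrative assumptions, not the paper's discretization.

```python
# Hedged sketch of gradient-driven quadtree refinement: cells are split
# recursively wherever a field varies across the cell, so fine cells
# cluster along interfaces while smooth regions stay coarse.

def field(x, y):
    """Toy density field with a sharp interface along x = 0.5."""
    return 1.0 if x < 0.5 else 0.0

def refine(x0, y0, size, depth, max_depth, leaves):
    """Split a cell when the field differs across its corners."""
    corners = [field(x0, y0), field(x0 + size, y0),
               field(x0, y0 + size), field(x0 + size, y0 + size)]
    varies = max(corners) - min(corners) > 0.0
    if varies and depth < max_depth:
        half = size / 2.0
        for dx in (0.0, half):
            for dy in (0.0, half):
                refine(x0 + dx, y0 + dy, half, depth + 1, max_depth, leaves)
    else:
        leaves.append((x0, y0, size))

leaves = []
refine(0.0, 0.0, 1.0, 0, 5, leaves)
# Fine cells line up along the x = 0.5 interface; the rest stays coarse.
print(len(leaves), min(s for _, _, s in leaves))  # → 94 0.03125
```

    A uniform grid at the finest level would need 32 x 32 = 1024 cells; the tree reaches the same resolution at the interface with 94 leaves, which is the kind of degrees-of-freedom saving the abstract reports.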

  7. Accelerating Sequences in the Presence of Metal by Exploiting the Spatial Distribution of Off-Resonance

    PubMed Central

    Smith, Matthew R.; Artz, Nathan S.; Koch, Kevin M.; Samsonov, Alexey; Reeder, Scott B.

    2014-01-01

    Purpose: To demonstrate feasibility of exploiting the spatial distribution of off-resonance surrounding metallic implants for accelerating multispectral imaging techniques. Theory: Multispectral imaging (MSI) techniques perform time-consuming independent 3D acquisitions with varying RF frequency offsets to address the extreme off-resonance from metallic implants. Each off-resonance bin provides a unique spatial sensitivity that is analogous to the sensitivity of a receiver coil, and therefore provides a unique opportunity for acceleration. Methods: Fully sampled MSI was performed to demonstrate retrospective acceleration. A uniform sampling pattern across off-resonance bins was compared to several adaptive sampling strategies using a total hip replacement phantom. Monte Carlo simulations were performed to compare noise propagation of two of these strategies. With a total knee replacement phantom, positive and negative off-resonance bins were strategically sampled with respect to the B0 field to minimize aliasing. Reconstructions were performed with a parallel imaging framework to demonstrate retrospective acceleration. Results: An adaptive sampling scheme dramatically improved reconstruction quality, which was supported by the noise propagation analysis. Independent acceleration of negative and positive off-resonance bins demonstrated reduced overlapping of aliased signal to improve the reconstruction. Conclusion: This work presents the feasibility of acceleration in the presence of metal by exploiting the spatial sensitivities of off-resonance bins. PMID:24431210

  8. Complex-energy approach to sum rules within nuclear density functional theory

    DOE PAGES

    Hinohara, Nobuo; Kortelainen, Markus; Nazarewicz, Witold; ...

    2015-04-27

    The linear response of the nucleus to an external field contains unique information about the effective interaction, correlations governing the behavior of the many-body system, and properties of its excited states. To characterize the response, it is useful to use its energy-weighted moments, or sum rules. By comparing computed sum rules with experimental values, the information content of the response can be utilized in the optimization process of the nuclear Hamiltonian or nuclear energy density functional (EDF). But the additional information comes at a price: compared to the ground state, computation of excited states is more demanding. To establish an efficient framework to compute energy-weighted sum rules of the response that is adaptable to the optimization of the nuclear EDF and large-scale surveys of collective strength, we have developed a new technique within the complex-energy finite-amplitude method (FAM) based on the quasiparticle random-phase approximation. The proposed sum-rule technique based on the complex-energy FAM is a tool of choice when optimizing effective interactions or energy functionals. The method is very efficient and well-adaptable to parallel computing. As a result, the FAM formulation is especially useful when standard theorems based on commutation relations involving the nuclear Hamiltonian and external field cannot be used.

  9. Development and Standardization of the Diagnostic Adaptive Behavior Scale: Application of Item Response Theory to the Assessment of Adaptive Behavior.

    PubMed

    Tassé, Marc J; Schalock, Robert L; Thissen, David; Balboni, Giulia; Bersani, Henry Hank; Borthwick-Duffy, Sharon A; Spreat, Scott; Widaman, Keith F; Zhang, Dalun; Navas, Patricia

    2016-03-01

    The Diagnostic Adaptive Behavior Scale (DABS) was developed using item response theory (IRT) methods and was constructed to provide the most precise and valid adaptive behavior information at or near the cutoff point of making a decision regarding a diagnosis of intellectual disability. The DABS initial item pool consisted of 260 items. Using IRT modeling and a nationally representative standardization sample, the item set was reduced to 75 items that provide the most precise adaptive behavior information at the cutoff area determining the presence or not of significant adaptive behavior deficits across conceptual, social, and practical skills. The standardization of the DABS is described and discussed.
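    The IRT idea behind this item selection can be made concrete: under a two-parameter logistic (2PL) model, each item's Fisher information peaks near its difficulty, so one retains the items that are most informative at the diagnostic cutoff. The item parameters below are invented for illustration and are not DABS items.

```python
# Hedged sketch of 2PL item information and cutoff-targeted item selection.
import math

def p_correct(theta, a, b):
    """2PL probability of a keyed response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information I(theta) = a^2 * P * (1 - P) for the 2PL model."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

# Hypothetical item pool: (discrimination a, difficulty b).
pool = [(1.2, -1.5), (0.8, 0.1), (1.6, -0.2), (1.1, 1.3), (1.4, 0.0)]
cutoff = 0.0   # ability level where the diagnostic decision is made

# Keep the 3 items that are most informative at the cutoff.
ranked = sorted(pool, key=lambda ab: item_information(cutoff, *ab), reverse=True)
print(ranked[:3])  # → [(1.6, -0.2), (1.4, 0.0), (1.1, 1.3)]
```

    Scaled up from 5 toy items to a 260-item pool, repeatedly keeping the items with the most information near the cutoff is the spirit of how a pool can be reduced to a short, maximally precise diagnostic set.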

  10. Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-07-15

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, ''Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory,'' describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaption or core observables that are recorded at core conditions that differ from those at which adaption is completed. Also, this paper demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaption for millions of input core parameters. Finally, this paper illustrates a useful application for adaptive simulation - reducing the inconsistencies between two different core simulator code systems, where the multitudes of input data to one code are adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. Also demonstrated is the robustness of such an application.

  11. A parallel time integrator for noisy nonlinear oscillatory systems

    NASA Astrophysics Data System (ADS)

    Subber, Waad; Sarkar, Abhijit

    2018-06-01

    In this paper, we adapt a parallel time integration scheme to track the trajectories of noisy nonlinear dynamical systems. Specifically, we formulate a parallel algorithm to generate the sample path of a nonlinear oscillator defined by stochastic differential equations (SDEs) using the so-called parareal method for ordinary differential equations (ODEs). The presence of the Wiener process in SDEs causes difficulties in the direct application of any numerical integration technique for ODEs, including the parareal algorithm. The parallel implementation of the algorithm involves two SDE solvers, namely a fine-level scheme to integrate the system in parallel and a coarse-level scheme to generate and correct the required initial conditions to start the fine-level integrators. For the numerical illustration, a randomly excited Duffing oscillator is investigated in order to study the performance of the stochastic parallel algorithm with respect to a range of system parameters. The distributed implementation of the algorithm exploits the Message Passing Interface (MPI).
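    The parareal skeleton the abstract builds on can be sketched for the deterministic ODE y' = -y (the stochastic version would add a Wiener increment inside each solver, and the paper's system is a Duffing oscillator, not this toy). A coarse solver G sweeps serially; the fine solves F over each time slice are independent and would run in parallel, e.g. one MPI rank per slice.

```python
# Hedged parareal sketch for y' = -y. Coarse solver G: one big Euler step.
# Fine solver F: many small Euler steps. Update rule per iteration:
#   y_{j+1}^{k+1} = G(y_j^{k+1}) + F(y_j^k) - G(y_j^k)
import math

def coarse(y, t0, t1):
    return y + (t1 - t0) * (-y)           # one explicit Euler step

def fine(y, t0, t1, substeps=100):
    h = (t1 - t0) / substeps
    for _ in range(substeps):
        y = y + h * (-y)
    return y

def parareal(y0, t_grid, iterations=3):
    n = len(t_grid) - 1
    # Initial guess from a serial coarse sweep.
    y = [y0]
    for j in range(n):
        y.append(coarse(y[j], t_grid[j], t_grid[j + 1]))
    for _ in range(iterations):
        # These fine solves are independent -> parallelisable over slices.
        f = [fine(y[j], t_grid[j], t_grid[j + 1]) for j in range(n)]
        g_old = [coarse(y[j], t_grid[j], t_grid[j + 1]) for j in range(n)]
        new = [y0]
        for j in range(n):
            # Predictor-corrector update: G(new) + F(old) - G(old).
            new.append(coarse(new[j], t_grid[j], t_grid[j + 1]) + f[j] - g_old[j])
        y = new
    return y

t_grid = [0.25 * j for j in range(5)]     # four slices on [0, 1]
y = parareal(1.0, t_grid)
print(abs(y[-1] - math.exp(-1.0)) < 1e-3)  # True: close to the exact solution
```

    After k iterations the first k slices match the serial fine solution exactly, which is why a few iterations with parallel fine solves can beat one long serial fine integration.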

  12. Why the Rediscoverer Ended up on the Sidelines: Hugo De Vries's Theory of Inheritance and the Mendelian Laws

    ERIC Educational Resources Information Center

    Stamhuis, Ida H.

    2015-01-01

    Eleven years before the "rediscovery" in 1900 of Mendel's work, Hugo De Vries published his theory of heredity. He expected his theory to become a big success, but it was not well-received. To find supporting evidence for this theory De Vries started an extensive research program. Because of the parallels of his ideas with the…

  13. Not different, Just Better: The Adaptive Evolution of an Enzyme

    DTIC Science & Technology

    2015-12-20

    ...is precisely regulated by allostery and the adaptation of allostery is unknown, and 3) multiple experiments by others have demonstrated that adaptive...mutations in the same gene, but replicate populations, functionally parallel? • Aim 3) Expression, purification and functional analysis of evolved pyruvate

  14. Specification and Analysis of Parallel Machine Architecture

    DTIC Science & Technology

    1990-03-17

    Parallel Machine Architecture C.V. Ramamoorthy Computer Science Division Dept. of Electrical Engineering and Computer Science University of California...capacity. (4) Adaptive: The overhead in resolution of deadlocks, etc. should be in proportion to their frequency. (5) Avoid rollbacks: Rollbacks can be...snapshots of system state graphically at a rate proportional to simulation time. Some of the examples are as follows: (1) When the simulation clock of

  15. Compiler and Runtime Support for Programming in Adaptive Parallel Environments

    DTIC Science & Technology

    1998-10-15

    no other job is waiting for resources, and use a smaller number of processors when other jobs need resources. Setia et al. [15, 20] have shown that such...15] Vijay K. Naik, Sanjeev Setia, and Mark Squillante. Performance analysis of job scheduling policies in parallel supercomputing environments. In...on networks of heterogeneous workstations. Technical Report CSE-94-012, Oregon Graduate Institute of Science and Technology, 1994. [20] Sanjeev Setia

  16. Automatic Adaptation of Tunable Distributed Applications

    DTIC Science & Technology

    2001-01-01

    size, weight, and battery life, with a single CPU, less memory, smaller hard disk, and lower bandwidth network connectivity. The power of PDAs is...wireless, and Bluetooth [32] facilities; thus achieving different rates of data transmission. With the trend of “write once, run everywhere...applications, a single component can execute on multiple processors (or machines) in parallel. These parallel applications, written in a specialized language

  17. Domain Adaptation of Translation Models for Multilingual Applications

    DTIC Science & Technology

    2009-04-01

    expansion effect that corpus (or dictionary) based translation introduces - however, this effect is maintained even with monolingual query expansion [12...every day; bilingual web pages are harvested as parallel corpora as the quantity of non-English data on the web increases; online dictionaries of...approach is to customize translation models to a domain, by automatically selecting the resources (dictionaries, parallel corpora) that are best for

  18. Applications of Computerized Adaptive Testing. Proceedings of a Symposium presented at the Annual Convention of the Military Testing Association (18th, October 1976). Research Report 77-1.

    ERIC Educational Resources Information Center

    Weiss, David J., Ed.

    This symposium consists of five papers and presents some recent developments in adaptive testing which have applications to several military testing problems. The overview, by James R. McBride, defines adaptive testing and discusses some of its item selection and scoring strategies. Item response theory, or item characteristic curve theory, is…

  19. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation is demonstrated by the reduction of the number of elements used and CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable.
We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel Benchmarks (NPB). In this paper, we present some interesting performance results of our OpenMP parallel implementation on different architectures such as the SGI Origin2000, SGI Altix, and Cray MTA-2.

  20. Teaching and Learning: Highlighting the Parallels between Education and Participatory Evaluation.

    ERIC Educational Resources Information Center

    Vanden Berk, Eric J.; Cassata, Jennifer Coyne; Moye, Melinda J.; Yarbrough, Donald B.; Siddens, Stephanie K.

    As an evaluation team trained in educational psychology and committed to participatory evaluation and its evolution, the researchers have found the parallel between evaluator-stakeholder roles in the participatory evaluation process and educator-student roles in educational psychology theory to be important. One advantage then is that the theories…

  1. An Alternative Methodology for Creating Parallel Test Forms Using the IRT Information Function.

    ERIC Educational Resources Information Center

    Ackerman, Terry A.

    The purpose of this paper is to report results on the development of a new computer-assisted methodology for creating parallel test forms using the item response theory (IRT) information function. Recently, several researchers have approached test construction from a mathematical programming perspective. However, these procedures require…

  2. The Extended Parallel Process Model: Illuminating the Gaps in Research

    ERIC Educational Resources Information Center

    Popova, Lucy

    2012-01-01

    This article examines constructs, propositions, and assumptions of the extended parallel process model (EPPM). Review of the EPPM literature reveals that its theoretical concepts are thoroughly developed, but the theory lacks consistency in operational definitions of some of its constructs. Out of the 12 propositions of the EPPM, a few have not…

  3. Curious parallels and curious connections--phylogenetic thinking in biology and historical linguistics.

    PubMed

    Atkinson, Quentin D; Gray, Russell D

    2005-08-01

    In The Descent of Man (1871), Darwin observed "curious parallels" between the processes of biological and linguistic evolution. These parallels mean that evolutionary biologists and historical linguists seek answers to similar questions and face similar problems. As a result, the theory and methodology of the two disciplines have evolved in remarkably similar ways. In addition to Darwin's curious parallels of process, there are a number of equally curious parallels and connections between the development of methods in biology and historical linguistics. Here we briefly review the parallels between biological and linguistic evolution and contrast the historical development of phylogenetic methods in the two disciplines. We then look at a number of recent studies that have applied phylogenetic methods to language data and outline some current problems shared by the two fields.

  4. HALOS: fast, autonomous, holographic adaptive optics

    NASA Astrophysics Data System (ADS)

    Andersen, Geoff P.; Gelsinger-Austin, Paul; Gaddipati, Ravi; Gaddipati, Phani; Ghebremichael, Fassil

    2014-08-01

    We present progress on our holographic adaptive laser optics system (HALOS): a compact, closed-loop aberration correction system that uses a multiplexed hologram to deconvolve the phase aberrations in an input beam. The wavefront characterization is based on simple, parallel measurements of the intensity of fixed focal spots and does not require any complex calculations. As such, the system does not require a computer and is thus much cheaper and less complex than conventional approaches. We present details of a fully functional, closed-loop prototype incorporating a 32-element MEMS mirror, operating at a bandwidth of over 10 kHz. Additionally, since the all-optical sensing is performed in parallel, the speed is independent of actuator number, running at the same bandwidth for one actuator as for a million.

  5. Parallel Programming Strategies for Irregular Adaptive Applications

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Achieving scalable performance for dynamic irregular applications is eminently challenging. Traditional message-passing approaches have been making steady progress towards this goal; however, they suffer from complex implementation requirements. The use of a global address space greatly simplifies the programming task, but can degrade the performance for such computations. In this work, we examine two typical irregular adaptive applications, Dynamic Remeshing and N-Body, under competing programming methodologies and across various parallel architectures. The Dynamic Remeshing application simulates flow over an airfoil, and refines localized regions of the underlying unstructured mesh. The N-Body experiment models two neighboring Plummer galaxies that are about to undergo a merger. Both problems demonstrate dramatic changes in processor workloads and interprocessor communication with time; thus, dynamic load balancing is a required component.

  6. [The Application of Grief Theories to Bereaved Family Members].

    PubMed

    Wu, Lee-Jen Suen; Chou, Chuan-Chiang; Lin, Yen-Chun

    2017-12-01

    Loss is an inevitable experience for humans for which grief is a natural response. Nurses must have an adequate understanding of grief and bereavement in order to be more sensitive to these painful emotions and to provide appropriate care to families who have lost someone they love deeply. This article introduces four important grief theories: Freud's grief theory, Bowlby's attachment theory, Stroebe and Schut's dual process model, and Neimeyer's meaning reconstruction model. Freud's grief theory holds that the process of grief adaptation involves a bereaved family adopting alternative ways to connect with the death of a loved one and to restore their self-ego. Attachment theory holds that individuals who undergo grieving that is caused by separation from significant others and that triggers the process of grief adaptation will fail to adapt if they resist change. The dual process model holds that bereaved families undergo grief adaptation not only as a way to face their loss but also to restore normality in their lives. Finally, the meaning reconstruction model holds that the grief-adaptation strength of bereaved families comes from their meaning reconstruction in response to encountered events. It is hoped that these theories offer nurses different perspectives on the grieving process and provide a practical framework for grief assessment and interventions. Additionally, specific interventions that are based on these four grief theories are recommended. Furthermore, theories of grief may help nurses gain insight into their own practice-related reactions and healing processes, which is an important part of caring for the grieving. Although the grieving process is time-consuming, nurses who better understand grief will be better able to help family members prepare in advance for the death of a loved one and, in doing so, help facilitate their healing, with a view to the future and to finally returning to normal daily life.

  7. Modern spandrels: the roles of genetic drift, gene flow and natural selection in the evolution of parallel clines.

    PubMed

    Santangelo, James S; Johnson, Marc T J; Ness, Rob W

    2018-05-16

    Urban environments offer the opportunity to study the role of adaptive and non-adaptive evolutionary processes on an unprecedented scale. While the presence of parallel clines in heritable phenotypic traits is often considered strong evidence for the role of natural selection, non-adaptive evolutionary processes can also generate clines, and this may be more likely when traits have a non-additive genetic basis due to epistasis. In this paper, we use spatially explicit simulations modelled according to the cyanogenesis (hydrogen cyanide, HCN) polymorphism in white clover (Trifolium repens) to examine the formation of phenotypic clines along urbanization gradients under varying levels of drift, gene flow and selection. HCN results from an epistatic interaction between two Mendelian-inherited loci. Our results demonstrate that the genetic architecture of this trait makes natural populations susceptible to decreases in HCN frequencies via drift. Gradients in the strength of drift across a landscape resulted in phenotypic clines with lower frequencies of HCN in strongly drifting populations, giving the misleading appearance of deterministic adaptive changes in the phenotype. Studies of heritable phenotypic change in urban populations should generate null models of phenotypic evolution based on the genetic architecture underlying focal traits prior to invoking selection's role in generating adaptive differentiation.
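
The paper's central point, that drift alone can depress the frequency of an epistatic two-locus phenotype, can be illustrated with a toy Wright-Fisher simulation. This is a minimal sketch under invented parameters (population sizes, generation count, initial allele frequencies), not the authors' spatially explicit model.

```python
import random

def next_freq(p, two_n, rng):
    """One Wright-Fisher generation: resample 2N gene copies binomially."""
    return sum(1 for _ in range(two_n) if rng.random() < p) / two_n

def hcn_frequency(p_ac, p_li):
    """HCN is expressed only with a dominant functional allele at BOTH loci."""
    return (1.0 - (1.0 - p_ac) ** 2) * (1.0 - (1.0 - p_li) ** 2)

def simulate(n_diploid, generations=50, p0=0.5, seed=0):
    """Drift two unlinked loci and return the final HCN phenotype frequency."""
    rng = random.Random(seed)
    p_ac = p_li = p0
    for _ in range(generations):
        p_ac = next_freq(p_ac, 2 * n_diploid, rng)
        p_li = next_freq(p_li, 2 * n_diploid, rng)
    return hcn_frequency(p_ac, p_li)

# Averaged over replicates, stronger drift (smaller N) erodes the epistatic
# phenotype even though each allele frequency is neutral in expectation;
# a gradient in drift strength alone can therefore produce a phenotypic cline.
reps = 100
small_n = sum(simulate(20, seed=s) for s in range(reps)) / reps
large_n = sum(simulate(200, seed=s) for s in range(reps)) / reps
print(f"mean HCN frequency after drift: N=20 gives {small_n:.2f}, "
      f"N=200 gives {large_n:.2f}")
```

The erosion follows from Jensen's inequality: the phenotype frequency is a concave function of each allele frequency, so the variance that drift adds lowers its expectation.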

  8. Teaching ethics to engineers: ethical decision making parallels the engineering design process.

    PubMed

    Bero, Bridget; Kuhlman, Alana

    2011-09-01

    In order to fulfill ABET requirements, Northern Arizona University's Civil and Environmental Engineering programs incorporate professional ethics in several of their engineering courses. This paper discusses an ethics module in a 3rd-year engineering design course that focuses on the design process and technical writing. Engineering students early in their careers generally possess good black/white critical thinking skills on technical issues. Engineering design is the first time students are exposed to "grey" technical problems that admit multiple possible solutions. To identify and solve these problems, the engineering design process is used. Ethical problems are also "grey" problems and present similar challenges to students. Students need a practical tool for solving these ethical problems. The step-wise engineering design process was used as a model to demonstrate a similar process for ethical situations. The ethical decision-making process of Martin and Schinzinger was adapted for parallelism to the design process and presented to students as a step-wise technique for identifying the pertinent ethical issues, relevant moral theories, possible outcomes and a final decision. Students had the greatest difficulty identifying the broader, global issues presented in an ethical situation, but by the end of the module they were better able not only to identify the broader issues, but also to more comprehensively assess specific issues, generate solutions and formulate a desired response to the issue.

  9. Second order kinetic theory of parallel momentum transport in collisionless drift wave turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yang, E-mail: lyang13@mails.tsinghua.edu.cn; Southwestern Institute of Physics, Chengdu 610041; Gao, Zhe

    A second order kinetic model for turbulent ion parallel momentum transport is presented. A new nonresonant second order parallel momentum flux term is calculated. The resonant component of the ion parallel electrostatic force is the momentum source, while the nonresonant component of the ion parallel electrostatic force compensates for that of the nonresonant second order parallel momentum flux. The resonant component of the kinetic momentum flux can be divided into three parts: the pinch term, the diffusive term, and the residual stress. By reassembling the pinch term and the residual stress, the residual stress can be considered as a pinch term of the parallel wave-particle resonant velocity and may therefore be called a “resonant velocity pinch” term. Because the resonant component of the ion parallel electrostatic force is the transfer rate between resonant ions and waves (or, equivalently, nonresonant ions), a conservation equation of the parallel momentum of resonant ions and waves is obtained.

  10. Modern Models of Psychosocial Adaptation to Chronic Illness and Disability as Viewed through the Prism of Lewin's Field Theory: A Comparative Review

    ERIC Educational Resources Information Center

    Livneh, Hanoch; Bishop, Malachy; Anctil, Tina M.

    2014-01-01

    Purpose: In this article, we describe how four recent models of psychosocial adaptation to chronic illness and disability (CID) could be fruitfully conceptualized and compared by resorting to the general framework of Lewin's field theory--a theory frequently regarded as a precursor and the primary impetus to the development of the field of…

  11. How to say no: single- and dual-process theories of short-term recognition tested on negative probes.

    PubMed

    Oberauer, Klaus

    2008-05-01

    Three experiments with short-term recognition tasks are reported. In Experiments 1 and 2, participants decided whether a probe matched a list item specified by its spatial location. Items presented at study in a different location (intrusion probes) had to be rejected. Serial position curves of positive, new, and intrusion probes over the probed location's position were mostly parallel. Serial position curves of intrusion probes over their position of origin were again parallel to those of positive probes. Experiment 3 showed largely parallel serial position effects for positive probes and for intrusion probes plotted over positions in a relevant and an irrelevant list, respectively. The results support a dual-process theory in which recognition is based on familiarity and recollection, and recollection uses 2 retrieval routes, from context to item and from item to context.

  12. The surface diffusion coefficient for an arbitrarily curved fluid-fluid interface. (I). General expression

    NASA Astrophysics Data System (ADS)

    Sagis, Leonard M. C.

    2001-03-01

    In this paper, we develop a theory for the calculation of the surface diffusion coefficient for an arbitrarily curved fluid-fluid interface. The theory is valid for systems in hydrodynamic equilibrium, with zero mass-averaged velocities in the bulk and interfacial regions. We restrict our attention to systems with isotropic bulk phases, and an interfacial region that is isotropic in the plane parallel to the dividing surface. The dividing surface is assumed to be a simple interface, without memory effects or yield stresses. We derive an expression for the surface diffusion coefficient in terms of two parameters of the interfacial region: the coefficient for plane-parallel diffusion, D_aa^(AB)(ξ), and the driving force d_I∥^(B)(ξ). This driving force is the parallel component of the driving force for diffusion in the interfacial region. We derive an expression for this driving force using the entropy balance.

  13. Transitioning NWChem to the Next Generation of Manycore Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bylaska, Eric J.; Apra, E; Kowalski, Karol

    The NorthWest chemistry (NWChem) modeling software is a popular molecular chemistry simulation software that was designed from the start to work on massively parallel processing supercomputers [1-3]. It contains an umbrella of modules that today includes self-consistent field (SCF), second order Møller-Plesset perturbation theory (MP2), coupled cluster (CC), multiconfiguration self-consistent field (MCSCF), selected configuration interaction (CI), tensor contraction engine (TCE) many body methods, density functional theory (DFT), time-dependent density functional theory (TDDFT), real-time time-dependent density functional theory, pseudopotential plane-wave density functional theory (PSPW), band structure (BAND), ab initio molecular dynamics (AIMD), Car-Parrinello molecular dynamics (MD), classical MD, hybrid quantum mechanics/molecular mechanics (QM/MM), hybrid ab initio molecular dynamics/molecular mechanics (AIMD/MM), gauge independent atomic orbital nuclear magnetic resonance (GIAO NMR), conductor-like screening solvation model (COSMO), conductor-like screening solvation model based on density (COSMO-SMD), and reference interaction site model (RISM) solvation models, free energy simulations, reaction path optimization, parallel in time, among other capabilities [4]. Moreover, new capabilities continue to be added with each new release.

  14. Batalin-Vilkovisky quantization and generalizations

    NASA Astrophysics Data System (ADS)

    Bering, Klaus

    Gauge theories play an important role in modern physics. Whenever a gauge symmetry is present, one should provide for a manifestly gauge independent formalism. It turns out that the BRST symmetry plays a prominent part in providing the gauge independence. The importance of gauge independence in the Hamiltonian Batalin-Fradkin-Fradkina-Vilkovisky formalism and in the Lagrangian Batalin-Vilkovisky formalism is stressed. Parallels are drawn between the various theories. A Hamiltonian path integral that takes into account quantum ordering effects arising in the operator formalism should be written with the help of the star-multiplication or the Moyal bracket. It is generally believed that this leads to higher order quantum corrections in the corresponding Lagrangian path integral. A higher order Lagrangian path integral based on a nilpotent higher order odd Laplacian is proposed. A new gauge independence mechanism that adapts to the higher order formalism, and that by-passes the problem of constructing a BRST transformation of the path integral in the higher order case, is developed. The new gauge mechanism is closely related to the cohomology of the odd Laplacian operator. Various cohomology aspects of the odd Laplacian are investigated. Whereas, for instance, the role of the ghost-cohomology properties of the BFV-BRST charge has been emphasized by several authors, the cohomology of the odd Laplacian is in general not well known.

  15. Accelerating large scale Kohn-Sham density functional theory calculations with semi-local functionals and hybrid functionals

    NASA Astrophysics Data System (ADS)

    Lin, Lin

    The computational cost of standard Kohn-Sham density functional theory (KSDFT) calculations scales cubically with respect to the system size, which limits its use in large scale applications. In recent years, we have developed an alternative procedure called the pole expansion and selected inversion (PEXSI) method. The PEXSI method solves KSDFT without computing any eigenvalues or eigenvectors, and directly evaluates physical quantities including electron density, energy, atomic force, density of states, and local density of states. The overall algorithm scales at most quadratically for all materials, including insulators, semiconductors and the difficult metallic systems. The PEXSI method can be efficiently parallelized over 10,000 - 100,000 processors on high performance machines. The PEXSI method has been integrated into a number of community electronic structure software packages such as ATK, BigDFT, CP2K, DGDFT, FHI-aims and SIESTA, and has been used in a number of applications with 2D materials beyond 10,000 atoms. The PEXSI method works for LDA, GGA and meta-GGA functionals. The mathematical structure for hybrid functional KSDFT calculations is significantly different. I will also discuss recent progress on using the adaptive compressed exchange method for accelerating hybrid functional calculations. DOE SciDAC Program, DOE CAMERA Program, LBNL LDRD, Sloan Fellowship.

  16. Prediction of vein connectivity using the percolation approach: model test with field data

    NASA Astrophysics Data System (ADS)

    Belayneh, M.; Masihi, M.; Matthäi, S. K.; King, P. R.

    2006-09-01

    Evaluating the uncertainty in fracture connectivity and its effect on the flow behaviour of natural fracture networks formed under in situ conditions is an extremely difficult task. One widely used probabilistic approach is to use percolation theory, which is well adapted to estimate the connectivity and conductivity of geometrical objects near the percolation threshold. In this paper, we apply scaling laws from percolation theory to predict the connectivity of vein sets exposed on the southern margin of the Bristol Channel Basin. Two vein sets in a limestone bed interbedded with shales on the limb of a rollover fold were analysed for length, spacing and aperture distributions. Eight scan lines, low-level aerial photographs and mosaics of photographs taken with a tripod were used. The analysed veins formed contemporaneously with the rollover fold during basin subsidence on the hanging wall of a listric normal fault. The first vein set, V1, is fold axis-parallel (i.e. striking ~100°) and normal to bedding. The second vein set, V2, strikes 140° and crosscuts V1. We find a close agreement in connectivity between our predictions using the percolation approach and the field data. The implication is that reasonable predictions of vein connectivity can be made from sparse data obtained from boreholes or (limited) sporadic outcrop.
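
The percolation approach the authors apply can be illustrated with a toy site-percolation model: the probability that an open cluster spans a grid rises sharply as the occupation probability crosses the percolation threshold. This generic sketch (union-find on a square lattice, with invented grid sizes and probabilities) is not the paper's vein-network model.

```python
import random

def percolates(n, p, rng):
    """Site percolation on an n x n grid: is there an open top-to-bottom path?"""
    open_site = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    # Union-find with two virtual nodes: top = n*n, bottom = n*n + 1.
    parent = list(range(n * n + 2))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i in range(n):
        for j in range(n):
            if not open_site[i][j]:
                continue
            idx = i * n + j
            if i == 0:
                union(idx, n * n)
            if i == n - 1:
                union(idx, n * n + 1)
            for di, dj in ((1, 0), (0, 1)):
                ni, nj = i + di, j + dj
                if ni < n and nj < n and open_site[ni][nj]:
                    union(idx, ni * n + nj)
    return find(n * n) == find(n * n + 1)

# The spanning fraction jumps near the square-lattice site threshold (~0.593).
rng = random.Random(0)
for p in (0.4, 0.55, 0.7):
    frac = sum(percolates(30, p, rng) for _ in range(50)) / 50
    print(f"p = {p:.2f}: spanning fraction = {frac:.2f}")
```
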

  17. Frustration and curvature - Glasses and the cholesteric blue phase

    NASA Technical Reports Server (NTRS)

    Sethna, J. P.

    1983-01-01

    An analogy is drawn between continuum elastic theories of the blue phase of cholesteric liquid crystals and recent theories of frustration in configurational glasses. Both involve the introduction of a lattice of disclination lines to relieve frustration; the frustration is due to an intrinsic curvature in the natural form of parallel transport. A continuum theory of configurational glasses is proposed.

  18. On the biological plausibility of grandmother cells: implications for neural network theories in psychology and neuroscience.

    PubMed

    Bowers, Jeffrey S

    2009-01-01

    A fundamental claim associated with parallel distributed processing (PDP) theories of cognition is that knowledge is coded in a distributed manner in mind and brain. This approach rejects the claim that knowledge is coded in a localist fashion, with words, objects, and simple concepts (e.g. "dog") each coded with their own dedicated representations. One of the putative advantages of this approach is that the theories are biologically plausible. Indeed, advocates of the PDP approach often highlight the close parallels between distributed representations learned in connectionist models and neural coding in the brain, and often dismiss localist (grandmother cell) theories as biologically implausible. The author reviews a range of data that strongly challenge this claim and shows that localist models provide a better account of single-cell recording studies. The author also contrasts local and alternative distributed coding schemes (sparse and coarse coding) and argues that the common rejection of grandmother cell theories in neuroscience is due to a misunderstanding about how localist models behave. The author concludes that the localist representations embedded in theories of perception and cognition are consistent with neuroscience; biology only calls into question the distributed representations often learned in PDP models.

  19. Eco-evolutionary feedbacks, adaptive dynamics and evolutionary rescue theory

    PubMed Central

    Ferriere, Regis; Legendre, Stéphane

    2013-01-01

    Adaptive dynamics theory has been devised to account for feedbacks between ecological and evolutionary processes. Doing so opens new dimensions to and raises new challenges about evolutionary rescue. Adaptive dynamics theory predicts that successive trait substitutions driven by eco-evolutionary feedbacks can gradually erode population size or growth rate, thus potentially raising the extinction risk. Even a single trait substitution can suffice to degrade population viability drastically at once and cause ‘evolutionary suicide’. In a changing environment, a population may track a viable evolutionary attractor that leads to evolutionary suicide, a phenomenon called ‘evolutionary trapping’. Evolutionary trapping and suicide are commonly observed in adaptive dynamics models in which the smooth variation of traits causes catastrophic changes in ecological state. In the face of trapping and suicide, evolutionary rescue requires that the population overcome evolutionary threats generated by the adaptive process itself. Evolutionary repellors play an important role in determining how variation in environmental conditions correlates with the occurrence of evolutionary trapping and suicide, and what evolutionary pathways rescue may follow. In contrast with standard predictions of evolutionary rescue theory, low genetic variation may attenuate the threat of evolutionary suicide and small population sizes may facilitate escape from evolutionary traps. PMID:23209163

  20. An FPGA-based DS-CDMA multiuser demodulator employing adaptive multistage parallel interference cancellation

    NASA Astrophysics Data System (ADS)

    Li, Xinhua; Song, Zhenyu; Zhan, Yongjie; Wu, Qiongzhi

    2009-12-01

    Since system capacity is severely limited by multiple access interference (MAI), reducing the MAI is necessary in the multiuser direct-sequence code division multiple access (DS-CDMA) system used in the telecommunication terminals' data-transfer link system. In this paper, after reviewing various multiuser detection schemes, we adopt an adaptive multistage parallel interference cancellation structure in the demodulator, based on the least mean square (LMS) algorithm, to eliminate the MAI. Neither a training sequence nor a pilot signal is needed in the proposed scheme, and its implementation complexity can be greatly reduced by an approximate LMS algorithm. The algorithm and its FPGA implementation are then derived. Simulation results show that the proposed adaptive PIC can outperform some of the existing interference cancellation methods in AWGN channels. The hardware setup of the multiuser demodulator is described, and experimental results based on it demonstrate large performance gains over the conventional single-user demodulator.

  1. The Parallel Episodic Processing (PEP) model: dissociating contingency and conflict adaptation in the item-specific proportion congruent paradigm.

    PubMed

    Schmidt, James R

    2013-01-01

    The present work introduces a computational model, the Parallel Episodic Processing (PEP) model, which demonstrates that contingency learning achieved via simple storage and retrieval of episodic memories can explain the item-specific proportion congruency (ISPC) effect in the colour-word Stroop paradigm. The current work also presents a new experimental procedure to more directly dissociate contingency biases from conflict adaptation (i.e., proportion congruency). This was done with three different types of incongruent words that allow a comparison of: (a) high versus low contingency while keeping proportion congruency constant, and (b) high versus low proportion congruency while keeping contingency constant. Results demonstrated a significant contingency effect, but no effect of proportion congruence. It was further shown that the proportion congruency associated with the colour does not matter, either. Thus, the results quite directly demonstrate that ISPC effects are not due to conflict adaptation, but instead to contingency learning biases.
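
The episodic contingency-learning mechanism can be caricatured in a few lines: store each (word, colour) episode, and let retrieval of matching episodes speed the response. This toy linking rule, with invented base and gain parameters, is inspired by the idea behind the PEP model but is in no way a port of it.

```python
import random
from collections import defaultdict

memory = defaultdict(lambda: defaultdict(int))  # word -> colour -> episode count

def study(word, colour):
    """Store one (word, colour) episode."""
    memory[word][colour] += 1

def predicted_rt(word, colour, base=600.0, gain=200.0):
    """Toy linking rule: retrieval of matching episodes speeds the response."""
    seen = memory[word]
    total = sum(seen.values())
    match = seen[colour] / total if total else 0.0
    return base - gain * match

rng = random.Random(3)
# High-contingency pairing: the word MOVE appears in blue on ~80% of trials.
for _ in range(100):
    study("MOVE", "blue" if rng.random() < 0.8 else "green")

rt_high = predicted_rt("MOVE", "blue")   # frequent pairing: fast
rt_low = predicted_rt("MOVE", "green")   # rare pairing: slow
print(f"predicted RT, high contingency: {rt_high:.0f} ms, low: {rt_low:.0f} ms")
```
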

  2. What actually confers adaptive capacity? Insights from agro-climatic vulnerability of Australian wheat.

    PubMed

    Bryan, Brett A; Huai, Jianjun; Connor, Jeff; Gao, Lei; King, Darran; Kandulu, John; Zhao, Gang

    2015-01-01

    Vulnerability assessments have often invoked sustainable livelihoods theory to support the quantification of adaptive capacity based on the availability of capital--social, human, physical, natural, and financial. However, the assumption that increased availability of these capitals confers greater adaptive capacity remains largely untested. We quantified the relationship between commonly used capital indicators and an empirical index of adaptive capacity (ACI) in the context of vulnerability of Australian wheat production to climate variability and change. We calculated ACI by comparing actual yields from farm survey data to climate-driven expected yields estimated by a crop model for 12 regions in Australia's wheat-sheep zone from 1991-2010. We then compiled data for 24 typical indicators used in vulnerability analyses, spanning the five capitals. We analyzed the ACI and used regression techniques to identify related capital indicators. Between regions, mean ACI was not significantly different but variance over time was. ACI was higher in dry years and lower in wet years suggesting that farm adaptive strategies are geared towards mitigating losses rather than capitalizing on opportunity. Only six of the 24 capital indicators were significantly related to adaptive capacity in a way predicted by theory. Another four indicators were significantly related to adaptive capacity but of the opposite sign, countering our theory-driven expectation. We conclude that the deductive, theory-based use of capitals to define adaptive capacity and vulnerability should be more circumspect. Assessments need to be more evidence-based, first testing the relevance and influence of capital metrics on adaptive capacity for the specific system of interest. This will more effectively direct policy and targeting of investment to mitigate agro-climatic vulnerability.

  4. Parallel closure theory for toroidally confined plasmas

    NASA Astrophysics Data System (ADS)

    Ji, Jeong-Young; Held, Eric D.

    2017-10-01

    We solve a system of general moment equations to obtain parallel closures for electrons and ions in an axisymmetric toroidal magnetic field. Magnetic field gradient terms are kept and treated using the Fourier series method. Assuming the lowest order density (pressure) and temperature to be flux labels, the parallel heat flow, friction, and viscosity are expressed in terms of radial gradients of the lowest-order temperature and pressure, parallel gradients of temperature and parallel flow, and the relative electron-ion parallel flow velocity. Convergence of closure quantities is demonstrated as the number of moments and Fourier modes is increased. Properties of the moment equations in the collisionless limit are also discussed. Combining the closures with fluid equations, the parallel mass flow and electric current are also obtained. Work in collaboration with the PSI Center and supported by the U.S. DOE under Grant Nos. DE-SC0014033, DE-SC0016256, and DE-FG02-04ER54746.

  5. Research of the effectiveness of parallel multithreaded realizations of interpolation methods for scaling raster images

    NASA Astrophysics Data System (ADS)

    Vnukov, A. A.; Shershnev, M. B.

    2018-01-01

    The aim of this work is the software implementation of three image scaling algorithms using parallel computations, as well as the development of an application with a graphical user interface for the Windows operating system to demonstrate the operation of algorithms and to study the relationship between system performance, algorithm execution time and the degree of parallelization of computations. Three methods of interpolation were studied, formalized and adapted to scale images. The result of the work is a program for scaling images by different methods. Comparison of the quality of scaling by different methods is given.
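
One interpolation method commonly used for image scaling, bilinear interpolation, can be sketched as follows. This is a generic serial implementation (a parallel multithreaded version like those the record studies would split the output rows across threads); the sample image is invented.

```python
def scale_bilinear(img, new_w, new_h):
    """Bilinearly interpolate a grayscale image given as a list of rows."""
    h, w = len(img), len(img[0])
    out = []
    for j in range(new_h):
        # Map the output row back to a fractional source row.
        y = j * (h - 1) / max(new_h - 1, 1)
        y0 = int(y); y1 = min(y0 + 1, h - 1); fy = y - y0
        row = []
        for i in range(new_w):
            x = i * (w - 1) / max(new_w - 1, 1)
            x0 = int(x); x1 = min(x0 + 1, w - 1); fx = x - x0
            # Blend horizontally in the two bracketing rows, then vertically.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

img = [[0, 100], [100, 200]]
big = scale_bilinear(img, 3, 3)
for row in big:
    print([round(v) for v in row])
# prints [0, 50, 100] / [50, 100, 150] / [100, 150, 200]
```

Each output row depends only on the input image, so rows can be computed independently, which is what makes this family of algorithms easy to parallelize across threads.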

  6. Parallel Adaptive Mesh Refinement for High-Order Finite-Volume Schemes in Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Schwing, Alan Michael

    For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source for error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable for unsteady simulations and refinement and coarsening of the grid does not impact the conservatism of the underlying numerics. The effect on high-order numerical fluxes of fourth- and sixth-order are explored. Provided the criteria for refinement is appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depend on a dual-memory approach that is described in detail. 
Validation of the solver developed here against a number of motivating problems shows favorable comparisons across a range of regimes. Unsteady and steady applications are considered in both subsonic and supersonic flows. Inviscid and viscous simulations achieve similar results at a much reduced cost when employing dynamic mesh adaptation. Several techniques for guiding adaptation are compared. Detailed analysis of statistics from the instrumented solver enables understanding of the costs associated with adaptation. Adaptive mesh refinement shows promise for the test cases presented here. It can be considerably faster than using conventional grids and provides accurate results. The procedures for adapting the grid are lightweight enough not to require significant computational time, and they yield significant reductions in grid size.
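As an illustration of the refinement-criterion step described above, the sketch below refines one-dimensional cells whose local jump indicator exceeds a threshold. The splitting rule, the toy data, and all names are hypothetical simplifications, not the dissertation's solver; note that children inheriting the parent's cell average keeps the refinement conservative in the sense discussed above.

```python
# Hypothetical sketch of error-driven cell refinement (not the author's solver):
# each cell carries a cell-average value; cells whose jump relative to their
# neighbors exceeds a threshold are split into two children.

def refine(cells, values, threshold):
    """cells: list of (x_left, x_right) intervals; values: matching cell averages."""
    new_cells, new_values = [], []
    for i, (cell, v) in enumerate(zip(cells, values)):
        left = values[i - 1] if i > 0 else v
        right = values[i + 1] if i < len(values) - 1 else v
        indicator = max(abs(v - left), abs(right - v))   # crude error metric
        if indicator > threshold:
            xl, xr = cell
            xm = 0.5 * (xl + xr)
            new_cells += [(xl, xm), (xm, xr)]   # split into two children
            new_values += [v, v]                # children inherit the parent average
        else:
            new_cells.append(cell)
            new_values.append(v)
    return new_cells, new_values

cells = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0), (3.0, 4.0)]
values = [0.0, 0.0, 1.0, 1.0]   # a jump between the middle cells
cells2, values2 = refine(cells, values, threshold=0.5)
```

Only the two cells adjacent to the jump are split, so the four-cell grid becomes a six-cell grid concentrated around the feature.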

  7. VizieR Online Data Catalog: Solar wind 3D magnetohydrodynamic simulation (Chhiber+, 2017)

    NASA Astrophysics Data System (ADS)

    Chhiber, R.; Subedi, P.; Usmanov, A. V.; Matthaeus, W. H.; Ruffolo, D.; Goldstein, M. L.; Parashar, T. N.

    2017-08-01

We use a three-dimensional magnetohydrodynamic simulation of the solar wind to calculate cosmic-ray diffusion coefficients throughout the inner heliosphere (2 R⊙ to 3 au). The simulation resolves large-scale solar wind flow, which is coupled to small-scale fluctuations through a turbulence model. Simulation results specify background solar wind fields and turbulence parameters, which are used to compute diffusion coefficients and study their behavior in the inner heliosphere. The parallel mean free path (mfp) is evaluated using quasi-linear theory, while the perpendicular mfp is determined from nonlinear guiding center theory with the random ballistic interpretation. Several runs examine varying turbulent energy and different solar source dipole tilts. We find that for most of the inner heliosphere, the radial mfp is dominated by diffusion parallel to the mean magnetic field; the parallel mfp remains at least an order of magnitude larger than the perpendicular mfp, except in the heliospheric current sheet, where the perpendicular mfp may be a few times larger than the parallel mfp. In the ecliptic region, the perpendicular mfp may influence the radial mfp at heliocentric distances larger than 1.5 au; our estimates of the parallel mfp in the ecliptic region at 1 au agree well with the Palmer "consensus" range of 0.08-0.3 au. Solar activity increases perpendicular diffusion and reduces parallel diffusion. The parallel mfp mostly varies with rigidity (P) as P^0.33, and the perpendicular mfp is weakly dependent on P. The mfps are weakly influenced by the choice of long-wavelength power spectra. (2 data files).
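A small, hypothetical sketch of the quoted rigidity dependence (parallel mfp varying roughly as P^0.33): the reference values lambda0 and P0 below are illustrative choices placed inside the Palmer consensus range, not values from the catalog.

```python
# Illustrative power-law scaling of the parallel mean free path with rigidity.
# lambda0 (au) and P0 (GV) are hypothetical reference values, not catalog data.

def parallel_mfp(P, lambda0=0.15, P0=1.0, exponent=0.33):
    """Parallel mfp (au) for rigidity P, anchored at lambda0 when P = P0."""
    return lambda0 * (P / P0) ** exponent

# The reference particle sits inside the 0.08-0.3 au consensus range quoted above.
print(parallel_mfp(1.0))
print(0.08 <= parallel_mfp(1.0) <= 0.3)
```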

  8. Research on parallel algorithm for sequential pattern mining

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

Sequential pattern mining is the mining of frequent sequences, ordered by time or some other criterion, from a sequence database. Its initial motivation was to discover regularities in customer purchasing over a time period by finding frequent sequences. In recent years, sequential pattern mining has become an important direction in data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data in sequential pattern mining are typically massive and stored in distributed fashion, and most existing sequential pattern mining algorithms have not considered these characteristics together. Addressing these traits and drawing on parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and uses a divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets, applying the frequent-item concept and search-space partitioning; the second task is to construct frequent sequences using depth-first search at each processor. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on a random data generation procedure and several designed data structures, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has excellent speedup and efficiency.
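SPP itself is not reproduced in the abstract; the sketch below shows only the generic pattern-growth idea it relates to: grow frequent sequences of single items depth-first, counting support in projected suffix databases. Unlike SPP, this naive serial version re-scans each projection, and the function names and toy database are hypothetical.

```python
# Hedged sketch of depth-first pattern growth over a tiny in-memory sequence
# database (not the SPP algorithm itself).

def mine(db, min_support, prefix=()):
    """Return (pattern, support) pairs for frequent single-item sequences."""
    counts = {}
    for seq in db:
        for item in set(seq):                 # count each item once per sequence
            counts[item] = counts.get(item, 0) + 1
    patterns = []
    for item, cnt in sorted(counts.items()):
        if cnt >= min_support:
            pattern = prefix + (item,)
            patterns.append((pattern, cnt))
            # project the database onto suffixes after the first occurrence of item
            projected = [seq[seq.index(item) + 1:] for seq in db if item in seq]
            patterns += mine(projected, min_support, pattern)   # depth-first growth
    return patterns

db = [("a", "b", "c"), ("a", "c"), ("a", "b")]
frequent = mine(db, min_support=2)
print(frequent)
```

On this toy database the miner finds five frequent sequences, including ("a", "b") and ("a", "c") with support 2.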

  9. MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.

    2016-01-01

    MADNESS (multiresolution adaptive numerical environment for scientific simulation) is a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.

  10. Basil Bernstein and Emile Durkheim: Two Theories of Change in Educational Systems

    ERIC Educational Resources Information Center

    Cherkaoui, Mohamed

    1977-01-01

    Attempts to draw out parallels and differences between Emile Durkheim's and Basil Bernstein's theories of educational systems and highlights Bernstein's reformulation of certain features of Durkheim's thought. Focuses on the role of the school, curriculum change, and social conflict. (Author/RK)

  11. Substitution in recreation choice behavior

    Treesearch

    George L. Peterson; Daniel J. Stynes; Donald H. Rosenthal; John F. Dwyer

    1985-01-01

    This review discusses concepts and theories of substitution in recreation choice. It brings together the literature of recreation research, psychology, geography, economics, and transportation. Parallel and complementary developments need integration into an improved theory of substitution. Recreation decision behavior is characterized as a nested or sequential choice...

  12. Adaptive Delta Management: cultural aspects of dealing with uncertainty

    NASA Astrophysics Data System (ADS)

    Timmermans, Jos; Haasnoot, Marjolijn; Hermans, Leon; Kwakkel, Jan

    2016-04-01

Deltas are generally recognized as vulnerable to climate change and are therefore a salient topic in adaptation science. Deltas are also highly dynamic systems when viewed from physical (erosion, sedimentation, subsidence), social (demographic), economic (trade), infrastructural (transport, energy, metropolization) and cultural (multi-ethnic) perspectives. This multi-faceted dynamic character of delta areas warrants the emergence of a branch of applied adaptation science, Adaptive Delta Management, which explicitly focuses on climate adaptation of such highly dynamic and deeply uncertain systems. The application of Adaptive Delta Management in the Dutch Delta Program, and its active international promotion by Dutch professionals, is spreading the approach rapidly to deltas worldwide. This global spread raises concerns among delta-management professionals about its applicability in deltas whose cultural conditions and historical developments differ markedly from those of the Netherlands and the United Kingdom, where the practices now labelled Adaptive Delta Management first emerged. This research develops an approach for, and gives a first analysis of, the interaction between different approaches to Adaptive Delta Management and their alignment with the cultural conditions encountered in various deltas globally. In this analysis, the management theories underlying approaches to Adaptive Delta Management encountered in scientific and professional publications are first identified and characterized along three dimensions: orientation on today, orientation on the future, and decision making (Timmermans, 2015). The underlying management theories identified are policy analysis, strategic management, transition management, and adaptive management.
These four management theories underlying different approaches to Adaptive Delta Management are then connected to Hofstede's (1983) cultural dimensions, of which uncertainty avoidance and long-term orientation are of particular relevance for our analysis. We conclude that approaches to Adaptive Delta Management rooted in different management theories are more suitable for some delta countries than for others. The most striking conclusion is the unsuitability of rational policy-analytic approaches for the Netherlands. Although surprising, this conclusion finds some support in the process-dominated approach taken in the Dutch Delta Program. In addition, the divergence between Vietnam, Bangladesh and Myanmar, all located in South East Asia, is striking. References: Hofstede, G. (1983). The cultural relativity of organizational practices and theories. Journal of International Business Studies, 75-89. Timmermans, J., Haasnoot, M., Hermans, L., Kwakkel, J., Rutten, M., and Thissen, W. (2015). Adaptive Delta Management: Roots and Branches. IAHR The Hague 2015.

  13. Sharing the cost of river basin adaptation portfolios to climate change: Insights from social justice and cooperative game theory

    NASA Astrophysics Data System (ADS)

    Girard, Corentin; Rinaudo, Jean-Daniel; Pulido-Velazquez, Manuel

    2016-10-01

The adaptation of water resource systems to the potential impacts of climate change requires mixed portfolios of supply- and demand-side adaptation measures. The issue is not only to select efficient, robust, and flexible adaptation portfolios but also to find equitable strategies for allocating costs among the stakeholders. Our work addresses such cost allocation problems by applying two different theoretical approaches, social justice and cooperative game theory, in a real case study. First, a cost-effective portfolio of adaptation measures at the basin scale is selected using a least-cost optimization model. Cost allocation solutions are then defined based on economic rationality concepts from cooperative game theory (the Core). Second, interviews are conducted to characterize stakeholders' perceptions of the social justice principles associated with the definition of alternative cost allocation rules. The comparison of the cost allocation scenarios yields contrasting insights that inform the decision-making process at the river basin scale and can help reap the efficiency gains from cooperation in the design of river basin adaptation portfolios.
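The Core mentioned above can be illustrated with a toy three-stakeholder cost game (the numbers and names are hypothetical, not from the case study): an allocation is in the Core if it exactly covers the grand-coalition cost and no coalition of stakeholders pays more than its own stand-alone cost.

```python
# Hypothetical Core-membership check for a small cooperative cost game.
from itertools import combinations

def in_core(cost, allocation, players):
    """cost maps frozenset coalitions to stand-alone costs; allocation maps players to payments."""
    grand = frozenset(players)
    if abs(sum(allocation[p] for p in players) - cost[grand]) > 1e-9:
        return False          # payments must exactly cover the grand-coalition cost
    for r in range(1, len(players)):
        for coalition in combinations(players, r):
            if sum(allocation[p] for p in coalition) > cost[frozenset(coalition)] + 1e-9:
                return False  # this coalition would rather act on its own
    return True

players = ("A", "B", "C")
cost = {frozenset("A"): 6, frozenset("B"): 6, frozenset("C"): 6,
        frozenset("AB"): 10, frozenset("AC"): 10, frozenset("BC"): 10,
        frozenset(players): 12}

print(in_core(cost, {"A": 4, "B": 4, "C": 4}, players))   # equal split respects every coalition
print(in_core(cost, {"A": 7, "B": 3, "C": 2}, players))   # A pays more than its stand-alone cost
```

The equal split lies in the Core because cooperation saves every coalition money relative to acting alone, which is exactly the "economic rationality" screen applied before the social justice interviews.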

  14. Not letting the left leg know what the right leg is doing: limb-specific locomotor adaptation to sensory-cue conflict.

    PubMed

Durgin, Frank H; Fox, Laura F; Kim, Dong Hoon

    2003-11-01

    We investigated the phenomenon of limb-specific locomotor adaptation in order to adjudicate between sensory-cue-conflict theory and motor-adaptation theory. The results were consistent with cue-conflict theory in demonstrating that two different leg-specific hopping aftereffects are modulated by the presence of conflicting estimates of self-motion from visual and nonvisual sources. Experiment 1 shows that leg-specific increases in forward drift during attempts to hop in place on one leg while blindfolded vary according to the relationship between visual information and motor activity during an adaptation to outdoor forward hopping. Experiment 2 shows that leg-specific changes in performance on a blindfolded hopping-to-target task are similarly modulated by the presence of cue conflict during adaptation to hopping on a treadmill. Experiment 3 shows that leg-specific aftereffects from hopping additionally produce inadvertent turning during running in place while blindfolded. The results of these experiments suggest that these leg-specific locomotor aftereffects are produced by sensory-cue conflict rather than simple motor adaptation.

  15. Magnetic spectral signatures in the Earth's magnetosheath and plasma depletion layer

    NASA Technical Reports Server (NTRS)

    Anderson, Brian J.; Fuselier, Stephen A.; Gary, S. Peter; Denton, Richard E.

    1994-01-01

Correlations between plasma properties and magnetic fluctuations in the subsolar magnetosheath downstream of a quasi-perpendicular shock have been found and indicate that mirror and ion-cyclotron-like fluctuations correlate with the magnetosheath proper and the plasma depletion layer, respectively (Anderson and Fuselier, 1993). We explore the entire range of magnetic spectral signatures observed from the Active Magnetospheric Particle Tracer Explorers/Charge Composition Explorer (AMPTE/CCE) spacecraft in the magnetosheath downstream of a quasi-perpendicular shock. The magnetic spectral signatures typically progress from predominantly compressional fluctuations, δB∥/δB⊥ ≈ 3, with F/F_p < 0.2 (F and F_p are the wave frequency and proton gyrofrequency, respectively), to predominantly transverse fluctuations, δB∥/δB⊥ ≈ 0.3, extending up to F_p. The compressional fluctuations are characterized by anticorrelation between the field magnitude and the electron density n_e, and by a small compressibility, C_e ≡ (δn_e/n_e)² (B/δB∥)² ≈ 0.13, indicative of mirror waves. The spectral characteristics of the transverse fluctuations are in agreement with predictions of linear Vlasov theory for the H⁺ and He²⁺ cyclotron modes. The power spectra and local plasma parameters are found to vary in concert: mirror waves occur for β∥p ≡ 2μ₀ n_p k T∥p / B² ≈ 2 and A_p ≡ T⊥p/T∥p − 1 ≈ 0.4, whereas cyclotron waves occur for β∥p ≈ 0.2 and A_p ≈ 2. The transition from mirror to cyclotron modes is predicted by linear theory. 
The spectral characteristics overlap for intermediate plasma parameters. The plasma observations are described by A_p = 0.85 β∥p^(−0.48) with a log regression coefficient of −0.74. This inverse A_p versus β∥p correlation corresponds closely to the isocontours of maximum ion anisotropy instability growth, γ_m/ω_p = 0.01, for the mirror and cyclotron modes. The agreement of observed properties with predictions of local theory suggests that the spectral signatures reflect the local plasma environment and that the anisotropy instabilities regulate A_p. We suggest that the spectral characteristics may provide a useful basis for ordering observations in the magnetosheath, and that the inverse A_p versus β∥p correlation may be used as a beta-dependent upper limit on the proton anisotropy to represent kinetic effects.
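The empirical relation quoted above, A_p = 0.85 β∥p^(−0.48), is easy to evaluate at the representative beta values the abstract gives for the two regimes (β∥p ≈ 2 for mirror waves, β∥p ≈ 0.2 for cyclotron waves); the sketch below does only that arithmetic and makes no claim beyond the fitted formula.

```python
# Evaluate the fitted anisotropy-beta relation A_p = 0.85 * beta_par**(-0.48)
# at the representative parallel plasma betas quoted in the abstract.

def anisotropy(beta_par):
    """Proton temperature anisotropy A_p predicted by the empirical fit."""
    return 0.85 * beta_par ** -0.48

for beta_par in (0.2, 2.0):   # cyclotron-like and mirror-like regimes
    print(beta_par, round(anisotropy(beta_par), 2))
```

The fit reproduces the qualitative trend reported above: low beta pairs with high anisotropy (cyclotron regime) and high beta with low anisotropy (mirror regime).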

  16. Modeling and analysis of the TF30-P-3 compressor system with inlet pressure distortion

    NASA Technical Reports Server (NTRS)

    Mazzawy, R. S.; Banks, G. A.

    1976-01-01

    Circumferential inlet distortion testing of a TF30-P-3 afterburning turbofan engine was conducted at NASA-Lewis Research Center. Pratt and Whitney Aircraft analyzed the data using its multiple segment parallel compressor model and classical compressor theory. Distortion attenuation analysis resulted in a detailed flow field calculation with good agreement between multiple segment model predictions and the test data. Sensitivity of the engine stall line to circumferential inlet distortion was calculated on the basis of parallel compressor theory to be more severe than indicated by the data. However, the calculated stall site location was in agreement with high response instrumentation measurements.

  17. Goertler vortices in growing boundary layers: The leading edge receptivity problem, linear growth and the nonlinear breakdown stage

    NASA Technical Reports Server (NTRS)

    Hall, Philip

    1989-01-01

Goertler vortices are thought to be the cause of transition in many fluid flows of practical importance. A review of the different stages of vortex growth is given. In the linear regime, nonparallel effects completely govern this growth, and parallel flow theories do not capture the essential features of the development of the vortices. A detailed comparison between the parallel and nonparallel theories is given, and it is shown that at small vortex wavelengths the parallel flow theories have some validity; otherwise, nonparallel effects are dominant. New results for the receptivity problem for Goertler vortices are given; in particular, vortices induced by free-stream perturbations impinging on the leading edge of the walls are considered. It is found that the most dangerous mode of this type can be isolated and its neutral curve determined. This curve agrees very closely with the available experimental data. A discussion of the different regimes of growth of nonlinear vortices is also given. Again it is shown that, unless the vortex wavelength is small, nonparallel effects are dominant. Some new results for nonlinear vortices of O(1) wavelengths are given and compared to experimental observations.

  18. Military Curricula for Vocational & Technical Education. Basic Electricity and Electronics Individualized Learning System. CANTRAC A-100-0010. Module Six: Parallel Circuits. Study Booklet.

    ERIC Educational Resources Information Center

    Chief of Naval Education and Training Support, Pensacola, FL.

    This individualized learning module on parallel circuits is one in a series of modules for a course in basic electricity and electronics. The course is one of a number of military-developed curriculum packages selected for adaptation to vocational instructional and curriculum development in a civilian setting. Four lessons are included in the…

  19. GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid

    NASA Astrophysics Data System (ADS)

    Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua

    2016-10-01

A GPU accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, cell-based adaptive mesh refinement (AMR) is fully implemented on the GPU for an unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is used to improve the efficiency of memory utilization. Results obtained on GPUs agree very well with exact or experimental results in the literature. An acceleration ratio of 4 is obtained between the parallel code running on the older GPU GT9800 and the serial code running on an E3-1230 V2. With the optimization of configuring a larger L1 cache and adopting Shared-Memory-based atomic operations on the newer GPU C2050, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes achieve a 2x speedup on the GT9800 and 18x on the Tesla C2050, which demonstrates that running the cell-based AMR method in parallel on a GPU is feasible and efficient. Our results also indicate that new developments in GPU architecture benefit fluid dynamics computing significantly.

  20. Lithofacies identification using multiple adaptive resonance theory neural networks and group decision expert system

    USGS Publications Warehouse

Chang, H.-C.; Kopaska-Merkel, D. C.; Chen, H.-C.; Durrans, S. Rocky

    2000-01-01

Lithofacies identification supplies qualitative information about rocks. Lithofacies represent rock textures and are important components of hydrocarbon reservoir description. Traditional techniques of lithofacies identification from core data are costly, and different geologists may provide different interpretations. In this paper, we present a low-cost intelligent system consisting of three adaptive resonance theory neural networks and a rule-based expert system to consistently and objectively identify lithofacies from well-log data. The input data are transformed into different forms representing different perspectives of observation of lithofacies. Each form of input is processed by a different adaptive resonance theory neural network: one processes the raw continuous data, another processes categorical data, and the third processes fuzzy-set data. Outputs from these three networks are then combined by the expert system using fuzzy inference to determine the facies to which the input data should be assigned. Rules are prioritized to emphasize the importance of firing order. This new approach combines the learning ability of neural networks, the adaptability of fuzzy logic, and the expertise of geologists to infer rock facies. The approach is applied to the Appleton Field, an oil field located in Escambia County, Alabama. The hybrid intelligent system predicts lithofacies identity from log data with 87.6% accuracy. This is more accurate than the predictions of the single adaptive resonance theory networks (79.3%, 68.0% and 66.0% using raw, fuzzy-set, and categorical data, respectively) and of an error-backpropagation neural network (57.3%). (C) 2000 Published by Elsevier Science Ltd. All rights reserved.
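The paper combines the three network outputs with prioritized fuzzy rules; as a loose, hypothetical stand-in for that combination step, the sketch below merges per-network (label, confidence) votes by summed confidence. The facies labels and scores are illustrative, not the paper's data.

```python
# Hedged stand-in for the ensemble step: each of three networks emits a
# (facies_label, confidence) pair, and the combiner picks the label with
# the highest total confidence (a weighted vote, not the paper's fuzzy rules).

def combine(predictions):
    """predictions: list of (facies_label, confidence) from each network."""
    scores = {}
    for label, confidence in predictions:
        scores[label] = scores.get(label, 0.0) + confidence
    return max(scores, key=scores.get)

# raw-data and fuzzy-set networks agree; the vote overrides the dissenter
print(combine([("grainstone", 0.9), ("packstone", 0.4), ("grainstone", 0.3)]))
```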

  1. Computerized Adaptive Testing: Some Issues in Development.

    ERIC Educational Resources Information Center

    Orcutt, Venetia L.

    The emergence of enhanced capabilities in computer technology coupled with the growing body of knowledge regarding item response theory has resulted in the expansion of computerized adaptive test (CAT) utilization in a variety of venues. Newcomers to the field need a more thorough understanding of item response theory (IRT) principles, their…

  2. Multiprocessor smalltalk: Implementation, performance, and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pallas, J.I.

    1990-01-01

Multiprocessor Smalltalk demonstrates the value of object-oriented programming on a multiprocessor. Its implementation and analysis shed light on three areas: concurrent programming in an object-oriented language without special extensions, implementation techniques for adapting to multiprocessors, and performance factors in the resulting system. Adding parallelism to Smalltalk code is easy, because programs already use control abstractions like iterators. Smalltalk's basic control and concurrency primitives (lambda expressions, processes, and semaphores) can be used to build parallel control abstractions, including parallel iterators, parallel objects, atomic objects, and futures. Language extensions for concurrency are not required. This implementation demonstrates that it is possible to build an efficient parallel object-oriented programming system and illustrates techniques for doing so. Three modification tools (serialization, replication, and reorganization) adapted the Berkeley Smalltalk interpreter to the Firefly multiprocessor. Multiprocessor Smalltalk's performance shows that the combination of multiprocessing and object-oriented programming can be effective: speedups (relative to the original serial version) exceed 2.0 for five processors on all the benchmarks; the median efficiency is 48%. Analysis shows both where performance is lost and how to improve and generalize the experimental results. Changes in the interpreter to support concurrency add at most 12% overhead; better access to per-process variables could eliminate much of that. Changes in the user code to express concurrency add as much as 70% overhead; this overhead could be reduced to 54% if blocks (lambda expressions) were reentrant. Performance is also lost when the program cannot keep all five processors busy.
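No Smalltalk code appears in the abstract; as a rough analogue of the parallel iterators and futures it mentions, here is a hedged sketch using futures from Python's standard library (the function name and the five-worker default, echoing the five-processor benchmarks, are illustrative).

```python
# Loose Python analogue of a parallel iterator built on futures: apply a
# block to every element, with the work distributed over a worker pool.

from concurrent.futures import ThreadPoolExecutor

def parallel_collect(block, items, workers=5):
    """Apply 'block' to every item, like a parallel version of Smalltalk's collect:."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(block, items))   # futures resolved in order

print(parallel_collect(lambda x: x * x, range(6)))
```

As in the paper's parallel iterators, the abstraction hides process creation and synchronization behind an ordinary iteration interface.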

  3. The evolution of cultural adaptations: Fijian food taboos protect against dangerous marine toxins

    PubMed Central

    Henrich, Joseph; Henrich, Natalie

    2010-01-01

    The application of evolutionary theory to understanding the origins of our species' capacities for social learning has generated key insights into cultural evolution. By focusing on how our psychology has evolved to adaptively extract beliefs and practices by observing others, theorists have hypothesized how social learning can, over generations, give rise to culturally evolved adaptations. While much field research documents the subtle ways in which culturally transmitted beliefs and practices adapt people to their local environments, and much experimental work reveals the predicted patterns of social learning, little research connects real-world adaptive cultural traits to the patterns of transmission predicted by these theories. Addressing this gap, we show how food taboos for pregnant and lactating women in Fiji selectively target the most toxic marine species, effectively reducing a woman's chances of fish poisoning by 30 per cent during pregnancy and 60 per cent during breastfeeding. We further analyse how these taboos are transmitted, showing support for cultural evolutionary models that combine familial transmission with selective learning from locally prestigious individuals. In addition, we explore how particular aspects of human cognitive processes increase the frequency of some non-adaptive taboos. This case demonstrates how evolutionary theory can be deployed to explain both adaptive and non-adaptive behavioural patterns. PMID:20667878

  4. Application of persuasion and health behavior theories for behavior change counseling: design of the ADAPT (Avoiding Diabetes Thru Action Plan Targeting) program.

    PubMed

    Lin, Jenny J; Mann, Devin M

    2012-09-01

Diabetes incidence is increasing worldwide, and providers often do not feel they can effectively counsel patients about preventive lifestyle changes. The goal of this paper is to describe the development and initial feasibility testing of the Avoiding Diabetes Thru Action Plan Targeting (ADAPT) program to enhance counseling about behavior change for patients with pre-diabetes. Primary care providers and patients were interviewed about their perspectives on lifestyle changes to prevent diabetes. A multidisciplinary design team used these data to translate elements from behavior change theories into the ADAPT program. The ADAPT program was then pilot tested to evaluate feasibility. Leveraging elements from health behavior theories and the persuasion literature, the ADAPT program comprises a shared goal-setting module, an implementation intentions exercise, and tailored reminders to encourage behavior change. Feasibility data demonstrate that patients were able to use the program to achieve their behavior change goals. Initial findings show that the ADAPT program is feasible for improving primary care providers' counseling for behavior change in patients with pre-diabetes. If successful, the ADAPT program may represent an adaptable and scalable behavior change tool for providers to encourage lifestyle changes to prevent diabetes. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  5. Evolution Is an Experiment: Assessing Parallelism in Crop Domestication and Experimental Evolution: (Nei Lecture, SMBE 2014, Puerto Rico).

    PubMed

    Gaut, Brandon S

    2015-07-01

    In this commentary, I make inferences about the level of repeatability and constraint in the evolutionary process, based on two sets of replicated experiments. The first experiment is crop domestication, which has been replicated across many different species. I focus on results of whole-genome scans for genes selected during domestication and ask whether genes are, in fact, selected in parallel across different domestication events. If genes are selected in parallel, it implies that the number of genetic solutions to the challenge of domestication is constrained. However, I find no evidence for parallel selection events either between species (maize vs. rice) or within species (two domestication events within beans). These results suggest that there are few constraints on genetic adaptation, but conclusions must be tempered by several complicating factors, particularly the lack of explicit design standards for selection screens. The second experiment involves the evolution of Escherichia coli to thermal stress. Unlike domestication, this highly replicated experiment detected a limited set of genes that appear prone to modification during adaptation to thermal stress. However, the number of potentially beneficial mutations within these genes is large, such that adaptation is constrained at the genic level but much less so at the nucleotide level. Based on these two experiments, I make the general conclusion that evolution is remarkably flexible, despite the presence of epistatic interactions that constrain evolutionary trajectories. I also posit that evolution is so rapid that we should establish a Speciation Prize, to be awarded to the first researcher who demonstrates speciation with a sexual organism in the laboratory. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Transcriptomic imprints of adaptation to fresh water: parallel evolution of osmoregulatory gene expression in the Alewife

    USGS Publications Warehouse

    Velotta, Jonathan P.; Wegrzyn, Jill L.; Ginzburg, Samuel; Kang, Lin; Czesny, Sergiusz J.; O'Neill, Rachel J.; McCormick, Stephen; Michalak, Pawel; Schultz, Eric T.

    2017-01-01

    Comparative approaches in physiological genomics offer an opportunity to understand the functional importance of genes involved in niche exploitation. We used populations of Alewife (Alosa pseudoharengus) to explore the transcriptional mechanisms that underlie adaptation to fresh water. Ancestrally anadromous Alewives have recently formed multiple, independently derived, landlocked populations, which exhibit reduced tolerance of saltwater and enhanced tolerance of fresh water. Using RNA-seq, we compared transcriptional responses of an anadromous Alewife population to two landlocked populations after acclimation to fresh (0 ppt) and saltwater (35 ppt). Our results suggest that the gill transcriptome has evolved in primarily discordant ways between independent landlocked populations and their anadromous ancestor. By contrast, evolved shifts in the transcription of a small suite of well-characterized osmoregulatory genes exhibited a strong degree of parallelism. In particular, transcription of genes that regulate gill ion exchange has diverged in accordance with functional predictions: freshwater ion-uptake genes (most notably, the ‘freshwater paralog’ of Na+/K+-ATPase α-subunit) were more highly expressed in landlocked forms, whereas genes that regulate saltwater ion secretion (e.g. the ‘saltwater paralog’ of NKAα) exhibited a blunted response to saltwater. Parallel divergence of ion transport gene expression is associated with shifts in salinity tolerance limits among landlocked forms, suggesting that changes to the gill's transcriptional response to salinity facilitate freshwater adaptation.

  7. A mode-matching analysis of dielectric-filled resonant cavities coupled to terahertz parallel-plate waveguides.

    PubMed

    Astley, Victoria; Reichel, Kimberly S; Jones, Jonathan; Mendis, Rajind; Mittleman, Daniel M

    2012-09-10

    We use the mode-matching technique to study parallel-plate waveguide resonant cavities that are filled with a dielectric. We apply the generalized scattering matrix theory to calculate the power transmission through the waveguide-cavities. We compare the analytical results to experimental data to confirm the validity of this approach.
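An illustrative sketch of the generalized scattering-matrix step used in such analyses: the S-matrices of adjacent waveguide sections are cascaded with the Redheffer star product. This is a generic scalar two-port sketch, not the authors' code; all values below are hypothetical.

```python
def cascade(s1, s2):
    """Redheffer star product of two 2-port scattering matrices, each
    given as a tuple (S11, S12, S21, S22). Cascading per-section
    S-matrices is the core step of a generalized scattering-matrix
    analysis of a waveguide with discontinuities."""
    a11, a12, a21, a22 = s1
    b11, b12, b21, b22 = s2
    d = 1.0 / (1.0 - a22 * b11)  # accounts for multiple reflections
    return (a11 + a12 * b11 * d * a21,
            a12 * d * b12,
            b21 * d * a21,
            b22 + b21 * a22 * d * b12)

# Two matched sections, each with 90% field transmission:
thru = (0.0, 0.9, 0.9, 0.0)
print(cascade(thru, thru))  # transmissions multiply: S21 ~ 0.81
```

With reflective sections (nonzero S11/S22), the `d` factor captures the standing-wave interaction between them, which is what makes cascading nontrivial.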

  8. Competition and Cooperation among Similar Representations: Toward a Unified Account of Facilitative and Inhibitory Effects of Lexical Neighbors

    ERIC Educational Resources Information Center

    Chen, Qi; Mirman, Daniel

    2012-01-01

    One of the core principles of how the mind works is the graded, parallel activation of multiple related or similar representations. Parallel activation of multiple representations has been particularly important in the development of theories and models of language processing, where coactivated representations ("neighbors") have been shown to…

  9. From Competence to Efficiency: A Tale of GA Progress

    NASA Technical Reports Server (NTRS)

    Goldberg, David E.

    1996-01-01

    Genetic algorithms (GAs) - search procedures based on the mechanics of natural selection and genetics - have grown in popularity for the solution of difficult optimization problems. Concomitant with this growth has been a rising cacophony of complaint asserting that too much time must be spent by the GA practitioner diddling with codes, operators, and GA parameters; and even then, these GA Cassandras continue, the user is still unsure that the effort will meet with success. At the same time, there has been a rising interest in GA theory by a growing community - a theorocracy - of mathematicians and theoretical computer scientists, and these individuals have turned their efforts increasingly toward elegant abstract theorems and proofs that seem to the practitioner to offer little in the way of answers for GA design or practice. What both groups seem to have missed is the largely unheralded 1993 assembly of integrated, applicable theory and its experimental confirmation. This theory has done two key things. First, it has predicted that simple GAs are severely limited in the difficulty of problems they can solve, and these limitations have been confirmed experimentally. Second, it has shown the path to circumventing these limitations in nontraditional GA designs such as the fast messy GA. This talk surveys the history, methodology, and accomplishment of the 1993 applicable theory revolution. After arguing that these accomplishments open the door to universal GA competence, the paper shifts the discussion to the possibility of universal GA efficiency in the utilization of time and real estate through effective parallelization, temporal decomposition, hybridization, and relaxed function evaluation. The presentation concludes by suggesting that these research directions are quickly taking us to a golden age of adaptation.
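For readers unfamiliar with the "simple GA" the talk refers to, a minimal generational GA can be sketched as follows. This is an illustrative Python sketch only (not Goldberg's code); the parameter choices and the OneMax test problem are arbitrary.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=40, generations=60,
                      p_cross=0.9, p_mut=0.02, seed=1):
    """Minimal generational GA: tournament selection, one-point
    crossover, and bitwise mutation over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < p_cross:  # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):  # bitwise mutation
                nxt.append([1 - g if rng.random() < p_mut else g
                            for g in child])
        pop = nxt[:pop_size]
        best = max(pop + [best], key=fitness)
    return best

# OneMax: maximize the number of 1 bits in the string.
solution = genetic_algorithm(sum)
print(sum(solution))
```

The limitations the 1993 theory identified concern exactly this kind of simple GA; designs such as the fast messy GA restructure the representation and operators rather than merely retuning these parameters.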

  10. Illuminating the dark matter of social neuroscience: Considering the problem of social interaction from philosophical, psychological, and neuroscientific perspectives.

    PubMed

    Przyrembel, Marisa; Smallwood, Jonathan; Pauen, Michael; Singer, Tania

    2012-01-01

    Successful human social interaction depends on our capacity to understand other people's mental states and to anticipate how they will react to our actions. Despite its importance to the human condition, the exact mechanisms underlying our ability to understand another's actions, feelings, and thoughts are still a matter of conjecture. Here, we consider this problem from philosophical, psychological, and neuroscientific perspectives. In a critical review, we demonstrate that attempts to draw parallels across these complementary disciplines are premature: The second-person perspective does not map directly to Interaction or Simulation theories, online social cognition, or shared neural network accounts underlying action observation or empathy. Nor does the third-person perspective map onto Theory-Theory (TT), offline social cognition, or the neural networks that support Theory of Mind (ToM). Moreover, we argue that important qualities of social interaction emerge through the reciprocal interplay of two independent agents whose unpredictable behavior requires that models of their partner's internal state be continually updated. This analysis draws attention to the need for paradigms in social neuroscience that allow two individuals to interact in a spontaneous and natural manner and to adapt their behavior and cognitions in a response-contingent fashion due to the inherent unpredictability in another person's behavior. Even if such paradigms were implemented, it is possible that the specific neural correlates supporting such reciprocal interaction would not reflect computation unique to social interaction but rather the use of basic cognitive and emotional processes combined in a unique manner. Finally, we argue that given the crucial role of social interaction in human evolution, ontogeny, and everyday social life, a more theoretically and methodologically nuanced approach to the study of real social interaction will nevertheless help the field of social cognition to evolve.

  11. Commentary: On the wisdom and challenges of culturally attuned treatments for Latinos.

    PubMed

    Falicov, Celia Jaes

    2009-06-01

    In this commentary, I outline the common and distinctive components in the cultural adaptation studies in this special issue and compare cultural adaptations with universalistic and culture-specific perspectives. The term cultural attunement may be more reflective than cultural adaptation insofar as the cultural additions in these studies make the treatments more accessible by adding language translation, cultural values, and contextual stressors. These additions most likely enhance the level of engagement and retention in therapy for Latino families. The work ahead requires a deeper examination of the cultural theories of psychological distress and the cultural theories of change in therapy. A final proposal is made in this commentary for considering the bicultural aspects of the cultural adaptation or attunement enterprise, insofar as the clinical research encounters with immigrants are bicultural encounters. These encounters can reach beyond the notion of cultural "adaptation" of mainstream evidence-based treatments to ethnic minorities and present a unique opportunity for mutually enriching bicultural integration of theory, research, and practice.

  12. Massively parallel and linear-scaling algorithm for second-order Møller–Plesset perturbation theory applied to the study of supramolecular wires

    DOE PAGES

    Kjaergaard, Thomas; Baudin, Pablo; Bykov, Dmytro; ...

    2016-11-16

    Here, we present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide–Expand–Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide–Expand–Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller–Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.

  13. Robust adaptive vibration control of a flexible structure.

    PubMed

    Khoshnood, A M; Moradi, H M

    2014-07-01

    Experience with different types of L1 adaptive control systems shows that combining robust theories with adaptive control approaches produces high-performance controllers. In this study, a model reference adaptive control scheme incorporating robust theories is used to propose a practical control system for vibration suppression of a flexible launch vehicle (FLV). In this method, the control input of the system is shaped from the dynamic model of the vehicle, and components of the control input are adaptively constructed by estimating the undesirable vibration frequencies. Robust stability of the adaptive vibration control system is guaranteed by using the L1 small gain theorem. Simulation results of the robust adaptive vibration control strategy confirm that the effects of vibration on the vehicle performance considerably decrease without loss of the phase margin of the system.

  14. Personal Construct Theory and Systemic Therapies: Parallel or Convergent Trends?

    ERIC Educational Resources Information Center

    Feixas, Guillem

    1990-01-01

    Explores similarities between Kelly's Personal Construct Theory (PCT) and systemic therapies. Asserts that (1) PCT and systemic therapies share common epistemological stance, constructivism; (2) personal construct systems possess properties of open systems; and (3) PCT and systemic therapies hold similar positions on relevant theoretical and…

  15. Piaget's Theory of Knowledge: Its Philosophical Context.

    ERIC Educational Resources Information Center

    Fabricius, William V.

    1983-01-01

    Sketches the epistemologies of the 17th and 18th century movements of rationalism, empiricism, and romanticism. Discusses Immanuel Kant's revolutionary conclusions concerning reality and the knowing process, and points out some parallels and areas of divergence between the Kantian and Piagetian theories of knowledge. (Author/BJD)

  16. Matching Theory - A Sampler: From Denes Koenig to the Present

    DTIC Science & Technology

    1991-01-01

    1079. [113], Matching Theory, Ann. Discrete Math. 29, North-Holland, Amsterdam, 1986. [114] M. Luby, A simple parallel algorithm for the maximal...311. [135] M.D. Plummer, On n-extendable graphs, Discrete Math. 31, 1980, 201-210. [136], Matching extension and the genus of a graph, J. Combin. Theory Ser. B, 44, 1988, 329-837. [137], A theorem on matchings in the plane, Graph Theory in Memory of G.A. Dirac, Ann. Discrete Math. 41, North-Holland

  17. Graph-based linear scaling electronic structure theory.

    PubMed

    Niklasson, Anders M N; Mniszewski, Susan M; Negre, Christian F A; Cawkwell, Marc J; Swart, Pieter J; Mohd-Yusof, Jamal; Germann, Timothy C; Wall, Michael E; Bock, Nicolas; Rubensson, Emanuel H; Djidjev, Hristo

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  18. Graph-based linear scaling electronic structure theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  19. An FPGA-based High Speed Parallel Signal Processing System for Adaptive Optics Testbed

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choi, Y.; Yang, Y.

    In this paper, a state-of-the-art FPGA (Field Programmable Gate Array) based high-speed parallel signal processing system (SPS) for an adaptive optics (AO) testbed with a 1 kHz wavefront error (WFE) correction frequency is reported. The AO system consists of a Shack-Hartmann sensor (SHS), deformable mirror (DM), tip-tilt sensor (TTS), tip-tilt mirror (TTM), and an FPGA-based high-performance SPS to correct wavefront aberrations. The SHS is composed of 400 subapertures and the DM of 277 actuators in a Fried geometry, requiring an SPS with high-speed parallel computing capability. In this study, the target WFE correction speed is 1 kHz; this requires massive parallel computing capability as well as strict hard real-time constraints on measurements from sensors, matrix computation latency for correction algorithms, and output of control signals for actuators. To meet these requirements, an FPGA-based real-time SPS with parallel computing capabilities is proposed. In particular, the SPS is made up of a National Instruments (NI) real-time computer and five FPGA boards based on the state-of-the-art Xilinx Kintex-7 FPGA. Programming is done in NI's LabVIEW environment, providing flexibility when applying different algorithms for WFE correction, and offering a faster programming and debugging environment compared to conventional ones. One of the five FPGAs is assigned to measure the TTS and calculate control signals for the TTM, while the other four are used to receive the SHS signal, calculate slopes for each subaperture, and compute correction signals for the DM. With these parallel processing capabilities of the SPS, an overall closed-loop WFE correction speed of 1 kHz has been achieved. System requirements, architecture, and implementation issues are described; furthermore, experimental results are given.
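At its core, the per-millisecond correction such an SPS computes is a matrix-vector product mapping measured slopes to actuator commands, accumulated by an integrator. The following toy Python sketch uses an identity reconstructor and an idealized sensor model (slopes equal the residual wavefront); it illustrates the computation pattern, not the testbed's actual algorithm.

```python
def reconstruct(slopes, R):
    """Reconstructor step: a matrix-vector product mapping measured
    wavefront slopes to actuator commands (the operation the FPGA
    boards evaluate in parallel across subapertures)."""
    return [sum(r * s for r, s in zip(row, slopes)) for row in R]

def ao_closed_loop(aberration, R, gain=0.5, iterations=20):
    """Integrator-style closed loop: each iteration senses the residual
    wavefront and accumulates a reconstructor-based correction."""
    cmd = [0.0] * len(aberration)
    for _ in range(iterations):
        # Toy sensor model: slopes equal the residual wavefront itself.
        slopes = [a - c for a, c in zip(aberration, cmd)]
        update = reconstruct(slopes, R)
        cmd = [c + gain * u for c, u in zip(cmd, update)]
    return cmd

R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # trivial identity reconstructor
cmd = ao_closed_loop([0.4, -0.2, 0.1], R)
print([round(c, 3) for c in cmd])  # converges to the aberration: [0.4, -0.2, 0.1]
```

In the real system the reconstructor is a dense 277 x 800 matrix (x- and y-slopes for 400 subapertures), which is why the matrix-vector latency budget dominates the 1 kHz loop.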

  20. [Ecological misunderstanding, integrative approach, and potential industries in circular economy transition].

    PubMed

    Wang, Rusong

    2005-12-01

    Based on the Social-Economic-Natural Complex Ecosystem theory, this paper questioned 8 kinds of misunderstandings in the current planning, incubation, development, and management of circular economy, which had led to either ultra-right or ultra-left actions in ecological and economic development. Rather than concentrating only on the 3-r micro-principles of "reduce-reuse-recycle", this paper suggested 3-R macro-principles of "Rethinking-Reform-Refunction" for circular economy development. Nine kinds of eco-integrative strategies in industrial transition were put forward, i.e., food web-based horizontal/parallel coupling, life cycle-oriented vertical/serial coupling, functional service rather than products-oriented production, flexible and adaptive structure, ecosystem-based regional coupling, social integrity, comprehensive capacity building, employment enhancement, and respecting human dignity. Ten promising potential eco-industries in China's near-future circular economy development were proposed, such as the transition of the traditional chemical fertilizer and pesticide industry to a new kind of industrial complex for agro-ecosystem management.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorier, Matthieu; Sisneros, Roberto; Bautista Gomez, Leonard

    While many parallel visualization tools now provide in situ visualization capabilities, the trend has been to feed such tools with large amounts of unprocessed output data and let them render everything at the highest possible resolution. This leads to an increased run time of simulations that still have to complete within a fixed-length job allocation. In this paper, we tackle the challenge of enabling in situ visualization under performance constraints. Our approach shuffles data across processes according to its content and filters out part of it in order to feed a visualization pipeline with only a reorganized subset of the data produced by the simulation. Our framework leverages fast, generic evaluation procedures to score blocks of data, using information theory, statistics, and linear algebra. It monitors its own performance and adapts dynamically to achieve appropriate visual fidelity within predefined performance constraints. Experiments on the Blue Waters supercomputer with the CM1 simulation show that our approach enables a 5× speedup with respect to the initial visualization pipeline and is able to meet performance constraints.
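One information-theoretic way to score blocks of data, as described above, is to rank them by the Shannon entropy of their value histograms and keep only the highest-scoring blocks within a budget. This is a hedged Python sketch; the names `entropy_score` and `select_blocks` and the bin counts are invented for illustration, not taken from the paper's framework.

```python
import math
from collections import Counter

def entropy_score(block, bins=8, lo=0.0, hi=1.0):
    """Shannon entropy (bits) of a block's value histogram; higher
    scores mark blocks with more information content worth rendering."""
    width = (hi - lo) / bins
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in block)
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def select_blocks(blocks, budget):
    """Keep only the top-'budget' blocks by score, a stand-in for
    filtering data before it reaches the visualization pipeline."""
    ranked = sorted(range(len(blocks)),
                    key=lambda i: entropy_score(blocks[i]), reverse=True)
    return sorted(ranked[:budget])

uniform = [0.5] * 64                  # flat block: zero entropy
varied = [i / 64 for i in range(64)]  # spans all bins: maximal entropy
print(select_blocks([uniform, varied, uniform], budget=1))
```

Flat regions of the simulation domain score zero and can be dropped first, which is the intuition behind content-aware filtering under a fixed rendering budget.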

  2. RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations

    NASA Astrophysics Data System (ADS)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy

    RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node, have improved performance and scalability, enhanced accuracy, and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.

  3. Quantitative phase-field lattice-Boltzmann study of lamellar eutectic growth under natural convection

    NASA Astrophysics Data System (ADS)

    Zhang, A.; Guo, Z.; Xiong, S.-M.

    2018-05-01

    The influence of natural convection on lamellar eutectic growth was determined by a comprehensive phase-field lattice-Boltzmann study for Al-Cu and CBr4-C2Cl6 eutectic alloys. The mass differences resulting from concentration differences drove the fluid flow, and a robust parallel and adaptive mesh refinement algorithm was employed to improve the computational efficiency. By means of carefully designed "numerical experiments", the eutectic growth under natural convection was explored and a simple analytical model was proposed to predict the adjustment of the lamellar spacing. Furthermore, by varying the solute expansion coefficient, initial lamellar spacing, and undercooling, the microstructure evolution was presented and compared with the classical eutectic growth theory. Results showed that both the interfacial solute distribution and the average curvature were affected by the natural convection, the effect of which could be further quantified by adding a constant into the growth rule proposed by Jackson and Hunt [Jackson and Hunt, Trans. Metall. Soc. AIME 236, 1129 (1966)].

  4. An adaptive Cartesian control scheme for manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    An adaptive control scheme for direct control of manipulator end-effectors to achieve trajectory tracking in Cartesian space is developed. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of feedforward control and the inclusion of the auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for online implementation with high sampling rates.
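The model-reference idea underlying such schemes can be illustrated on a scalar first-order plant: an adaptation law, driven by the tracking error between plant and reference model, tunes the controller gains without knowledge of the plant parameters. This is a minimal Lyapunov-style sketch in Python (assuming the sign of the input gain is known), not the manipulator controller of the paper; all parameter values are illustrative.

```python
def mrac_simulation(a=-1.0, b=2.0, am=-4.0, bm=4.0, gamma=2.0,
                    dt=0.001, steps=20000, r=1.0):
    """Scalar model-reference adaptive control: the plant
    x' = a*x + b*u is made to track the reference model
    xm' = am*xm + bm*r without knowing a or b."""
    x = xm = 0.0
    th1 = th2 = 0.0  # adaptive feedforward and feedback gains
    for _ in range(steps):
        u = th1 * r + th2 * x
        e = x - xm  # tracking error drives the adaptation
        # Explicit Euler integration of plant, model, and adaptation laws.
        x += dt * (a * x + b * u)
        xm += dt * (am * xm + bm * r)
        th1 += dt * (-gamma * e * r)
        th2 += dt * (-gamma * e * x)
    return x, xm

x, xm = mrac_simulation()
print(round(abs(x - xm), 4))  # tracking error after 20 s of simulated time
```

Note the direct character of the scheme: the gains `th1` and `th2` are adapted from the error signal itself, with no intermediate estimate of `a` or `b`, which mirrors the "not based on parameter estimation" property highlighted in the abstract.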

  5. The Study and Design of Adaptive Learning System Based on Fuzzy Set Theory

    NASA Astrophysics Data System (ADS)

    Jia, Bing; Zhong, Shaochun; Zheng, Tianyang; Liu, Zhiyong

    Adaptive learning is an effective way to improve learning outcomes; that is, the selection and presentation of learning content should be adapted to each learner's learning context, learning level, and learning ability. An Adaptive Learning System (ALS) can provide effective support for adaptive learning. This paper proposes a new ALS based on fuzzy set theory. It can effectively estimate the learner's knowledge level by testing according to the learner's target, and then takes the learner's cognitive ability and preferences into consideration to achieve self-organization and push planning of knowledge. This paper focuses on the design and implementation of the domain model and user model in the ALS. Experiments confirmed that the system, by providing adaptive content, can effectively help learners memorize the content and improve their comprehension.
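Estimating a learner's knowledge level with fuzzy sets typically means evaluating membership functions over a test score and picking the dominant label. An illustrative Python sketch; the labels, breakpoints, and function names here are invented for demonstration and are not taken from the paper's system.

```python
def triangular(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def knowledge_level(score):
    """Fuzzy estimate of a learner's knowledge level from a test
    score in [0, 100]; returns the dominant label and all
    membership degrees."""
    levels = {
        "novice":       triangular(score, -1, 0, 50),
        "intermediate": triangular(score, 25, 50, 75),
        "advanced":     triangular(score, 50, 100, 101),
    }
    return max(levels, key=levels.get), levels

label, memberships = knowledge_level(62)
print(label, round(memberships["intermediate"], 2))
```

Because membership degrees are graded rather than crisp, a score of 62 is partly "intermediate" and partly "advanced", and the content-selection logic can weight recommendations by these degrees instead of a hard cutoff.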

  6. Improve load balancing and coding efficiency of tiles in high efficiency video coding by adaptive tile boundary

    NASA Astrophysics Data System (ADS)

    Chan, Chia-Hsin; Tu, Chun-Chuan; Tsai, Wen-Jiin

    2017-01-01

    High efficiency video coding (HEVC) not only improves the coding efficiency drastically compared to the well-known H.264/AVC but also introduces coding tools for parallel processing, one of which is tiles. Tile partitioning is allowed to be arbitrary in HEVC, but how to decide tile boundaries remains an open issue. An adaptive tile boundary (ATB) method is proposed to select a better tile partitioning to improve load balancing (ATB-LoadB) and coding efficiency (ATB-Gain) with a unified scheme. Experimental results show that, compared to ordinary uniform-space partitioning, the proposed ATB can save up to 17.65% of encoding times in parallel encoding scenarios and can reduce up to 0.8% of total bit rates for coding efficiency.
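Choosing tile boundaries for load balancing can be posed as splitting consecutive CTU columns so that the costliest tile is as cheap as possible. The following dynamic-programming sketch in Python illustrates that formulation; it is a generic minimax-partition solver, not the ATB algorithm itself, and the cost values are hypothetical.

```python
def balanced_tile_boundaries(col_costs, n_tiles):
    """Split consecutive CTU columns into n_tiles contiguous tiles,
    minimizing the maximum per-tile cost (minimax partition DP)."""
    n = len(col_costs)
    prefix = [0]
    for c in col_costs:
        prefix.append(prefix[-1] + c)
    cost = lambda i, j: prefix[j] - prefix[i]  # columns i..j-1
    INF = float("inf")
    best = [[INF] * (n + 1) for _ in range(n_tiles + 1)]
    cut = [[0] * (n + 1) for _ in range(n_tiles + 1)]
    best[0][0] = 0.0
    for t in range(1, n_tiles + 1):
        for j in range(1, n + 1):
            for i in range(t - 1, j):
                cand = max(best[t - 1][i], cost(i, j))
                if cand < best[t][j]:
                    best[t][j], cut[t][j] = cand, i
    # Recover the boundary column indices.
    bounds, j = [], n
    for t in range(n_tiles, 1, -1):
        j = cut[t][j]
        bounds.append(j)
    return sorted(bounds), best[n_tiles][n]

# Toy example: expensive columns at both edges, cheap ones in the middle.
bounds, worst = balanced_tile_boundaries([4, 4, 1, 1, 1, 1, 4, 4], 2)
print(bounds, worst)  # boundary at column 4 gives two tiles of cost 10
```

Uniform-space partitioning would ignore `col_costs` entirely; moving the boundary toward the balance point is what reduces the idle time of the faster encoder threads.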

  7. EOS: A project to investigate the design and construction of real-time distributed Embedded Operating Systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Essick, Ray B.; Johnston, Gary; Kenny, Kevin; Russo, Vince

    1987-01-01

    Project EOS is studying the problems of building adaptable real-time embedded operating systems for the scientific missions of NASA. Choices (A Class Hierarchical Open Interface for Custom Embedded Systems) is an operating system designed and built by Project EOS to address the following specific issues: the software architecture for adaptable embedded parallel operating systems, the achievement of high-performance and real-time operation, the simplification of interprocess communications, the isolation of operating system mechanisms from one another, and the separation of mechanisms from policy decisions. Choices is written in C++ and runs on a ten processor Encore Multimax. The system is intended for use in constructing specialized computer applications and research on advanced operating system features including fault tolerance and parallelism.

  8. Interaction-based evolution: how natural selection and nonrandom mutation work together.

    PubMed

    Livnat, Adi

    2013-10-18

    The modern evolutionary synthesis leaves unresolved some of the most fundamental, long-standing questions in evolutionary biology: What is the role of sex in evolution? How does complex adaptation evolve? How can selection operate effectively on genetic interactions? More recently, the molecular biology and genomics revolutions have raised a host of critical new questions, through empirical findings that the modern synthesis fails to explain: for example, the discovery of de novo genes; the immense constructive role of transposable elements in evolution; genetic variance and biochemical activity that go far beyond what traditional natural selection can maintain; perplexing cases of molecular parallelism; and more. Here I address these questions from a unified perspective, by means of a new mechanistic view of evolution that offers a novel connection between selection on the phenotype and genetic evolutionary change (while relying, like the traditional theory, on natural selection as the only source of feedback on the fit between an organism and its environment). I hypothesize that the mutation that is of relevance for the evolution of complex adaptation, while not Lamarckian or "directed" to increase fitness, is not random, but is instead the outcome of a complex and continually evolving biological process that combines information from multiple loci into one. This allows selection on a fleeting combination of interacting alleles at different loci to have a hereditary effect according to the combination's fitness. This proposed mechanism addresses the problem of how beneficial genetic interactions can evolve under selection, and also offers an intuitive explanation for the role of sex in evolution, which focuses on sex as the generator of genetic combinations. Importantly, it also implies that genetic variation that has appeared neutral through the lens of traditional theory can actually experience selection on interactions and thus has a much greater adaptive potential than previously considered. Empirical evidence for the proposed mechanism from both molecular evolution and evolution at the organismal level is discussed, and multiple predictions are offered by which it may be tested. This article was reviewed by Nigel Goldenfeld (nominated by Eugene V. Koonin), Jürgen Brosius and W. Ford Doolittle.

  9. Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Li, Y.

    2016-12-01

    We present a 2D inverse algorithm for frequency domain marine controlled-source electromagnetic (CSEM) data, which is based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting to bias refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver (MUMPS), we can effectively compute the electromagnetic fields for multi-sources and parametric sensitivities. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out by using the Gauss-Newton algorithm and model perturbations at each iteration step are obtained by using the Inexact Conjugate Gradient iteration method. Synthetic test inversions are presented.
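The regularized Gauss-Newton machinery with an inexact conjugate-gradient inner solve can be illustrated on a toy nonlinear least-squares problem. The sketch below fits an exponential model rather than solving the EM forward problem, omits regularization for brevity, and uses hypothetical names throughout; it shows only the structure of the iteration (residuals, Jacobian, normal equations solved by CG).

```python
import math

def conjugate_gradient(matvec, b, iters=50, tol=1e-12):
    """Solve A x = b for symmetric positive definite A, given only the
    matrix-vector product 'matvec' (the 'inexact' inner solver)."""
    x = [0.0] * len(b)
    r = list(b)
    p = list(r)
    rs = sum(v * v for v in r)
    if rs < tol:
        return x
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

def gauss_newton(ts, ys, m, iterations=10):
    """Fit y = m0 * exp(m1 * t) by Gauss-Newton; each step solves the
    normal equations J^T J dm = J^T r with conjugate gradients."""
    n = len(ts)
    for _ in range(iterations):
        f = [m[0] * math.exp(m[1] * t) for t in ts]
        r = [y - fi for y, fi in zip(ys, f)]
        J = [[math.exp(m[1] * t), m[0] * t * math.exp(m[1] * t)] for t in ts]
        JTr = [sum(J[i][k] * r[i] for i in range(n)) for k in range(2)]
        def JTJ_mul(p):
            Jp = [J[i][0] * p[0] + J[i][1] * p[1] for i in range(n)]
            return [sum(J[i][k] * Jp[i] for i in range(n)) for k in range(2)]
        dm = conjugate_gradient(JTJ_mul, JTr)
        m = [mi + di for mi, di in zip(m, dm)]
    return m

ts = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * math.exp(0.5 * t) for t in ts]  # synthetic data, true m = [2.0, 0.5]
m = gauss_newton(ts, ys, [1.5, 0.4])
print([round(v, 4) for v in m])
```

In the inversion described above, each "residual" is a misfit between observed and modeled EM fields, the Jacobian comes from the parametric sensitivities of the adaptive finite-element forward solver, and a regularization term is added to the normal equations; the iteration skeleton is the same.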

  10. Adaptation of a Multi-Block Structured Solver for Effective Use in a Hybrid CPU/GPU Massively Parallel Environment

    NASA Astrophysics Data System (ADS)

    Gutzwiller, David; Gontier, Mathieu; Demeulenaere, Alain

    2014-11-01

    Multi-block structured solvers hold many advantages over their unstructured counterparts, such as a smaller memory footprint and efficient serial performance. Historically, multi-block structured solvers have not been easily adapted for use in a High Performance Computing (HPC) environment, and the recent trend towards hybrid GPU/CPU architectures has further complicated the situation. This paper will elaborate on developments and innovations applied to the NUMECA FINE/Turbo solver that have allowed near-linear scalability with real-world problems on over 250 hybrid GPU/CPU cluster nodes. Discussion will focus on the implementation of virtual partitioning and load balancing algorithms using a novel meta-block concept. This implementation is transparent to the user, allowing all pre- and post-processing steps to be performed using a simple, unpartitioned grid topology. Additional discussion will elaborate on developments that have improved parallel performance, including fully parallel I/O with the ADIOS API and the GPU porting of the computationally heavy CPUBooster convergence acceleration module.

  11. Three dimensional adaptive mesh refinement on a spherical shell for atmospheric models with lagrangian coordinates

    NASA Astrophysics Data System (ADS)

    Penner, Joyce E.; Andronova, Natalia; Oehmke, Robert C.; Brown, Jonathan; Stout, Quentin F.; Jablonowski, Christiane; van Leer, Bram; Powell, Kenneth G.; Herzog, Michael

    2007-07-01

    One of the most important advances needed in global climate models is the development of atmospheric General Circulation Models (GCMs) that can reliably treat convection. Such GCMs require high resolution in local convectively active regions, both in the horizontal and vertical directions. During previous research we have developed an Adaptive Mesh Refinement (AMR) dynamical core that can adapt its grid resolution horizontally. Our approach utilizes a finite volume numerical representation of the partial differential equations with floating Lagrangian vertical coordinates and requires resolving dynamical processes on small spatial scales. For the latter it uses a newly developed general-purpose library, which facilitates 3D block-structured AMR on spherical grids. The library manages neighbor information as the blocks adapt, and handles the parallel communication and load balancing, freeing the user to concentrate on the scientific modeling aspects of their code. In particular, this library defines and manages adaptive blocks on the sphere, provides user interfaces for interpolation routines and supports the communication and load-balancing aspects for parallel applications. We have successfully tested the library in a 2-D (longitude-latitude) implementation. During the past year, we have extended the library to treat adaptive mesh refinement in the vertical direction. Preliminary results are discussed. This research project is characterized by an interdisciplinary approach involving atmospheric science, computer science and mathematical/numerical aspects. The work is done in close collaboration between the Atmospheric Science, Computer Science and Aerospace Engineering Departments at the University of Michigan and NOAA GFDL.

  12. Ecological adaptation of diverse honey bee (Apis mellifera) populations.

    PubMed

    Parker, Robert; Melathopoulos, Andony P; White, Rick; Pernal, Stephen F; Guarna, M Marta; Foster, Leonard J

    2010-06-15

    Honey bees are complex eusocial insects that provide a critical contribution to human agricultural food production. Their natural migration has selected for traits that increase fitness within geographical areas, but in parallel their domestication has selected for traits that enhance productivity and survival under local conditions. Elucidating the biochemical mechanisms of these local adaptive processes is a key goal of evolutionary biology. Proteomics provides tools unique among the major 'omics disciplines for identifying the mechanisms employed by an organism in adapting to environmental challenges. Through proteome profiling of adult honey bee midgut from geographically dispersed, domesticated populations combined with multiple parallel statistical treatments, the data presented here suggest some of the major cellular processes involved in adapting to different climates. These findings provide insight into the molecular underpinnings that may confer an advantage to honey bee populations. Significantly, the major energy-producing pathways of the mitochondria, the organelle most closely involved in heat production, were consistently higher in bees that had adapted to colder climates. In opposition, up-regulation of protein metabolism capacity, from biosynthesis to degradation, had been selected for in bees from warmer climates. Overall, our results present a proteomic interpretation of expression polymorphisms between honey bee ecotypes and provide insight into molecular aspects of local adaptation or selection with consequences for honey bee management and breeding. The implications of our findings extend beyond apiculture as they underscore the need to consider the interdependence of animal populations and their agro-ecological context.

  13. Instructors' Application of the Theory of Planned Behavior in Teaching Undergraduate Physical Education Courses

    ERIC Educational Resources Information Center

    Filho, Paulo Jose Barbosa Gutierres; Monteiro, Maria Dolores Alves Ferreira; da Silva, Rudney; Hodge, Samuel R.

    2013-01-01

    The purpose of this study was to analyze adapted physical education instructors' views about the application of the theory of planned behavior (TpB) in teaching physical education undergraduate courses. Participants ("n" = 17) were instructors of adapted physical activity courses from twelve randomly selected institutions of higher…

  14. Adaptive Insecure Attachment and Resource Control Strategies during Middle Childhood

    ERIC Educational Resources Information Center

    Chen, Bin-Bin; Chang, Lei

    2012-01-01

    By integrating the life history theory of attachment with resource control theory, the current study examines the hypothesis that insecure attachment styles reorganized in middle childhood are alternative adaptive strategies used to prepare for upcoming competition with the peer group. A sample of 654 children in the second through seventh grades…

  15. The Effects of Reflective Activities on Skill Adaptation in a Work-Related Instrumental Learning Setting

    ERIC Educational Resources Information Center

    Roessger, Kevin M.

    2014-01-01

    In work-related instrumental learning contexts, the role of reflective activities is unclear. Kolb's experiential learning theory and Mezirow's transformative learning theory predict skill adaptation as an outcome. This prediction was tested by manipulating reflective activities and assessing participants' response and error rates during novel…

  16. Investigating the Impact of Formal Reflective Activities on Skill Adaptation in a Work-Related Instrumental Learning Setting

    ERIC Educational Resources Information Center

    Roessger, Kevin M.

    2013-01-01

    In work-related, instrumental learning contexts the role of reflective activities is unclear. Kolb's (1985) experiential learning theory and Mezirow's transformative learning theory (2000) predict skill-adaptation as a possible outcome. This prediction was experimentally explored by manipulating reflective activities and assessing participants'…

  17. Firestar-"D": Computerized Adaptive Testing Simulation Program for Dichotomous Item Response Theory Models

    ERIC Educational Resources Information Center

    Choi, Seung W.; Podrabsky, Tracy; McKinney, Natalie

    2012-01-01

    Computerized adaptive testing (CAT) enables efficient and flexible measurement of latent constructs. The majority of educational and cognitive measurement constructs are based on dichotomous item response theory (IRT) models. An integral part of developing various components of a CAT system is conducting simulations using both known and empirical…

  18. Young Children's Near and Far Transfer of the Basic Theory of Natural Selection: An Analogical Storybook Intervention

    ERIC Educational Resources Information Center

    Emmons, Natalie; Lees, Kristin; Kelemen, Deborah

    2018-01-01

    Misconceptions about adaptation by natural selection are widespread among adults and likely stem, in part, from cognitive biases and intuitive theories observable in early childhood. Current educational guidelines that recommend delaying comprehensive instruction on the topic of adaptation until adolescence, therefore, raise concerns because…

  19. Efficient parallel resolution of the simplified transport equations in mixed-dual formulation

    NASA Astrophysics Data System (ADS)

    Barrault, M.; Lathuilière, B.; Ramet, P.; Roman, J.

    2011-03-01

    A reactivity computation consists of computing the highest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine models are difficult for our sequential solver, based on the simplified transport equations, to treat in terms of memory consumption and computational time. A first implementation of a Lagrangian-based domain decomposition method leads to poor parallel efficiency because of an increase in the number of power iterations [1]. In order to obtain high parallel efficiency, we improve the parallelization scheme by changing the location of the loop over the subdomains in the overall algorithm and by benefiting from the characteristics of the Raviart-Thomas finite element. The new parallel algorithm still allows us to locally adapt the numerical scheme (mesh, finite element order). However, it can be significantly optimized for the matching-grid case. The good behavior of the new parallelization scheme is demonstrated for the matching-grid case on several hundreds of nodes for computations based on a pin-by-pin discretization.
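
    For readers unfamiliar with the method, the inverse power algorithm mentioned above can be sketched compactly. Below, A plays the role of the loss operator and F the fission production operator of a generic two-group toy problem; the matrices and tolerances are invented for illustration, and the actual solver of course works on far larger discretized systems.

```python
def solve2(a, b):
    """Solve the 2x2 linear system a*x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det]

def matvec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def inverse_power(A, F, tol=1e-10, max_iter=200):
    """Inverse power iteration for the highest k in A*phi = (1/k)*F*phi."""
    phi, k = [1.0, 1.0], 1.0
    for _ in range(max_iter):
        source = [s / k for s in matvec(F, phi)]   # fission source scaled by 1/k
        phi_new = solve2(A, source)                # one "inverse" solve per iteration
        k_new = k * sum(matvec(F, phi_new)) / sum(matvec(F, phi))
        if abs(k_new - k) < tol:
            return k_new, phi_new
        k, phi = k_new, phi_new
    return k, phi

A = [[2.0, -0.5], [-0.5, 2.0]]   # toy loss (streaming + removal) operator
F = [[1.5, 0.3], [0.3, 1.5]]     # toy fission production operator
k_eff, flux = inverse_power(A, F)
```

    Each iteration requires one linear solve with A, which is exactly the step whose parallelization (over subdomains) the abstract discusses.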

  20. Efficient Parallelization of a Dynamic Unstructured Application on the Tera MTA

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak

    1999-01-01

    The success of parallel computing in solving real-life computationally-intensive problems relies on their efficient mapping and execution on large-scale multiprocessor architectures. Many important applications are both unstructured and dynamic in nature, making their efficient parallel implementation a daunting task. This paper presents the parallelization of a dynamic unstructured mesh adaptation algorithm using three popular programming paradigms on three leading supercomputers. We examine an MPI message-passing implementation on the Cray T3E and the SGI Origin2000, a shared-memory implementation using cache coherent nonuniform memory access (CC-NUMA) of the Origin2000, and a multi-threaded version on the newly-released Tera Multi-threaded Architecture (MTA). We compare several critical factors of this parallel code development, including runtime, scalability, programmability, and memory overhead. Our overall results demonstrate that multi-threaded systems offer tremendous potential for quickly and efficiently solving some of the most challenging real-life problems on parallel computers.

  1. Parallel implementation of all-digital timing recovery for high-speed and real-time optical coherent receivers.

    PubMed

    Zhou, Xian; Chen, Xue

    2011-05-09

    Digital coherent receivers combine coherent detection with digital signal processing (DSP) to compensate for transmission impairments, and are therefore a promising candidate for future high-speed optical transmission systems. However, the maximum symbol rate supported by such real-time receivers is limited by the processing rate of the hardware. In order to cope with this difficulty, parallel processing algorithms are imperative. In this paper, we propose a novel parallel digital timing recovery loop (PDTRL) based on our previous work. Furthermore, to increase the dynamic dispersion tolerance range of receivers, we embed a parallel adaptive equalizer in the PDTRL. This parallel joint scheme (PJS) can be used to perform synchronization, equalization and polarization de-multiplexing simultaneously. Finally, we demonstrate that the PDTRL and PJS allow the hardware to process a 112 Gbit/s POLMUX-DQPSK signal at clock rates in the hundreds-of-MHz range. © 2011 Optical Society of America

  2. A Parallel Numerical Algorithm To Solve Linear Systems Of Equations Emerging From 3D Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Wichert, Viktoria; Arkenberg, Mario; Hauschildt, Peter H.

    2016-10-01

    Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach by introducing especially adapted, parallel numerical methods and correspondingly parallelizing critical code passages. In the following, we present our respective work on PHOENIX/3D. With new parallel numerical algorithms, there is a big opportunity for improvement when iteratively solving the system of equations emerging from the operator splitting of the radiative transfer equation J = ΛS. The narrow-banded approximate Λ-operator Λ* , which is used in PHOENIX/3D, occurs in each iteration step. By implementing a numerical algorithm which takes advantage of its characteristic traits, the parallel code's efficiency is further increased and a speed-up in computational time can be achieved.
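
    The operator-splitting iteration described above can be illustrated in miniature. This is a hedged toy example, not PHOENIX/3D code: the approximate operator Λ* is taken to be the diagonal of Λ (the real code uses a narrow-banded Λ*), the source term is linearized as S = (1 - ε)J + εB, and all numbers are invented.

```python
def matvec(L, v):
    return [sum(L[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def ali_solve(Lam, B, eps, tol=1e-12, max_iter=500):
    """Accelerated Lambda iteration for J = Lambda*S with S = (1-eps)*J + eps*B."""
    n = len(B)
    S = list(B)
    for _ in range(max_iter):
        J_formal = matvec(Lam, S)          # formal solution J = Lambda S
        S_new = []
        for i in range(n):
            d = Lam[i][i]                  # diagonal approximate operator Lambda*
            rhs = (1 - eps) * (J_formal[i] - d * S[i]) + eps * B[i]
            S_new.append(rhs / (1 - (1 - eps) * d))
        if max(abs(a - b) for a, b in zip(S_new, S)) < tol:
            return S_new
        S = S_new
    return S

Lam = [[0.5, 0.2], [0.2, 0.5]]   # tiny stand-in for the Lambda operator
B = [1.0, 1.0]                   # thermal source term
S = ali_solve(Lam, B, eps=0.1)
```

    Because each step only inverts the (here diagonal) Λ*, the sketch mirrors why a numerical algorithm exploiting the banded structure of Λ* pays off in every iteration of the parallel code.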

  3. Generalized parallel-perspective stereo mosaics from airborne video.

    PubMed

    Zhu, Zhigang; Hanson, Allen R; Riseman, Edward M

    2004-02-01

    In this paper, we present a new method for automatically and efficiently generating stereoscopic mosaics by seamless registration of images collected by a video camera mounted on an airborne platform. Using a parallel-perspective representation, a pair of geometrically registered stereo mosaics can be precisely constructed under quite general motion. A novel parallel ray interpolation for stereo mosaicing (PRISM) approach is proposed to make stereo mosaics seamless in the presence of obvious motion parallax and for rather arbitrary scenes. Parallel-perspective stereo mosaics generated with the PRISM method have better depth resolution than perspective stereo due to the adaptive baseline geometry. Moreover, unlike previous results showing that parallel-perspective stereo has a constant depth error, we conclude that the depth estimation error of stereo mosaics is in fact a linear function of the absolute depths of a scene. Experimental results on long video sequences are given.

  4. A divide-conquer-recombine algorithmic paradigm for large spatiotemporal quantum molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shimojo, Fuyuki; Hattori, Shinnosuke; Department of Physics, Kumamoto University, Kumamoto 860-8555

    We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10^6-atom SiC system. As a test of production runs, an LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on linear-response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of techniques are employed for efficiently calculating the long-range exact exchange correction and excited-state forces. The NAQMD trajectories are analyzed to extract the rates of various excitonic processes, which are then used in KMC simulation to study the dynamics of the global exciton flow network. This has allowed the study of large-scale photoexcitation dynamics in a 6400-atom amorphous molecular solid, reaching experimental time scales.

  5. Decentralized Control of Scheduling in Distributed Systems.

    DTIC Science & Technology

    1983-03-18

    the job scheduling algorithm adapts to the changing busyness of the various hosts in the system. The environment in which the job scheduling entities...resources and processes that constitute the node and a set of interfaces for accessing these processes and resources. The structure of a node could change ...parallel. Chang [CHNG82] has also described some algorithms for detecting properties of general graphs by traversing paths in a graph in parallel. One of

  6. A Parallel Workload Model and its Implications for Processor Allocation

    DTIC Science & Technology

    1996-11-01

    with SEV or AVG, both of which can tolerate c = 0.4-0.6 before their performance deteriorates significantly. On the other hand, Setia [10] has...Sanjeev K. Setia. The interaction between memory allocation and adaptive partitioning in message-passing multicomputers. In IPPS '95 Workshop on Job...Scheduling Strategies for Parallel Processing, pages 89-99, 1995. [11] Sanjeev K. Setia and Satish K. Tripathi. An analysis of several processor

  7. Parallel deterministic neutronics with AMR in 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clouse, C.; Ferguson, J.; Hendrickson, C.

    1997-12-31

    AMTRAN, a three-dimensional Sn neutronics code with adaptive mesh refinement (AMR), has been parallelized over spatial domains and energy groups and runs on the Meiko CS-2 with MPI message passing. Block-refined AMR is used with linear finite element representations for the fluxes, which allows for a straightforward interpretation of fluxes at block interfaces with zoning differences. The load balancing algorithm assumes 8 spatial domains, which minimizes idle time among processors.

  8. Parallel Implementation of the Wideband DOA Algorithm on the IBM Cell BE Processor

    DTIC Science & Technology

    2010-05-01

    Abstract—The Multiple Signal Classification (MUSIC) algorithm is a powerful technique for determining the Direction of Arrival (DOA) of signals...Broadband Engine Processor (Cell BE). The process of adapting the serial-based MUSIC algorithm to the Cell BE will be analyzed in terms of parallelism and...using the Multiple Signal Classification (MUSIC) algorithm [4] • Computation of Focus matrix • Computation of number of sources • Separation of Signal

  9. Prediction and control of chaotic processes using nonlinear adaptive networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, R.D.; Barnes, C.W.; Flake, G.W.

    1990-01-01

    We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.
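
    As a minimal reminder of the feedforward/backpropagation machinery reviewed above, the toy example below trains a single sigmoid unit on the OR function by stochastic gradient descent. The network size, data, learning rate, and epoch count are invented for illustration and are not drawn from the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Truth table for OR: ((inputs), target)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 1.0

for _ in range(2000):
    for (x1, x2), t in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)   # forward pass
        g = p - t                                # dLoss/dz for cross-entropy loss
        w[0] -= lr * g * x1                      # backpropagated weight gradients
        w[1] -= lr * g * x2
        b -= lr * g

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
```

    A multi-layer network repeats the same forward-then-gradient pattern through each layer; the single unit here just keeps the arithmetic inspectable.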

  10. Adaptation and Cultural Diffusion.

    ERIC Educational Resources Information Center

    Ormrod, Richard K.

    1992-01-01

    Explores the role of adaptation in cultural diffusion. Explains that adaptation theory recognizes the lack of independence between innovations and their environmental settings. Discusses testing and selection, modification, motivation, and cognition. Suggests that adaptation effects are pervasive in cultural diffusion but require a broader, more…

  11. Middle-Range Theory: Coping and Adaptation with Active Aging.

    PubMed

    Salazar-Barajas, Martha Elba; Salazar-González, Bertha Cecilia; Gallegos-Cabriales, Esther Carlota

    2017-10-01

    Various disciplines focus on a multiplicity of aspects of aging: lifestyles, personal biological factors, psychological conditions, health conditions, physical environment, and social and economic factors. The aforementioned are all related to the determinants of active aging. The aim is to describe the development of a middle-range theory based on coping and adaptation with active aging. Concepts and relationships derived from Roy's model of adaptation are included. The proposed concepts are hope, health habits, coping with aging, social relations, and active aging.

  12. A Principled Approach to Teaching Music Composition to Children

    ERIC Educational Resources Information Center

    Kaschub, Michele; Smith, Janice P.

    2009-01-01

    Building on an apposition of the theories of neurobiologist Antonio Damasio and music theorist Heinrich Schenker, we posit a new model for developing composition instruction based upon the organic connections between humans and music. Parallels are drawn between Damasio's theory of consciousness in which meaning arises from the relationships…

  13. Genetic Wild Card: A Marker for Learners at Risk.

    ERIC Educational Resources Information Center

    Williams, Christine A.

    This paper surveys past and current theories about the workings of the mind, current brain research and psychological applications of non-linear dynamics. Parallels are drawn between the world of high-functioning autism, gifted individuals with learning disabilities, and aspects of genius. An organizing theory is presented, which includes these…

  14. The Reliability of Criterion-Referenced Measures.

    ERIC Educational Resources Information Center

    Livingston, Samuel A.

    The assumptions of the classical test-theory model are used to develop a theory of reliability for criterion-referenced measures which parallels that for norm-referenced measures. It is shown that the Spearman-Brown formula holds for criterion-referenced measures and that the criterion-referenced reliability coefficient can be used to correct…

  15. Career Counseling in a Volatile Job Market: Tiedeman's Perspective Revisited

    ERIC Educational Resources Information Center

    Duys, David K.; Ward, Janice E.; Maxwell, Jane A.; Eaton-Comerford, Leslie

    2008-01-01

    This article explores implications of Tiedeman's original theory for career counselors. Some components of the theory seem to be compatible with existing volatile job market conditions. Notions of career path recycling, development in reverse, nonlinear progress, and parallel streams in career development are explored. Suggestions are made for…

  16. The Harmonic Sieve: A Novel Application of Fourier Analysis to Machine Learning Theory and Practice.

    DTIC Science & Technology

    1995-08-23

    1987. [Ros62] F. Rosenblatt. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan Books, 1962. [RHW86] D. E...editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1, chapter 8, pages 318-362. MIT Press, 1986

  17. Design guidelines for adapting scientific research articles: An example from an introductory level, interdisciplinary program on soft matter

    NASA Astrophysics Data System (ADS)

    Langbeheim, Elon; Safran, Samuel A.; Yerushalmi, Edit

    2013-01-01

    We present design guidelines for using Adapted Primary Literature (APL) to introduce current interdisciplinary topics to introductory physics students. APL is a text genre that allows students to comprehend a scientific article, while maintaining the core features of the communication among scientists, thus representing an authentic scientific discourse. We describe the adaptation of a research paper by Nobel Laureate Paul Flory on phase equilibrium in polymer-solvent mixtures that was presented to high school students in a project-based unit on soft matter. The adaptation followed two design strategies: a) Making explicit the interplay between theory and experiment. b) Re-structuring the text to map the theory onto the students' prior knowledge. Specifically, we map the theory of polymer-solvent systems onto a model for binary mixtures of small molecules of equal size that was already studied in class.

  18. Divergence across diet, time and populations rules out parallel evolution in the gut microbiomes of Trinidadian guppies.

    PubMed

    Sullam, Karen E; Rubin, Benjamin E R; Dalton, Christopher M; Kilham, Susan S; Flecker, Alexander S; Russell, Jacob A

    2015-07-01

    Diverse microbial consortia profoundly influence animal biology, necessitating an understanding of microbiome variation in studies of animal adaptation. Yet, little is known about such variability among fish, in spite of their importance in aquatic ecosystems. The Trinidadian guppy, Poecilia reticulata, is an intriguing candidate to test microbiome-related hypotheses on the drivers and consequences of animal adaptation, given the recent parallel origins of a similar ecotype across streams. To assess the relationships between the microbiome and host adaptation, we used 16S rRNA amplicon sequencing to characterize gut bacteria of two guppy ecotypes with known divergence in diet, life history, physiology and morphology collected from low-predation (LP) and high-predation (HP) habitats in four Trinidadian streams. Guts were populated by several recurring, core bacteria that are related to other fish associates and rarely detected in the environment. Although gut communities of lab-reared guppies differed from those in the wild, microbiome divergence between ecotypes from the same stream was evident under identical rearing conditions, suggesting host genetic divergence can affect associations with gut bacteria. In the field, gut communities varied over time, across streams and between ecotypes in a stream-specific manner. This latter finding, along with PICRUSt predictions of metagenome function, argues against strong parallelism of the gut microbiome in association with LP ecotype evolution. Thus, bacteria cannot be invoked in facilitating the heightened reliance of LP guppies on lower-quality diets. We argue that the macroevolutionary microbiome convergence seen across animals with similar diets may be a signature of secondary microbial shifts arising some time after host-driven adaptation.

  19. Divergence across diet, time and populations rules out parallel evolution in the gut microbiomes of Trinidadian guppies

    PubMed Central

    Sullam, Karen E; Rubin, Benjamin ER; Dalton, Christopher M; Kilham, Susan S; Flecker, Alexander S; Russell, Jacob A

    2015-01-01

    Diverse microbial consortia profoundly influence animal biology, necessitating an understanding of microbiome variation in studies of animal adaptation. Yet, little is known about such variability among fish, in spite of their importance in aquatic ecosystems. The Trinidadian guppy, Poecilia reticulata, is an intriguing candidate to test microbiome-related hypotheses on the drivers and consequences of animal adaptation, given the recent parallel origins of a similar ecotype across streams. To assess the relationships between the microbiome and host adaptation, we used 16S rRNA amplicon sequencing to characterize gut bacteria of two guppy ecotypes with known divergence in diet, life history, physiology and morphology collected from low-predation (LP) and high-predation (HP) habitats in four Trinidadian streams. Guts were populated by several recurring, core bacteria that are related to other fish associates and rarely detected in the environment. Although gut communities of lab-reared guppies differed from those in the wild, microbiome divergence between ecotypes from the same stream was evident under identical rearing conditions, suggesting host genetic divergence can affect associations with gut bacteria. In the field, gut communities varied over time, across streams and between ecotypes in a stream-specific manner. This latter finding, along with PICRUSt predictions of metagenome function, argues against strong parallelism of the gut microbiome in association with LP ecotype evolution. Thus, bacteria cannot be invoked in facilitating the heightened reliance of LP guppies on lower-quality diets. We argue that the macroevolutionary microbiome convergence seen across animals with similar diets may be a signature of secondary microbial shifts arising some time after host-driven adaptation. PMID:25575311

  20. Reconfigurable Model Execution in the OpenMDAO Framework

    NASA Technical Reports Server (NTRS)

    Hwang, John T.

    2017-01-01

    NASA's OpenMDAO framework facilitates constructing complex models and computing their derivatives for multidisciplinary design optimization. Decomposing a model into components that follow a prescribed interface enables OpenMDAO to assemble multidisciplinary derivatives from the component derivatives using what amounts to the adjoint method, direct method, chain rule, global sensitivity equations, or any combination thereof, using the MAUD architecture. OpenMDAO also handles the distribution of processors among the disciplines by hierarchically grouping the components, and it automates the data transfer between components that are on different processors. These features have made OpenMDAO useful for applications in aircraft design, satellite design, wind turbine design, and aircraft engine design, among others. This paper presents new algorithms for OpenMDAO that enable reconfigurable model execution. This concept refers to dynamically changing, during execution, one or more of: the variable sizes, solution algorithm, parallel load balancing, or set of variables-i.e., adding and removing components, perhaps to switch to a higher-fidelity sub-model. Any component can reconfigure at any point, even when running in parallel with other components, and the reconfiguration algorithm presented here performs the synchronized updates to all other components that are affected. A reconfigurable software framework for multidisciplinary design optimization enables new adaptive solvers, adaptive parallelization, and new applications such as gradient-based optimization with overset flow solvers and adaptive mesh refinement. Benchmarking results demonstrate the time savings for reconfiguration compared to setting up the model again from scratch, which can be significant in large-scale problems. 
Additionally, the new reconfigurability feature is applied to a mission profile optimization problem for commercial aircraft where both the parametrization of the mission profile and the time discretization are adaptively refined, resulting in computational savings of roughly 10% and the elimination of oscillations in the optimized altitude profile.

  1. Repetition of the classical Boysen-Jensen and Nielsen's experiment on phototropism of oat coleoptiles.

    PubMed

    Yamada, K; Nakano, H; Yokotani-Tomita, K; Bruinsma, J; Yamamura, S; Hasegawa, K

    2000-03-01

    The classical experiment on the phototropic response reported by Boysen-Jensen and Nielsen (1926), which supports the Cholodny-Went theory, was repeated in detail. In the original experiment, etiolated oat (Avena sativa L. cv. Victory) coleoptiles with mica inserted into their tip only showed a positive response when the mica was placed parallel to the light source and not if it was inserted perpendicularly. On the contrary, we found a positive response irrespective of whether the mica was inserted parallel or perpendicular to the light source. Damage owing to rude splitting severely reduced the response upon perpendicular insertion. These results invalidate Boysen-Jensen and Nielsen's experiment as support for the Cholodny-Went theory and lend support to the Bruinsma-Hasegawa theory, which ascribes phototropism to the local light-induced accumulation of growth inhibitors against a background of even auxin distribution, the diffusion of auxin being unaffected.

  2. Cascade and parallel combination (CPC) of adaptive filters for estimating heart rate during intensive physical exercise from photoplethysmographic signal

    PubMed Central

    Islam, Mohammad Tariqul; Tanvir Ahmed, Sk.; Zabir, Ishmam; Shahnaz, Celia

    2018-01-01

    The photoplethysmographic (PPG) signal is gaining popularity for monitoring heart rate in wearable devices because of the simplicity of construction and low cost of the sensor. The task becomes very difficult due to the presence of various motion artefacts. In this study, an algorithm based on a cascade and parallel combination (CPC) of adaptive filters is proposed in order to reduce the effect of motion artefacts. First, preliminary noise reduction is performed by averaging two-channel PPG signals. Next, in order to reduce the effect of motion artefacts, a cascaded filter structure consisting of three cascaded adaptive filter blocks is developed, where three-channel accelerometer signals are used as references for the motion artefacts. To further reduce the effect of noise, a scheme based on a convex combination of two such cascaded adaptive noise cancelers is introduced, where two widely used adaptive filters, namely recursive least squares and least mean squares filters, are employed. Heart rates are estimated from the noise-reduced PPG signal in the spectral domain. Finally, an efficient heart rate tracking algorithm is designed based on the nature of the heart rate variability. The performance of the proposed CPC method is tested on a widely used public database. It is found that the proposed method offers very low estimation error and smooth heart rate tracking with a simple algorithmic approach. PMID:29515812
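
    A single adaptive filter block of the kind cascaded above can be sketched as a plain LMS noise canceler: a reference signal correlated with the motion artefact (here a synthetic stand-in for one accelerometer channel) is filtered to estimate the artefact, which is then subtracted from the corrupted PPG. All signals and parameters below are invented for illustration.

```python
import math

def lms_cancel(corrupted, reference, n_taps=4, mu=0.01):
    """One LMS adaptive noise-canceling block: returns the cleaned signal."""
    w = [0.0] * n_taps
    cleaned = []
    for n in range(len(corrupted)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))        # artefact estimate
        e = corrupted[n] - y                            # error = cleaned sample
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS weight update
        cleaned.append(e)
    return cleaned

fs, N = 250, 2000
clean = [math.sin(2 * math.pi * 1.5 * n / fs) for n in range(N)]  # PPG-like tone
ref = [math.sin(2 * math.pi * 7.0 * n / fs) for n in range(N)]    # motion reference
corrupted = [c + 0.8 * r for c, r in zip(clean, ref)]             # artefact added
cleaned = lms_cancel(corrupted, ref)
```

    The cascaded structure of the paper feeds the output of one such block, driven by one accelerometer axis, into the next; the convex combination then mixes an RLS-based and an LMS-based cascade.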

  3. Capturing the superorganism: a formal theory of group adaptation.

    PubMed

    Gardner, A; Grafen, A

    2009-04-01

    Adaptation is conventionally regarded as occurring at the level of the individual organism. However, in recent years there has been a revival of interest in the possibility for group adaptations and superorganisms. Here, we provide the first formal theory of group adaptation. In particular: (1) we clarify the distinction between group selection and group adaptation, framing the former in terms of gene frequency change and the latter in terms of optimization; (2) we capture the superorganism in the form of a 'group as maximizing agent' analogy that links an optimization program to a model of a group-structured population; (3) we demonstrate that between-group selection can lead to group adaptation, but only in rather special circumstances; (4) we provide formal support for the view that between-group selection is the best definition for 'group selection'; and (5) we reveal that mechanisms of conflict resolution such as policing cannot be regarded as group adaptations.

  4. Direct adaptive control of manipulators in Cartesian space

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    A new adaptive-control scheme for direct control of manipulator end effector to achieve trajectory tracking in Cartesian space is developed in this article. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of adaptive feedforward control and the inclusion of auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for on-line implementation with high sampling rates. The control scheme is applied to a two-link manipulator for illustration.
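
    The flavor of a direct adaptation law that needs no plant parameters can be shown with a deliberately simple scalar example (a static-gain plant, not the manipulator dynamics of the paper; all gains are invented). An MIT-rule-style gradient update drives the controller gain so the closed loop matches a reference model:

```python
import math

k_p = 2.0     # unknown plant gain (the controller never uses this directly)
k_m = 1.0     # reference model gain the closed loop should match
gamma = 0.1   # adaptation rate
theta = 0.0   # adjustable feedforward controller gain

for n in range(500):
    r = math.sin(0.1 * n) + 0.5      # persistently exciting reference input
    y = k_p * (theta * r)            # plant output under control u = theta * r
    y_m = k_m * r                    # reference model output
    e = y_m - y                      # tracking error
    theta += gamma * e * r           # gradient (MIT-rule style) adaptation law

# theta is driven toward k_m / k_p without ever estimating k_p
```

    The scheme in the paper works the same way in spirit: the adaptation laws update controller gains directly from the tracking error, with no parameter estimation of the robot model.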

  5. Evaluating Statistical Targets for Assembling Parallel Mixed-Format Test Forms

    ERIC Educational Resources Information Center

    Debeer, Dries; Ali, Usama S.; van Rijn, Peter W.

    2017-01-01

    Test assembly is the process of selecting items from an item pool to form one or more new test forms. Often new test forms are constructed to be parallel with an existing (or an ideal) test. Within the context of item response theory, the test information function (TIF) or the test characteristic curve (TCC) are commonly used as statistical…
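    For readers outside item response theory: under the two-parameter logistic (2PL) model, the TIF is simply the sum of item information functions. A minimal sketch with invented item parameters (not data from the article):

    ```python
    import numpy as np

    # Test information function (TIF) for a small pool of 2PL items.
    # Item information at ability theta is I_i = a_i^2 * P_i * (1 - P_i),
    # and the TIF is the sum over items in the form.
    def p_2pl(theta, a, b):
        """Probability of a correct response under the 2PL model."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def test_information(theta, a, b):
        p = p_2pl(theta, a, b)
        return float(np.sum(a**2 * p * (1 - p)))

    a = np.array([1.2, 0.8, 1.5, 1.0])   # discriminations (hypothetical)
    b = np.array([-1.0, 0.0, 0.5, 1.0])  # difficulties (hypothetical)
    for theta in (-1.0, 0.0, 1.0):
        print(theta, round(test_information(theta, a, b), 3))
    ```

    Matching a new form's TIF to a target form's TIF at several ability points is one of the statistical targets this kind of study compares.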

  6. Sustainability Attitudes and Behavioral Motivations of College Students: Testing the Extended Parallel Process Model

    ERIC Educational Resources Information Center

    Perrault, Evan K.; Clark, Scott K.

    2018-01-01

    Purpose: A planet that can no longer sustain life is a frightening thought--and one that is often present in mass media messages. Therefore, this study aims to test the components of a classic fear appeal theory, the extended parallel process model (EPPM) and to determine how well its constructs predict sustainability behavioral intentions. This…

  7. Digital signal processing and control and estimation theory -- Points of tangency, area of intersection, and parallel directions

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1976-01-01

    A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.

  8. Spatial and temporal task characteristics as stress: a test of the dynamic adaptability theory of stress, workload, and performance.

    PubMed

    Szalma, James L; Teo, Grace W L

    2012-03-01

    The goal for this study was to test assertions of the dynamic adaptability theory of stress, which proposes two fundamental task dimensions, information rate (temporal properties of a task) and information structure (spatial properties of a task). The theory predicts adaptive stability across stress magnitudes, with progressive and precipitous changes in adaptive response manifesting first as increases in perceived workload and stress and then as performance failure. Information structure was manipulated by varying the number of displays to be monitored (1, 2, 4 or 8 displays). Information rate was manipulated by varying stimulus presentation rate (8, 12, 16, or 20 events/min). A signal detection task was used in which critical signals were pairs of digits that differed by 0 or 1. Performance accuracy declined and workload and stress increased as a function of increased task demand, with a precipitous decline in accuracy at the highest demand levels. However, the form of performance change as well as the pattern of relationships between speed and accuracy and between performance and workload/stress indicates that some aspects of the theory need revision. Implications of the results for the theory and for future research are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.
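    The accuracy in signal detection tasks like this one is conventionally summarized by the sensitivity index d' computed from hit and false-alarm rates; a standard calculation (rates invented, not the study's data):

    ```python
    from statistics import NormalDist

    # Signal-detection sensitivity (d') and response criterion (c) from
    # hit and false-alarm rates, via the inverse normal CDF (z-transform).
    def dprime(hit_rate, fa_rate):
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    def criterion(hit_rate, fa_rate):
        z = NormalDist().inv_cdf
        return -0.5 * (z(hit_rate) + z(fa_rate))

    print(round(dprime(0.85, 0.10), 2))  # prints 2.32
    ```

    Declining d' with rising demand, alongside rising workload and stress ratings, is the pattern of adaptive failure the theory under test predicts.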

  9. Implicit schemes and parallel computing in unstructured grid CFD

    NASA Technical Reports Server (NTRS)

Venkatakrishnan, V.

    1995-01-01

The development of implicit schemes for obtaining steady state solutions to the Euler and Navier-Stokes equations on unstructured grids is outlined. Applications are presented that compare the convergence characteristics of various implicit methods. Next, the development of explicit and implicit schemes to compute unsteady flows on unstructured grids is discussed. The issues involved in parallelizing finite volume schemes on unstructured meshes in an MIMD (multiple instruction/multiple data stream) fashion are then outlined. Techniques for partitioning unstructured grids among processors and for extracting parallelism in explicit and implicit solvers are discussed. Finally, some dynamic load balancing ideas, which are useful in adaptive transient computations, are presented.

  10. Disturbance Accommodating Adaptive Control with Application to Wind Turbines

    NASA Technical Reports Server (NTRS)

    Frost, Susan

    2012-01-01

    Adaptive control techniques are well suited to applications that have unknown modeling parameters and poorly known operating conditions. Many physical systems experience external disturbances that are persistent or continually recurring. Flexible structures and systems with compliance between components often form a class of systems that fail to meet standard requirements for adaptive control. For these classes of systems, a residual mode filter can restore the ability of the adaptive controller to perform in a stable manner. New theory will be presented that enables adaptive control with accommodation of persistent disturbances using residual mode filters. After a short introduction to some of the control challenges of large utility-scale wind turbines, this theory will be applied to a high-fidelity simulation of a wind turbine.

  11. Avoidance learning: a review of theoretical models and recent developments

    PubMed Central

    Krypotos, Angelos-Miltiadis; Effting, Marieke; Kindt, Merel; Beckers, Tom

    2015-01-01

    Avoidance is a key characteristic of adaptive and maladaptive fear. Here, we review past and contemporary theories of avoidance learning. Based on the theories, experimental findings and clinical observations reviewed, we distill key principles of how adaptive and maladaptive avoidance behavior is acquired and maintained. We highlight clinical implications of avoidance learning theories and describe intervention strategies that could reduce maladaptive avoidance and prevent its return. We end with a brief overview of recent developments and avenues for further research. PMID:26257618

  12. Parallel-aware, dedicated job co-scheduling within/across symmetric multiprocessing nodes

    DOEpatents

Jones, Terry R.; Watson, Pythagoras C.; Tuel, William; Brenner, Larry; Caffrey, Patrick; Fier, Jeffrey

    2010-10-05

    In a parallel computing environment comprising a network of SMP nodes each having at least one processor, a parallel-aware co-scheduling method and system for improving the performance and scalability of a dedicated parallel job having synchronizing collective operations. The method and system uses a global co-scheduler and an operating system kernel dispatcher adapted to coordinate interfering system and daemon activities on a node and across nodes to promote intra-node and inter-node overlap of said interfering system and daemon activities as well as intra-node and inter-node overlap of said synchronizing collective operations. In this manner, the impact of random short-lived interruptions, such as timer-decrement processing and periodic daemon activity, on synchronizing collective operations is minimized on large processor-count SPMD bulk-synchronous programming styles.

  13. An information theory of image gathering

    NASA Technical Reports Server (NTRS)

    Fales, Carl L.; Huck, Friedrich O.

    1991-01-01

    Shannon's mathematical theory of communication is extended to image gathering. Expressions are obtained for the total information that is received with a single image-gathering channel and with parallel channels. It is concluded that the aliased signal components carry information even though these components interfere with the within-passband components in conventional image gathering and restoration, thereby degrading the fidelity and visual quality of the restored image. An examination of the expression for minimum mean-square-error, or Wiener-matrix, restoration from parallel image-gathering channels reveals a method for unscrambling the within-passband and aliased signal components to restore spatial frequencies beyond the sampling passband out to the spatial frequency response cutoff of the optical aperture.
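    A minimal one-dimensional, single-channel sketch of minimum mean-square-error (Wiener) restoration in the frequency domain, W = H* Φ_s / (|H|² Φ_s + Φ_n), here with flat signal and noise spectra; the paper's parallel-channel unscrambling of aliased components goes well beyond this illustration, and all numbers are invented:

    ```python
    import numpy as np

    # Blur a known scene with a Gaussian "aperture", add noise, then restore
    # with a Wiener filter built from the known transfer function H and an
    # assumed signal-to-noise ratio.
    rng = np.random.default_rng(0)
    n = 256
    x = np.sin(2 * np.pi * 5 * np.arange(n) / n)          # scene (5-cycle sine)
    h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 4.0) ** 2)
    h /= h.sum()                                          # Gaussian blur kernel
    H = np.fft.fft(np.fft.ifftshift(h))                   # zero-phase transfer fn
    y = np.real(np.fft.ifft(np.fft.fft(x) * H))           # blurred image
    y += 0.01 * rng.standard_normal(n)                    # acquisition noise
    snr = 1e4                                             # assumed Phi_s / Phi_n
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)         # Wiener filter
    x_hat = np.real(np.fft.ifft(np.fft.fft(y) * W))       # restored scene
    mse = np.mean((x_hat - x) ** 2)
    print(round(mse, 4))                                  # small restoration error
    ```

    The `1.0 / snr` term is what keeps the filter from amplifying noise at frequencies where the aperture response is near zero.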

  14. Justification of Shallow-Water Theory

    NASA Astrophysics Data System (ADS)

    Ostapenko, V. V.

    2018-01-01

    The basic conservation laws of shallow-water theory are derived from multidimensional mass and momentum integral conservation laws describing the plane-parallel flow of an ideal incompressible fluid above the horizontal bottom. This conclusion is based on the concept of hydrostatic approximation, which generalizes the concept of long-wavelength approximation and is used for justifying the applicability of the shallow-water theory in the simulation of wave flows of fluid with hydraulic bores.
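    For reference, the shallow-water conservation laws in question take the standard one-dimensional form over a horizontal bottom (textbook notation, not quoted from the paper):

    ```latex
    % Mass and momentum conservation for depth h(x,t), velocity u(x,t),
    % gravitational acceleration g:
    \begin{aligned}
      \partial_t h + \partial_x (h u) &= 0,\\
      \partial_t (h u) + \partial_x\!\left(h u^2 + \tfrac{1}{2} g h^2\right) &= 0.
    \end{aligned}
    ```

    The hydrostatic pressure term $\tfrac{1}{2} g h^2$ is where the hydrostatic approximation mentioned in the abstract enters the momentum flux.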

  15. PREFACE: Conceptual and Technical Challenges for Quantum Gravity 2014 - Parallel session: Noncommutative Geometry and Quantum Gravity

    NASA Astrophysics Data System (ADS)

    Martinetti, P.; Wallet, J.-C.; Amelino-Camelia, G.

    2015-08-01

The conference Conceptual and Technical Challenges for Quantum Gravity at Sapienza University of Rome, from 8 to 12 September 2014, provided a beautiful opportunity for an encounter between different approaches and different perspectives on the quantum-gravity problem. It contributed to a higher level of shared knowledge among the quantum-gravity communities pursuing each specific research program. There were plenary talks on many different approaches, including in particular string theory, loop quantum gravity, spacetime noncommutativity, causal dynamical triangulations, asymptotic safety and causal sets. Contributions from the perspective of philosophy of science were also welcomed. In addition several parallel sessions were organized. The present volume collects contributions from the Noncommutative Geometry and Quantum Gravity parallel session, with additional invited contributions from specialists in the field. Noncommutative geometry in its many incarnations appears at the crossroads of much research in theoretical and mathematical physics: • from models of quantum space-time (with or without breaking of Lorentz symmetry) to loop gravity and string theory, • from early considerations on UV divergences in quantum field theory to recent models of gauge theories on noncommutative spacetime, • from Connes' description of the standard model of elementary particles to recent Pati-Salam-like extensions. This volume provides an overview of these various topics, interesting for the specialist as well as accessible to the newcomer. (The session was partially funded by CNRS PEPS/PTI ''Metric aspect of noncommutative geometry: from Monge to Higgs''.)

  16. A multi-block adaptive solving technique based on lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Xie, Jiahua; Li, Xiaoyue; Ma, Zhenghai; Zou, Jianfeng; Zheng, Yao

    2018-05-01

    In this paper, a CFD parallel adaptive algorithm is self-developed by combining the multi-block Lattice Boltzmann Method (LBM) with Adaptive Mesh Refinement (AMR). The mesh refinement criterion of this algorithm is based on the density, velocity and vortices of the flow field. The refined grid boundary is obtained by extending outward half a ghost cell from the coarse grid boundary, which makes the adaptive mesh more compact and the boundary treatment more convenient. Two numerical examples of the backward step flow separation and the unsteady flow around circular cylinder demonstrate the vortex structure of the cold flow field accurately and specifically.
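    The building block beneath such a solver is the collide-and-stream update of the lattice Boltzmann method. A minimal single-grid D2Q9 BGK sketch on a periodic domain (no multi-block AMR; the grid size, relaxation time, and initial shear wave are arbitrary choices, not the paper's cases):

    ```python
    import numpy as np

    # D2Q9 lattice: 9 discrete velocities c_i with weights w_i.
    nx, ny, tau = 64, 64, 0.6
    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)

    def equilibrium(rho, ux, uy):
        """Second-order Maxwellian expansion used by the BGK collision."""
        cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
        usq = ux**2 + uy**2
        return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

    rho = np.ones((nx, ny))
    ux = 0.05 * np.sin(2*np.pi*np.arange(ny)/ny)[None, :] * np.ones((nx, 1))
    uy = np.zeros((nx, ny))
    f = equilibrium(rho, ux, uy)                      # start at equilibrium
    for _ in range(100):
        rho = f.sum(axis=0)                           # macroscopic density
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        f += -(f - equilibrium(rho, ux, uy)) / tau    # BGK collision
        for i in range(9):                            # periodic streaming
            f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    print(round(float(rho.mean()), 6))                # mass is conserved: 1.0
    ```

    A multi-block AMR scheme like the paper's runs updates of this kind on nested grids at different resolutions and exchanges distributions across the refined-grid boundaries.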

  17. Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-07-15

Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitudes of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are to be adjusted to improve agreement with measured observables while keeping core simulator models unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observables data would spawn not only several prohibitive challenges but also numerous disparaging concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns however are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the literature of data adjustment. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem. We illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem.
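    The data-adjustment idea can be sketched in miniature as a generalized least-squares update that adjusts the input data while leaving the model operator itself untouched (a dense textbook formulation, not the paper's Efficient Subspace Methods; all dimensions and covariances are invented):

    ```python
    import numpy as np

    # Adjust input data x (prior covariance C) so that model predictions
    # J @ x better match measured observables y (covariance R). Real core
    # simulators have millions of inputs; this toy has 50.
    rng = np.random.default_rng(1)
    n_in, n_obs = 50, 8
    J = rng.standard_normal((n_obs, n_in))         # sensitivity (Jacobian) operator
    x0 = rng.standard_normal(n_in)                 # prior input data
    C = 0.1 * np.eye(n_in)                         # prior input covariance
    R = 0.01 * np.eye(n_obs)                       # measurement covariance
    y = J @ x0 + 0.5 * rng.standard_normal(n_obs)  # "measured" observables
    K = C @ J.T @ np.linalg.inv(J @ C @ J.T + R)   # gain (cf. Kalman update)
    x = x0 + K @ (y - J @ x0)                      # adjusted input data
    print(np.linalg.norm(y - J @ x) < np.linalg.norm(y - J @ x0))
    ```

    The computational burdens the abstract describes come from forming J (sensitivity calculations) and C (uncertainty calculations) at realistic scale, which is what the subspace methods are designed to avoid.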

  18. Beyond adaptive-critic creative learning for intelligent mobile robots

    NASA Astrophysics Data System (ADS)

    Liao, Xiaoqun; Cao, Ming; Hall, Ernest L.

    2001-10-01

Intelligent industrial and mobile robots may be considered proven technology in structured environments. Teach programming and supervised learning methods permit solutions to a variety of applications. However, we believe that extending the operation of these machines to more unstructured environments requires a new learning method. Both unsupervised learning and reinforcement learning are potential candidates for these new tasks. The adaptive critic method has been shown to provide useful approximations or even optimal control policies to non-linear systems. The purpose of this paper is to explore the use of new learning methods that go beyond the adaptive critic method for unstructured environments. The adaptive critic is a form of reinforcement learning. A critic element provides only high level grading corrections to a cognition module that controls the action module. In the proposed system the critic's grades are modeled and forecasted, so that an anticipated set of sub-grades is available to the cognition model. The forecasted grades are interpolated and are available on the time scale needed by the action model. The success of the system is highly dependent on the accuracy of the forecasted grades and the adaptability of the action module. Examples from the guidance of a mobile robot are provided to illustrate the method for simple line following and for the more complex navigation and control in an unstructured environment. The theory presented that goes beyond the adaptive critic may be called creative theory. Creative theory is a form of learning that models the highest level of human learning: imagination. Creative theory appears applicable not only to mobile robots but also to many other forms of human endeavor, such as educational learning and business forecasting. Reinforcement learning such as the adaptive critic may be applied to known problems to aid in the discovery of their solutions. The significance of creative theory is that it permits the discovery of unknown problems, ones that are not yet recognized but may be critical to survival or success.

  19. Free-energy landscapes from adaptively biased methods: Application to quantum systems

    NASA Astrophysics Data System (ADS)

    Calvo, F.

    2010-10-01

    Several parallel adaptive biasing methods are applied to the calculation of free-energy pathways along reaction coordinates, choosing as a difficult example the double-funnel landscape of the 38-atom Lennard-Jones cluster. In the case of classical statistics, the Wang-Landau and adaptively biased molecular-dynamics (ABMD) methods are both found efficient if multiple walkers and replication and deletion schemes are used. An extension of the ABMD technique to quantum systems, implemented through the path-integral MD framework, is presented and tested on Ne38 against the quantum superposition method.
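    The Wang-Landau idea, estimating a density of states by penalizing already-visited energies until the visit histogram flattens, can be shown on a deliberately trivial system whose exact answer is known. Here the "energy" is the sum of two dice, a stand-in for cluster energies; for brevity the modification factor is halved on a fixed schedule rather than on the usual flatness check:

    ```python
    import math, random

    # Wang-Landau flat-histogram sketch. Exact density of states of the
    # two-dice sum is (1,2,3,4,5,6,5,4,3,2,1)/36, so g(7)/g(2) = 6.
    random.seed(0)
    energies = range(2, 13)
    lng = {e: 0.0 for e in energies}   # running log density-of-states estimate
    state = [1, 1]
    f = 1.0                            # modification factor (in ln units)
    while f > 1e-4:
        hist = {e: 0 for e in energies}
        for _ in range(20000):
            trial = state[:]
            trial[random.randrange(2)] = random.randint(1, 6)  # re-roll one die
            e_old, e_new = sum(state), sum(trial)
            # accept with min(1, g(old)/g(new)): pushes the walk toward
            # rarely visited energies, flattening the histogram
            if math.log(random.random()) < lng[e_old] - lng[e_new]:
                state = trial
            e = sum(state)
            lng[e] += f                # penalize the energy just visited
            hist[e] += 1               # a full implementation checks hist flatness
        f /= 2.0                       # refine the modification factor
    ratio = math.exp(lng[7] - lng[2])  # exact value: 6
    print(round(ratio, 1))
    ```

    In the molecular setting, the dice re-roll is replaced by a configurational move and the energy bins discretize a reaction coordinate, but the update rule is the same; multiple-walker variants parallelize exactly this loop.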

  20. F-8C adaptive control law refinement and software development

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Stein, G.

    1981-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
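    The parallel-channel idea can be sketched with a bank of scalar Kalman filters, each fixed at a candidate parameter value and ranked by accumulated innovation log-likelihood, so no iterative estimation is needed (a generic multiple-model sketch with invented numbers, not the F-8C design values):

    ```python
    import numpy as np

    # Scalar system x' = a*x + w, y = x + v, with unknown a. Each filter in
    # the bank assumes one fixed candidate a; the innovation likelihoods
    # identify the best-matching channel.
    rng = np.random.default_rng(2)
    a_true, q, r = 0.9, 0.05, 0.1
    candidates = [0.5, 0.7, 0.9, 1.0]      # fixed points in parameter space
    x = 0.0
    xh = np.zeros(len(candidates))         # per-filter state estimates
    P = np.ones(len(candidates))           # per-filter covariances
    loglik = np.zeros(len(candidates))
    for _ in range(2000):
        x = a_true * x + rng.normal(0, np.sqrt(q))
        y = x + rng.normal(0, np.sqrt(r))
        for i, a in enumerate(candidates):
            xp, Pp = a * xh[i], a * a * P[i] + q      # predict
            S = Pp + r                                # innovation variance
            nu = y - xp                               # innovation
            loglik[i] += -0.5 * (np.log(2 * np.pi * S) + nu * nu / S)
            K = Pp / S                                # Kalman gain
            xh[i] = xp + K * nu                       # update
            P[i] = (1 - K) * Pp
    print(candidates[int(np.argmax(loglik))])         # best-matching channel
    ```

    Because each channel runs a fixed-gain-structure filter, the whole bank is a fixed per-sample workload, which is what makes such schemes attractive for real-time flight implementation.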

  1. MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation

    DOE PAGES

    Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.; ...

    2016-01-01

We present MADNESS (multiresolution adaptive numerical environment for scientific simulation), a high-level software environment for solving integral and differential equations in many dimensions with guaranteed precision, using adaptive, fast harmonic-analysis methods based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment designed to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.

  2. The SMART MIL-STD-1553 bus adapter hardware manual

    NASA Technical Reports Server (NTRS)

    Ton, T. T.

    1981-01-01

The SMART Multiplexer Interface Adapter (SMIA), a complete system interface for the message structure of MIL-STD-1553, is described. It provides buffering and storage for transmitted and received data and handles all the necessary handshaking to interface between a parallel 8-bit data bus and a MIL-STD serial bit stream. The bus adapter is configured as either a bus controller or a remote terminal interface. It is coupled directly to the multiplex bus, or stub coupled through an additional isolation transformer located at the connection point. Fault isolation resistors provide short circuit protection.

  3. Cognitive Adaptation Theory and Breast Cancer Recurrence: Are There Limits?

    ERIC Educational Resources Information Center

    Tomich, Patricia L.; Helgeson, Vicki S.

    2006-01-01

    Relations of the components of cognitive adaptation theory (self-esteem, optimism, control) to quality of life and benefit finding were examined for 70 women (91% Caucasian) diagnosed with Stage I, II, or III breast cancer over 5 years ago. Half of these women experienced a recurrence within the 5 years; the other half remained disease free. Women…

  4. Trigger Event Meets Culture Shock: Linking the Literature of Transformative Learning Theory and Cross-Cultural Adaptation.

    ERIC Educational Resources Information Center

    Lyon, Carol R.

    The literature on transformative learning theory and the literature on cross-cultural adaptation were analyzed to identify links between both bodies of literature. The notion of an unexpected phenomenon that influences individuals residing in an unfamiliar culture was shown to be a common thread linking the two bodies of literature. Transformative…

  5. Rhetorical Dissent as an Adaptive Response to Classroom Problems: A Test of Protection Motivation Theory

    ERIC Educational Resources Information Center

    Bolkan, San; Goodboy, Alan K.

    2016-01-01

    Protection motivation theory (PMT) explains people's adaptive behavior in response to personal threats. In this study, PMT was used to predict rhetorical dissent episodes related to 210 student reports of perceived classroom problems. In line with theoretical predictions, a moderated moderation analysis revealed that students were likely to voice…

  6. Seeing Coloured Fruits: Utilisation of the Theory of Adaptive Memory in Teaching Botany

    ERIC Educational Resources Information Center

    Prokop, Pavol; Fancovicová, Jana

    2014-01-01

    Plants are characterised by a great diversity of easily observed features such as colours or shape, but children show low interest in learning about them. Here, we integrated modern theory of adaptive memory and evolutionary views of the function of fruit colouration on children's retention of information. Survival-relevant (fruit toxicity) and…

  7. How Can Evolution Learn?

    PubMed

    Watson, Richard A; Szathmáry, Eörs

    2016-02-01

    The theory of evolution links random variation and selection to incremental adaptation. In a different intellectual domain, learning theory links incremental adaptation (e.g., from positive and/or negative reinforcement) to intelligent behaviour. Specifically, learning theory explains how incremental adaptation can acquire knowledge from past experience and use it to direct future behaviours toward favourable outcomes. Until recently such cognitive learning seemed irrelevant to the 'uninformed' process of evolution. In our opinion, however, new results formally linking evolutionary processes to the principles of learning might provide solutions to several evolutionary puzzles - the evolution of evolvability, the evolution of ecological organisation, and evolutionary transitions in individuality. If so, the ability for evolution to learn might explain how it produces such apparently intelligent designs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Acceleration of low-energy ions at parallel shocks with a focused transport model

    DOE PAGES

    Zuo, Pingbing; Zhang, Ming; Rassoul, Hamid K.

    2013-04-10

Here, we present a test particle simulation on the injection and acceleration of low-energy suprathermal particles by parallel shocks with a focused transport model. The focused transport equation contains all necessary physics of shock acceleration, but avoids the limitation of diffusive shock acceleration (DSA) that requires a small pitch angle anisotropy. This simulation verifies that the particles with speeds of a fraction of to a few times the shock speed can indeed be directly injected and accelerated into the DSA regime by parallel shocks. At higher energies starting from a few times the shock speed, the energy spectrum of accelerated particles is a power law with the same spectral index as the solution of standard DSA theory, although the particles are highly anisotropic in the upstream region. The intensity, however, is different from that predicted by DSA theory, indicating a different level of injection efficiency. It is found that the shock strength, the injection speed, and the intensity of an electric cross-shock potential (CSP) jump can affect the injection efficiency of the low-energy particles. A stronger shock has a higher injection efficiency. In addition, if the speed of injected particles is above a few times the shock speed, the produced power-law spectrum is consistent with the prediction of standard DSA theory in both its intensity and spectrum index with an injection efficiency of 1. CSP can increase the injection efficiency through direct particle reflection back upstream, but it has little effect on the energetic particle acceleration once the speed of injected particles is beyond a few times the shock speed. This test particle simulation proves that the focused transport theory is an extension of DSA theory with the capability of predicting the efficiency of particle injection.
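    For context, the standard DSA spectral index referred to above depends only on the shock compression ratio (a textbook result, not a derivation from the paper):

    ```latex
    % For compression ratio r = u_1/u_2 of upstream to downstream flow
    % speeds, diffusive shock acceleration yields the power law
    f(p) \propto p^{-q}, \qquad q = \frac{3r}{r-1},
    % so a strong shock (r = 4) gives q = 4.
    ```

    The focused transport simulations recover this index while additionally predicting the injection efficiency, which standard DSA theory leaves as a free parameter.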

  9. The metaphysics of D-CTCs: On the underlying assumptions of Deutsch's quantum solution to the paradoxes of time travel

    NASA Astrophysics Data System (ADS)

    Dunlap, Lucas

    2016-11-01

    I argue that Deutsch's model for the behavior of systems traveling around closed timelike curves (CTCs) relies implicitly on a substantive metaphysical assumption. Deutsch is employing a version of quantum theory with a significantly supplemented ontology of parallel existent worlds, which differ in kind from the many worlds of the Everett interpretation. Standard Everett does not support the existence of multiple identical copies of the world, which the D-CTC model requires. This has been obscured because he often refers to the branching structure of Everett as a "multiverse", and describes quantum interference by reference to parallel interacting definite worlds. But he admits that this is only an approximation to Everett. The D-CTC model, however, relies crucially on the existence of a multiverse of parallel interacting worlds. Since his model is supplemented by structures that go significantly beyond quantum theory, and play an ineliminable role in its predictions and explanations, it does not represent a quantum solution to the paradoxes of time travel.

  10. Functional consistency across two behavioural modalities: fire-setting and self-harm in female special hospital patients.

    PubMed

    Miller, Sarah; Fritzon, Katarina

    2007-01-01

    Fire-setting and self-harm behaviours among women in high security special hospitals may be understood using Shye's Action System Theory (AST) in which four functional modes are recognized: 'adaptive', 'expressive', 'integrative', and 'conservative'. To test for relationships between different forms of fire-setting and self-harm behaviours and AST modes among women in special hospital, and for consistency within modes across the two behaviours. Clinical case files evidencing both fire-setting and self-harm behaviours (n = 50) were analysed for content, focusing on incident characteristics. A total of 29 fire-setting and 22 self-harm variables were analysed using Smallest Space Analysis (SSA). Chi-square and Spearman's rho (rho) analyses were used to determine functional consistency across behavioural modes. Most women showed one predominant AST mode in fire-setting (n = 39) and self-harm (n = 35). Significant positive correlations were found between integrative and adaptive modes of functioning. The lack of correlation between conservative and expressive modes reflects the differing behaviours used in each activity. Despite this, significant cross-tabulations revealed that each woman had parallel fire-setting and self-harm styles. Findings suggest that, for some women, setting fires and self harm fulfil a similar underlying function. Support is given to AST as a way of furthering understanding of damaging behaviours, whether self- or other-inflicted. Copyright 2007 John Wiley & Sons, Ltd.

  11. Vehicular impact absorption system

    NASA Technical Reports Server (NTRS)

    Knoell, A. C.; Wilson, A. H. (Inventor)

    1978-01-01

An improved vehicular impact absorption system characterized by a plurality of aligned crash cushions of substantially cubic configuration is described. Each consists of a plurality of voided aluminum beverage cans arranged in substantial parallelism within a plurality of superimposed tiers, and a covering envelope formed of metal hardware cloth. A plurality of cables, adapted to be anchored at each of the opposite ends thereof, extends through the cushions in substantial parallelism with an axis of alignment for the cushions.

  12. Hybrid Parallelization of Adaptive MHD-Kinetic Module in Multi-Scale Fluid-Kinetic Simulation Suite

    DOE PAGES

    Borovikov, Sergey; Heerikhuisen, Jacob; Pogorelov, Nikolai

    2013-04-01

    The Multi-Scale Fluid-Kinetic Simulation Suite has a computational tool set for solving partially ionized flows. In this paper we focus on recent developments of the kinetic module which solves the Boltzmann equation using the Monte-Carlo method. The module has been recently redesigned to utilize intra-node hybrid parallelization. We describe in detail the redesign process, implementation issues, and modifications made to the code. Finally, we conduct a performance analysis.

  13. Adaption of a parallel-path poly(tetrafluoroethylene) nebulizer to an evaporative light scattering detector: Optimization and application to studies of poly(dimethylsiloxane) oligomers as a model polymer.

    PubMed

    Durner, Bernhard; Ehmann, Thomas; Matysik, Frank-Michael

    2018-06-05

The adaption of a parallel-path poly(tetrafluoroethylene) (PTFE) ICP-nebulizer to an evaporative light scattering detector (ELSD) was realized. This was done by substituting the originally installed concentric glass nebulizer of the ELSD. The performance of both nebulizers was compared regarding nebulizer temperature, evaporator temperature, flow rate of nebulizing gas and flow rate of mobile phase of different solvents using caffeine and poly(dimethylsiloxane) (PDMS) as analytes. Both nebulizers showed similar performances, but for the parallel-path PTFE nebulizer the performance was considerably better at low LC flow rates and the nebulizer lifetime was substantially increased. In general, for both nebulizers the highest sensitivity was obtained by applying the lowest possible evaporator temperature in combination with the highest possible nebulizer temperature at preferably low gas flow rates. Besides the optimization of detector parameters, response factors for various PDMS oligomers were determined and the dependency of the detector signal on the molar mass of the analytes was studied. The significant improvement in long-term stability made the modified ELSD much more robust and saved time and money by reducing maintenance efforts. Thus, especially in polymer HPLC, associated with a complex matrix situation, the PTFE-based parallel-path nebulizer exhibits attractive characteristics for analytical studies of polymers. Copyright © 2018. Published by Elsevier B.V.

  14. Parallel Gene Expression Differences between Low and High Latitude Populations of Drosophila melanogaster and D. simulans

    PubMed Central

    Zhao, Li; Wit, Janneke; Svetec, Nicolas; Begun, David J.

    2015-01-01

    Gene expression variation within species is relatively common, however, the role of natural selection in the maintenance of this variation is poorly understood. Here we investigate low and high latitude populations of Drosophila melanogaster and its sister species, D. simulans, to determine whether the two species show similar patterns of population differentiation, consistent with a role for spatially varying selection in maintaining gene expression variation. We compared at two temperatures the whole male transcriptome of D. melanogaster and D. simulans sampled from Panama City (Panama) and Maine (USA). We observed a significant excess of genes exhibiting differential expression in both species, consistent with parallel adaptation to heterogeneous environments. Moreover, the majority of genes showing parallel expression differentiation showed the same direction of differential expression in the two species and the magnitudes of expression differences between high and low latitude populations were correlated across species, further bolstering the conclusion that parallelism for expression phenotypes results from spatially varying selection. However, the species also exhibited important differences in expression phenotypes. For example, the genomic extent of genotype × environment interaction was much more common in D. melanogaster. Highly differentiated SNPs between low and high latitudes were enriched in the 3’ UTRs and CDS of the geographically differently expressed genes in both species, consistent with an important role for cis-acting variants in driving local adaptation for expression-related phenotypes. PMID:25950438

  15. Parallel Gene Expression Differences between Low and High Latitude Populations of Drosophila melanogaster and D. simulans.

    PubMed

    Zhao, Li; Wit, Janneke; Svetec, Nicolas; Begun, David J

    2015-05-01

    Gene expression variation within species is relatively common; however, the role of natural selection in the maintenance of this variation is poorly understood. Here we investigate low and high latitude populations of Drosophila melanogaster and its sister species, D. simulans, to determine whether the two species show similar patterns of population differentiation, consistent with a role for spatially varying selection in maintaining gene expression variation. We compared at two temperatures the whole male transcriptome of D. melanogaster and D. simulans sampled from Panama City (Panama) and Maine (USA). We observed a significant excess of genes exhibiting differential expression in both species, consistent with parallel adaptation to heterogeneous environments. Moreover, the majority of genes showing parallel expression differentiation showed the same direction of differential expression in the two species, and the magnitudes of expression differences between high and low latitude populations were correlated across species, further bolstering the conclusion that parallelism for expression phenotypes results from spatially varying selection. However, the species also exhibited important differences in expression phenotypes. For example, the genomic extent of genotype × environment interaction was much greater in D. melanogaster. Highly differentiated SNPs between low and high latitudes were enriched in the 3' UTRs and CDS of the geographically differentially expressed genes in both species, consistent with an important role for cis-acting variants in driving local adaptation for expression-related phenotypes.

  16. Aerodynamic Shape Optimization of Supersonic Aircraft Configurations via an Adjoint Formulation on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Reuther, James; Alonso, Juan Jose; Rimlinger, Mark J.; Jameson, Antony

    1996-01-01

    This work describes the application of a control theory-based aerodynamic shape optimization method to the problem of supersonic aircraft design. The design process is greatly accelerated through the use of both control theory and a parallel implementation on distributed memory computers. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is then implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) Standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on higher-order computational fluid dynamics (CFD) methods. In our earlier studies, the serial implementation of this design method was shown to be effective for the optimization of airfoils, wings, wing-bodies, and complex aircraft configurations using both the potential equation and the Euler equations. In our most recent paper, the Euler method was extended to treat complete aircraft configurations via a new multiblock implementation. Furthermore, during the same conference, we also presented preliminary results demonstrating that this basic methodology could be ported to distributed memory parallel computing architectures. In this paper, our concern will be to demonstrate that the combined power of these new technologies can be used routinely in an industrial design environment by applying it to the case study of the design of typical supersonic transport configurations. A particular difficulty of this test case is posed by the propulsion/airframe integration.
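
    Schematically, the adjoint approach described above evaluates the design gradient as follows. This is the standard control-theory formulation in our own notation, not equations quoted from the paper; its symbols may differ. With flow variables w, design variables α, flow residual R(w, α) = 0, and cost I(w, α):

```latex
% Schematic adjoint gradient evaluation (standard formulation; the
% paper's symbols may differ). One adjoint solve for \psi yields the
% full gradient with respect to all design variables \alpha.
\left(\frac{\partial R}{\partial w}\right)^{\!T}\psi
  = \left(\frac{\partial I}{\partial w}\right)^{\!T},
\qquad
\frac{dI}{d\alpha} \;=\; \frac{\partial I}{\partial\alpha}
  \;-\; \psi^{T}\,\frac{\partial R}{\partial\alpha}.
```

    One flow solve plus one adjoint solve thus yields the complete gradient at a cost essentially independent of the number of design variables, which is the source of the claimed acceleration over direct sensitivity methods.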

  17. Contributions of Mirror and Ion Bernstein Instabilities to the Scattering of Pickup Ions in the Outer Heliosheath

    NASA Astrophysics Data System (ADS)

    Min, Kyungguk; Liu, Kaijun

    2018-01-01

    Maintaining the stability of pickup ions in the outer heliosheath is a critical element for the secondary energetic neutral atom (ENA) mechanism, a theory put forth to explain the nearly annular band of ENA emission observed by the Interstellar Boundary EXplorer. A recent study showed that a pickup ion ring can remain stable to the Alfvén/ion cyclotron (AC) instability for propagation parallel to the background magnetic field when the parallel thermal spread of the ring is comparable to that of a background population. This study investigates the potential role that the mirror or ion Bernstein (IB) instabilities can play in the stability of pickup ions when conditions are such that the AC instability is suppressed. Linear Vlasov theory predicts relatively fast mirror and IB instability growth even though AC instability growth is suppressed. For a few such cases, two-dimensional hybrid and macroscopic quasi-linear simulations are carried out to examine how the unstable mirror and IB modes evolve and affect the pickup ion ring beyond the linear theory picture. For the parameters used, the mirror mode dominates initially and leads to a rapid parallel heating of the pickup ions in excess of the parallel temperature of the background protons. The heated pickup ions subsequently trigger onset of the AC mode, which grows sufficiently large to be the dominant pitch angle scattering agent after the mirror mode has decayed away. The present results indicate that the pickup ion stability needed may not be guaranteed once the mirror and IB instabilities are taken into account.

  18. A domain-specific compiler for a parallel multiresolution adaptive numerical simulation environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram

    This paper describes the design and implementation of a layered domain-specific compiler to support MADNESS---Multiresolution ADaptive Numerical Environment for Scientific Simulation. MADNESS is a high-level software environment for the solution of integral and differential equations in many dimensions, using adaptive and fast harmonic analysis methods with guaranteed precision. MADNESS uses k-d trees to represent spatial functions and implements operators like addition, multiplication, differentiation, and integration on the numerical representation of functions. The MADNESS runtime system provides global namespace support and a task-based execution model including futures. MADNESS is currently deployed on massively parallel supercomputers and has enabled many science advances. Due to the highly irregular and statically unpredictable structure of the k-d trees representing the spatial functions encountered in MADNESS applications, only purely runtime approaches to optimization have previously been implemented in the MADNESS framework. This paper describes a layered domain-specific compiler developed to address some performance bottlenecks in MADNESS. The newly developed static compile-time optimizations, in conjunction with the MADNESS runtime support, enable significant performance improvement for the MADNESS framework.

  19. Locally adaptive parallel temperature accelerated dynamics method

    NASA Astrophysics Data System (ADS)

    Shim, Yunsic; Amar, Jacques G.

    2010-03-01

    The recently-developed temperature-accelerated dynamics (TAD) method [M. Sørensen and A.F. Voter, J. Chem. Phys. 112, 9599 (2000)] along with the more recently developed parallel TAD (parTAD) method [Y. Shim et al., Phys. Rev. B 76, 205439 (2007)] allow one to carry out non-equilibrium simulations over extended time and length scales. The basic idea behind TAD is to speed up transitions by carrying out a high-temperature MD simulation and then use the resulting information to obtain event times at the desired low temperature. In a typical implementation, a fixed high temperature Thigh is used. However, in general one expects that for each configuration there exists an optimal value of Thigh which depends on the particular transition pathways and activation energies for that configuration. Here we present a locally adaptive high-temperature TAD method in which instead of using a fixed Thigh the high temperature is dynamically adjusted in order to maximize simulation efficiency. Preliminary results of the performance obtained from parTAD simulations of Cu/Cu(100) growth using the locally adaptive Thigh method will also be presented.
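
    The high-to-low temperature mapping at the heart of TAD can be sketched in a few lines. This is an illustrative reimplementation assuming harmonic transition-state theory (Arrhenius rates with a common prefactor), not code from TAD or parTAD; the function name and the example barrier and temperatures are ours.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def extrapolate_time(t_high, e_act, temp_high, temp_low):
    """Core TAD step: map an event time observed in the high-temperature
    MD run to the corresponding time at the desired low temperature,
    assuming Arrhenius rates with a common prefactor (harmonic TST):
        t_low = t_high * exp( Ea/kB * (1/T_low - 1/T_high) )
    """
    return t_high * math.exp(e_act / K_B * (1.0 / temp_low - 1.0 / temp_high))

# Hypothetical event: observed after 1 ns at 900 K, with a 0.5 eV barrier.
t_low = extrapolate_time(1e-9, 0.5, 900.0, 300.0)
print(t_low)  # the same event is expected only after ~0.4 ms at 300 K
```

    The boost factor grows with the activation energy, which is why the optimal high temperature depends on the barriers present in each configuration, the observation motivating the locally adaptive scheme above.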

  20. The Effect of Semantic Transparency on the Processing of Morphologically Derived Words: Evidence from Decision Latencies and Event-Related Potentials

    ERIC Educational Resources Information Center

    Jared, Debra; Jouravlev, Olessia; Joanisse, Marc F.

    2017-01-01

    Decomposition theories of morphological processing in visual word recognition posit an early morpho-orthographic parser that is blind to semantic information, whereas parallel distributed processing (PDP) theories assume that the transparency of orthographic-semantic relationships influences processing from the beginning. To test these…

  1. Diffusion of the Internet within a Graduate School.

    ERIC Educational Resources Information Center

    Sherry, Lorraine

    This paper reports the results of a five-year case study of the use of online tools (Internet, e-mail, and the World Wide Web) within a Graduate School of Education. The conceptual framework was independently developed, but because of the striking parallel with activity theory, activity theory became the overall framework for interpreting…

  2. Chaos and Christianity: A Response to Butz and a Biblical Alternative.

    ERIC Educational Resources Information Center

    Watts, Richard E.; Trusty, Jerry

    1997-01-01

    M.R. Butz's position regarding chaos theory and Christianity is reviewed. The compatibility of biblical theology and the sciences is discussed. Parallels between chaos theory and the philosophical perspective of Soren Kierkegaard are explored. A biblical model is offered for counselors in assisting Christian clients in embracing chaos. (Author/EMK)

  3. Comparison of Reliability Measures under Factor Analysis and Item Response Theory

    ERIC Educational Resources Information Center

    Cheng, Ying; Yuan, Ke-Hai; Liu, Cheng

    2012-01-01

    Reliability of test scores is one of the most pervasive psychometric concepts in measurement. Reliability coefficients based on a unifactor model for continuous indicators include maximal reliability rho and an unweighted sum score-based omega, among many others. With increasing popularity of item response theory, a parallel reliability measure pi…

  4. A parallel adaptive quantum genetic algorithm for the controllability of arbitrary networks.

    PubMed

    Li, Yuhong; Gong, Guanghong; Li, Ni

    2018-01-01

    In this paper, we propose a novel algorithm, the parallel adaptive quantum genetic algorithm, which can rapidly determine the minimum set of control nodes of arbitrary networks with both control nodes and state nodes. The corresponding network can be fully controlled with the obtained control scheme. We transformed the network controllability issue into a combinatorial optimization problem based on the Popov-Belevitch-Hautus rank condition. A set of canonical networks and a list of real-world networks were used in experiments. Comparison results demonstrated that the algorithm was better suited to optimizing the controllability of networks, especially larger networks. We subsequently demonstrated that there were links between the optimal control nodes and some network statistical characteristics. The proposed algorithm provides an effective approach to improving the controllability optimization of large networks or even extra-large networks with hundreds of thousands of nodes.
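
    The Popov-Belevitch-Hautus rank condition underlying the optimization can be checked directly for small systems. A minimal sketch assuming numpy; the function and the example chain network are ours, not from the paper, which searches over control-node sets with a quantum genetic algorithm rather than testing fixed ones.

```python
import numpy as np

def is_controllable_pbh(A, B, tol=1e-9):
    """Popov-Belevitch-Hautus rank test: the pair (A, B) of the linear
    system x' = A x + B u is controllable iff
        rank([lambda*I - A, B]) == n  for every eigenvalue lambda of A."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        M = np.hstack([lam * np.eye(n) - A, B])
        if np.linalg.matrix_rank(M, tol=tol) < n:
            return False
    return True

# Hypothetical 3-node chain 1 -> 2 -> 3 (A[j, i] = 1 for a link i -> j).
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])
B_head = np.array([[1.], [0.], [0.]])  # drive the head of the chain
B_tail = np.array([[0.], [0.], [1.]])  # drive the tail only
print(is_controllable_pbh(A, B_head), is_controllable_pbh(A, B_tail))  # True False
```

    Driving the head of the chain reaches every downstream node, so one control node suffices; driving only the tail leaves the upstream nodes unreachable, which is the kind of distinction the optimization exploits when searching for a minimum control-node set.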

  5. Computing sensitivity and selectivity in parallel factor analysis and related multiway techniques: the need for further developments in net analyte signal theory.

    PubMed

    Olivieri, Alejandro C

    2005-08-01

    Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte signal theory. The results have strong implications in the planning of multiway analytical experiments.
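
    The Monte Carlo noise-propagation idea can be illustrated in the much simpler first-order, single-analyte setting (the paper addresses multiway PARAFAC data, which this sketch does not reproduce): add synthetic instrumental noise, observe how it inflates the scatter of the predicted concentration, and read off sensitivity as the ratio of the two standard deviations. The signal shape and all parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-analyte, first-order calibration: pure signal s over
# 100 channels; least-squares prediction c_hat = (s . x) / (s . s).
s = np.exp(-0.5 * ((np.arange(100) - 50) / 8.0) ** 2)
sigma = 0.01          # instrumental noise standard deviation
c_true = 1.0

preds = []
for _ in range(2000):                     # Monte Carlo noise cycles
    x = c_true * s + rng.normal(0.0, sigma, s.size)
    preds.append(s @ x / (s @ s))

# Sensitivity read off as the noise-propagation ratio sigma / sd(c_hat);
# for this linear model it should recover the analytical value ||s||.
sen_mc = sigma / np.std(preds)
print(sen_mc, np.linalg.norm(s))
```

    In the multiway, second-order-advantage setting no such closed-form benchmark exists in general, which is exactly why the paper resorts to Monte Carlo estimation of the variance inflation.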

  6. Neoclassical theory inside transport barriers in tokamaks

    NASA Astrophysics Data System (ADS)

    Shaing, K. C.; Hsu, C. T.

    2012-02-01

    Inside the transport barriers in tokamaks, ion energy losses sometimes are smaller than the value predicted by the standard neoclassical theory. This improvement can be understood in terms of the orbit squeezing theory in addition to the sonic poloidal E × B Mach number Up,m that pushes the tips of the trapped particles to higher energy. In general, Up,m also includes the poloidal component of the parallel mass flow speed. These physics mechanisms are the cornerstones for the transition theory of the low confinement mode (L-mode) to the high confinement mode (H-mode) in tokamaks. Here, detailed transport fluxes in the banana regime are presented using the parallel viscous forces calculated earlier. It is found, as expected, that effects of orbit squeezing and the sonic Up,m reduce the ion heat conductivity. The former reduces it by a factor of |S|^(3/2) and the latter by a factor of R(Up,m^2) exp(−Up,m^2), with R(Up,m^2) a rational function. Here, S is the orbit squeezing factor.
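
    The two reduction factors quoted above can be combined into a single schematic scaling. This is a summary in notation inferred from the abstract, not an equation taken from the paper:

```latex
% Schematic only: \chi_i^{\mathrm{std}} is the standard neoclassical
% banana-regime ion heat conductivity, S the orbit squeezing factor,
% and R a rational function of the squared poloidal Mach number.
\chi_i \;\sim\; \chi_i^{\mathrm{std}}\; |S|^{-3/2}\;
  R\!\left(U_{p,m}^{2}\right)\, e^{-U_{p,m}^{2}}
```

    Both factors are below unity inside a strongly sheared barrier (|S| > 1, sonic Up,m), so each mechanism independently suppresses the ion heat conductivity relative to the standard prediction.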

  7. Accelerating adaptive inverse distance weighting interpolation algorithm on a graphics processing unit

    PubMed Central

    Xu, Liangliang; Xu, Nengxiong

    2017-01-01

    This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms by using the graphics processing unit (GPU). The AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the data points’ spatial distribution pattern and achieve more accurate predictions than those predicted by IDW. In this paper, we first present two versions of the GPU-accelerated AIDW, i.e. the naive version without profiting from the shared memory and the tiled version taking advantage of the shared memory. We also implement the naive version and the tiled version using two data layouts, structure of arrays and array of aligned structures, on both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in the computational efficiency when different data layouts are employed; (ii) the tiled version is always slightly faster than the naive version; and (iii) on single precision the achieved speed-up can be up to 763 (on the GPU M5000), while on double precision the obtained highest speed-up is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available. PMID:28989754
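
    A minimal CPU sketch of the AIDW idea, assuming the power parameter is adapted from the mean distance to the k nearest samples. This simplified density statistic and the linear mapping into [p_min, p_max] are our stand-ins for the paper's scheme, and none of the GPU machinery is shown.

```python
import numpy as np

def aidw_interpolate(pts, vals, query, k=4, p_min=1.0, p_max=5.0):
    """Adaptive IDW: pick the power parameter per query point from the
    local spatial pattern (here: mean distance to the k nearest samples,
    a simplified stand-in for the paper's density statistic), then apply
    standard inverse-distance weighting."""
    d = np.linalg.norm(pts - query, axis=1)
    if np.any(d < 1e-12):                 # query coincides with a sample
        return vals[np.argmin(d)]
    near = np.sort(d)[:k].mean()
    # Map local mean distance into [p_min, p_max]: dense -> small power.
    t = np.clip(near / d.mean(), 0.0, 1.0)
    p = p_min + t * (p_max - p_min)
    w = 1.0 / d**p
    return float(np.sum(w * vals) / np.sum(w))

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(50, 2))
vals = pts.sum(axis=1)                    # sample the plane z = x + y
print(aidw_interpolate(pts, vals, np.array([0.5, 0.5])))
```

    Because every query point is independent, the outer loop over queries is embarrassingly parallel, which is what makes the method such a natural fit for the GPU implementations benchmarked in the paper.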

  8. Accelerating adaptive inverse distance weighting interpolation algorithm on a graphics processing unit.

    PubMed

    Mei, Gang; Xu, Liangliang; Xu, Nengxiong

    2017-09-01

    This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms by using the graphics processing unit (GPU). The AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the data points' spatial distribution pattern and achieve more accurate predictions than those predicted by IDW. In this paper, we first present two versions of the GPU-accelerated AIDW, i.e. the naive version without profiting from the shared memory and the tiled version taking advantage of the shared memory. We also implement the naive version and the tiled version using two data layouts, structure of arrays and array of aligned structures, on both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in the computational efficiency when different data layouts are employed; (ii) the tiled version is always slightly faster than the naive version; and (iii) on single precision the achieved speed-up can be up to 763 (on the GPU M5000), while on double precision the obtained highest speed-up is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available.

  9. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Technical Reports Server (NTRS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-01-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers, where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The second two indicators match and exceed the first two in performance but require no special formulation of the element stiffness; mesh refinement driven by them is demonstrated for two-dimensional plane-stress problems. Parallelization over substructures and adaptive mesh refinement are discussed, and the final error indicator is demonstrated on two-dimensional plane-stress and three-dimensional shell problems.

  10. Incorporating immigrants: integrating theoretical frameworks of adaptation.

    PubMed

    Treas, Judith

    2015-03-01

    To encourage research on immigrants and aging by analyzing theoretical commonalities in the two fields and identifying potential contributions of aging theories, specifically to the understanding of neglected age differences in the pace of immigrant incorporation. Survey of the historical development of assimilation theory and its successors and systematic comparison of key concepts in aging and immigrant incorporation theories. Studies of immigrants, as well as of the life course, trace their origins to the Chicago School at the turn of the 20th century. Today, both theoretical perspectives emphasize adaptation as a time-dependent, multidimensional, nonlinear, and multidirectional process. Immigrant incorporation theories have not fully engaged with a key concern of aging theory: why there are age differences. Insights from cognitive aging and developmental biology, life-span developmental psychology, and age stratification and the life course suggest explanations for age differences in the speed of immigrant incorporation. Theories of adaptation to aging and theories of immigrant incorporation developed so independently that they neglected the subject they have in common, namely, older immigrants. Because they address similar conceptual problems and share key assumptions, a productive dialogue between two vibrant fields is long overdue. © The Author 2014. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Formative Research regarding Kidney Disease Health Information in a Latino American Sample: Associations among Message Frame, Threat, Efficacy, Message Effectiveness, and Behavioral Intention

    ERIC Educational Resources Information Center

    Maguire, Katheryn C.; Gardner, Jay; Sopory, Pradeep; Jian, Guowei; Roach, Marcia; Amschlinger, Joe; Moreno, Marcia; Pettey, Gary; Piccone, Gianfranco

    2010-01-01

    Using prospect theory and the extended parallel process model, this study examined the effect of gain/loss message framing on perceptions of severity, susceptibility, response efficacy, and self efficacy (derived from the extended parallel process model), as well as perception of message effectiveness and behavioral intention in a community based…

  12. A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled from Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
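
    The synchronous/asynchronous distinction can be shown without any parallel machinery: in the asynchronous variant the global best is updated the moment each particle is evaluated, so later particles in the same sweep already exploit it. A serial sketch of that update rule; the benchmark function and all parameter values are typical choices of ours, not the paper's.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x * x))

def pso_async(f, dim=5, n_particles=10, iters=200, seed=0):
    """Minimal *asynchronous* PSO sketch: no end-of-iteration barrier,
    the global best is refreshed immediately after each evaluation."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration weights
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = int(np.argmin(pval))
    gbest, gval = pbest[g].copy(), pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] = x[i] + v[i]
            fi = f(x[i])
            if fi < pval[i]:
                pbest[i], pval[i] = x[i].copy(), fi
                if fi < gval:              # asynchronous: update immediately
                    gbest, gval = x[i].copy(), fi
    return gbest, gval

best, val = pso_async(sphere)
print(val)
```

    Deferring the gbest update to the end of each sweep recovers the synchronous variant; in a heterogeneous parallel environment the asynchronous form keeps processors busy because no particle waits for the slowest evaluation.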

  13. Application of Parallel Adjoint-Based Error Estimation and Anisotropic Grid Adaptation for Three-Dimensional Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Park, M. A.; Jones, W. T.; Hammond, D. P.; Nielsen, E. J.

    2005-01-01

    This paper demonstrates the extension of error estimation and adaptation methods to parallel computations enabling larger, more realistic aerospace applications and the quantification of discretization errors for complex 3-D solutions. Results were shown for an inviscid sonic-boom prediction about a double-cone configuration and a wing/body segmented leading edge (SLE) configuration where the output function of the adjoint was pressure integrated over a part of the cylinder in the near field. After multiple cycles of error estimation and surface/field adaptation, a significant improvement in the inviscid solution for the sonic boom signature of the double cone was observed. Although the double-cone adaptation was initiated from a very coarse mesh, the near-field pressure signature from the final adapted mesh compared very well with the wind-tunnel data, which illustrates that the adjoint-based error estimation and adaptation process requires no a priori refinement of the mesh. Similarly, the near-field pressure signature for the SLE wing/body sonic boom configuration showed a significant improvement from the initial coarse mesh to the final adapted mesh in comparison with the wind tunnel results. Error estimation and field adaptation results were also presented for the viscous transonic drag prediction of the DLR-F6 wing/body configuration, and results were compared to a series of globally refined meshes. Two of these globally refined meshes were used as a starting point for the error estimation and field-adaptation process where the output function for the adjoint was the total drag. The field-adapted results showed an improvement in the prediction of the drag in comparison with the finest globally refined mesh and a reduction in the estimate of the remaining drag error. The adjoint-based adaptation parameter showed a need for increased resolution on the surface of the wing/body as well as a need for wake resolution downstream of the fuselage and wing trailing edge in order to achieve the requested drag tolerance. Although further adaptation was required to meet the requested tolerance, no further cycles were computed in order to avoid large discrepancies between the surface mesh spacing and the refined field spacing.

  14. A Comparative Analysis of Three Unique Theories of Organizational Learning

    ERIC Educational Resources Information Center

    Leavitt, Carol C.

    2011-01-01

    The purpose of this paper is to present three classical theories on organizational learning and conduct a comparative analysis that highlights their strengths, similarities, and differences. Two of the theories -- experiential learning theory and adaptive-generative learning theory -- represent the thinking of the cognitive perspective, while…

  15. The genetic architecture of local adaptation and reproductive isolation in sympatry within the Mimulus guttatus species complex.

    PubMed

    Ferris, Kathleen G; Barnett, Laryssa L; Blackman, Benjamin K; Willis, John H

    2017-01-01

    The genetic architecture of local adaptation has been of central interest to evolutionary biologists since the modern synthesis. In addition to classic theory on the effect size of adaptive mutations by Fisher, Kimura and Orr, recent theory addresses the genetic architecture of local adaptation in the face of ongoing gene flow. This theory predicts that with substantial gene flow between populations local adaptation should proceed primarily through mutations of large effect or tightly linked clusters of smaller effect loci. In this study, we investigate the genetic architecture of divergence in flowering time, mating system-related traits, and leaf shape between Mimulus laciniatus and a sympatric population of its close relative M. guttatus. These three traits are probably involved in M. laciniatus' adaptation to a dry, exposed granite outcrop environment. Flowering time and mating system differences are also reproductive isolating barriers making them 'magic traits'. Phenotypic hybrids in this population provide evidence of recent gene flow. Using next-generation sequencing, we generate dense SNP markers across the genome and map quantitative trait loci (QTLs) involved in flowering time, flower size and leaf shape. We find that interspecific divergence in all three traits is due to few QTL of large effect including a highly pleiotropic QTL on chromosome 8. This QTL region contains the pleiotropic candidate gene TCP4 and is involved in ecologically important phenotypes in other Mimulus species. Our results are consistent with theory, indicating that local adaptation and reproductive isolation with gene flow should be due to few loci with large and pleiotropic effects. © 2016 John Wiley & Sons Ltd.

  16. Large-Scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation

    DTIC Science & Technology

    2016-08-10

    AFRL-AFOSR-JP-TR-2016-0073. Large-scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation. ...performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been

  17. A formal theory of the selfish gene.

    PubMed

    Gardner, A; Welch, J J

    2011-08-01

    Adaptation is conventionally regarded as occurring at the level of the individual organism. In contrast, the theory of the selfish gene proposes that it is more correct to view adaptation as occurring at the level of the gene. This view has received much popular attention, yet has enjoyed only limited uptake in the primary research literature. Indeed, the idea of ascribing goals and strategies to genes has been highly controversial. Here, we develop a formal theory of the selfish gene, using optimization theory to capture the analogy of 'gene as fitness-maximizing agent' in mathematical terms. We provide formal justification for this view of adaptation by deriving mathematical correspondences that translate the optimization formalism into dynamical population genetics. We show that in the context of social interactions between genes, it is the gene's inclusive fitness that provides the appropriate maximand. Hence, genic selection can drive the evolution of altruistic genes. Finally, we use the formalism to assess the various criticisms that have been levelled at the theory of the selfish gene, dispelling some and strengthening others. © 2011 The Authors. Journal of Evolutionary Biology © 2011 European Society For Evolutionary Biology.

  18. An Item Response Theory-Based, Computerized Adaptive Testing Version of the MacArthur-Bates Communicative Development Inventory: Words & Sentences (CDI:WS)

    ERIC Educational Resources Information Center

    Makransky, Guido; Dale, Philip S.; Havmose, Philip; Bleses, Dorthe

    2016-01-01

    Purpose: This study investigated the feasibility and potential validity of an item response theory (IRT)-based computerized adaptive testing (CAT) version of the MacArthur-Bates Communicative Development Inventory: Words & Sentences (CDI:WS; Fenson et al., 2007) vocabulary checklist, with the objective of reducing length while maintaining…

  19. Condition Based Maintenance Technology Impact Study: Assessment Methods, Study Design and Interim Results

    DTIC Science & Technology

    2014-07-01

    Unified Theory of Acceptance and Use of Technology, Structuration Model of Technology, Adaptive Structuration Theory, Model of Mutual Adaptation, Model of Technology Appropriation, Diffusion/Implementation Model, and Tri-core Model, among others [11]. ...simulation gaming, essay/scenario writing, genius forecasting, role play/acting, backcasting, SWOT, brainstorming, relevance tree/logic chart, scenario workshop (DSTO-TR-2992)

  20. Integrating Adaptability into Special Operations Forces Intermediate Level Education

    DTIC Science & Technology

    2010-10-01

    This model is based on the Experiential Learning Theory (ELT), which states that learning occurs by the transfer of experience into knowledge (Kolb… Report 529. Arlington, VA. Kolb, D.A., Boyatzis, R.E., & Mainemelis, C. (2000). Experiential Learning Theory: Previous research and new dimensions. In… adaptive thinking materials. Integrating this information will provide some continuity among concepts for instruction. Experiential Learning Model

  1. When Goal Orientations Collide: Effects of Learning and Performance Orientation on Team Adaptability in Response to Workload Imbalance

    ERIC Educational Resources Information Center

    Porter, Christopher O. L. H.; Webb, Justin W.; Gogus, Celile Itir

    2010-01-01

    The authors draw on resource allocation theory (Kanfer & Ackerman, 1989) to develop hypotheses regarding the conditions under which collective learning and performance orientation have interactive effects and the nature of those effects on teams' ability to adapt to a sudden and dramatic change in workload. Consistent with the theory, results…

  2. Multilevel processes and cultural adaptation: Examples from past and present small-scale societies.

    PubMed

    Reyes-García, V; Balbo, A L; Gomez-Baggethun, E; Gueze, M; Mesoudi, A; Richerson, P; Rubio-Campillo, X; Ruiz-Mallén, I; Shennan, S

    2016-12-01

    Cultural adaptation has become central in the context of accelerated global change with authors increasingly acknowledging the importance of understanding multilevel processes that operate as adaptation takes place. We explore the importance of multilevel processes in explaining cultural adaptation by describing how processes leading to cultural (mis)adaptation are linked through a complex nested hierarchy, where the lower levels combine into new units with new organizations, functions, and emergent properties or collective behaviours. After a brief review of the concept of "cultural adaptation" from the perspective of cultural evolutionary theory and resilience theory, the core of the paper is constructed around the exploration of multilevel processes occurring at the temporal, spatial, social and political scales. We do so by examining small-scale societies' case studies. In each section, we discuss the importance of the selected scale for understanding cultural adaptation and then present an example that illustrates how multilevel processes in the selected scale help explain observed patterns in the cultural adaptive process. We end the paper discussing the potential of modelling and computer simulation for studying multilevel processes in cultural adaptation.

  3. Spatiotemporal variation in local adaptation of a specialist insect herbivore to its long-lived host plant.

    PubMed

    Kalske, Aino; Leimu, Roosa; Scheepens, J F; Mutikainen, Pia

    2016-09-01

    Local adaptation of interacting species to one another indicates geographically variable reciprocal selection. This process of adaptation is central in the organization and maintenance of genetic variation across populations. Given that the strength of selection and responses to it often vary in time and space, the strength of local adaptation should in theory vary between generations and among populations. However, such spatiotemporal variation has rarely been explicitly demonstrated in nature and local adaptation is commonly considered to be relatively static. We report persistent local adaptation of the short-lived herbivore Abrostola asclepiadis to its long-lived host plant Vincetoxicum hirundinaria over three successive generations in two studied populations and considerable temporal variation in local adaptation in six populations supporting the geographic mosaic theory. The observed variation in local adaptation among populations was best explained by geographic distance and population isolation, suggesting that gene flow reduces local adaptation. Changes in herbivore population size did not conclusively explain temporal variation in local adaptation. Our results also imply that short-term studies are likely to capture only a part of the existing variation in local adaptation. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  4. Recruitment dynamics in adaptive social networks

    NASA Astrophysics Data System (ADS)

    Shkarayev, Maxim; Shaw, Leah; Schwartz, Ira

    2011-03-01

    We model recruitment in social networks in the presence of birth and death processes. Recruitment is characterized by nodes changing their status to that of the recruiting class as a result of contact with recruiting nodes. The recruiting nodes may adapt their connections in order to improve recruitment capabilities, thus changing the network structure. We develop a mean-field theory describing the system dynamics and use it to characterize the dependence of the growth threshold of the recruiting class on the adaptation parameter. Furthermore, we investigate the effect of adaptation on the recruitment dynamics, as well as on the network topology. The theoretical predictions are confirmed by direct simulations of the full system.
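    The growth threshold described above can be illustrated with a toy mean-field reduction: an SIS-like pair of rate equations in which susceptibles are recruited on contact at rate beta and every node dies (and is replaced by a susceptible) at rate mu, so the recruiting class persists only when beta > mu. This reduction is an assumption for illustration, not the model of the paper:

```python
# Toy mean-field recruitment model with birth and death (illustrative,
# not the paper's equations). Deaths at rate mu are balanced by births
# into the susceptible class, so total density is conserved.

def simulate(beta, mu, s0=0.99, r0=0.01, dt=0.01, steps=20000):
    s, r = s0, r0
    for _ in range(steps):
        ds = mu * (s + r) - beta * s * r - mu * s   # births minus recruitment/death
        dr = beta * s * r - mu * r
        s, r = s + dt * ds, r + dt * dr
    return r

print(simulate(beta=0.5, mu=0.2))   # above threshold: recruiters persist
print(simulate(beta=0.1, mu=0.2))   # below threshold: recruiters die out
```

Above threshold the recruiter density approaches the fixed point 1 - mu/beta; below it, the density decays to zero.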

  5. The Chaos Theory of Careers.

    ERIC Educational Resources Information Center

    Pryor, Robert G. L.; Bright, Jim

    2003-01-01

    Four theoretical streams--contextualism/ecology, systems theory, realism/constructivism, and chaos theory--contributed to a theory of individuals as complex, unique, nonlinear, adaptive, chaotic, and open systems. Individuals use purposive action to construct careers but can make maladaptive and inappropriate choices. (Contains 42 references.) (SK)

  6. Adapting Growth Pole Theory to Community College Development.

    ERIC Educational Resources Information Center

    Brumbach, Mary A.

    2002-01-01

    Explains growth pole theory, which is the theory that growth manifests itself at poles of growth, rather than everywhere at once. Applies this theory to community college development, and offers advice for implementing growth poles by taking an entrepreneurial approach to education. (NB)

  7. Managing Schools as Complex Adaptive Systems: A Strategic Perspective

    ERIC Educational Resources Information Center

    Fidan, Tuncer; Balci, Ali

    2017-01-01

    This conceptual study examines the analogies between schools and complex adaptive systems and identifies strategies used to manage schools as complex adaptive systems. Complex adaptive systems approach, introduced by the complexity theory, requires school administrators to develop new skills and strategies to realize their agendas in an…

  8. Analyzing Hedges in Verbal Communication: An Adaptation-Based Approach

    ERIC Educational Resources Information Center

    Wang, Yuling

    2010-01-01

    Based on Adaptation Theory, the article analyzes the production process of hedges. The procedure consists of the continuous making of choices in linguistic forms and communicative strategies. These choices are made just for adaptation to the contextual correlates. Besides, the adaptation process is dynamic, intentional and bidirectional.

  9. Adaptability: How Students' Responses to Uncertainty and Novelty Predict Their Academic and Non-Academic Outcomes

    ERIC Educational Resources Information Center

    Martin, Andrew J.; Nejad, Harry G.; Colmar, Susan; Liem, Gregory Arief D.

    2013-01-01

    Adaptability is defined as appropriate cognitive, behavioral, and/or affective adjustment in the face of uncertainty and novelty. Building on prior measurement work demonstrating the psychometric properties of an adaptability construct, the present study investigates dispositional predictors (personality, implicit theories) of adaptability, and…

  10. A Parallel Rendering Algorithm for MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.; Orloff, Tobias

    1991-01-01

    Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.
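    One common form of the pixel-level parallelism mentioned above is image-space partitioning. The interleaved scanline assignment below is a standard load-balancing strategy assumed for illustration, not necessarily the exact scheme of the paper:

```python
# Image-space (pixel-level) work partitioning sketch: interleaved
# scanline assignment spreads locally dense screen regions across
# processors, improving load balance over contiguous strips.

def assign_scanlines(height, n_procs):
    """Map scanline y to processor y mod n_procs (interleaved)."""
    return {p: [y for y in range(height) if y % n_procs == p]
            for p in range(n_procs)}

assignment = assign_scanlines(height=8, n_procs=4)
print(assignment[0])   # [0, 4]
print({p: len(rows) for p, rows in assignment.items()})  # 2 scanlines each
```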

  11. Partial Overhaul and Initial Parallel Optimization of KINETICS, a Coupled Dynamics and Chemistry Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Nguyen, Howard; Willacy, Karen; Allen, Mark

    2012-01-01

    KINETICS is a coupled dynamics and chemistry atmosphere model that is data intensive and computationally demanding. The potential performance gain from using a supercomputer motivates the adaptation from a serial version to a parallelized one. Although the initial parallelization had been done, bottlenecks caused by an abundance of communication calls between processors led to an unfavorable drop in performance. Before starting on the parallel optimization process, a partial overhaul was required because a large emphasis was placed on streamlining the code for user convenience and revising the program to accommodate the new supercomputers at Caltech and JPL. After the first round of optimizations, the partial runtime was reduced by a factor of 23; however, performance gains are dependent on the size of the data, the number of processors requested, and the computer used.

  12. On the impact of communication complexity in the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
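    The Hockney-style model generalized in the paper has the familiar two-parameter form t(n) = t0 + n / r_inf, where t0 is the startup latency and r_inf the asymptotic bandwidth; messages of length n_half = t0 * r_inf achieve half the peak rate. A sketch with illustrative parameter values:

```python
# Hockney-style communication cost model. The latency and bandwidth
# values are illustrative, not measurements from any real machine.

def transfer_time(n_bytes, t0=1e-6, r_inf=1e9):
    """Time to move n bytes: startup latency plus bandwidth term."""
    return t0 + n_bytes / r_inf

def effective_bandwidth(n_bytes, t0=1e-6, r_inf=1e9):
    return n_bytes / transfer_time(n_bytes, t0, r_inf)

n_half = 1e-6 * 1e9                        # t0 * r_inf = 1000 bytes
print(effective_bandwidth(n_half) / 1e9)   # 0.5: half of peak bandwidth
```

The model makes the design trade-off explicit: algorithms that send a few large messages amortize t0, while fine-grained communication pays it on every transfer.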

  13. On the impact of communication complexity on the design of parallel numerical algorithms

    NASA Technical Reports Server (NTRS)

    Gannon, D. B.; Van Rosendale, J.

    1984-01-01

    This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In this second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.

  14. Adapting Structuration Theory as a Comprehensive Theory for Distance Education: The ASTIDE Model

    ERIC Educational Resources Information Center

    Aktaruzzaman, Md; Plunkett, Margaret

    2016-01-01

    Distance Education (DE) theorists have argued about the requirement for a theory to be comprehensive in a way that can explicate many of the activities associated with DE. Currently, Transactional Distance Theory (TDT) (Moore, 1993) and the Theory of Instructional Dialogue (IDT) (Caspi & Gorsky, 2006) are the most prominent theories, yet they…

  15. The evolution of religious belief in humans: a brief review with a focus on cognition.

    PubMed

    Singh, Dhairyya; Chatterjee, Garga

    2017-07-01

    Religion has been a widely present feature of human beings. This review explores developments in the evolutionary cognitive psychology of religion and provides a critical evaluation of the different theoretical positions. Generally, scholars have held that religion is adaptive, a by-product of adaptive psychological features, or maladaptive, and varying amounts of empirical evidence support each position. The adaptive position has generated the costly signalling theory of religious ritual and the group selection theory. The by-product position has identified psychological machinery that has been co-opted by religion. The maladaptive position has generated the meme theory of religion. The review concludes that the by-product camp enjoys the most support in the scientific community and suggests ways forward for an evolutionarily significant study of religion.

  16. Evolution, epigenetics and cooperation.

    PubMed

    Bateson, Patrick

    2014-04-01

    Explanations for biological evolution in terms of changes in gene frequencies refer to outcomes rather than process. Integrating epigenetic studies with older evolutionary theories has drawn attention to the ways in which evolution occurs. Adaptation at the level of the gene is giving way to adaptation at the level of the organism and higher-order assemblages of organisms. These ideas impact on the theories of how cooperation might have evolved. Two of the theories, i.e. that cooperating individuals are genetically related or that they cooperate for self-interested reasons, have been accepted for a long time. The idea that adaptation takes place at the level of groups is much more controversial. However, bringing together studies of development with those of evolution is taking away much of the heat in the debate about the evolution of group behaviour.

  17. Adapter plate assembly for adjustable mounting of objects

    DOEpatents

    Blackburn, R.S.

    1986-05-02

    An adapter plate and two locking discs are together affixed to an optic table with machine screws or bolts threaded into a fixed array of internally threaded holes provided in the table surface. The adapter plate preferably has two, and preferably parallel, elongated locating slots each freely receiving a portion of one of the locking discs for secure affixation of the adapter plate to the optic table. A plurality of threaded apertures provided in the adapter plate are available to attach optical mounts or other devices onto the adapter plate in an orientation not limited by the disposition of the array of threaded holes in the table surface. An axially aligned but radially offset hole through each locking disc receives a screw that tightens onto the table, such that prior to tightening of the screw the locking disc may rotate and translate within each locating slot of the adapter plate for maximum flexibility of the orientation thereof.

  18. Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2012-01-01

    This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.
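    A heavily simplified scalar model-reference adaptive control loop can illustrate the tracking-error-driven adaptive laws described above. This sketch uses only standard gradient (Lyapunov-based) laws; the paper's parallel predictor model, bi-objective cost, and optimal control modification term are all omitted, and every numeric value is an illustrative assumption:

```python
# Scalar MRAC sketch (illustrative only). Plant: x' = a*x + b*u with a
# unknown to the controller; reference model: xm' = -2*xm + 2*r;
# control: u = kx*x + kr*r with gains adapted from the tracking error.

def mrac_run(a=-1.0, b=1.0, gamma=5.0, dt=0.001, steps=50000):
    x = xm = 0.0
    kx = kr = 0.0                  # adaptive feedback / feedforward gains
    r = 1.0                        # constant command
    for _ in range(steps):
        u = kx * x + kr * r
        e = x - xm                 # tracking error
        dx = a * x + b * u
        dxm = -2.0 * xm + 2.0 * r
        dkx = -gamma * e * x       # gradient adaptive laws, valid for b > 0
        dkr = -gamma * e * r
        x += dt * dx
        xm += dt * dxm
        kx += dt * dkx
        kr += dt * dkr
    return abs(x - xm)

print(mrac_run())                  # final tracking error is small
```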

  19. Adapter plate assembly for adjustable mounting of objects

    DOEpatents

    Blackburn, Robert S.

    1987-01-01

    An adapter plate and two locking discs are together affixed to an optic table with machine screws or bolts threaded into a fixed array of internally threaded holes provided in the table surface. The adapter plate preferably has two, and preferably parallel, elongated locating slots each freely receiving a portion of one of the locking discs for secure affixation of the adapter plate to the optic table. A plurality of threaded apertures provided in the adapter plate are available to attach optical mounts or other devices onto the adapter plate in an orientation not limited by the disposition of the array of threaded holes in the table surface. An axially aligned but radially offset hole through each locking disc receives a screw that tightens onto the table, such that prior to tightening of the screw the locking disc may rotate and translate within each locating slot of the adapter plate for maximum flexibility of the orientation thereof.

  20. Cartesian Off-Body Grid Adaption for Viscous Time-Accurate Flow Simulation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2011-01-01

    An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
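    The undivided second-difference sensor mentioned above can be sketched in one dimension: a cell is flagged for refinement when |u[i+1] - 2*u[i] + u[i-1]| exceeds a threshold, so flags cluster where the solution varies sharply. The threshold and field values below are illustrative assumptions:

```python
# 1-D refinement flagging based on the undivided second difference
# (illustrative reduction of the sensor named in the abstract).

def flag_cells(u, threshold):
    """Return indices of interior cells whose undivided second
    difference exceeds the threshold."""
    return [i for i in range(1, len(u) - 1)
            if abs(u[i + 1] - 2.0 * u[i] + u[i - 1]) > threshold]

# Piecewise-constant field with a jump at index 4: only the cells
# adjacent to the jump are flagged.
u = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
print(flag_cells(u, threshold=0.5))   # [3, 4]
```

Because the difference is undivided (not scaled by the mesh spacing), the sensor naturally de-emphasizes features already resolved by finer grid levels.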
