Sample records for functional theory implemented

  1. Analytical gradients for subsystem density functional theory within the Slater-function-based Amsterdam density functional program.

    PubMed

    Schlüns, Danny; Franchini, Mirko; Götz, Andreas W; Neugebauer, Johannes; Jacob, Christoph R; Visscher, Lucas

    2017-02-05

    We present a new implementation of analytical gradients for subsystem density-functional theory (sDFT) and frozen-density embedding (FDE) into the Amsterdam Density Functional program (ADF). The underlying theory and necessary expressions for the implementation are derived and discussed in detail for various FDE and sDFT setups. The parallel implementation is numerically verified and geometry optimizations with different functional combinations (LDA/TF and PW91/PW91K) are conducted and compared to reference data. Our results confirm that sDFT-LDA/TF yields good equilibrium distances for the systems studied here (mean absolute deviation: 0.09 Å) compared to reference wave-function theory results. However, sDFT-PW91/PW91k quite consistently yields smaller equilibrium distances (mean absolute deviation: 0.23 Å). The flexibility of our new implementation is demonstrated for an HCN-trimer test system, for which several different setups are applied. © 2016 Wiley Periodicals, Inc.

  2. A Domain-Specific Language for Discrete Mathematics

    NASA Astrophysics Data System (ADS)

    Jha, Rohit; Samuel, Alfy; Pawar, Ashmee; Kiruthika, M.

    2013-05-01

    This paper discusses a Domain Specific Language (DSL) that has been developed to enable implementation of concepts of discrete mathematics. A library of data types and functions provides functionality which is frequently required by users. Covering the areas of Mathematical Logic, Set Theory, Functions, Graph Theory, Number Theory, Linear Algebra and Combinatorics, the language's syntax is close to the actual notation used in the specific fields.
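    The record above describes a DSL whose syntax mirrors mathematical notation. As a purely illustrative sketch (not the paper's language), the same idea can be embedded in a host language: Python's `frozenset` already supplies set-algebra operators (|, &, -, <=) that read like the mathematics, and the few missing discrete-math primitives (power set, implication, quantifiers) are easy to add. All names below are hypothetical.

```python
from itertools import combinations

def power_set(s):
    """P(s): the set of all subsets of s."""
    items = list(s)
    return frozenset(
        frozenset(c)
        for r in range(len(items) + 1)
        for c in combinations(items, r)
    )

def implies(p, q):
    """Material implication p -> q."""
    return (not p) or q

def forall(pred, domain):
    """Universal quantifier over a finite domain."""
    return all(pred(x) for x in domain)

def exists(pred, domain):
    """Existential quantifier over a finite domain."""
    return any(pred(x) for x in domain)

A = frozenset({1, 2, 3})
B = frozenset({3, 4})

print(sorted(A | B))        # union: [1, 2, 3, 4]
print(sorted(A & B))        # intersection: [3]
print(len(power_set(A)))    # 2^|A| = 8
print(forall(lambda n: implies(n % 2 == 0, n < 5), A | B))
```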

  3. Implementation of Two-Component Time-Dependent Density Functional Theory in TURBOMOLE.

    PubMed

    Kühn, Michael; Weigend, Florian

    2013-12-10

    We report the efficient implementation of a two-component time-dependent density functional theory proposed by Wang et al. (Wang, F.; Ziegler, T.; van Lenthe, E.; van Gisbergen, S.; Baerends, E. J. J. Chem. Phys. 2005, 122, 204103) that accounts for spin-orbit effects on excitations of closed-shell systems by employing a noncollinear exchange-correlation kernel. In contrast to the aforementioned implementation, our method is based on two-component effective core potentials as well as Gaussian-type basis functions. It is implemented in the TURBOMOLE program suite for functionals of the local density approximation and the generalized gradient approximation. Accuracy is assessed by comparison of two-component vertical excitation energies of heavy atoms and ions (Cd, Hg, Au(+)) and small molecules (I2, TlH) to other two- and four-component approaches. Efficiency is demonstrated by calculating the electronic spectrum of Au20.

  4. Raman Optical Activity Spectra from Density Functional Perturbation Theory and Density-Functional-Theory-Based Molecular Dynamics.

    PubMed

    Luber, Sandra

    2017-03-14

    We describe the calculation of Raman optical activity (ROA) tensors from density functional perturbation theory, which has been implemented into the CP2K software package. Using the mixed Gaussian and plane waves method, ROA spectra are evaluated in the double-harmonic approximation. Moreover, an approach for the calculation of ROA spectra by means of density functional theory-based molecular dynamics is derived and used to obtain an ROA spectrum via time correlation functions, which paves the way for the calculation of ROA spectra taking into account anharmonicities and dynamic effects at ambient conditions.

  5. Behavioral and neural Darwinism: selectionist function and mechanism in adaptive behavior dynamics.

    PubMed

    McDowell, J J

    2010-05-01

    An evolutionary theory of behavior dynamics and a theory of neuronal group selection share a common selectionist framework. The theory of behavior dynamics instantiates abstractly the idea that behavior is selected by its consequences. It implements Darwinian principles of selection, reproduction, and mutation to generate adaptive behavior in virtual organisms. The behavior generated by the theory has been shown to be quantitatively indistinguishable from that of live organisms. The theory of neuronal group selection suggests a mechanism whereby the abstract principles of the evolutionary theory may be implemented in the nervous systems of biological organisms. According to this theory, groups of neurons subserving behavior may be selected by synaptic modifications that occur when the consequences of behavior activate value systems in the brain. Together, these theories constitute a framework for a comprehensive account of adaptive behavior that extends from brain function to the behavior of whole organisms in quantitative detail. Copyright (c) 2009 Elsevier B.V. All rights reserved.
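    The selectionist loop sketched in the abstract above (behavior selected by its consequences via selection, reproduction, and mutation) can be illustrated with a toy evolutionary algorithm. This is a hypothetical sketch, not McDowell's actual model: "behaviors" are 10-bit integers, and a behavior's consequence is better the closer its value lies to an arbitrary target.

```python
import random

BITS, TARGET = 10, 500

def error(behavior):
    """Smaller error = better consequences for emitting this behavior."""
    return abs(behavior - TARGET)

def mutate(behavior, rng):
    """Flip one random bit of the behavior's representation."""
    return behavior ^ (1 << rng.randrange(BITS))

def evolve(pop_size=60, generations=150, seed=1):
    rng = random.Random(seed)
    pop = [rng.randrange(2 ** BITS) for _ in range(pop_size)]
    hist = []
    for _ in range(generations):
        pop.sort(key=error)
        hist.append(error(pop[0]))
        parents = pop[: pop_size // 2]               # selection by consequences
        children = [mutate(rng.choice(parents), rng) # reproduction with mutation
                    for _ in range(pop_size - 1)]
        pop = [pop[0]] + children                    # elitism keeps current best
    hist.append(min(error(b) for b in pop))
    return hist

hist = evolve()
print(hist[0], hist[-1])  # best error shrinks (or at worst stays flat)
```

Because the best individual is carried over each generation, the best error is non-increasing; the population "adapts" without any explicit model of the target.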

  6. Four-Component Relativistic Density-Functional Theory Calculations of Nuclear Spin-Rotation Constants: Relativistic Effects in p-Block Hydrides.

    PubMed

    Komorovsky, Stanislav; Repisky, Michal; Malkin, Elena; Demissie, Taye B; Ruud, Kenneth

    2015-08-11

    We present an implementation of the nuclear spin-rotation (SR) constants based on the relativistic four-component Dirac-Coulomb Hamiltonian. This formalism has been implemented in the framework of the Hartree-Fock and Kohn-Sham theory, allowing assessment of both pure and hybrid exchange-correlation functionals. In the density-functional theory (DFT) implementation of the response equations, a noncollinear generalized gradient approximation (GGA) has been used. The present approach enforces a restricted kinetic balance condition for the small-component basis at the integral level, leading to very efficient calculations of the property. We apply the methodology to study relativistic effects on the spin-rotation constants by performing calculations on XHn (n = 1-4) for all elements X in the p-block of the periodic table and comparing the effects of relativity on the nuclear SR tensors to that observed for the nuclear magnetic shielding tensors. Correlation effects as described by the density-functional theory are shown to be significant for the spin-rotation constants, whereas the differences between the use of GGA and hybrid density functionals are much smaller. Our calculated relativistic spin-rotation constants at the DFT level of theory are only in fair agreement with available experimental data. It is shown that the scaling of the relativistic effects for the spin-rotation constants (varying between Z(3.8) and Z(4.5)) is as strong as for the chemical shieldings but with a much smaller prefactor.

  7. Implementation and benchmark of a long-range corrected functional in the density functional based tight-binding method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lutsker, V.; Niehaus, T. A., E-mail: thomas.niehaus@physik.uni-regensburg.de; Aradi, B.

    2015-11-14

    Bridging the gap between first principles methods and empirical schemes, the density functional based tight-binding method (DFTB) has become a versatile tool in predictive atomistic simulations over the past years. One of the major restrictions of this method is the limitation to local or gradient corrected exchange-correlation functionals. This excludes the important class of hybrid or long-range corrected functionals, which are advantageous in thermochemistry, as well as in the computation of vibrational, photoelectron, and optical spectra. The present work provides a detailed account of the implementation of DFTB for a long-range corrected functional in generalized Kohn-Sham theory. We apply the method to a set of organic molecules and compare ionization potentials and electron affinities with the original DFTB method and higher level theory. The new scheme cures the significant overpolarization in electric fields found for local DFTB, which parallels the functional dependence in first principles density functional theory (DFT). At the same time, the computational savings with respect to full DFT calculations are not compromised as evidenced by numerical benchmark data.

  8. Splines and control theory

    NASA Technical Reports Server (NTRS)

    Zhang, Zhimin; Tomlinson, John; Martin, Clyde

    1994-01-01

    In this work, the relationship between splines and control theory is analyzed. We show that spline functions can be constructed naturally from control theory. By establishing a framework based on control theory, we provide a simple and systematic way to construct splines. We have constructed the traditional spline functions, including the polynomial splines and the classical exponential spline. We have also discovered some new spline functions, such as trigonometric splines and combinations of polynomial, exponential, and trigonometric splines. The method proposed in this paper is easy to implement. Some numerical experiments are performed to investigate the properties of different spline approximations.
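    For context on the object the record above constructs: the control-theoretic formulation (a minimum-energy double integrator x'' = u with interpolation constraints) recovers the classical natural cubic interpolating spline. The sketch below builds that classical spline directly, via the standard tridiagonal system for interior second derivatives M_i on uniformly spaced knots; it is an illustration of the end product, not the paper's control-theory framework.

```python
def natural_cubic_spline(xs, ys):
    """Natural cubic interpolating spline on uniformly spaced knots xs."""
    n = len(xs) - 1
    h = xs[1] - xs[0]                      # uniform knot spacing assumed
    # Interior second derivatives satisfy M_{i-1} + 4 M_i + M_{i+1} = d_i.
    d = [6.0 * (ys[i - 1] - 2.0 * ys[i] + ys[i + 1]) / h ** 2
         for i in range(1, n)]
    cp, dp = [], []                        # Thomas algorithm: forward sweep
    for i in range(n - 1):
        denom = 4.0 - (cp[i - 1] if i else 0.0)
        cp.append(1.0 / denom)
        dp.append((d[i] - (dp[i - 1] if i else 0.0)) / denom)
    M = [0.0] * (n + 1)                    # natural boundary: M_0 = M_n = 0
    for i in range(n - 2, -1, -1):         # back substitution
        M[i + 1] = dp[i] - cp[i] * M[i + 2]

    def spline(x):
        i = min(max(int((x - xs[0]) / h), 0), n - 1)
        t0, t1 = xs[i + 1] - x, x - xs[i]
        return ((M[i] * t0 ** 3 + M[i + 1] * t1 ** 3) / (6.0 * h)
                + (ys[i] / h - M[i] * h / 6.0) * t0
                + (ys[i + 1] / h - M[i + 1] * h / 6.0) * t1)
    return spline

S = natural_cubic_spline([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.0, 1.0])
print(S(1.0))   # ≈ 1.0: the spline interpolates the knots
print(S(1.5))   # smooth value between the middle knots
```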

  9. Explicit polarization (X-Pol) potential using ab initio molecular orbital theory and density functional theory.

    PubMed

    Song, Lingchun; Han, Jaebeom; Lin, Yen-lin; Xie, Wangshen; Gao, Jiali

    2009-10-29

    The explicit polarization (X-Pol) method has been examined using ab initio molecular orbital theory and density functional theory. The X-Pol potential was designed to provide a novel theoretical framework for developing next-generation force fields for biomolecular simulations. Importantly, the X-Pol potential is a general method, which can be employed with any level of electronic structure theory. The present study illustrates the implementation of the X-Pol method using ab initio Hartree-Fock theory and hybrid density functional theory. The computational results are illustrated by considering a set of bimolecular complexes of small organic molecules and ions with water. The computed interaction energies and hydrogen bond geometries are in good accord with CCSD(T) calculations and B3LYP/aug-cc-pVDZ optimizations.

  10. Full-band quantum simulation of electron devices with the pseudopotential method: Theory, implementation, and applications

    NASA Astrophysics Data System (ADS)

    Pala, M. G.; Esseni, D.

    2018-03-01

    This paper presents the theory, implementation, and application of a quantum transport modeling approach based on the nonequilibrium Green's function formalism and a full-band empirical pseudopotential Hamiltonian. We here propose to employ a hybrid real-space/plane-wave basis that results in a significant reduction of the computational complexity compared to a full plane-wave basis. To this purpose, we provide a theoretical formulation in the hybrid basis of the quantum confinement, the self-energies of the leads, and the coupling between the device and the leads. After discussing the theory and the implementation of the new simulation methodology, we report results for complete, self-consistent simulations of different electron devices, including a silicon Esaki diode, a thin-body silicon field effect transistor (FET), and a germanium tunnel FET. The simulated transistors have technologically relevant geometrical features with a semiconductor film thickness of about 4 nm and a channel length ranging from 10 to 17 nm. We believe that the newly proposed formalism may find applications also in transport models based on ab initio Hamiltonians, as those employed in density functional theory methods.

  11. Parallel implementation of Hartree-Fock and density functional theory analytical second derivatives

    NASA Astrophysics Data System (ADS)

    Baker, Jon; Wolinski, Krzysztof; Malagoli, Massimo; Pulay, Peter

    2004-01-01

    We present an efficient, parallel implementation for the calculation of Hartree-Fock and density functional theory analytical Hessian (force constant, nuclear second derivative) matrices. These are important for the determination of harmonic vibrational frequencies, and to classify stationary points on potential energy surfaces. Our program is designed for modest parallelism (4-16 CPUs) as exemplified by our standard eight-processor QuantumCube™. We can routinely handle systems with up to 100+ atoms and 1000+ basis functions using under 0.5 GB of RAM memory per CPU. Timings are presented for several systems, ranging in size from aspirin (C9H8O4) to nickel octaethylporphyrin (C36H44N4Ni).

  12. Density functional theory for molecular and periodic systems using density fitting and continuous fast multipole method: Analytical gradients.

    PubMed

    Łazarski, Roman; Burow, Asbjörn Manfred; Grajciar, Lukáš; Sierka, Marek

    2016-10-30

    A full implementation of analytical energy gradients for molecular and periodic systems is reported in the TURBOMOLE program package within the framework of Kohn-Sham density functional theory using Gaussian-type orbitals as basis functions. Its key component is a combination of density fitting (DF) approximation and continuous fast multipole method (CFMM) that allows for an efficient calculation of the Coulomb energy gradient. For the exchange-correlation part, the hierarchical numerical integration scheme (Burow and Sierka, Journal of Chemical Theory and Computation 2011, 7, 3097) is extended to energy gradients. Computational efficiency and asymptotic O(N) scaling behavior of the implementation is demonstrated for various molecular and periodic model systems, with the largest unit cell of hematite containing 640 atoms and 19,072 basis functions. The overall computational effort of the energy gradient is comparable to that of the Kohn-Sham matrix formation. © 2016 Wiley Periodicals, Inc.

  13. Numerical Evaluation of the "Dual-Kernel Counter-flow" Matric Convolution Integral that Arises in Discrete/Continuous (D/C) Control Theory

    NASA Technical Reports Server (NTRS)

    Nixon, Douglas D.

    2009-01-01

    Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for design and implementation of discretetime control systems that include a continuous-time command function generator so that actuator commands need not be constant between control decisions, but can be more generally defined and implemented as functions that vary with time across sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.
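    The dual-kernel counter-flow integral above is specific to the paper, but the kind of quadrature it calls for can be illustrated generically. The hedged sketch below (not the paper's algorithm) applies the trapezoid rule to the scalar state-transition convolution y(T) = ∫₀ᵀ exp(a·(T − τ))·u(τ) dτ, the form such an integral takes over one sample period for a constant-coefficient system, and checks it against the closed form for constant u.

```python
import math

def conv_trapezoid(a, u, T, n=1000):
    """Trapezoid-rule evaluation of ∫_0^T exp(a*(T - tau)) * u(tau) dtau."""
    h = T / n
    f = [math.exp(a * (T - k * h)) * u(k * h) for k in range(n + 1)]
    return h * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])

# Closed form for u(tau) = 1 is (exp(a*T) - 1) / a, so we can verify.
a, T = -2.0, 1.0
approx = conv_trapezoid(a, lambda t: 1.0, T)
exact = (math.exp(a * T) - 1.0) / a
print(approx, exact)   # agree to roughly 1e-7 at n = 1000
```

A time-varying command function u(τ), the point of the D/C framework, simply drops into the same quadrature via the `u` callable.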

  14. DC servo motor positioning with anti-windup implementation using C2000 ARM-Texas Instrument

    NASA Astrophysics Data System (ADS)

    Linggarjati, Jimmy

    2017-12-01

    DC motor control is one of the most important topics in control systems. In this research, a positioning control system for a DC motor is investigated. First, the DC motor is parameterized to obtain its transfer function model, which is simulated in Matlab and then implemented on a C2000 ARM microcontroller from TI (Texas Instruments). Through this investigation, students of control system theory can see the relevance of classical control theories to a real-world implementation of position control for a DC motor, especially the importance of the anti-windup technique in practical implementations.
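    The anti-windup effect mentioned above can be shown in a few lines of simulation. This is a hypothetical sketch with made-up numbers, not the record's Matlab/C2000 setup: a first-order plant dx/dt = (v − x)/τ is driven by a PI controller whose output saturates at ±1, and back-calculation anti-windup optionally bleeds the integrator by the saturation excess (v − u).

```python
def run(anti_windup, setpoint=0.8, kp=2.0, ki=10.0, kaw=5.0,
        tau=0.5, dt=0.001, t_end=8.0):
    """Euler simulation; returns (final position, overshoot past setpoint)."""
    x, integ, peak = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = setpoint - x
        u = kp * e + ki * integ
        v = max(-1.0, min(1.0, u))          # actuator saturation
        excess = (v - u) if anti_windup else 0.0
        integ += (e + kaw * excess) * dt    # back-calculation bleeds integrator
        x += (v - x) * dt / tau             # first-order plant
        peak = max(peak, x)
    return x, peak - setpoint

final_aw, over_aw = run(anti_windup=True)
final_no, over_no = run(anti_windup=False)
print(over_no, over_aw)   # windup produces the larger overshoot
```

While the actuator is saturated the plain integrator keeps accumulating error it cannot act on; the back-calculation term drains exactly that surplus, which is why the overshoot shrinks.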

  15. Charge transport in nanostructured materials: Implementation and verification of constrained density functional theory

    DOE PAGES

    Goldey, Matthew B.; Brawand, Nicholas P.; Voros, Marton; ...

    2017-04-20

    The in silico design of novel complex materials for energy conversion requires accurate, ab initio simulation of charge transport. In this work, we present an implementation of constrained density functional theory (CDFT) for the calculation of parameters for charge transport in the hopping regime. We verify our implementation against literature results for molecular systems, and we discuss the dependence of results on numerical parameters and the choice of localization potentials. In addition, we compare CDFT results with those of other commonly used methods for simulating charge transport between nanoscale building blocks. As a result, we show that some of these methods give unphysical results for thermally disordered configurations, while CDFT proves to be a viable and robust approach.

  16. Communication: Extended multi-state complete active space second-order perturbation theory: Energy and nuclear gradients

    NASA Astrophysics Data System (ADS)

    Shiozaki, Toru; Győrffy, Werner; Celani, Paolo; Werner, Hans-Joachim

    2011-08-01

    The extended multireference quasi-degenerate perturbation theory, proposed by Granovsky [J. Chem. Phys. 134, 214113 (2011)], is combined with internally contracted multi-state complete active space second-order perturbation theory (XMS-CASPT2). The first-order wavefunction is expanded in terms of the union of internally contracted basis functions generated from all the reference functions, which guarantees invariance of the theory with respect to unitary rotations of the reference functions. The method yields improved potentials in the vicinity of avoided crossings and conical intersections. The theory for computing nuclear energy gradients for MS-CASPT2 and XMS-CASPT2 is also presented and the first implementation of these gradient methods is reported. A number of illustrative applications of the new methods are presented.

  17. Massively parallel and linear-scaling algorithm for second-order Møller-Plesset perturbation theory applied to the study of supramolecular wires

    NASA Astrophysics Data System (ADS)

    Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro; Eriksen, Janus Juul; Ettenhuber, Patrick; Kristensen, Kasper; Larkin, Jeff; Liakh, Dmitry; Pawłowski, Filip; Vose, Aaron; Wang, Yang Min; Jørgensen, Poul

    2017-03-01

    We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide-Expand-Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide-Expand-Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller-Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.

  18. Multicomponent Time-Dependent Density Functional Theory: Proton and Electron Excitation Energies.

    PubMed

    Yang, Yang; Culpitt, Tanner; Hammes-Schiffer, Sharon

    2018-04-05

    The quantum mechanical treatment of both electrons and protons in the calculation of excited state properties is critical for describing nonadiabatic processes such as photoinduced proton-coupled electron transfer. Multicomponent density functional theory enables the consistent quantum mechanical treatment of more than one type of particle and has been implemented previously for studying ground state molecular properties within the nuclear-electronic orbital (NEO) framework, where all electrons and specified protons are treated quantum mechanically. To enable the study of excited state molecular properties, herein the linear response multicomponent time-dependent density functional theory (TDDFT) is derived and implemented within the NEO framework. Initial applications to FHF - and HCN illustrate that NEO-TDDFT provides accurate proton and electron excitation energies within a single calculation. As its computational cost is similar to that of conventional electronic TDDFT, the NEO-TDDFT approach is promising for diverse applications, particularly nonadiabatic proton transfer reactions, which may exhibit mixed electron-proton vibronic excitations.

  19. Real-Space Density Functional Theory on Graphical Processing Units: Computational Approach and Comparison to Gaussian Basis Set Methods.

    PubMed

    Andrade, Xavier; Aspuru-Guzik, Alán

    2013-10-08

    We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a sustained performance of up to 90 GFlops for a single GPU, representing a significant speed-up when compared to the CPU version of the code. Moreover, for some systems, our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs.

  20. Source-Free Exchange-Correlation Magnetic Fields in Density Functional Theory.

    PubMed

    Sharma, S; Gross, E K U; Sanna, A; Dewhurst, J K

    2018-03-13

    Spin-dependent exchange-correlation energy functionals in use today depend on the charge density and the magnetization density: E_xc[ρ, m]. However, it is also correct to define the functional in terms of the curl of m for physical external fields: E_xc[ρ, ∇ × m]. The exchange-correlation magnetic field, B_xc, then becomes source-free. We study this variation of the theory by uniquely removing the source term from local and generalized gradient approximations to the functional. By doing so, the total Kohn-Sham moments are improved for a wide range of materials for both functionals. Significantly, the moments for the pnictides are now in good agreement with experiment. This source-free method is simple to implement in all existing density functional theory codes.

  1. Dispersion correction derived from first principles for density functional theory and Hartree-Fock theory.

    PubMed

    Guidez, Emilie B; Gordon, Mark S

    2015-03-12

    The modeling of dispersion interactions in density functional theory (DFT) is commonly performed using an energy correction that involves empirically fitted parameters for all atom pairs of the system investigated. In this study, the first-principles-derived dispersion energy from the effective fragment potential (EFP) method is implemented for the density functional theory (DFT-D(EFP)) and Hartree-Fock (HF-D(EFP)) energies. Overall, DFT-D(EFP) performs similarly to the semiempirical DFT-D corrections for the test cases investigated in this work. HF-D(EFP) tends to underestimate binding energies and overestimate intermolecular equilibrium distances, relative to coupled cluster theory, most likely due to incomplete accounting for electron correlation. Overall, this first-principles dispersion correction yields results that are in good agreement with coupled-cluster calculations at a low computational cost.

  2. Current Density Functional Theory Using Meta-Generalized Gradient Exchange-Correlation Functionals.

    PubMed

    Furness, James W; Verbeke, Joachim; Tellgren, Erik I; Stopkowicz, Stella; Ekström, Ulf; Helgaker, Trygve; Teale, Andrew M

    2015-09-08

    We present the self-consistent implementation of current-dependent (hybrid) meta-generalized gradient approximation (mGGA) density functionals using London atomic orbitals. A previously proposed generalized kinetic energy density is utilized to implement mGGAs in the framework of Kohn-Sham current density functional theory (KS-CDFT). A unique feature of the nonperturbative implementation of these functionals is the ability to seamlessly explore a wide range of magnetic fields up to 1 au (∼235 kT) in strength. CDFT functionals based on the TPSS and B98 forms are investigated, and their performance is assessed by comparison with accurate coupled-cluster singles, doubles, and perturbative triples (CCSD(T)) data. In the weak field regime, magnetic properties such as magnetizabilities and nuclear magnetic resonance shielding constants show modest but systematic improvements over generalized gradient approximations (GGA). However, in the strong field regime, the mGGA-based forms lead to a significantly improved description of the recently proposed perpendicular paramagnetic bonding mechanism, comparing well with CCSD(T) data. In contrast to functionals based on the vorticity, these forms are found to be numerically stable, and their accuracy at high field suggests that the extension of mGGAs to CDFT via the generalized kinetic energy density should provide a useful starting point for further development of CDFT approximations.

  3. Multiconfigurational short-range density-functional theory for open-shell systems

    NASA Astrophysics Data System (ADS)

    Hedegård, Erik Donovan; Toulouse, Julien; Jensen, Hans Jørgen Aagaard

    2018-06-01

    Many chemical systems cannot be described by quantum chemistry methods based on a single-reference wave function. Accurate predictions of energetic and spectroscopic properties require a delicate balance between describing the most important configurations (static correlation) and obtaining dynamical correlation efficiently. The former is most naturally done through a multiconfigurational (MC) wave function, whereas the latter can be done by, e.g., perturbation theory. We have employed a different strategy, namely, a hybrid between multiconfigurational wave functions and density-functional theory (DFT) based on range separation. The method is denoted by MC short-range DFT (MC-srDFT) and is more efficient than perturbative approaches as it capitalizes on the efficient treatment of the (short-range) dynamical correlation by DFT approximations. In turn, the method also improves DFT with standard approximations through the ability of multiconfigurational wave functions to recover large parts of the static correlation. Until now, our implementation was restricted to closed-shell systems, and to lift this restriction, we present here the generalization of MC-srDFT to open-shell cases. The additional terms required to treat open-shell systems are derived and implemented in the DALTON program. This new method for open-shell systems is illustrated on dioxygen and [Fe(H2O)6]3+.

  4. Implementation of a Natural Language Processor Using Functional Grammar.

    DTIC Science & Technology

    1985-12-01

    in a completely different manner. [Ref. 5:pp. 81-88] C. CASE GRAMMAR When Chomsky published his Aspects of the Theory of Syntax, many linguists...approach was developed at Stanford University. [Ref. 7:pp. 187-247] E. FUNCTIONAL GRAMMAR Shortly after Transformational Grammar and Case Grammar ... Grammar is a radical approach to linguistic theory when looked at from the Chomsky point of view. However, it compares favorably with the traditional

  5. Second-Order Møller-Plesset Perturbation Theory for Molecular Dirac-Hartree-Fock Wave Functions

    NASA Technical Reports Server (NTRS)

    Dyall, Kenneth G.; Arnold, James O. (Technical Monitor)

    1994-01-01

    Møller-Plesset perturbation theory is developed to second order for a selection of Kramers restricted Dirac-Hartree-Fock closed and open-shell reference wave functions. The open-shell wave functions considered are limited to those with no more than two electrons in open shells, but include the case of a two-configuration SCF reference. Denominator shifts are included in the style of Davidson's OPT2 method. An implementation which uses unordered integrals with labels is presented, and results are given for a few test cases.

  6. Dynamic field theory and executive functions: lending explanation to current theories of development.

    PubMed

    Morton, J Bruce

    2014-06-01

    Buss and Spencer's monograph is an impressive achievement that is sure to have a lasting impact on the field of child development. The dynamic field theory (DFT) model that forms the heart of this contribution is ambitious in scope, detailed in its implementation, and rigorously tested against data, old and new. As such, the ideas contained in this fine document represent a qualitative advance in our understanding of young children's behavior, and lay a foundation for future research into the developmental origins of executive functioning. © 2014 The Society for Research in Child Development, Inc.

  7. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1992-01-01

    The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  8. Local control theory using trajectory surface hopping and linear-response time-dependent density functional theory.

    PubMed

    Curchod, Basile F E; Penfold, Thomas J; Rothlisberger, Ursula; Tavernelli, Ivano

    2013-01-01

    The implementation of local control theory using nonadiabatic molecular dynamics within the framework of linear-response time-dependent density functional theory is discussed. The method is applied to study the photoexcitation of lithium fluoride, for which we demonstrate that this approach can efficiently generate a pulse, on-the-fly, able to control the population transfer between two selected electronic states. Analysis of the computed control pulse yields insights into the photophysics of the process identifying the relevant frequencies associated to the curvature of the initial and final state potential energy curves and their energy differences. The limitations inherent to the use of the trajectory surface hopping approach are also discussed.

  9. Towards the blackbox computation of magnetic exchange coupling parameters in polynuclear transition-metal complexes: theory, implementation, and application.

    PubMed

    Phillips, Jordan J; Peralta, Juan E

    2013-05-07

    We present a method for calculating magnetic coupling parameters from a single spin-configuration via analytic derivatives of the electronic energy with respect to the local spin direction. This method does not introduce new approximations beyond those found in the Heisenberg-Dirac Hamiltonian and a standard Kohn-Sham Density Functional Theory calculation, and in the limit of an ideal Heisenberg system it reproduces the coupling as determined from spin-projected energy-differences. Our method employs a generalized perturbative approach to constrained density functional theory, where exact expressions for the energy to second order in the constraints are obtained by analytic derivatives from coupled-perturbed theory. When the relative angle between magnetization vectors of metal atoms enters as a constraint, this allows us to calculate all the magnetic exchange couplings of a system from derivatives with respect to local spin directions from the high-spin configuration. Because of the favorable computational scaling of our method with respect to the number of spin-centers, as compared to the broken-symmetry energy-differences approach, this opens the possibility for the blackbox exploration of magnetic properties in large polynuclear transition-metal complexes. In this work we outline the motivation, theory, and implementation of this method, and present results for several model systems and transition-metal complexes with a variety of density functional approximations and Hartree-Fock.
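
    The abstract compares the analytic-derivative method against the spin-projected (broken-symmetry) energy-differences approach. As a hedged illustration of that baseline only (not the authors' implementation), a minimal sketch of extracting a single Heisenberg coupling from two spin-configuration energies, assuming the convention H = -2J S1·S2 in the Ising (strongly localized) limit; the energies and spins below are made-up numbers:

```python
# Hedged sketch of the broken-symmetry energy-differences estimate of a
# Heisenberg coupling J (the comparison baseline named in the abstract,
# not the analytic-derivative method itself). Convention assumed:
#   H = -2 J S1.S2 in the Ising limit, so
#   E_HS = -2 J S1 S2,  E_BS = +2 J S1 S2
#   =>  J = (E_BS - E_HS) / (4 S1 S2).
# Sign conventions vary in the literature; all numbers are illustrative.

def heisenberg_j(e_hs, e_bs, s1, s2):
    """Exchange coupling J (same units as the input energies) from the
    high-spin and broken-symmetry energies of a two-center system."""
    return (e_bs - e_hs) / (4.0 * s1 * s2)

# Example: two S = 1/2 centers, antiferromagnetic case (E_BS below E_HS).
j = heisenberg_j(e_hs=-0.4990, e_bs=-0.5000, s1=0.5, s2=0.5)
print(f"J = {j * 219474.63:.1f} cm^-1")  # hartree -> cm^-1
```

    For a polynuclear complex this energy-differences route needs one broken-symmetry calculation per coupling, which is the unfavorable scaling with the number of spin centers that the single-configuration derivative method avoids.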

  10. The Notional-Functional Approach: Teaching the Real Language in Its Natural Context.

    ERIC Educational Resources Information Center

    Laine, Elaine

    This study of the notional-functional approach to second language teaching reviews the history and theoretical background of the method, current issues, and implementation of a notional-functional syllabus. Chapter 1 discusses the history and theory of the approach and the organization and advantages of the notional-functional syllabus. Chapter 2…

  11. Effects of Self-Monitoring, Likability and Argument Strength on Persuasion.

    ERIC Educational Resources Information Center

    Harnish, Richard J.

    Recently, there has been a renewed interest in the functional theories of attitudes. These theories assume that there are certain individualistic needs that are being met by one's attitudes, and that these attitudes allow the individual to implement certain plans to attain certain goals. This study examined whether source characteristics (i.e.,…

  12. Second-Order Perturbation Theory for Generalized Active Space Self-Consistent-Field Wave Functions.

    PubMed

    Ma, Dongxia; Li Manni, Giovanni; Olsen, Jeppe; Gagliardi, Laura

    2016-07-12

    A multireference second-order perturbation theory approach based on the generalized active space self-consistent-field (GASSCF) wave function is presented. Compared with the complete active space (CAS) and restricted active space (RAS) wave functions, GAS wave functions are more flexible and can employ larger active spaces and/or different truncations of the configuration interaction expansion. With GASSCF, one can explore chemical systems that are not affordable with either CASSCF or RASSCF. Perturbation theory to second order on top of GAS wave functions (GASPT2) has been implemented to recover the remaining electron correlation. The method has been benchmarked by computing the chromium dimer ground-state potential energy curve. These calculations show that GASPT2 gives results similar to CASPT2 even with a configuration interaction expansion much smaller than the corresponding CAS expansion.
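
    The size advantage of GAS over CAS expansions described above can be made concrete by counting determinants. The following is an illustrative sketch (not the GASSCF code): a toy GAS-style partition that bounds the summed electron count in each orbital subspace, counted by brute force and compared with the full CAS count:

```python
# Hedged illustration: counting Slater determinants shows why a GAS
# partition can be much smaller than the corresponding CAS. A CAS with
# m spatial orbitals, na alpha and nb beta electrons has
# C(m, na) * C(m, nb) determinants; a GAS space keeps only those whose
# occupations respect per-subspace electron-count bounds.
from itertools import combinations
from math import comb

def cas_size(m, na, nb):
    """Determinant count of a CAS with m spatial orbitals."""
    return comb(m, na) * comb(m, nb)

def gas_size(spaces, na, nb):
    """Brute-force determinant count for a GAS-style partition.
    spaces: list of (n_orbitals, min_electrons, max_electrons) bounds
    on the summed alpha+beta occupation of each orbital subspace."""
    m = sum(n for n, _, _ in spaces)
    subs, start = [], 0
    for n, lo, hi in spaces:
        subs.append((frozenset(range(start, start + n)), lo, hi))
        start += n
    count = 0
    for occ_a in combinations(range(m), na):
        for occ_b in combinations(range(m), nb):
            if all(lo <= sum(o in s for o in occ_a)
                         + sum(o in s for o in occ_b) <= hi
                   for s, lo, hi in subs):
                count += 1
    return count

# Toy example: 8 orbitals, 4 alpha + 4 beta electrons, split into two
# 4-orbital subspaces each restricted to hold 3-5 electrons in total.
print(cas_size(8, 4, 4), gas_size([(4, 3, 5), (4, 3, 5)], 4, 4))
```

    Even this mild occupation restriction removes a sizable fraction of the determinants; realistic GAS truncations of large active spaces remove far more, which is what makes systems like Cr2 tractable.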

  13. Two-component hybrid time-dependent density functional theory within the Tamm-Dancoff approximation.

    PubMed

    Kühn, Michael; Weigend, Florian

    2015-01-21

    We report the implementation of a two-component variant of time-dependent density functional theory (TDDFT) for hybrid functionals that accounts for spin-orbit effects within the Tamm-Dancoff approximation (TDA) for closed-shell systems. The influence of the admixture of Hartree-Fock exchange on excitation energies is investigated for several atoms and diatomic molecules by comparison to numbers for pure density functionals obtained previously [M. Kühn and F. Weigend, J. Chem. Theory Comput. 9, 5341 (2013)]. It is further related to changes upon switching to the local density approximation or using the full TDDFT formalism instead of TDA. Efficiency is demonstrated for a comparably large system, Ir(ppy)3 (61 atoms, 1501 basis functions, lowest 10 excited states), which is a prototype molecule for organic light-emitting diodes, due to its "spin-forbidden" triplet-singlet transition.

  14. Predicting materials for sustainable energy sources: The key role of density functional theory

    NASA Astrophysics Data System (ADS)

    Galli, Giulia

    Climate change and the related need for sustainable energy sources replacing fossil fuels are pressing societal problems. The development of advanced materials is widely recognized as one of the key elements for new technologies that are required to achieve a sustainable environment and provide clean and adequate energy for our planet. We discuss the key role played by Density Functional Theory, and its implementations in high performance computer codes, in understanding, predicting and designing materials for energy applications.

  15. Projecting Grammatical Features in Nominals: Cognitive Processing Theory & Computational Implementation

    DTIC Science & Technology

    2010-03-01

    functionality and plausibility distinguishes this research from most research in computational linguistics and computational psycholinguistics . The... Psycholinguistic Theory There is extensive psycholinguistic evidence that human language processing is essentially incremental and interactive...challenges of psycholinguistic research is to explain how humans can process language effortlessly and accurately given the complexity and ambiguity that is

  16. Fast and accurate quantum molecular dynamics of dense plasmas across temperature regimes

    DOE PAGES

    Sjostrom, Travis; Daligault, Jerome

    2014-10-10

    Here, we develop and implement a new quantum molecular dynamics approximation that allows fast and accurate simulations of dense plasmas from cold to hot conditions. The method is based on a carefully designed orbital-free implementation of density functional theory. The results for hydrogen and aluminum are in very good agreement with Kohn-Sham (orbital-based) density functional theory and path integral Monte Carlo calculations for microscopic features such as the electron density as well as the equation of state. The present approach does not scale with temperature and hence extends to higher temperatures than is accessible in the Kohn-Sham method and lower temperatures than is accessible by path integral Monte Carlo calculations, while being significantly less computationally expensive than either of those two methods.
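
    The defining feature of an orbital-free method is that the kinetic energy is an explicit functional of the density, evaluated directly on a grid with no orbitals to diagonalize. As a hedged illustration of that idea only (the paper's functional is considerably more sophisticated), the simplest such functional, Thomas-Fermi, can be sketched as:

```python
# Hedged illustration: the Thomas-Fermi kinetic-energy functional,
#   T_TF[n] = C_F * integral of n(r)^(5/3) d^3r,
# with C_F = (3/10) * (3*pi^2)^(2/3) in atomic units. This is the
# simplest orbital-free functional, shown only to illustrate the idea;
# it is not the functional developed in the paper.
import math

C_F = 0.3 * (3.0 * math.pi ** 2) ** (2.0 / 3.0)

def t_thomas_fermi(density, volume_element):
    """T_TF for a density sampled on a uniform real-space grid."""
    return C_F * sum(n ** (5.0 / 3.0) for n in density) * volume_element

# Uniform-electron-gas check: for constant density n0 in volume V,
# the integral reduces to C_F * n0^(5/3) * V.
n0, vol, npts = 0.02, 1000.0, 8000
t = t_thomas_fermi([n0] * npts, vol / npts)
print(t)
```

    Because no eigenvalue problem appears, the cost is independent of temperature, which is the property the abstract exploits to bridge the cold and hot regimes.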

  17. Serenity: A subsystem quantum chemistry program.

    PubMed

    Unsleber, Jan P; Dresselhaus, Thomas; Klahr, Kevin; Schnieders, David; Böckers, Michael; Barton, Dennis; Neugebauer, Johannes

    2018-05-15

    We present the new quantum chemistry program Serenity. It implements a wide variety of functionalities with a focus on subsystem methodology. The modular code structure in combination with publicly available external tools and particular design concepts ensures extensibility and robustness with a focus on the needs of a subsystem program. Several important features of the program are exemplified with sample calculations with subsystem density-functional theory, potential reconstruction techniques, a projection-based embedding approach and combinations thereof with geometry optimization, semi-numerical frequency calculations and linear-response time-dependent density-functional theory. © 2018 Wiley Periodicals, Inc.

  18. We have the programme, what next? Planning the implementation of an injury prevention programme

    PubMed Central

    Donaldson, Alex; Lloyd, David G; Gabbe, Belinda J; Cook, Jill

    2017-01-01

    Background and aim: The impact of any injury prevention programme is a function of the programme and its implementation. However, real-world implementation of injury prevention programmes is challenging. Lower limb injuries (LLIs) are common in community Australian football (community-AF) and it is likely that many could be prevented by implementing exercise-based warm-up programmes for players. This paper describes a systematic, evidence-informed approach used to develop the implementation plan for a LLI prevention programme in community-AF in Victoria, Australia. Methods: An ecological approach, using Step 5 of the Intervention Mapping health promotion programme planning protocol, was taken. Results: An implementation advisory group was established to ensure the implementation plan and associated strategies were relevant to the local context. Coaches were identified as the primary programme adopters and implementers within an ecological system including players, other coaches, first-aid providers, and club and league administrators. Social Cognitive Theory was used to identify likely determinants of programme reach, adoption and implementation among coaches (eg, knowledge, beliefs, skills and environment). Diffusion of Innovations theory, the Implementation Drivers framework and available research evidence were used to identify potential implementation strategies, including the use of multiple communication channels, programme resources, coach education and mentoring. Conclusions: A strategic evidence-informed approach to implementing interventions will help maximise their population impact. The approach to implementation planning described in this study relied on an effective researcher-practitioner partnership and active engagement of stakeholders. The identified implementation strategies were informed by theory, evidence and an in-depth understanding of the implementation context. PMID:26787739

  19. Dorsolateral prefrontal lesions do not impair tests of scene learning and decision-making that require frontal–temporal interaction

    PubMed Central

    Baxter, Mark G; Gaffan, David; Kyriazis, Diana A; Mitchell, Anna S

    2008-01-01

    Theories of dorsolateral prefrontal cortex (DLPFC) involvement in cognitive function variously emphasize its involvement in rule implementation, cognitive control, or working and/or spatial memory. These theories predict broad effects of DLPFC lesions on tests of visual learning and memory. We evaluated the effects of DLPFC lesions (including both banks of the principal sulcus) in rhesus monkeys on tests of scene learning and strategy implementation that are severely impaired following crossed unilateral lesions of frontal cortex and inferotemporal cortex. Dorsolateral lesions had no effect on learning of new scene problems postoperatively, or on the implementation of preoperatively acquired strategies. They were also without effect on the ability to adjust choice behaviour in response to a change in reinforcer value, a capacity that requires interaction between the amygdala and frontal lobe. These intact abilities following DLPFC damage support specialization of function within the prefrontal cortex, and suggest that many aspects of memory and strategic and goal-directed behaviour can survive ablation of this structure. PMID:18702721

  20. τ hadronic spectral function moments in a nonpower QCD perturbation theory

    NASA Astrophysics Data System (ADS)

    Abbas, Gauhar; Ananthanarayan, B.; Caprini, I.; Fischer, J.

    2016-04-01

    The moments of the hadronic spectral functions are of interest for the extraction of the strong coupling and other QCD parameters from the hadronic decays of the τ lepton. We consider the perturbative behavior of these moments in the framework of a QCD nonpower perturbation theory, defined by the technique of series acceleration by conformal mappings, which simultaneously implements renormalization-group summation and has a tame large-order behavior. Two recently proposed models of the Adler function are employed to generate the higher order coefficients of the perturbation series and to predict the exact values of the moments, required for testing the properties of the perturbative expansions. We show that the contour-improved nonpower perturbation theories and the renormalization-group-summed nonpower perturbation theories have very good convergence properties for a large class of moments of the so-called "reference model", including moments that are poorly described by the standard expansions.

  1. Performance of the density matrix functional theory in the quantum theory of atoms in molecules.

    PubMed

    García-Revilla, Marco; Francisco, E; Costales, A; Martín Pendás, A

    2012-02-02

    The generalization to arbitrary molecular geometries of the energetic partitioning provided by the atomic virial theorem of the quantum theory of atoms in molecules (QTAIM) leads to an exact and chemically intuitive energy partitioning scheme, the interacting quantum atoms (IQA) approach, that depends on the availability of second-order reduced density matrices (2-RDMs). This work explores the performance of this approach in particular and of the QTAIM in general with approximate 2-RDMs obtained from the density matrix functional theory (DMFT), which rests on the natural expansion (natural orbitals and their corresponding occupation numbers) of the first-order reduced density matrix (1-RDM). A number of these functionals have been implemented in the promolden code and used to perform QTAIM and IQA analyses on several representative molecules and model chemical reactions. Total energies, covalent intra- and interbasin exchange-correlation interactions, as well as localization and delocalization indices have been determined with these functionals from 1-RDMs obtained at different levels of theory. Results are compared to the values computed from the exact 2-RDMs, whenever possible.

  2. Metaphorical motion in mathematical reasoning: further evidence for pre-motor implementation of structure mapping in abstract domains.

    PubMed

    Fields, Chris

    2013-08-01

    The theory of computation and category theory both employ arrow-based notations that suggest that the basic metaphor "state changes are like motions" plays a fundamental role in all mathematical reasoning involving formal manipulations. If this is correct, structure-mapping inferences implemented by the pre-motor action planning system can be expected to be involved in solving any mathematics problems not solvable by table lookups and number line manipulations alone. Available functional imaging studies of multi-digit arithmetic, algebra, geometry and calculus problem solving are consistent with this expectation.

  3. Nonadiabatic Dynamics for Electrons at Second-Order: Real-Time TDDFT and OSCF2.

    PubMed

    Nguyen, Triet S; Parkhill, John

    2015-07-14

    We develop a new model to simulate nonradiative relaxation and dephasing by combining real-time Hartree-Fock and density functional theory (DFT) with our recent open-systems theory of electronic dynamics. The approach has some key advantages: it has been systematically derived and properly relaxes noninteracting electrons to a Fermi-Dirac distribution. This paper combines the new dissipation theory with an atomistic, all-electron quantum chemistry code and an atom-centered model of the thermal environment. The environment is represented nonempirically and is dependent on molecular structure in a nonlocal way. A production-quality, O(N^3) closed-shell implementation of our theory applicable to realistic molecular systems is presented, including timing information. This scaling implies that the added cost of our nonadiabatic relaxation model, time-dependent open self-consistent field at second order (OSCF2), is computationally inexpensive relative to adiabatic propagation of real-time time-dependent Hartree-Fock (TDHF) or time-dependent density functional theory (TDDFT). Details of the implementation and numerical algorithm, including factorization and efficiency, are discussed. We demonstrate that OSCF2 approaches the stationary self-consistent field (SCF) ground state when the gap is large relative to k_BT. The code is used to calculate linear-response spectra including the effects of bath dynamics. Finally, we show how our theory of finite-temperature relaxation can be used to correct ground-state DFT calculations.
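
    One ingredient the abstract highlights is relaxation of noninteracting electrons toward a Fermi-Dirac distribution. As a hedged, self-contained sketch of that target distribution alone (not of the OSCF2 dynamics), the occupations for a set of levels can be computed by adjusting the chemical potential until the electron count is conserved; the level energies below are illustrative:

```python
# Hedged sketch: Fermi-Dirac occupations for a set of noninteracting
# levels, with the chemical potential mu found by bisection so that the
# occupations sum to the electron count. This shows only the stationary
# distribution OSCF2 relaxes toward, not the relaxation dynamics itself.
import math

def fermi_dirac(eps, mu, kT):
    x = (eps - mu) / kT
    if x > 500.0:
        return 0.0          # avoid overflow far above mu
    if x < -500.0:
        return 1.0          # fully occupied far below mu
    return 1.0 / (math.exp(x) + 1.0)

def occupations(levels, n_elec, kT, tol=1e-12):
    """Occupations (0 <= f <= 1 per level) with mu fixed by bisection;
    total occupation is monotone in mu, so bisection converges."""
    lo, hi = min(levels) - 50.0 * kT, max(levels) + 50.0 * kT
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if sum(fermi_dirac(e, mu, kT) for e in levels) < n_elec:
            lo = mu
        else:
            hi = mu
    mu = 0.5 * (lo + hi)
    return [fermi_dirac(e, mu, kT) for e in levels]

levels = [-0.5, -0.3, -0.1, 0.1, 0.3]   # illustrative energies (hartree)
f = occupations(levels, n_elec=2.0, kT=0.02)
print([round(x, 4) for x in f])
```

    At a temperature small relative to the gap the occupations collapse to integers, consistent with the abstract's observation that OSCF2 approaches the stationary SCF ground state in that limit.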

  4. In Silico Modeling of Indigo and Tyrian Purple Single-Electron Nano-Transistors Using Density Functional Theory Approach

    NASA Astrophysics Data System (ADS)

    Shityakov, Sergey; Roewer, Norbert; Förster, Carola; Broscheit, Jens-Albert

    2017-07-01

    The purpose of this study was to develop and implement an in silico model of indigoid-based single-electron transistor (SET) nanodevices, which consist of indigoid molecules from natural dye weakly coupled to gold electrodes that function in a Coulomb blockade regime. The electronic properties of the indigoid molecules were investigated using the optimized density-functional theory (DFT) with a continuum model. Higher electron transport characteristics were determined for Tyrian purple, consistent with experimentally derived data. Overall, these results can be used to correctly predict and emphasize the electron transport functions of organic SETs, demonstrating their potential for sustainable nanoelectronics comprising the biodegradable and biocompatible materials.

  5. Generalized conformal structure, dilaton gravity and SYK

    NASA Astrophysics Data System (ADS)

    Taylor, Marika

    2018-01-01

    A theory admits generalized conformal structure if the only scale in the quantum theory is set by a dimensionful coupling. SYK is an example of a theory with generalized conformal structure, and in this paper we investigate the consequences of this structure for correlation functions and for the holographic realization of SYK. The Ward identities associated with the generalized conformal structure of SYK are implemented holographically in gravity/multiple scalar theories, which always have a parent AdS3 origin. For questions involving only the graviton/running scalar sector, one can always describe the bulk running in terms of a single scalar, but multiple running scalars are in general needed once one includes the bulk fields corresponding to all SYK operators. We then explore chaos in holographic theories with generalized conformal structure. The four point function explored by Maldacena, Shenker and Stanford exhibits exactly the same chaotic behaviour in any such theory as in holographic realizations of conformal theories, i.e., the dimensionful coupling scale does not affect the chaotic exponential growth.

  6. Criteria for selecting implementation science theories and frameworks: results from an international survey.

    PubMed

    Birken, Sarah A; Powell, Byron J; Shea, Christopher M; Haines, Emily R; Alexis Kirk, M; Leeman, Jennifer; Rohweder, Catherine; Damschroder, Laura; Presseau, Justin

    2017-10-30

    Theories provide a synthesizing architecture for implementation science. The underuse, superficial use, and misuse of theories pose a substantial scientific challenge for implementation science and may relate to challenges in selecting from the many theories in the field. Implementation scientists may benefit from guidance for selecting a theory for a specific study or project. Understanding how implementation scientists select theories will help inform efforts to develop such guidance. Our objective was to identify which theories implementation scientists use, how they use theories, and the criteria used to select theories. We identified initial lists of uses and criteria for selecting implementation theories based on seminal articles and an iterative consensus process. We incorporated these lists into a self-administered survey for completion by self-identified implementation scientists. We recruited potential respondents at the 8th Annual Conference on the Science of Dissemination and Implementation in Health and via several international email lists. We used frequencies and percentages to report results. Two hundred twenty-three implementation scientists from 12 countries responded to the survey. They reported using more than 100 different theories spanning several disciplines. Respondents reported using theories primarily to identify implementation determinants, inform data collection, enhance conceptual clarity, and guide implementation planning. Of the 19 criteria presented in the survey, the criteria used by the most respondents to select theory included analytic level (58%), logical consistency/plausibility (56%), description of a change process (54%), and empirical support (53%). The criteria used by the fewest respondents included fecundity (10%), uniqueness (12%), and falsifiability (15%). Implementation scientists use a large number of criteria to select theories, but there is little consensus on which are most important. Our results suggest that the selection of implementation theories is often haphazard or driven by convenience or prior exposure. Variation in approaches to selecting theory warns against prescriptive guidance for theory selection. Instead, implementation scientists may benefit from considering the criteria that we propose in this paper and using them to justify their theory selection. Future research should seek to refine the criteria for theory selection to promote more consistent and appropriate use of theory in implementation science.

  7. Optimising implementation of reforms to better prevent and respond to child sexual abuse in institutions: Insights from public health, regulatory theory, and Australia's Royal Commission.

    PubMed

    Mathews, Ben

    2017-12-01

    The Australian Royal Commission into Institutional Responses to Child Sexual Abuse has identified multiple systemic failures to protect children in government and non-government organizations providing educational, religious, welfare, sporting, cultural, arts and recreational activities. Its recommendations for reform will aim to ensure organizations adopt more effective and ethical measures to prevent, identify and respond to child sexual abuse. However, apart from the question of what measures institutions should adopt, an under-explored question is how to implement and regulate those measures. Major challenges confronting reform include the diversity of organizations providing services to children; organizational resistance; and the need for effective oversight. Failure to adopt theoretically sound strategies to overcome implementation barriers will jeopardize reform and compromise reduction of institutional child sexual abuse. This article first explains the nature of the Royal Commission, and focuses on key findings from case studies and data analysis. It then analyzes public health theory and regulatory theory to present a novel analysis of theoretically justified approaches to the implementation of measures to prevent, identify and respond to CSA, while isolating challenges to implementation. The article reviews literature on challenges to reform and compliance, and on prevention of institutional CSA and situational crime prevention, to identify measures which have attracted emerging consensus as recommended practice. Finally, it applies its novel integration of regulatory theory and public health theory to the context of CSA in institutional contexts, to develop a theoretical basis for a model of implementation and regulation, and to indicate the nature and functions of a regulatory body for this context. Copyright © 2017 The Author. Published by Elsevier Ltd. All rights reserved.

  8. A new quasi-relativistic approach for density functional theory based on the normalized elimination of the small component

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2002-01-01

    A recently developed variationally stable quasi-relativistic method, which is based on the low-order approximation to the method of normalized elimination of the small component, was incorporated into density functional theory (DFT). The new method was tested for diatomic molecules involving Ag, Cd, Au, and Hg by calculating equilibrium bond lengths, vibrational frequencies, and dissociation energies. The method is easy to implement into standard quantum chemical programs and leads to accurate results for the benchmark systems studied.

  9. Lattice dynamics calculations based on density-functional perturbation theory in real space

    NASA Astrophysics Data System (ADS)

    Shang, Honghui; Carbogno, Christian; Rinke, Patrick; Scheffler, Matthias

    2017-06-01

    A real-space formalism for density-functional perturbation theory (DFPT) is derived and applied for the computation of harmonic vibrational properties in molecules and solids. The practical implementation using numeric atom-centered orbitals as basis functions is demonstrated exemplarily for the all-electron Fritz Haber Institute ab initio molecular simulations (FHI-aims) package. The convergence of the calculations with respect to numerical parameters is carefully investigated, and a systematic comparison with finite-difference approaches is performed both for finite (molecules) and extended (periodic) systems. Finally, scalability tests on massively parallel computer systems demonstrate the computational efficiency of the approach.
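
    The finite-difference baseline that the DFPT results are validated against can be sketched in one dimension: the harmonic force constant is a central second difference of the total energy around the equilibrium geometry. The following hedged illustration uses a Morse oscillator, whose exact curvature at the minimum (V'' = 2·D·a²) is known, so the finite-difference result can be checked; all parameter values are made up:

```python
# Hedged illustration of the finite-difference approach to vibrational
# properties: the force constant as a central second difference of the
# energy, for a 1D Morse oscillator V(r) = D * (1 - exp(-a*(r-r0)))^2
# with exact curvature V''(r0) = 2*D*a**2. Parameters are illustrative
# (atomic units), not values from the paper.
import math

D, a, r0, mu = 0.17, 1.0, 1.4, 918.0   # well depth, range, minimum, mass

def energy(r):
    return D * (1.0 - math.exp(-a * (r - r0))) ** 2

def force_constant_fd(r, h=1e-3):
    """Central finite-difference second derivative of the energy;
    the leading error is O(h^2)."""
    return (energy(r + h) - 2.0 * energy(r) + energy(r - h)) / h ** 2

k_fd = force_constant_fd(r0)
k_exact = 2.0 * D * a ** 2
omega = math.sqrt(k_fd / mu)           # harmonic frequency (a.u.)
print(k_fd, k_exact, omega)
```

    In many dimensions each Hessian column requires displaced-geometry energy or force evaluations, whereas DFPT obtains the same derivatives from perturbation theory at the equilibrium geometry, which is what the systematic comparison in the paper probes.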

  10. Aerodynamic shape optimization of wing and wing-body configurations using control theory

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony

    1995-01-01

    This paper describes the implementation of optimization techniques based on control theory for wing and wing-body design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for airfoils and wings in which the shape and the surrounding body-fitted mesh are both generated analytically, and the control is the mapping function. Recently, the method has been implemented for both potential flows and flows governed by the Euler equations using an alternative formulation which employs numerically generated grids, so that it can more easily be extended to treat general configurations. Here results are presented both for the optimization of a swept wing using an analytic mapping, and for the optimization of wing and wing-body configurations using a general mesh.

  11. Orbital dependent functionals: An atom projector augmented wave method implementation

    NASA Astrophysics Data System (ADS)

    Xu, Xiao

    This thesis explores the formulation and numerical implementation of orbital-dependent exchange-correlation functionals within electronic structure calculations. These functionals have recently received renewed attention as a means to improve the physical representation of electron interactions; in particular, electron self-interaction terms can be avoided. In this thesis, an orbital-dependent functional is considered in the context of Hartree-Fock (HF) theory as well as the Optimized Effective Potential (OEP) method and the approximate OEP method developed by Krieger, Li, and Iafrate, known as the KLI approximation. The Fock exchange term is used as a simple, well-defined example of an orbital-dependent functional. The Projector Augmented Wave (PAW) method developed by P. E. Blochl has proven accurate and efficient for electronic structure calculations with local and semi-local functionals because of its accurate evaluation of interaction integrals by controlling multipole moments. We have extended the PAW method to treat orbital-dependent functionals in Hartree-Fock theory and the Optimized Effective Potential method, particularly in the KLI approximation. In the course of this study we develop a frozen-core orbital approximation that accurately treats the core electron contributions for the above three methods. The main part of the thesis focuses on the treatment of spherical atoms. We have investigated the behavior of PAW-Hartree-Fock and PAW-KLI basis, projector, and pseudopotential functions for several elements throughout the periodic table. We have also extended the formalism to the treatment of solids in a plane-wave basis and implemented the PWPAW-KLI code, which will appear in future publications.

  12. Massively parallel and linear-scaling algorithm for second-order Møller–Plesset perturbation theory applied to the study of supramolecular wires

    DOE PAGES

    Kjaergaard, Thomas; Baudin, Pablo; Bykov, Dmytro; ...

    2016-11-16

    Here, we present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide–Expand–Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide–Expand–Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller–Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.

  13. Integrated Application of Active Controls (IAAC) technology to an advanced subsonic transport project: Current and advanced act control system definition study, volume 1

    NASA Technical Reports Server (NTRS)

    Hanks, G. W.; Shomber, H. A.; Dethman, H. A.; Gratzer, L. B.; Maeshiro, A.; Gangsaas, D.; Blight, J. D.; Buchan, S. M.; Crumb, C. B.; Dorwart, R. J.

    1981-01-01

    An active controls technology (ACT) system architecture was selected based on current technology system elements, and optimal control theory was evaluated for use in analyzing and synthesizing ACT multiple control laws. The system selected employs three redundant computers to implement all of the ACT functions, four redundant smaller computers to implement the crucial pitch-augmented stability function, and a separate maintenance and display computer. The reliability objective of a probability of crucial function failure of less than 1 x 10^-9 per flight of 1 hr can be met with current technology system components, if the software is assumed fault-free and coverage approaching 1.0 can be provided. The optimal control theory approach to ACT control law synthesis yielded comparable control law performance much more systematically and directly than the classical s-domain approach. The ACT control law performance, although somewhat degraded by the inclusion of representative nonlinearities, remained quite effective. Certain high-frequency gust-load alleviation functions may require increased surface rate capability.

  14. How to improve medical education website design.

    PubMed

    Sisson, Stephen D; Hill-Briggs, Felicia; Levine, David

    2010-04-21

    The Internet provides a means of disseminating medical education curricula, allowing institutions to share educational resources. Much of what is published online is poorly planned, does not meet learners' needs, or is out of date. Applying principles of curriculum development, adult learning theory and educational website design may result in improved online educational resources. Key steps in developing and implementing an education website include: 1) Follow established principles of curriculum development; 2) Perform a needs assessment and repeat the needs assessment regularly after curriculum implementation; 3) Include in the needs assessment targeted learners, educators, institutions, and society; 4) Use principles of adult learning and behavioral theory when developing content and website function; 5) Design the website and curriculum to demonstrate educational effectiveness at an individual and programmatic level; 6) Include a mechanism for sustaining website operations and updating content over a long period of time. Interactive, online education programs are effective for medical training, but require planning, implementation, and maintenance that follow established principles of curriculum development, adult learning, and behavioral theory.

  15. Organizational theory for dissemination and implementation research.

    PubMed

    Birken, Sarah A; Bunger, Alicia C; Powell, Byron J; Turner, Kea; Clary, Alecia S; Klaman, Stacey L; Yu, Yan; Whitaker, Daniel J; Self, Shannon R; Rostad, Whitney L; Chatham, Jenelle R Shanley; Kirk, M Alexis; Shea, Christopher M; Haines, Emily; Weiner, Bryan J

    2017-05-12

    Even under optimal internal organizational conditions, implementation can be undermined by changes in organizations' external environments, such as fluctuations in funding, adjustments in contracting practices, new technology, new legislation, changes in clinical practice guidelines and recommendations, or other environmental shifts. Internal organizational conditions are increasingly reflected in implementation frameworks, but nuanced explanations of how organizations' external environments influence implementation success are lacking in implementation research. Organizational theories offer implementation researchers a host of existing, highly relevant, and heretofore largely untapped explanations of the complex interaction between organizations and their environment. In this paper, we demonstrate the utility of organizational theories for implementation research. We applied four well-known organizational theories (institutional theory, transaction cost economics, contingency theories, and resource dependency theory) to published descriptions of efforts to implement SafeCare, an evidence-based practice for preventing child abuse and neglect. Transaction cost economics theory explained how frequent, uncertain processes for contracting for SafeCare may have generated inefficiencies and thus compromised implementation among private child welfare organizations. Institutional theory explained how child welfare systems may have been motivated to implement SafeCare because doing so aligned with expectations of key stakeholders within child welfare systems' professional communities. Contingency theories explained how efforts such as interagency collaborative teams promoted SafeCare implementation by facilitating adaptation to child welfare agencies' internal and external contexts. 
Resource dependency theory (RDT) explained how interagency relationships, supported by contracts, memoranda of understanding, and negotiations, facilitated SafeCare implementation by balancing autonomy and dependence on funding agencies and SafeCare developers. In addition to the retrospective application of organizational theories demonstrated above, we advocate for the proactive use of organizational theories to design implementation research. For example, implementation strategies should be selected to minimize transaction costs, promote and maintain congruence between organizations' dynamic internal and external contexts over time, and simultaneously attend to organizations' financial needs while preserving their autonomy. We describe implications of applying organizational theory in implementation research for implementation strategies, the evaluation of implementation efforts, measurement, research design, theory, and practice. We also offer guidance to implementation researchers for applying organizational theory.

  16. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    NASA Astrophysics Data System (ADS)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.

  17. Generalized Wall Function for Complex Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Povinelli, Louis A.; Liu, Nan-Suey; Chen, Kuo-Huey

    2000-01-01

    A generalized wall function was proposed by Shih et al. (1999). It accounts for the effect of pressure gradients on the flow near the wall. Theory shows that the effect of pressure gradients on the flow in the inertial sublayer is very significant and that the standard wall function should be replaced by a generalized wall function. Since the theory is also valid for boundary layer flows approaching separation, the generalized wall function may be applied to complex turbulent flows with acceleration, deceleration, separation and recirculation. This paper verifies the generalized wall function with numerical simulations of boundary layer flows under various adverse and favorable pressure gradients, including flows about to separate. Furthermore, a general procedure for implementing the generalized wall function in the National Combustion Code (NCC) is described; it can be applied to both structured and unstructured CFD codes.
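
    For reference, the standard wall function that the generalized formulation replaces can be sketched in a few lines. The constants κ = 0.41 and B = 5.0 and the y⁺ ≈ 11 crossover are typical textbook values, not those of the NCC implementation:

```python
import math

KAPPA = 0.41  # von Karman constant (typical value)
B = 5.0       # log-law intercept for smooth walls (typical value)

def u_plus(y_plus: float) -> float:
    """Standard wall function: linear viscous sublayer below y+ ~ 11,
    logarithmic law of the wall above (no pressure-gradient terms)."""
    if y_plus < 11.0:
        return y_plus                        # u+ = y+
    return math.log(y_plus) / KAPPA + B      # u+ = ln(y+)/kappa + B

print(f"u+ at y+=100: {u_plus(100.0):.2f}")  # ~16.23
```

    The generalized wall function of the abstract adds pressure-gradient corrections to this profile, which matter precisely in the accelerating, decelerating and separating flows the paper targets.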

  18. Quasi-Newton methods for parameter estimation in functional differential equations

    NASA Technical Reports Server (NTRS)

    Brewer, Dennis W.

    1988-01-01

    A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.
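
    The quasi-Newton idea of replacing exact derivatives with information from previous iterates can be sketched on a toy scalar problem. The exponential-decay model and true parameter below are invented for illustration; the paper's algorithm operates on distributed-parameter delay systems, not on this scalar example:

```python
import math

def misfit(a: float) -> float:
    """Residual between the model prediction x(1) = exp(-a) and an
    'observation' generated with true parameter a* = 2 (toy problem)."""
    return math.exp(-a) - math.exp(-2.0)

def secant(f, a0, a1, tol=1e-12, max_iter=50):
    """Quasi-Newton (secant) iteration: the exact derivative is replaced
    by a finite-difference approximation built from previous iterates."""
    for _ in range(max_iter):
        f0, f1 = f(a0), f(a1)
        a2 = a1 - f1 * (a1 - a0) / (f1 - f0)
        if abs(a2 - a1) < tol:
            return a2
        a0, a1 = a1, a2
    return a1

a_est = secant(misfit, 0.5, 1.0)
print(f"estimated parameter: {a_est:.6f}")  # converges to 2.0
```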

  19. Self-consistent DFT+U method for real-space time-dependent density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Tancogne-Dejean, Nicolas; Oliveira, Micael J. T.; Rubio, Angel

    2017-12-01

    We implemented various DFT+U schemes, including the Agapito, Curtarolo, and Buongiorno Nardelli (ACBN0) self-consistent density-functional version of the DFT+U method [Phys. Rev. X 5, 011006 (2015), 10.1103/PhysRevX.5.011006], within the massively parallel real-space time-dependent density functional theory (TDDFT) code octopus. We further extended the method to the calculation of response functions with real-time TDDFT+U and to the description of noncollinear spin systems. The implementation is tested by investigating the ground-state and optical properties of various transition-metal oxides, bulk topological insulators, and molecules. Our results are found to be in good agreement with previously published results for both the electronic band structure and structural properties. The self-consistently calculated values of U and J are also in good agreement with the values commonly used in the literature. We found that the time-dependent extension of the self-consistent DFT+U method yields improved optical properties when compared to the empirical TDDFT+U scheme. This work thus opens a different theoretical framework to address the nonequilibrium properties of correlated systems.

  20. Implementation of a method for calculating temperature-dependent resistivities in the KKR formalism

    NASA Astrophysics Data System (ADS)

    Mahr, Carsten E.; Czerner, Michael; Heiliger, Christian

    2017-10-01

    We present a method to calculate the electron-phonon induced resistivity of metals in scattering-time approximation based on the nonequilibrium Green's function formalism. The general theory as well as its implementation in a density-functional theory based Korringa-Kohn-Rostoker code are described and subsequently verified by studying copper as a test system. We model the thermal expansion by fitting a Debye-Grüneisen curve to experimental data. Both the electronic and vibrational structures are discussed for different temperatures, and employing a Wannier interpolation of these quantities we evaluate the scattering time by integrating the electron linewidth on a triangulation of the Fermi surface. Based thereupon, the temperature-dependent resistivity is calculated and found to be in good agreement with experiment. We show that the effect of thermal expansion has to be considered in the whole calculation regime. Further, for low temperatures, an accurate sampling of the Fermi surface becomes important.
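
    A closely related textbook model for the phonon-limited resistivity itself is the Bloch-Grüneisen formula; the sketch below (with an arbitrary prefactor and an approximate Debye temperature for copper) is a generic illustration, not the paper's KKR-based calculation:

```python
import math

def bloch_gruneisen(T: float, theta_D: float, n: int = 2000) -> float:
    """Phonon-limited resistivity (arbitrary prefactor) from the
    Bloch-Gruneisen integral, evaluated with the trapezoidal rule:
    rho(T) ~ (T/theta_D)^5 * int_0^{theta_D/T} x^5 e^x / (e^x - 1)^2 dx."""
    u = theta_D / T
    h = u / n
    def f(x):
        if x == 0.0:
            return 0.0  # integrand -> 0 as x -> 0 (behaves like x^3)
        return x**5 * math.exp(x) / (math.exp(x) - 1.0)**2
    s = 0.5 * (f(0.0) + f(u)) + sum(f(i * h) for i in range(1, n))
    return (T / theta_D)**5 * s * h

theta = 343.0  # Debye temperature of copper in K (approximate)
rho = [bloch_gruneisen(T, theta) for T in (0.5 * theta, theta, 2 * theta, 4 * theta)]
print(f"high-T ratio rho(4*theta)/rho(2*theta): {rho[3] / rho[2]:.3f}")  # ~2, i.e. linear regime
```

    The model reproduces the well-known crossover from a steep low-temperature rise to resistivity linear in T above the Debye temperature, which is the qualitative behavior the paper's first-principles scattering-time calculation captures for copper.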

  1. Communication: A novel implementation to compute MP2 correlation energies without basis set superposition errors and complete basis set extrapolation.

    PubMed

    Dixit, Anant; Claudot, Julien; Lebègue, Sébastien; Rocca, Dario

    2017-06-07

    By using a formulation based on the dynamical polarizability, we propose a novel implementation of second-order Møller-Plesset perturbation (MP2) theory within a plane wave (PW) basis set. Because of the intrinsic properties of PWs, this method is not affected by basis set superposition errors. Additionally, results are converged without relying on complete basis set extrapolation techniques; this is achieved by using the eigenvectors of the static polarizability as an auxiliary basis set to compactly and accurately represent the response functions involved in the MP2 equations. Summations over the large number of virtual states are avoided by using a formalism inspired by density functional perturbation theory, and the Lanczos algorithm is used to include dynamical effects. To demonstrate this method, applications to three weakly interacting dimers are presented.

  2. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1991-01-01

    The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.
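
    The event-driven paradigm mentioned above can be sketched generically: handlers register for named events, and producers post events without knowing who consumes them. The class and event names below are invented for illustration, not taken from the described architecture:

```python
from collections import defaultdict

class EventBus:
    """Minimal event-driven dispatcher: handlers register for named
    events and are invoked whenever those events are posted."""

    def __init__(self):
        self._handlers = defaultdict(list)  # event name -> list of callables

    def subscribe(self, event, handler):
        self._handlers[event].append(handler)

    def post(self, event, payload=None):
        """Deliver payload to every subscribed handler; return the count."""
        for handler in self._handlers[event]:
            handler(payload)
        return len(self._handlers[event])

# Toy wiring: two subsystems react independently to the same event.
bus = EventBus()
log = []
bus.subscribe("goal_reached", lambda p: log.append("motion: %s" % p))
bus.subscribe("goal_reached", lambda p: log.append("vision: %s" % p))
n = bus.post("goal_reached", "waypoint 3")
print(log)
```

    The point of the pattern, as in the architecture described, is that the dispatcher decouples producers from consumers, so the same application code can run over different operating systems and hardware.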

  3. Self-consistent hybrid functionals for solids: a fully-automated implementation

    NASA Astrophysics Data System (ADS)

    Erba, A.

    2017-08-01

    A fully-automated algorithm for the determination of the system-specific optimal fraction of exact exchange in self-consistent hybrid functionals of density-functional theory is illustrated, as implemented in the public Crystal program. The exchange fraction of this new class of functionals is self-consistently updated in proportion to the inverse of the dielectric response of the system within an iterative procedure (Skone et al 2014 Phys. Rev. B 89, 195112). Each iteration of the present scheme, in turn, implies convergence of a self-consistent-field (SCF) and a coupled-perturbed-Hartree-Fock/Kohn-Sham (CPHF/KS) procedure. The present implementation, besides improving the user-friendliness of self-consistent hybrids, exploits the unperturbed and electric-field-perturbed density matrices from previous iterations as guesses for subsequent SCF and CPHF/KS iterations, which is documented to reduce the overall computational cost of the whole process by a factor of 2.
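
    The structure of such a self-consistency cycle can be sketched with a toy fixed-point iteration. The linear model for the dielectric response below is invented for illustration; in the actual implementation each evaluation of ε requires a converged SCF plus CPHF/KS calculation:

```python
def dielectric(alpha: float) -> float:
    """Toy model: a dielectric constant that shrinks as the exchange
    fraction grows (stand-in for a real SCF + CPHF/KS result)."""
    return 4.0 - 2.0 * alpha

def self_consistent_alpha(alpha0: float = 0.25, tol: float = 1e-10) -> float:
    """Fixed-point iteration alpha_{n+1} = 1 / eps(alpha_n), mirroring
    the update of the exchange fraction by the inverse dielectric response."""
    alpha = alpha0
    for _ in range(100):
        new = 1.0 / dielectric(alpha)
        if abs(new - alpha) < tol:
            return new
        alpha = new
    return alpha

alpha = self_consistent_alpha()
print(f"converged exchange fraction: {alpha:.4f}")  # ~0.29 for this toy model
```

    Reusing the previous iteration's result as the starting guess, as the abstract describes for the density matrices, is exactly what makes each cycle of such a loop cheaper than a cold start.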

  4. A formulation of directivity for earthquake sources using isochrone theory

    USGS Publications Warehouse

    Spudich, Paul; Chiou, Brian S.J.; Graves, Robert; Collins, Nancy; Somerville, Paul

    2004-01-01

    A functional form for directivity effects can be derived from isochrone theory, in which the measure of the directivity-induced amplification of an S body wave is c, the isochrone velocity. Ground displacement of the near-, intermediate-, and far-field terms of P and S waves is linear in isochrone velocity for a finite source in a whole space. We have developed an approximation c̃′ of the isochrone velocity that can easily be implemented as a predictor of directivity effects in empirical ground motion prediction relations. Typically, for a given fault surface, hypocenter, and site geometry, c̃′ is a simple function of the hypocentral distance, the rupture distance, the crustal shear wave speed in the seismogenic zone, and the rupture velocity. c̃′ typically ranges from 0.44, for rupture away from the station, to about 4, for rupture toward the station. In this version of the theory directivity is independent of period. Additionally, we have created another functional form, which is c̃′ modified to include the approximate radiation pattern of a finite fault having a given rake. This functional form can be used to model the spatial variations of fault-parallel and fault-normal horizontal ground motions. The strengths of this formulation are 1) the proposed functional form is based on theory, 2) the predictor is unambiguously defined for all possible site locations and source rakes, and 3) it can easily be implemented for well-studied important previous earthquakes. We compare predictions of our functional form with synthetic ground motions calculated for finite strike-slip and dip-slip faults in the magnitude range 6.5-7.5. In general, our functional form correlates best with computed fault-normal and fault-parallel motions in synthetics calculated for events with M 6.5. Correlation degrades but is still useful for larger events and for the geometric average horizontal motions. 
We have had limited success applying it to geometrically complicated faults.

  5. Coarse-grained density functional theories for metallic alloys: Generalized coherent-potential approximations and charge-excess functional theory

    NASA Astrophysics Data System (ADS)

    Bruno, Ezio; Mammano, Francesco; Fiorino, Antonino; Morabito, Emanuela V.

    2008-04-01

    The class of generalized coherent-potential approximations (GCPAs) to density functional theory (DFT) is introduced within the multiple scattering theory formalism with the aim of dealing with ordered or disordered metallic alloys. All GCPA theories are based on a common ansatz for the kinetic part of the Hohenberg-Kohn functional, and each theory of the class is specified by an external model concerning the potential reconstruction. Most existing DFT implementations of CPA-based theories belong to the GCPA class. The analysis of the formal properties of the density functional defined by GCPA theories shows that it consists of marginally coupled local contributions. Furthermore, it is shown that the GCPA functional does not depend on the details of the charge density and that it can be exactly rewritten as a function of the appropriate charge multipole moments to be associated with each lattice site. A general procedure based on the integration of the qV laws is described that allows for the explicit construction of the same function. The coarse-grained nature of the GCPA density functional implies a great deal of computational advantages and is connected with the O(N) scalability of GCPA algorithms. Moreover, it is shown that a convenient truncated series expansion of the GCPA functional leads to the charge-excess functional (CEF) theory [E. Bruno et al., Phys. Rev. Lett. 91, 166401 (2003)], which here is offered in a generalized version that includes multipolar interactions. CEF and GCPA numerical results are compared with state-of-the-art full-potential linearized augmented plane wave (LAPW) density functional calculations for 62 bcc- and fcc-based ordered CuZn alloys, over the entire range of concentrations. Two facts clearly emerge from these extensive tests. In the first place, the discrepancies between GCPA and CEF results are always within the numerical accuracy of the calculations, both for the site charges and the total energies. 
In the second place, the GCPA (or the CEF) reproduces the LAPW site charges very accurately, and good agreement is also obtained for the total energies.

  6. Balancing power: A grounded theory study on partnership of academic service institutes.

    PubMed

    Heshmati Nabavi, Fatemeh; Vanaki, Zohreh; Mohammadi, Eesa; Yazdani, Shahram

    2017-07-01

    Governments and professional organizations have called for new partnerships between health care providers and academics to improve clinical education for the benefit of both students and patients. The aim was to develop a substantive grounded theory on the process of forming academic-service partnerships in implementing clinical education, from the perspective of academic and clinical nursing staff members and managers working in Iranian settings. The participants included 15 hospital nurses, nurse managers, nurse educators, and educational managers from two central universities and clinical settings from 2009 to 2012. Data were collected through 30 in-depth, semi-structured interviews with the individual participants and then analyzed using Strauss and Corbin's grounded theory methodology. Utilizing "balancing power" as the core variable enabled us to integrate the concepts concerning the partnership processes between clinical and educational institutes. Three distinct and significant categories emerged to explain the process of partnership: 1) divergence, 2) conflict between educational and caring functions, and 3) creation of balance between educational and caring functions. In implementing clinical education, partnerships have been formed within a challenging context in Iran. Conflict between clinical and educational functions was the main concern of both sides of the partnership in forming a collaborative relationship, with our findings emphasizing the importance of nursing educators' role in the establishment of partnership programs.

  7. Sierra Structural Dynamics Theory Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reese, Garth M.

    Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high-fidelity, validated models used in modal, vibration, static and shock analysis of structural systems. This manual describes the theory behind many of the constructs in Sierra/SD. For a more detailed description of how to use Sierra/SD, we refer the reader to the Sierra/SD User's Notes. Many of the constructs in Sierra/SD are pulled directly from published material. Where possible, these materials are referenced herein. However, certain functions in Sierra/SD are specific to our implementation, and we try to be far more complete in those areas. The theory manual was developed from several sources, including general notes, a programmer notes manual, the user's notes and, of course, the material in the open literature.

  8. Applying an Activity Theory Lens to Designing Instruction for Learning about the Structure, Behavior, and Function of a Honeybee System

    ERIC Educational Resources Information Center

    Danish, Joshua A.

    2014-01-01

    This article reports on a study in which activity theory was used to design, implement, and analyze a 10-week curriculum unit about how honeybees collect nectar with a particular focus on complex systems concepts. Students (n = 42) in a multi-year kindergarten and 1st-grade classroom participated in this study as part of their 10 regular classroom…

  9. Multicomponent density functional theory embedding formulation.

    PubMed

    Culpitt, Tanner; Brorsen, Kurt R; Pak, Michael V; Hammes-Schiffer, Sharon

    2016-07-28

    Multicomponent density functional theory (DFT) methods have been developed to treat two types of particles, such as electrons and nuclei, quantum mechanically at the same level. In the nuclear-electronic orbital (NEO) approach, all electrons and select nuclei, typically key protons, are treated quantum mechanically. For multicomponent DFT methods developed within the NEO framework, electron-proton correlation functionals based on explicitly correlated wavefunctions have been designed and used in conjunction with well-established electronic exchange-correlation functionals. Herein a general theory for multicomponent embedded DFT is developed to enable the accurate treatment of larger systems. In the general theory, the total electronic density is separated into two subsystem densities, denoted as regular and special, and different electron-proton correlation functionals are used for these two electronic densities. In the specific implementation, the special electron density is defined in terms of spatially localized Kohn-Sham electronic orbitals, and electron-proton correlation is included only for the special electron density. The electron-proton correlation functional depends on only the special electron density and the proton density, whereas the electronic exchange-correlation functional depends on the total electronic density. This scheme includes the essential electron-proton correlation, which is a relatively local effect, as well as the electronic exchange-correlation for the entire system. This multicomponent DFT-in-DFT embedding theory is applied to the HCN and FHF(-) molecules in conjunction with two different electron-proton correlation functionals and three different electronic exchange-correlation functionals. The results illustrate that this approach provides qualitatively accurate nuclear densities in a computationally tractable manner. The general theory is also easily extended to other types of partitioning schemes for multicomponent systems.

  10. Multicomponent density functional theory embedding formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Culpitt, Tanner; Brorsen, Kurt R.; Pak, Michael V.

    Multicomponent density functional theory (DFT) methods have been developed to treat two types of particles, such as electrons and nuclei, quantum mechanically at the same level. In the nuclear-electronic orbital (NEO) approach, all electrons and select nuclei, typically key protons, are treated quantum mechanically. For multicomponent DFT methods developed within the NEO framework, electron-proton correlation functionals based on explicitly correlated wavefunctions have been designed and used in conjunction with well-established electronic exchange-correlation functionals. Herein a general theory for multicomponent embedded DFT is developed to enable the accurate treatment of larger systems. In the general theory, the total electronic density is separated into two subsystem densities, denoted as regular and special, and different electron-proton correlation functionals are used for these two electronic densities. In the specific implementation, the special electron density is defined in terms of spatially localized Kohn-Sham electronic orbitals, and electron-proton correlation is included only for the special electron density. The electron-proton correlation functional depends on only the special electron density and the proton density, whereas the electronic exchange-correlation functional depends on the total electronic density. This scheme includes the essential electron-proton correlation, which is a relatively local effect, as well as the electronic exchange-correlation for the entire system. This multicomponent DFT-in-DFT embedding theory is applied to the HCN and FHF⁻ molecules in conjunction with two different electron-proton correlation functionals and three different electronic exchange-correlation functionals. The results illustrate that this approach provides qualitatively accurate nuclear densities in a computationally tractable manner. The general theory is also easily extended to other types of partitioning schemes for multicomponent systems.

  11. A General Sparse Tensor Framework for Electronic Structure Theory

    DOE PAGES

    Manzer, Samuel; Epifanovsky, Evgeny; Krylov, Anna I.; ...

    2017-01-24

    Linear-scaling algorithms must be developed in order to extend the domain of applicability of electronic structure theory to molecules of any desired size. But the increasing complexity of modern linear-scaling methods makes code development and maintenance a significant challenge. A major contributor to this difficulty is the lack of robust software abstractions for handling block-sparse tensor operations. We therefore report the development of a highly efficient symbolic block-sparse tensor library in order to provide access to high-level software constructs to treat such problems. Our implementation supports arbitrary multi-dimensional sparsity in all input and output tensors. We avoid cumbersome machine-generated code by implementing all functionality as a high-level symbolic C++ language library, and we demonstrate that our implementation attains very high performance for linear-scaling sparse tensor contractions.
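
    The core idea of block sparsity can be sketched for the two-dimensional (matrix) case: store only nonzero blocks in a dictionary keyed by block indices, and contract only block pairs that share the summation index. This is a minimal illustration, not the library's actual data structure or its arbitrary-dimensional generality:

```python
def block_matmul(A: dict, B: dict) -> dict:
    """Contract two block-sparse matrices stored as {(I, J): block},
    where each block is a dense list of rows. Only block pairs sharing
    the contraction index J are multiplied, so absent (zero) blocks
    cost nothing -- the source of linear-scaling behavior."""
    C = {}
    for (i, j), a in A.items():
        for (j2, k), b in B.items():
            if j != j2:
                continue
            n, m, p = len(a), len(b), len(b[0])
            c = C.setdefault((i, k), [[0.0] * p for _ in range(n)])
            for r in range(n):
                for s in range(m):
                    ars = a[r][s]
                    for t in range(p):
                        c[r][t] += ars * b[s][t]
    return C

# A has one nonzero 2x2 block; B has two. All other blocks are implicit zeros.
A = {(0, 0): [[1.0, 2.0], [3.0, 4.0]]}
B = {(0, 0): [[1.0, 0.0], [0.0, 1.0]], (0, 1): [[2.0, 0.0], [0.0, 2.0]]}
C = block_matmul(A, B)
print(C[(0, 1)])  # [[2.0, 4.0], [6.0, 8.0]]
```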

  12. Embedded-cluster calculations in a numeric atomic orbital density-functional theory framework.

    PubMed

    Berger, Daniel; Logsdail, Andrew J; Oberhofer, Harald; Farrow, Matthew R; Catlow, C Richard A; Sherwood, Paul; Sokol, Alexey A; Blum, Volker; Reuter, Karsten

    2014-07-14

    We integrate the all-electron electronic structure code FHI-aims into the general ChemShell package for solid-state embedding quantum and molecular mechanical (QM/MM) calculations. A major undertaking in this integration is the implementation of pseudopotential functionality into FHI-aims to describe cations at the QM/MM boundary through effective core potentials and therewith prevent spurious overpolarization of the electronic density. Based on numeric atomic orbital basis sets, FHI-aims offers particularly efficient access to exact exchange and second order perturbation theory, rendering the established QM/MM setup an ideal tool for hybrid and double-hybrid level density functional theory calculations of solid systems. We illustrate this capability by calculating the reduction potential of Fe in the Fe-substituted ZSM-5 zeolitic framework and the reaction energy profile for (photo-)catalytic water oxidation at TiO2(110).

  13. Embedded-cluster calculations in a numeric atomic orbital density-functional theory framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berger, Daniel, E-mail: daniel.berger@ch.tum.de; Oberhofer, Harald; Reuter, Karsten

    2014-07-14

    We integrate the all-electron electronic structure code FHI-aims into the general ChemShell package for solid-state embedding quantum and molecular mechanical (QM/MM) calculations. A major undertaking in this integration is the implementation of pseudopotential functionality into FHI-aims to describe cations at the QM/MM boundary through effective core potentials and therewith prevent spurious overpolarization of the electronic density. Based on numeric atomic orbital basis sets, FHI-aims offers particularly efficient access to exact exchange and second order perturbation theory, rendering the established QM/MM setup an ideal tool for hybrid and double-hybrid level density functional theory calculations of solid systems. We illustrate this capability by calculating the reduction potential of Fe in the Fe-substituted ZSM-5 zeolitic framework and the reaction energy profile for (photo-)catalytic water oxidation at TiO2(110).

  14. Developing program theory for purveyor programs

    PubMed Central

    2013-01-01

    Background Frequently, social interventions produce less for the intended beneficiaries than was initially planned. One possible reason is that ideas embodied in interventions are not self-executing and require careful and systematic translation to put into practice. The capacity of implementers to deliver interventions is thus paramount. Purveyor organizations provide external support to implementers to develop that capacity and to encourage high-fidelity implementation behavior. Literature on the theory underlying this type of program is not plentiful. Research shows that detailed, explicit, and agreed-upon program theory contributes to and encourages high-fidelity implementation behavior. The process of developing and depicting program theory is flexible and leaves the researcher with what might be seen as an overwhelming number of options. Methods This study was designed to develop and depict the program theory underlying the support services delivered by a South African purveyor. The purveyor supports seventeen local organizations in delivering a peer education program to young people as an HIV/AIDS prevention intervention. Purposive sampling was employed to identify and select study participants. An iterative process that involved site visits, a desktop review of program documentation, one-on-one unstructured interviews, and a subsequent verification process, was used to develop a comprehensive program logic model. Results The study resulted in a formalized logic model of how the specific purveyor is supposed to function; that model was accepted by all study participants. Conclusion The study serves as an example of how program theory of a ‘real life’ program can be developed and depicted. It highlights the strengths and weakness of this evaluation approach, and provides direction and recommendations for future research on programs that employ the purveyor method to disseminate interventions. PMID:23421855

  15. Optimum aerodynamic design via boundary control

    NASA Technical Reports Server (NTRS)

    Jameson, Antony

    1994-01-01

    These lectures describe the implementation of optimization techniques based on control theory for airfoil and wing design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for two-dimensional profiles in which the shape is determined by a conformal transformation from a unit circle, and the control is the mapping function. Recently the method has been implemented in an alternative formulation which does not depend on conformal mapping, so that it can more easily be extended to treat general configurations. The method has also been extended to treat the Euler equations, and results are presented for both two and three dimensional cases, including the optimization of a swept wing.

  16. Communication: The description of strong correlation within self-consistent Green's function second-order perturbation theory

    NASA Astrophysics Data System (ADS)

    Phillips, Jordan J.; Zgid, Dominika

    2014-06-01

    We report an implementation of self-consistent Green's function many-body theory within a second-order approximation (GF2) for application to molecular systems. This is done by iterative solution of the Dyson equation expressed in matrix form in an atomic orbital basis, where the Green's function and self-energy are built on the imaginary-frequency and imaginary-time domains, respectively, and the fast Fourier transform is used to efficiently convert between these representations as needed. We apply this method to several archetypical examples of strong correlation, such as an H32 finite lattice that displays a highly multireference electronic ground state even at equilibrium lattice spacing. In all cases, GF2 gives a physically meaningful description of the metal-to-insulator transition in these systems, without resorting to spin-symmetry breaking. Our results show that self-consistent Green's function many-body theory offers a viable route to describing strong correlations while remaining within a computationally tractable single-particle formalism.
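The core object in such a scheme is the matrix Dyson equation on an imaginary-frequency grid. A schematic sketch (a toy 2x2 Fock-like matrix and a purely illustrative static self-energy; the real GF2 self-energy is built from two-electron integrals on the imaginary-time axis):

```python
import numpy as np

# Schematic Dyson iteration on an imaginary-frequency grid.
F = np.array([[-1.0, 0.2], [0.2, 0.5]])            # toy Fock-like matrix
mu = 0.0                                            # chemical potential
omegas = (2 * np.arange(64) + 1) * np.pi / 50.0     # Matsubara-like grid

def dyson(sigma):
    # G(iw) = [(iw + mu) I - F - Sigma(iw)]^{-1}, one inverse per frequency
    I = np.eye(2)
    return np.array([np.linalg.inv((1j * w + mu) * I - F - sigma[k])
                     for k, w in enumerate(omegas)])

# Zeroth iteration: Sigma = 0 gives the mean-field Green's function
sigma = np.zeros((len(omegas), 2, 2), dtype=complex)
G0 = dyson(sigma)

# A purely illustrative (static) self-energy update, then one Dyson step;
# GF2 would instead rebuild Sigma from G on the imaginary-time axis via FFT
sigma[:] = 0.1 * F
G1 = dyson(sigma)
```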

  17. The pair correlation function of krypton in the critical region: theory and experiment

    NASA Astrophysics Data System (ADS)

    Barocchi, F.; Chieux, P.; Fontana, R.; Magli, R.; Meroni, A.; Parola, A.; Reatto, L.; Tau, M.

    1997-10-01

    We present the results of high-precision measurements of the structure factor S(k) of krypton in the near-critical region of the liquid-vapour phase transition for values of k ranging upward from 1.5. The experimental results are compared with a theoretical calculation based on the hierarchical reference theory (HRT) with an accurate potential which includes two- and three-body contributions. The theory is based on a new implementation of HRT in which we avoid the use of hard spheres as a reference system. With this soft-core formulation we find generally good agreement with experiment both at large k, where S(k) probes the short-range correlations, and at small k, where critical fluctuations become dominant. For the density derivative of the pair correlation function there is also overall good agreement between theory and experiment.
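The structure factor and the pair correlation function carry the same information: S(k) is, up to constants, the Fourier transform of g(r) - 1. A numerical sketch of that relation for a simple fluid (density and g(r) invented for the illustration, not the krypton data):

```python
import numpy as np

# Link between g(r) and S(k) for an isotropic fluid:
#   S(k) = 1 + 4*pi*rho * Int_0^inf r^2 (g(r) - 1) sin(kr)/(kr) dr
rho = 0.012                                  # toy number density
r = np.linspace(1e-6, 30.0, 6000)
dr = r[1] - r[0]
g = 1.0 + np.exp(-(r - 4.0) ** 2) * np.cos(1.5 * (r - 4.0))   # toy g(r)

def S_of_k(k):
    # np.sinc(x) = sin(pi x)/(pi x), so sin(kr)/(kr) = np.sinc(k*r/pi)
    integrand = r ** 2 * (g - 1.0) * np.sinc(k * r / np.pi)
    return 1.0 + 4.0 * np.pi * rho * np.sum(integrand) * dr

ks = np.linspace(0.1, 5.0, 50)
S = np.array([S_of_k(k) for k in ks])
```

At large k the oscillatory integral dies out and S(k) tends to 1, consistent with S(k) probing only short-range correlations there.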

  18. Nuclear shielding constants by density functional theory with gauge including atomic orbitals

    NASA Astrophysics Data System (ADS)

    Helgaker, Trygve; Wilson, Philip J.; Amos, Roger D.; Handy, Nicholas C.

    2000-08-01

    Recently, we introduced a new density-functional theory (DFT) approach for the calculation of NMR shielding constants. First, a hybrid DFT calculation (using 5% exact exchange) is performed on the molecule to determine Kohn-Sham orbitals and their energies; second, the constants are determined as in nonhybrid DFT theory, that is, the paramagnetic contribution to the constants is calculated from a noniterative, uncoupled sum-over-states expression. The initial results suggested that this semiempirical DFT approach gives shielding constants in good agreement with the best ab initio and experimental data; in this paper, we further validate this procedure, using London orbitals in the theory, having implemented DFT into the ab initio code DALTON. Calculations on a number of small and medium-sized molecules confirm that our approach produces shieldings in excellent agreement with experiment and the best ab initio results available, demonstrating its potential for the study of shielding constants of large systems.

  19. Research on the SIM card implementing functions of transport card

    NASA Astrophysics Data System (ADS)

    Li, Yi; Wang, Lin

    2015-12-01

    This paper analyzes the theory and key technologies behind contact and contactless card communication and the STK menu, and proposes a complete software and hardware solution for implementing a convenient and secure mobile payment system on a SIM card.

  20. FDE-vdW: A van der Waals inclusive subsystem density-functional theory.

    PubMed

    Kevorkyants, Ruslan; Eshuis, Henk; Pavanello, Michele

    2014-07-28

    We present a formally exact van der Waals inclusive electronic structure theory, called FDE-vdW, based on the Frozen Density Embedding formulation of subsystem Density-Functional Theory. In subsystem DFT, the energy functional is composed of subsystem additive and non-additive terms. We show that an appropriate definition of the long-range correlation energy is given by the value of the non-additive correlation functional. This functional is evaluated using the fluctuation-dissipation theorem aided by a formally exact decomposition of the response functions into subsystem contributions. FDE-vdW is derived in detail and several approximate schemes are proposed, which lead to practical implementations of the method. We show that FDE-vdW is Casimir-Polder consistent, i.e., it reduces to the generalized Casimir-Polder formula for asymptotic inter-subsystem separations. Pilot calculations of binding energies of 13 weakly bound complexes singled out from the S22 set show a dramatic improvement upon semilocal subsystem DFT, provided that an appropriate exchange functional is employed. The convergence of FDE-vdW with basis set size is discussed, as well as its dependence on the choice of associated density functional approximant.
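Schematically (notation illustrative; the precise definitions are in the paper), the subsystem energy splits into additive and non-additive pieces, and the non-additive correlation part is evaluated through the adiabatic-connection fluctuation-dissipation theorem:

```latex
% Total subsystem-DFT energy: additive plus non-additive pieces
E[\rho] = \sum_I E[\rho_I]
        + T_s^{\mathrm{nadd}}[\{\rho_I\}]
        + E_{xc}^{\mathrm{nadd}}[\{\rho_I\}],
\qquad \rho = \sum_I \rho_I .

% Non-additive correlation via the fluctuation--dissipation theorem,
% with the response functions decomposed into subsystem contributions
E_c^{\mathrm{nadd}} = -\int_0^1 \! d\lambda \int_0^{\infty} \! \frac{d\omega}{2\pi}\,
\mathrm{Tr}\!\left[\, v\,\chi_\lambda(i\omega) - v\,\chi_0(i\omega) \,\right]
\;-\; \sum_I E_c[\rho_I].
```

Identifying the long-range (dispersion) energy with this non-additive correlation term is what makes the scheme reduce to the generalized Casimir-Polder formula at large inter-subsystem separations.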

  1. FDE-vdW: A van der Waals inclusive subsystem density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kevorkyants, Ruslan; Pavanello, Michele, E-mail: m.pavanello@rutgers.edu; Eshuis, Henk

    2014-07-28

    We present a formally exact van der Waals inclusive electronic structure theory, called FDE-vdW, based on the Frozen Density Embedding formulation of subsystem Density-Functional Theory. In subsystem DFT, the energy functional is composed of subsystem additive and non-additive terms. We show that an appropriate definition of the long-range correlation energy is given by the value of the non-additive correlation functional. This functional is evaluated using the fluctuation–dissipation theorem aided by a formally exact decomposition of the response functions into subsystem contributions. FDE-vdW is derived in detail and several approximate schemes are proposed, which lead to practical implementations of the method. We show that FDE-vdW is Casimir-Polder consistent, i.e., it reduces to the generalized Casimir-Polder formula for asymptotic inter-subsystem separations. Pilot calculations of binding energies of 13 weakly bound complexes singled out from the S22 set show a dramatic improvement upon semilocal subsystem DFT, provided that an appropriate exchange functional is employed. The convergence of FDE-vdW with basis set size is discussed, as well as its dependence on the choice of associated density functional approximant.

  2. ELSI: A unified software interface for Kohn–Sham electronic structure solvers

    DOE PAGES

    Yu, Victor Wen-zhe; Corsetti, Fabiano; Garcia, Alberto; ...

    2017-09-15

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. As a result, comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.
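The problem all of these back ends address is the generalized eigenproblem HC = SCε. A miniature sketch of the interface idea (function name and dispatch scheme hypothetical, not the actual ELSI API), with a dense solver standing in for ELPA:

```python
import numpy as np
from scipy.linalg import eigh

# One entry point dispatching the generalized Kohn-Sham eigenproblem
#   H C = S C eps
# to interchangeable back ends (only a dense one is registered here).
def solve_ks(H, S, solver="dense"):
    if solver == "dense":               # stands in for an ELPA-like solver
        eps, C = eigh(H, S)             # generalized symmetric eigensolver
        return eps, C
    raise NotImplementedError(f"no back end registered for {solver!r}")

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
H = (A + A.T) / 2                       # symmetric "Hamiltonian"
B = rng.standard_normal((6, 6))
S = B @ B.T + 6 * np.eye(6)             # symmetric positive-definite overlap

eps, C = solve_ks(H, S)
ortho = C.T @ S @ C                     # eigenvectors are S-orthonormal
```

A uniform front end like this is what lets a code swap eigensolver strategies without touching the surrounding SCF machinery.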

  3. ELSI: A unified software interface for Kohn-Sham electronic structure solvers

    NASA Astrophysics Data System (ADS)

    Yu, Victor Wen-zhe; Corsetti, Fabiano; García, Alberto; Huhn, William P.; Jacquelin, Mathias; Jia, Weile; Lange, Björn; Lin, Lin; Lu, Jianfeng; Mi, Wenhui; Seifitokaldani, Ali; Vázquez-Mayagoitia, Álvaro; Yang, Chao; Yang, Haizhao; Blum, Volker

    2018-01-01

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. Comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.

  4. ELSI: A unified software interface for Kohn–Sham electronic structure solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Victor Wen-zhe; Corsetti, Fabiano; Garcia, Alberto

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. As a result, comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.

  5. Using Tabulated Experimental Data to Drive an Orthotropic Elasto-Plastic Three-Dimensional Model for Impact Analysis

    NASA Technical Reports Server (NTRS)

    Hoffarth, C.; Khaled, B.; Rajan, S. D.; Goldberg, R.; Carney, K.; DuBois, P.; Blankenhorn, Gunther

    2016-01-01

    An orthotropic elasto-plastic-damage three-dimensional model with tabulated input has been developed to analyze the impact response of composite materials. The theory has been implemented as MAT 213 into a tailored version of LS-DYNA being developed under a joint effort of the FAA and NASA and has the following features: (a) the theory addresses any composite architecture that can be experimentally characterized as an orthotropic material and includes rate and temperature sensitivities, (b) the formulation is applicable for solid as well as shell element implementations and utilizes input data in a tabulated form directly from processed experimental data, (c) deformation and damage mechanics are both accounted for within the material model, (d) failure criteria are established that are functions of strain and damage parameters, and mesh size dependence is included, and (e) the theory can be efficiently implemented into a commercial code for both sequential and parallel executions. The salient features of the theory as implemented in LS-DYNA are illustrated using a widely used unidirectional fiber/resin composite, T800S/3900-2B[P2352W-19] BMS8-276 Rev-H-Unitape. First, the experimental tests to characterize the deformation, damage and failure parameters in the material behavior are discussed. Second, the MAT 213 input model and implementation details are presented, with particular attention given to procedures incorporated to ensure that the yield surfaces in the rate- and temperature-dependent plasticity model are convex. Finally, the paper concludes with a validation test designed to assess the stability, accuracy and efficiency of the implemented model.
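The "tabulated input" idea can be shown in miniature: the model reads stress-strain curves measured at discrete strain rates and interpolates between them, rather than fitting an analytic hardening law. All numbers below are invented for illustration, not T800S/3900-2B characterization data:

```python
import numpy as np

# Toy tabulated material data: stress-strain curves at two strain rates,
# with log-linear interpolation in rate between them.
strain = np.array([0.000, 0.005, 0.010, 0.020, 0.040])
stress_lo = np.array([0.0, 40.0, 70.0, 110.0, 150.0])   # at rate 1e-4 /s
stress_hi = np.array([0.0, 48.0, 84.0, 132.0, 180.0])   # at rate 1e-1 /s
rates = np.array([1e-4, 1e-1])

def stress(eps, rate):
    lo = np.interp(eps, strain, stress_lo)   # lookup on the low-rate curve
    hi = np.interp(eps, strain, stress_hi)   # lookup on the high-rate curve
    w = (np.log10(rate) - np.log10(rates[0])) / (np.log10(rates[1]) - np.log10(rates[0]))
    w = min(max(w, 0.0), 1.0)                # clamp outside the tabulated range
    return (1 - w) * lo + w * hi

s = stress(0.010, 1e-2)                      # query between the two rates
```

A real model adds temperature dependence the same way (another table axis) and must additionally verify that the interpolated yield surfaces remain convex.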

  6. molgw 1: Many-body perturbation theory software for atoms, molecules, and clusters

    DOE PAGES

    Bruneval, Fabien; Rangel, Tonatiuh; Hamed, Samia M.; ...

    2016-07-12

    Here, we summarize the MOLGW code that implements density-functional theory and many-body perturbation theory in a Gaussian basis set. The code is dedicated to the calculation of the many-body self-energy within the GW approximation and the solution of the Bethe–Salpeter equation. These two types of calculations allow the user to evaluate physical quantities that can be compared to spectroscopic experiments. Quasiparticle energies, obtained through the calculation of the GW self-energy, can be compared to photoemission or transport experiments, and neutral excitation energies and oscillator strengths, obtained via solution of the Bethe–Salpeter equation, are measurable by optical absorption. The implementation choices outlined here have aimed at the accuracy and robustness of calculated quantities with respect to measurements. Furthermore, the algorithms implemented in MOLGW allow users to consider molecules or clusters containing up to 100 atoms with rather accurate basis sets, and to choose whether or not to apply the resolution-of-the-identity approximation. Finally, we demonstrate the parallelization efficacy of the MOLGW code over several hundred processors.

  7. Parallel implementation of geometrical shock dynamics for two dimensional converging shock waves

    NASA Astrophysics Data System (ADS)

    Qiu, Shi; Liu, Kuang; Eliasson, Veronica

    2016-10-01

    Geometrical shock dynamics (GSD) theory is an appealing method for predicting shock motion in that it is more computationally efficient than solving the traditional Euler equations, especially for converging shock waves. However, to solve and optimize large-scale configurations, the main bottleneck is the computational cost. Among the existing numerical GSD schemes, only one has been implemented on parallel computers, with the purpose of analyzing detonation waves. To extend the computational advantage of GSD theory to more general applications such as converging shock waves, a numerical implementation using a spatial decomposition method has been coupled with a front-tracking approach on parallel computers. In addition, an efficient tridiagonal system solver for massively parallel computers has been applied to the most expensive routine in this implementation, resulting in a parallel efficiency of 0.93 on 32 HPCC cores. Moreover, symmetric boundary conditions have been developed to further reduce the computational cost, achieving a speedup of 19.26 for a 12-sided polygonal converging shock.
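In serial form, the tridiagonal solve mentioned above is the classic Thomas algorithm; a parallel solver distributes this recurrence, but the serial sketch shows what is being solved (a hypothetical toy system, not the GSD discretization):

```python
import numpy as np

# Serial Thomas algorithm: the building block a parallel tridiagonal
# solver distributes across ranks.
def thomas(a, b, c, d):
    """Solve T x = d with sub-diagonal a (a[0] unused), diagonal b,
    super-diagonal c (c[-1] unused)."""
    n = len(b)
    cp = np.empty(n); dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Quick check against a dense solve on a diagonally dominant toy system
n = 8
a = np.full(n, -1.0); b = np.full(n, 2.5); c = np.full(n, -1.0)
d = np.arange(1.0, n + 1)
T = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
x = thomas(a, b, c, d)
```

As a reading aid for the numbers quoted above: parallel efficiency is speedup divided by core count, so an efficiency of 0.93 on 32 cores corresponds to a speedup of about 29.8.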

  8. Advanced capabilities for materials modelling with Quantum ESPRESSO

    NASA Astrophysics Data System (ADS)

    Giannozzi, P.; Andreussi, O.; Brumme, T.; Bunau, O.; Buongiorno Nardelli, M.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; Cococcioni, M.; Colonna, N.; Carnimeo, I.; Dal Corso, A.; de Gironcoli, S.; Delugas, P.; DiStasio, R. A., Jr.; Ferretti, A.; Floris, A.; Fratesi, G.; Fugallo, G.; Gebauer, R.; Gerstmann, U.; Giustino, F.; Gorni, T.; Jia, J.; Kawamura, M.; Ko, H.-Y.; Kokalj, A.; Küçükbenli, E.; Lazzeri, M.; Marsili, M.; Marzari, N.; Mauri, F.; Nguyen, N. L.; Nguyen, H.-V.; Otero-de-la-Roza, A.; Paulatto, L.; Poncé, S.; Rocca, D.; Sabatini, R.; Santra, B.; Schlipf, M.; Seitsonen, A. P.; Smogunov, A.; Timrov, I.; Thonhauser, T.; Umari, P.; Vast, N.; Wu, X.; Baroni, S.

    2017-11-01

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  9. Advanced capabilities for materials modelling with Quantum ESPRESSO.

    PubMed

    Giannozzi, P; Andreussi, O; Brumme, T; Bunau, O; Buongiorno Nardelli, M; Calandra, M; Car, R; Cavazzoni, C; Ceresoli, D; Cococcioni, M; Colonna, N; Carnimeo, I; Dal Corso, A; de Gironcoli, S; Delugas, P; DiStasio, R A; Ferretti, A; Floris, A; Fratesi, G; Fugallo, G; Gebauer, R; Gerstmann, U; Giustino, F; Gorni, T; Jia, J; Kawamura, M; Ko, H-Y; Kokalj, A; Küçükbenli, E; Lazzeri, M; Marsili, M; Marzari, N; Mauri, F; Nguyen, N L; Nguyen, H-V; Otero-de-la-Roza, A; Paulatto, L; Poncé, S; Rocca, D; Sabatini, R; Santra, B; Schlipf, M; Seitsonen, A P; Smogunov, A; Timrov, I; Thonhauser, T; Umari, P; Vast, N; Wu, X; Baroni, S

    2017-10-24

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  10. Advanced capabilities for materials modelling with Quantum ESPRESSO.

    PubMed

    Andreussi, Oliviero; Brumme, Thomas; Bunau, Oana; Buongiorno Nardelli, Marco; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Cococcioni, Matteo; Colonna, Nicola; Carnimeo, Ivan; Dal Corso, Andrea; de Gironcoli, Stefano; Delugas, Pietro; DiStasio, Robert; Ferretti, Andrea; Floris, Andrea; Fratesi, Guido; Fugallo, Giorgia; Gebauer, Ralph; Gerstmann, Uwe; Giustino, Feliciano; Gorni, Tommaso; Jia, Junteng; Kawamura, Mitsuaki; Ko, Hsin-Yu; Kokalj, Anton; Küçükbenli, Emine; Lazzeri, Michele; Marsili, Margherita; Marzari, Nicola; Mauri, Francesco; Nguyen, Ngoc Linh; Nguyen, Huy-Viet; Otero-de-la-Roza, Alberto; Paulatto, Lorenzo; Poncé, Samuel; Giannozzi, Paolo; Rocca, Dario; Sabatini, Riccardo; Santra, Biswajit; Schlipf, Martin; Seitsonen, Ari Paavo; Smogunov, Alexander; Timrov, Iurii; Thonhauser, Timo; Umari, Paolo; Vast, Nathalie; Wu, Xifan; Baroni, Stefano

    2017-09-27

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software. © 2017 IOP Publishing Ltd.

  11. Projected quasiparticle theory for molecular electronic structure

    NASA Astrophysics Data System (ADS)

    Scuseria, Gustavo E.; Jiménez-Hoyos, Carlos A.; Henderson, Thomas M.; Samanta, Kousik; Ellis, Jason K.

    2011-09-01

    We derive and implement symmetry-projected Hartree-Fock-Bogoliubov (HFB) equations and apply them to the molecular electronic structure problem. All symmetries (particle number, spin, spatial, and complex conjugation) are deliberately broken and restored in a self-consistent variation-after-projection approach. We show that the resulting method yields a comprehensive black-box treatment of static correlations with effective one-electron (mean-field) computational cost. The ensuing wave function is of multireference character and permeates the entire Hilbert space of the problem. The energy expression is different from regular HFB theory but remains a functional of an independent quasiparticle density matrix. All reduced density matrices are expressible as an integration of transition density matrices over a gauge grid. We present several proof-of-principle examples demonstrating the compelling power of projected quasiparticle theory for quantum chemistry.

  12. Supersonic wing and wing-body shape optimization using an adjoint formulation

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony

    1995-01-01

    This paper describes the implementation of optimization techniques based on control theory for wing and wing-body design of supersonic configurations. The work represents an extension of our earlier research in which control theory is used to devise a design procedure that significantly reduces the computational cost by employing an adjoint equation. In previous studies it was shown that control theory could be used to devise transonic design methods for airfoils and wings in which the shape and the surrounding body-fitted mesh are both generated analytically, and the control is the mapping function. The method has also been implemented for both transonic potential flows and transonic flows governed by the Euler equations using an alternative formulation which employs numerically generated grids, so that it can treat more general configurations. Here results are presented for three-dimensional design cases subject to supersonic flows governed by the Euler equations.

  13. Massively parallel sparse matrix function calculations with NTPoly

    NASA Astrophysics Data System (ADS)

    Dawson, William; Nakajima, Takahito

    2018-04-01

    We present NTPoly, a massively parallel library for computing functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed-memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm, and OpenMP task parallelization is layered on top to give hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large-scale calculations on the K computer.
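A minimal example of the "matrix function by polynomial iteration" idea (a generic textbook scheme, not NTPoly's own algorithms): McWeeny purification builds a spectral projector, the workhorse of diagonalization-free density-matrix methods, using nothing but matrix multiplications:

```python
import numpy as np

# McWeeny purification: P <- 3P^2 - 2P^3 drives every eigenvalue in [0, 1]
# toward 0 or 1, so P converges to a projector without any diagonalization.
vals = np.array([-2.0, 0.5, 2.5, 4.0])             # known toy spectrum
Q, _ = np.linalg.qr(np.random.default_rng(2).standard_normal((4, 4)))
H = Q @ np.diag(vals) @ Q.T                        # symmetric test matrix

lo, hi = vals.min(), vals.max()
P = (hi * np.eye(4) - H) / (hi - lo)               # spectrum mapped into [0, 1]

for _ in range(30):
    P = 3 * P @ P - 2 * P @ P @ P                  # fixed points at 0 and 1

idem_err = np.linalg.norm(P @ P - P)               # ~0: P is now idempotent
rank = round(np.trace(P))                          # counts eigenvalues of H
                                                   # below the spectral midpoint
```

When H and P stay sparse, each iteration is a sparse-sparse multiply, which is where linear-scaling behavior and the parallel multiplication kernel come in.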

  14. Molecular density functional theory of water describing hydrophobicity at short and long length scales

    NASA Astrophysics Data System (ADS)

    Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel

    2013-10-01

    We present an extension of our recently introduced molecular density functional theory of water [G. Jeanmairet et al., J. Phys. Chem. Lett. 4, 619 (2013)] to the solvation of hydrophobic solutes of various sizes, going from angstroms to nanometers. The theory is based on the quadratic expansion of the excess free energy in terms of two classical density fields: the particle density and the multipolar polarization density. Its implementation requires as input a molecular model of water and three measurable bulk properties, namely, the structure factor and the k-dependent longitudinal and transverse dielectric susceptibilities. The fine three-dimensional water structure around small hydrophobic molecules is found to be well reproduced. In contrast, the computed solvation free energies appear overestimated and do not exhibit the correct qualitative behavior when the hydrophobic solute is grown in size. These shortcomings are corrected, in the spirit of the Lum-Chandler-Weeks theory, by complementing the functional with a truncated hard-sphere functional acting beyond quadratic order in density, and making the resulting functional compatible with the van der Waals theory of liquid-vapor coexistence at long range. Compared to available molecular simulations, the approach yields reasonable solvation structure and free energy of hard or soft spheres of increasing size, with a correct qualitative transition from a volume-driven to a surface-driven regime at the nanometer scale.
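Schematically (prefactors simplified; the exact kernels are given in the cited paper), the quadratic excess functional couples the density field to the structure factor and the polarization field to the two dielectric susceptibilities named in the abstract:

```latex
% Schematic quadratic excess functional in reciprocal space:
% density fluctuations weighted by S(k), longitudinal and transverse
% polarization weighted by the corresponding susceptibilities.
F_{\mathrm{exc}}[\Delta n, \mathbf{P}] \;\approx\; \frac{1}{2}\int \frac{d\mathbf{k}}{(2\pi)^3}
\left[ \frac{k_B T}{n_0\, S(k)}\, \bigl|\Delta\hat n(\mathbf{k})\bigr|^2
     + \frac{\bigl|\hat P_L(\mathbf{k})\bigr|^2}{\chi_L(k)}
     + \frac{\bigl|\hat P_T(\mathbf{k})\bigr|^2}{\chi_T(k)} \right].
```

This makes explicit why only three measurable bulk inputs are needed: S(k), chi_L(k), and chi_T(k) fully determine the quadratic kernels.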

  15. Highly efficient implementation of pseudospectral time-dependent density-functional theory for the calculation of excitation energies of large molecules.

    PubMed

    Cao, Yixiang; Hughes, Thomas; Giesen, Dave; Halls, Mathew D; Goldberg, Alexander; Vadicherla, Tati Reddy; Sastry, Madhavi; Patel, Bhargav; Sherman, Woody; Weisman, Andrew L; Friesner, Richard A

    2016-06-15

    We have developed and implemented pseudospectral time-dependent density-functional theory (TDDFT) in the quantum mechanics package Jaguar to calculate restricted singlet, restricted triplet, and unrestricted excitation energies with either full linear response (FLR) or the Tamm-Dancoff approximation (TDA), with pseudospectral length scales, pseudospectral atomic corrections, and a pseudospectral multigrid strategy included in the implementation to improve chemical accuracy and to speed up the pseudospectral calculations. Calculations based on pseudospectral TDDFT with full linear response (PS-FLR-TDDFT) and within the Tamm-Dancoff approximation (PS-TDA-TDDFT) for G2 set molecules using B3LYP/6-31G** show mean and maximum absolute deviations of 0.0015 eV and 0.0081 eV, 0.0007 eV and 0.0064 eV, and 0.0004 eV and 0.0022 eV for restricted singlet, restricted triplet, and unrestricted excitation energies, respectively, compared with results from the conventional spectral method. Applications of PS-FLR-TDDFT to OLED molecules and organic dyes, together with comparisons between PS-FLR-TDDFT results and best estimates, demonstrate the accuracy of both PS-FLR-TDDFT and PS-TDA-TDDFT. Calculations for a set of medium-sized molecules, including Cn fullerenes and nanotubes, using the B3LYP functional and the 6-31G** basis set show that PS-TDA-TDDFT provides 19- to 34-fold speedups for Cn fullerenes with 450-1470 basis functions, 11- to 32-fold speedups for nanotubes with 660-3180 basis functions, and 9- to 16-fold speedups for organic molecules with 540-1340 basis functions compared to fully analytic calculations, without sacrificing chemical accuracy. Calculations on a set of larger molecules, including the antibiotic drug Ramoplanin, the 46-residue crambin protein, fullerenes up to C540, and nanotubes up to 14×(6,6), using the B3LYP functional and the 6-31G** basis set with up to 8100 basis functions show that PS-FLR-TDDFT CPU time scales as N^2.05 with the number of basis functions. © 2016 Wiley Periodicals, Inc.
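The difference between the two response variants can be seen on toy matrices (random Hermitian blocks, not real TDDFT matrices): full linear response couples excitations and de-excitations through the B block, while the Tamm-Dancoff approximation simply discards B:

```python
import numpy as np

# Full linear response solves the paired problem
#   [[A, B], [-B, -A]] (X, Y) = w (X, Y);
# for symmetric A, B with A - B positive definite this reduces to the
# eigenvalues w^2 of (A - B)(A + B). TDA just diagonalizes A.
rng = np.random.default_rng(3)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # positive-definite "A" block
B = 0.1 * (M + M.T)                  # small symmetric coupling block

w2 = np.linalg.eigvals((A - B) @ (A + B))
w_flr = np.sort(np.sqrt(w2.real))    # full-linear-response excitation energies

w_tda = np.sort(np.linalg.eigvalsh(A))   # Tamm-Dancoff: drop B entirely
```

When B vanishes the two spectra coincide, which is why TDA is a good approximation whenever the de-excitation coupling is weak.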

  16. Neuronal Reward and Decision Signals: From Theories to Data

    PubMed Central

    Schultz, Wolfram

    2015-01-01

    Rewards are crucial objects that induce learning, approach behavior, choices, and emotions. Whereas emotions are difficult to investigate in animals, the learning function is mediated by neuronal reward prediction error signals which implement basic constructs of reinforcement learning theory. These signals are found in dopamine neurons, which emit a global reward signal to striatum and frontal cortex, and in specific neurons in striatum, amygdala, and frontal cortex projecting to select neuronal populations. The approach and choice functions involve subjective value, which is objectively assessed by behavioral choices eliciting internal, subjective reward preferences. Utility is the formal mathematical characterization of subjective value and a prime decision variable in economic choice theory. It is coded as utility prediction error by phasic dopamine responses. Utility can incorporate various influences, including risk, delay, effort, and social interaction. Appropriate for formal decision mechanisms, rewards are coded as object value, action value, difference value, and chosen value by specific neurons. Although all reward, reinforcement, and decision variables are theoretical constructs, their neuronal signals constitute measurable physical implementations and as such confirm the validity of these concepts. The neuronal reward signals provide guidance for behavior while constraining the free will to act. PMID:26109341
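The reward-prediction-error construct from reinforcement learning theory that these dopamine signals are thought to implement can be stated in a few lines. A minimal sketch (states, reward, and learning rate invented for illustration):

```python
# Temporal-difference prediction error:
#   delta = r + gamma * V(s') - V(s)
# The value estimate is nudged by the error on every trial.
gamma = 0.9                       # discount factor
alpha = 0.1                       # learning rate
V = {"cue": 0.0, "reward_state": 0.0}

# Repeated trials: the cue is reliably followed by a reward of r = 1
for _ in range(200):
    r = 1.0
    delta = r + gamma * V["reward_state"] - V["cue"]   # prediction error
    V["cue"] += alpha * delta                          # value update

# After learning, the cue fully predicts the reward, so the error at
# reward delivery shrinks toward zero -- the hallmark dopamine signature.
final_delta = 1.0 + gamma * V["reward_state"] - V["cue"]
```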

  17. Implementing communication and decision-making interventions directed at goals of care: a theory-led scoping review

    PubMed Central

    Cummings, Amanda; Lund, Susi; Campling, Natasha; May, Carl; Richardson, Alison; Myall, Michelle

    2017-01-01

    Objectives To identify the factors that promote and inhibit the implementation of interventions that improve communication and decision-making directed at goals of care in the event of acute clinical deterioration. Design and methods A scoping review was undertaken based on the methodological framework of Arksey and O’Malley for conducting this type of review. Searches were carried out in Medline and Cumulative Index to Nursing and Allied Health Literature (CINAHL) to identify peer-reviewed papers and in Google to identify grey literature. Searches were limited to those published in the English language from 2000 onwards. Inclusion and exclusion criteria were applied, and only papers that had a specific focus on implementation in practice were selected. Data extracted were treated as qualitative and subjected to directed content analysis. A theory-informed coding framework using Normalisation Process Theory (NPT) was applied to characterise and explain implementation processes. Results Searches identified 2619 citations, 43 of which met the inclusion criteria. Analysis generated six themes fundamental to successful implementation of goals of care interventions: (1) input into development; (2) key clinical proponents; (3) training and education; (4) intervention workability and functionality; (5) setting and context; and (6) perceived value and appraisal. Conclusions A broad and diverse literature focusing on implementation of goals of care interventions was identified. Our review recognised these interventions as both complex and contentious in nature, making their incorporation into routine clinical practice dependent on a number of factors. Implementing such interventions presents challenges at individual, organisational and systems levels, which make them difficult to introduce and embed. 
We have identified a series of factors that influence successful implementation, and our analysis has distilled key learning points, conceptualised as a set of propositions that we consider relevant to implementing other complex and contentious interventions. PMID:28988176

  18. Tailoring a psychophysical discrimination experiment upon assessment of the psychometric function: Predictions and results

    NASA Astrophysics Data System (ADS)

    Vilardi, Andrea; Tabarelli, Davide; Ricci, Leonardo

    2015-02-01

    Decision making is a widespread research topic and plays a crucial role in neuroscience as well as in other research and application fields such as biology, medicine and economics. The most basic implementation of decision making, namely binary discrimination, is successfully interpreted by means of signal detection theory (SDT), a statistical model that is deeply linked to physics. An additional, widespread tool to investigate discrimination ability is the psychometric function, which measures the probability of a given response as a function of the magnitude of a physical quantity underlying the stimulus. However, the link between psychometric functions and binary discrimination experiments is often neglected or misinterpreted. The aim of the present paper is to provide a detailed description of an experimental investigation of a prototypical discrimination task and to discuss the results in terms of SDT. To this purpose, we provide an outline of the theory and describe the implementation of two behavioural experiments in the visual modality: upon assessment of the so-called psychometric function, we show how to tailor a binary discrimination experiment on performance and decisional bias, and how to measure these quantities on a statistical basis. Attention is devoted to the evaluation of uncertainties, an aspect that is also often overlooked in the scientific literature.
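
    As an illustration of the signal-detection quantities discussed in this record, the sketch below computes the sensitivity d' and the decision criterion c from hit and false-alarm rates via the inverse normal CDF. This is a generic textbook computation, not the authors' code; the example rates are hypothetical.

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Sensitivity d' and decision criterion c of signal detection theory,
    computed from hit and false-alarm rates via the inverse normal CDF."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# ~84% hits with ~16% false alarms corresponds to d' of about 2 and no bias
d_prime, criterion = sdt_measures(0.8413, 0.1587)
```

    An unbiased observer has a criterion near zero; d' grows with discriminability and is independent of response bias, which is what makes it the preferred performance measure in SDT.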

  19. Dispersion interactions in Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Andrinopoulos, Lampros; Hine, Nicholas; Mostofi, Arash

    2012-02-01

    Semilocal functionals in Density Functional Theory (DFT) achieve high accuracy simulating a wide range of systems, but miss the effect of dispersion (vdW) interactions, important in weakly bound systems. We study two different methods to include vdW in DFT: First, we investigate a recent approach [1] to evaluate the vdW contribution to the total energy using maximally-localized Wannier functions. Using a set of simple dimers, we show that it has a number of shortcomings that hamper its predictive power; we then develop and implement a series of improvements [2] and obtain binding energies and equilibrium geometries in closer agreement with quantum-chemical coupled-cluster calculations. Second, we implement the vdW-DF functional [3], using Soler's method [4], within ONETEP [5], a linear-scaling DFT code, and apply it to a range of systems. This method within a linear-scaling DFT code allows the simulation of weakly bound systems of larger scale, such as organic/inorganic interfaces, biological systems and implicit solvation models. [1] P. Silvestrelli, JPC A 113, 5224 (2009). [2] L. Andrinopoulos et al, JCP 135, 154105 (2011). [3] M. Dion et al, PRL 92, 246401 (2004). [4] G. Román-Pérez, J. M. Soler, PRL 103, 096102 (2009). [5] C. Skylaris et al, JCP 122, 084119 (2005).
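
    For context, a minimal sketch of a pairwise dispersion correction of the -C6/R^6 type with short-range damping, in the spirit of DFT-D-style schemes (not the Wannier-function approach of this record). All parameter values are hypothetical placeholders.

```python
import math

def dispersion_energy(coords, c6, r0, s6=1.0, steepness=20.0):
    """Pairwise -C6/R^6 dispersion correction with a Fermi-type damping
    function that switches the term off below the sum of the vdW radii."""
    energy = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r = math.dist(coords[i], coords[j])
            c6_ij = math.sqrt(c6[i] * c6[j])   # geometric-mean combining rule
            damp = 1.0 / (1.0 + math.exp(-steepness * (r / (r0[i] + r0[j]) - 1.0)))
            energy -= s6 * c6_ij / r**6 * damp
    return energy
```

    The correction is attractive (negative) and decays as the sixth power of the separation; the damping function prevents double counting of short-range correlation already captured by the semilocal functional.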

  20. Here and now: the intersection of computational science, quantum-mechanical simulations, and materials science

    NASA Astrophysics Data System (ADS)

    Marzari, Nicola

    The last 30 years have seen the steady and exhilarating development of powerful quantum-simulation engines for extended systems, dedicated to the solution of the Kohn-Sham equations of density-functional theory, often augmented by density-functional perturbation theory, many-body perturbation theory, time-dependent density-functional theory, dynamical mean-field theory, and quantum Monte Carlo. Their implementation on massively parallel architectures, now leveraging also GPUs and accelerators, has started a massive effort in the prediction from first principles of many or of complex materials properties, leading the way to the exascale through the combination of HPC (high-performance computing) and HTC (high-throughput computing). Challenges and opportunities abound: complementing hardware and software investments and design; developing the materials' informatics infrastructure needed to encode knowledge into complex protocols and workflows of calculations; managing and curating data; resisting the complacency that we have already reached the predictive accuracy needed for materials design, or a robust level of verification of the different quantum engines. In this talk I will provide an overview of these challenges, with the ultimate prize being the computational understanding, prediction, and design of properties and performance for novel or complex materials and devices.

  1. Quantum mechanical/molecular mechanical/continuum style solvation model: time-dependent density functional theory.

    PubMed

    Thellamurege, Nandun M; Cui, Fengchao; Li, Hui

    2013-08-28

    A combined quantum mechanical/molecular mechanical/continuum (QM/MMpol/C) style method is developed for the time-dependent density functional theory (TDDFT, including long-range corrected TDDFT) method, an induced-dipole polarizable force field, and an induced-surface-charge continuum model. Induced dipoles and induced charges are included in the TDDFT equations to solve for the transition energies, relaxed density, and transition density. The analytic gradient is derived and implemented for geometry optimization and molecular dynamics simulation. QM/MMpol/C style DFT and TDDFT methods are used to study the hydrogen bonding of the photoactive yellow protein chromophore in the ground and excited states.

  2. Scalable nuclear density functional theory with Sky3D

    NASA Astrophysics Data System (ADS)

    Afibuzzaman, Md; Schuetrumpf, Bastian; Aktulga, Hasan Metin

    2018-02-01

    In nuclear astrophysics, quantum simulations of large inhomogeneous dense systems as they appear in the crusts of neutron stars present significant challenges. The number of particles in a simulation with periodic boundary conditions is strongly limited due to the immense computational cost of the quantum methods. In this paper, we describe techniques for an efficient and scalable parallel implementation of Sky3D, a nuclear density functional theory solver that operates on an equidistant grid. The presented techniques allow Sky3D to achieve good scaling and high performance on a large number of cores, as demonstrated through detailed performance analysis on a Cray XC40 supercomputer.
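
    To illustrate the kind of data layout a parallel equidistant-grid solver relies on, here is a serial toy sketch of slab decomposition with ghost cells for a 1D finite-difference Laplacian. It is illustrative only, not Sky3D's actual decomposition; in a real MPI code the ghost layers would be filled by halo exchange between ranks.

```python
def laplacian(f, h):
    """Second-order central-difference Laplacian on an equidistant 1D grid
    (interior points only)."""
    return [(f[i - 1] - 2 * f[i] + f[i + 1]) / h**2 for i in range(1, len(f) - 1)]

def decomposed_laplacian(f, h, nslabs):
    """Split the interior points into slabs, attach one ghost cell on each
    side of every slab, apply the stencil slab by slab, and concatenate."""
    n = len(f)
    bounds = [(k * (n - 2) // nslabs + 1, (k + 1) * (n - 2) // nslabs + 1)
              for k in range(nslabs)]
    out = []
    for lo, hi in bounds:
        local = f[lo - 1:hi + 1]      # slab plus ghost layers
        out.extend(laplacian(local, h))
    return out
```

    Because the stencil only needs nearest neighbours, each slab can be processed independently once its ghost cells are current, which is what makes this layout scale.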

  3. Collaborative action around implementation in Collaborations for Leadership in Applied Health Research and Care: towards a programme theory.

    PubMed

    Rycroft-Malone, Jo; Wilkinson, Joyce; Burton, Christopher R; Harvey, Gill; McCormack, Brendan; Graham, Ian; Staniszewska, Sophie

    2013-10-01

    In theory, greater interaction between researchers and practitioners should result in increased potential for implementation. However, we know little about whether this is the case, or what mechanisms might operate to make it happen. This paper reports findings from a study that is identifying and tracking implementation mechanisms, processes, influences and impacts in real time, over time in the Collaborations for Leadership in Applied Health Research and Care (CLAHRCs). This is a longitudinal, realist evaluation case study. The development of the conceptual framework and initial hypotheses involved literature reviewing and stakeholder consultation. Primary data were collected through interviews, observations and documents within three CLAHRCs, and analysed thematically against the framework and hypotheses. The first round of data collection shows that the mechanisms of collaborative action, relationship building, engagement, motivation, knowledge exchange and learning are important to the processes and outcomes of CLAHRCs' activity, including their capacity for implementation. These mechanisms operated in different contexts such as competing agendas, availability of resources and the CLAHRCs' brand. Contexts and mechanisms result in different impacts, including the CLAHRCs' approach to implementation, quality of collaboration, commitment and ownership, and degree of sharing and managing knowledge. Emerging features of a middle-range theory of implementation within collaboration include alignment in organizational structures and cognitive processes, history of partnerships, and responsiveness and resilience in rapidly changing contexts. CLAHRCs' potential to mobilize knowledge may be further realized by how they develop insights into their function as collaborative entities.

  4. Optimal policy for value-based decision-making.

    PubMed

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-08-18

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
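
    The collapsing-boundary drift diffusion model described in this record can be illustrated with a simple simulation: evidence accumulates noisily toward one of two bounds that shrink over time. This is an illustrative toy, not the authors' implementation; all parameter values are hypothetical.

```python
import math, random

def ddm_trial(drift, bound0=1.0, tau=1.0, dt=1e-3, noise=1.0, rng=random):
    """One drift-diffusion trial with exponentially collapsing bounds.
    Returns (choice, reaction_time); choice is +1/-1 for the upper/lower bound."""
    x, t = 0.0, 0.0
    while True:
        bound = bound0 * math.exp(-t / tau)   # decision boundary collapses over time
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
        # Euler-Maruyama step of the diffusion process
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
```

    With a positive drift the upper bound is the "correct" response; the collapsing bound guarantees every trial terminates, trading accuracy on slow trials for faster decisions, which is the behaviour the optimal policy requires.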

  5. Optimal policy for value-based decision-making

    PubMed Central

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-01-01

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down. PMID:27535638

  6. A G0W0-based many-body calculation method: Study of electron-phonon coupling in C60 and development of an accelerated approach for organic materials

    NASA Astrophysics Data System (ADS)

    Laflamme Janssen, Jonathan

    This thesis studies the limitations of density functional theory. These limits are explored in the context of a traditional implementation using a plane-wave basis set. First, we investigate the limit on the size of the systems that can be treated. Cutting-edge methods that address these limitations are then used to simulate nanoscale systems. More specifically, the grafting of bromophenyl molecules on the sidewall of carbon nanotubes is studied with these methods, as a better understanding of this procedure could have a substantial impact on the electronics industry. Second, the limitations of the precision of density functional theory are explored. We begin with a quantitative study of the uncertainty of this method for the case of electron-phonon coupling calculations and find it to be substantially higher than is widely presumed in the literature. The uncertainty of electron-phonon coupling calculations is then explored within the G0W0 method, which is found to be a substantially more precise alternative. However, this method has the drawback of being severely limited in the size of systems that can be computed. In the following, theoretical solutions to overcome these limitations are developed and presented. The increased performance and precision of the resulting implementation opens new possibilities for the study and design of materials, such as superconductors, polymers for organic photovoltaics and semiconductors. Keywords: Condensed matter physics, ab initio calculations, density functional theory, nanotechnology, carbon nanotubes, many-body perturbation theory, G0W0 method.

  7. Balancing power: A grounded theory study on partnership of academic service institutes

    PubMed Central

    HESHMATI NABAVI, FATEMEH; VANAKI, ZOHREH; MOHAMMADI, EESA; YAZDANI, SHAHRAM

    2017-01-01

    Introduction: Governments and professional organizations have called for new partnerships between health care providers and academics to improve clinical education for the benefit of both students and patients. This study aimed to develop a substantive grounded theory on the process of forming academic-service partnerships in implementing clinical education, from the perspective of academic and clinical nursing staff members and managers working in Iranian settings. Methods: The participants included 15 hospital nurses, nurse managers, nurse educators, and educational managers from two central universities and clinical settings from 2009 to 2012. Data were collected through 30 in-depth, semi-structured interviews with the individual participants and then analyzed using the methodology of Strauss and Corbin's grounded theory. Results: Utilizing “balancing power” as the core variable enabled us to integrate the concepts concerning the partnership processes between clinical and educational institutes. Three distinct and significant categories emerged to explain the process of partnership: 1) divergence, 2) conflict between educational and caring functions, and 3) creation of balance between educational and caring functions. Conclusions: In implementing clinical education, partnerships have been formed within a challenging context in Iran. Conflict between clinical and educational functions was the main concern of both sides of the partnership in forming a collaborative relationship, with our findings emphasizing the importance of nursing educators' role in the establishment of partnership programs. PMID:28761886

  8. The optimized effective potential and the self-interaction correction in density functional theory: Application to molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garza, Jorge; Nichols, Jeffrey A.; Dixon, David A.

    2000-05-08

    The Krieger, Li, and Iafrate approximation to the optimized effective potential including the self-interaction correction for density functional theory has been implemented in a molecular code, NWChem, that uses Gaussian functions to represent the Kohn and Sham spin-orbitals. The differences between the implementation of the self-interaction correction in codes where planewaves are used with an optimized effective potential are discussed. The importance of the localization of the spin-orbitals to maximize the exchange-correlation of the self-interaction correction is discussed. We carried out exchange-only calculations to compare the results obtained with these approximations, and those obtained with the local spin density approximation, the generalized gradient approximation and Hartree-Fock theory. Interesting results for the energy difference (GAP) between the highest occupied molecular orbital, HOMO, and the lowest unoccupied molecular orbital, LUMO, (spin-orbital energies of closed shell atoms and molecules) using the optimized effective potential and the self-interaction correction have been obtained. The effect of the diffuse character of the basis set on the HOMO and LUMO eigenvalues at the various levels is discussed. Total energies obtained with the optimized effective potential and the self-interaction correction show that the exchange energy with these approximations is overestimated and this will be an important topic for future work. (c) 2000 American Institute of Physics.

  9. Use of theory to plan or evaluate guideline implementation among physicians: a scoping review.

    PubMed

    Liang, Laurel; Bernhardsson, Susanne; Vernooij, Robin W M; Armstrong, Melissa J; Bussières, André; Brouwers, Melissa C; Gagliardi, Anna R

    2017-02-27

    Guidelines support health care decision-making and high quality care and outcomes. However, their implementation is sub-optimal. Theory-informed, tailored implementation is associated with guideline use. Few guideline implementation studies published up to 1998 employed theory. This study aimed to describe if and how theory is now used to plan or evaluate guideline implementation among physicians. A scoping review was conducted. MEDLINE, EMBASE, and The Cochrane Library were searched from 2006 to April 2016. English language studies that planned or evaluated guideline implementation targeted to physicians based on explicitly named theory were eligible. Screening and data extraction were done in duplicate. Study characteristics and details about theory use were analyzed. A total of 1244 published reports were identified, 891 were unique, and 716 were excluded based on title and abstract. Among 175 full-text articles, 89 planned or evaluated guideline implementation targeted to physicians; 42 (47.2%) were based on theory and included. The number of studies using theory increased yearly and represented a wide array of countries, guideline topics and types of physicians. The Theory of Planned Behavior (38.1%) and the Theoretical Domains Framework (23.8%) were used most frequently. Many studies rationalized choice of theory (83.3%), most often by stating that the theory described implementation or its determinants, but most failed to explicitly link barriers with theoretical constructs. The majority of studies used theory to inform surveys or interviews that identified barriers of guideline use as a preliminary step in implementation planning (76.2%). All studies that evaluated interventions reported a positive impact on physician or patient outcomes.
While the use of theory to design or evaluate interventions appears to be increasing over time, this review found that one half of guideline implementation studies were based on theory and many of those provided scant details about how theory was used. This limits interpretation and replication of those interventions, and seems to result in multifaceted interventions, which may not be feasible outside of scientific investigation. Further research is needed to better understand how to employ theory in guideline implementation planning or evaluation.

  10. Autism as a neural systems disorder: a theory of frontal-posterior underconnectivity.

    PubMed

    Just, Marcel Adam; Keller, Timothy A; Malave, Vicente L; Kana, Rajesh K; Varma, Sashank

    2012-04-01

    The underconnectivity theory of autism attributes the disorder to lower anatomical and functional systems connectivity between frontal and more posterior cortical processing. Here we review evidence for the theory and present a computational model of an executive functioning task (Tower of London) implementing the assumptions of underconnectivity. We make two modifications to a previous computational account of performance and brain activity in typical individuals in the Tower of London task (Newman et al., 2003): (1) the communication bandwidth between frontal and parietal areas was decreased and (2) the posterior centers were endowed with more executive capability (i.e., more autonomy, an adaptation is proposed to arise in response to the lowered frontal-posterior bandwidth). The autism model succeeds in matching the lower frontal-posterior functional connectivity (lower synchronization of activation) seen in fMRI data, as well as providing insight into behavioral response time results. The theory provides a unified account of how a neural dysfunction can produce a neural systems disorder and a psychological disorder with the widespread and diverse symptoms of autism. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Autism as a neural systems disorder: A theory of frontal-posterior underconnectivity

    PubMed Central

    Just, Marcel Adam; Keller, Timothy A.; Malave, Vicente L.; Kana, Rajesh K.; Varma, Sashank

    2012-01-01

    The underconnectivity theory of autism attributes the disorder to lower anatomical and functional systems connectivity between frontal and more posterior cortical processing. Here we review evidence for the theory and present a computational model of an executive functioning task (Tower of London) implementing the assumptions of underconnectivity. We make two modifications to a previous computational account of performance and brain activity in typical individuals in the Tower of London task (Newman et al., 2003): (1) the communication bandwidth between frontal and parietal areas was decreased and (2) the posterior centers were endowed with more executive capability (i.e., more autonomy, an adaptation is proposed to arise in response to the lowered frontal-posterior bandwidth). The autism model succeeds in matching the lower frontal-posterior functional connectivity (lower synchronization of activation) seen in fMRI data, as well as providing insight into behavioral response time results. The theory provides a unified account of how a neural dysfunction can produce a neural systems disorder and a psychological disorder with the widespread and diverse symptoms of autism. PMID:22353426

  12. Resonant-convergent PCM response theory for the calculation of second harmonic generation in makaluvamines A-V: pyrroloiminoquinone marine natural products from poriferans of genus Zyzzya.

    PubMed

    Milne, Bruce F; Norman, Patrick

    2015-05-28

    The first-order hyperpolarizability, β, has been calculated for a group of marine natural products, the makaluvamines. These compounds possess a common cationic pyrroloiminoquinone structure that is substituted to varying degrees. Calculations at the MP2 level indicate that makaluvamines possessing phenolic side chains conjugated with the pyrroloiminoquinone moiety display large β values, while breaking this conjugation leads to a dramatic decrease in the calculated hyperpolarizability. This is consistent with a charge-transfer donor-π-acceptor (D-π-A) structure type, characteristic of nonlinear optical chromophores. Dynamic hyperpolarizabilities calculated using resonance-convergent time-dependent density functional theory coupled to polarizable continuum model (PCM) solvation suggest that significant resonance enhancement effects can be expected for incident radiation with wavelengths around 800 nm. The results of the current work suggest that the pyrroloiminoquinone moiety represents a potentially useful new chromophore subunit, in particular for the development of molecular probes for biological imaging. The introduction of solvent-solute interactions in the theory is conventionally made in a density matrix formalism, and the present work provides a detailed account of the approximations that need to be introduced in wave function theory and in our program implementation. The program implementation as such is achieved by a mere combination of existing modules from previous developments and is only briefly reviewed here.

  13. First-principles calculations of lattice dynamics and thermal properties of polar solids

    DOE PAGES

    Wang, Yi; Shang, Shun -Li; Fang, Huazhi; ...

    2016-05-13

    Although the theory of lattice dynamics was established six decades ago, its accurate implementation for polar solids using the direct (or supercell, small displacement, frozen phonon) approach within the framework of density-functional-theory-based first-principles calculations had been a challenge until recently. It arises from the fact that the vibration-induced polarization breaks the lattice periodicity, whereas periodic boundary conditions are required by typical first-principles calculations, leading to an artificial macroscopic electric field. This article reviews a mixed-space approach to treating the interactions between lattice vibration and polarization, its applications to accurately predicting the phonon and associated thermal properties, and its implementations in a number of existing phonon codes.

  14. An atomic orbital based real-time time-dependent density functional theory for computing electronic circular dichroism band spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goings, Joshua J.; Li, Xiaosong, E-mail: xsli@uw.edu

    2016-06-21

    One of the challenges of interpreting electronic circular dichroism (ECD) band spectra is that different states may have different rotatory strength signs, determined by their absolute configuration. If the states are closely spaced and opposite in sign, observed transitions may be washed out by nearby states, unlike absorption spectra where transitions are always positive and additive. To accurately compute ECD bands, it is necessary to compute a large number of excited states, which may be prohibitively costly if one uses the linear-response time-dependent density functional theory (TDDFT) framework. Here we implement a real-time, atomic-orbital-based TDDFT method for computing the entire ECD spectrum simultaneously. The method is advantageous for large systems with a high density of states. In contrast to previous implementations based on real-space grids, the method is variational, independent of nuclear orientation, and does not rely on pseudopotential approximations, making it suitable for computation of chiroptical properties well into the X-ray regime.
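
    The core idea of a real-time approach, recovering all transitions at once from the Fourier transform of a time-dependent signal, can be illustrated with a toy damped-oscillator "dipole" signal. This is a generic sketch, not the authors' method; the frequencies, damping and units are hypothetical.

```python
import cmath, math

def spectrum(signal, dt, omegas):
    """Magnitude of the Fourier transform of a sampled real time signal,
    evaluated at the given angular frequencies."""
    return [abs(sum(s * cmath.exp(1j * w * k * dt) for k, s in enumerate(signal))) * dt
            for w in omegas]

# Toy dipole signal: two damped oscillations at frequencies 0.3 and 0.8
dt, nsteps, gamma = 0.1, 2000, 0.01
dipole = [math.exp(-gamma * k * dt) * (math.sin(0.3 * k * dt) + 0.5 * math.sin(0.8 * k * dt))
          for k in range(nsteps)]
omegas = [0.01 * n for n in range(1, 120)]
strengths = spectrum(dipole, dt, omegas)
peak = omegas[strengths.index(max(strengths))]   # strongest peak at the 0.3 oscillation
```

    A single time propagation yields the whole spectrum, whereas a linear-response calculation would need one eigenvalue per transition; that is the advantage for systems with a high density of states.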

  15. Plane-wave pseudopotential implementation and performance of SCAN meta-GGA exchange-correlation functional for extended systems

    NASA Astrophysics Data System (ADS)

    Yao, Yi; Kanai, Yosuke

    2017-06-01

    We present the implementation and performance of the strongly constrained and appropriately normed (SCAN) meta-GGA exchange-correlation (XC) approximation in the plane-wave pseudopotential (PW-PP) formalism using the Troullier-Martins pseudopotential scheme. We studied its performance by applying the PW-PP implementation to several practical applications of interest in condensed matter sciences: (a) crystalline silicon and germanium, (b) martensitic phase transition energetics of phosphorene, and (c) a single water molecule physisorbed on a graphene sheet. Given the much-improved accuracy over GGA functionals and its relatively low computational cost compared to hybrid XC functionals, the SCAN functional is highly promising for various practical applications of density functional theory calculations for condensed matter systems. At the same time, the SCAN meta-GGA functional appears to require more careful attention to numerical details. The meta-GGA functional shows a more significant dependence on the fast Fourier transform grid, which is used for evaluating the XC potential in real space in the PW-PP formalism, than more conventional GGA functionals do. Additionally, using pseudopotentials that are generated at a different/lower level of XC approximation could introduce noticeable errors in calculating some properties such as phase transition energetics.

  16. Elaborating on theory with middle managers' experience implementing healthcare innovations in practice.

    PubMed

    Birken, Sarah A; DiMartino, Lisa D; Kirk, Meredith A; Lee, Shoou-Yih D; McClelland, Mark; Albert, Nancy M

    2016-01-04

    The theory of middle managers' role in implementing healthcare innovations hypothesized that middle managers influence implementation effectiveness by fulfilling the following four roles: diffusing information, synthesizing information, mediating between strategy and day-to-day activities, and selling innovation implementation. The theory also suggested several activities in which middle managers might engage to fulfill the four roles. The extent to which the theory aligns with middle managers' experience in practice is unclear. We surveyed middle managers (n = 63) who attended a nursing innovation summit to (1) assess alignment between the theory and middle managers' experience in practice and (2) elaborate on the theory with examples from middle managers' experience overseeing innovation implementation in practice. Middle managers rated all of the theory's hypothesized four roles as "extremely important" but ranked diffusing and synthesizing information as the most important and selling innovation implementation as the least important. They reported engaging in several activities that were consistent with the theory's hypothesized roles and activities, such as diffusing information via meetings and training. They also reported engaging in activities not described in the theory, such as appraising employee performance. Middle managers' experience aligned well with the theory and expanded definitions of the roles and activities that it hypothesized. Future studies should assess the relationship between hypothesized roles and the effectiveness with which innovations are implemented in practice. If evidence supports the theory, it should be leveraged to promote the fulfillment of hypothesized roles among middle managers; doing so may promote innovation implementation.

  17. Verification of floating-point software

    NASA Technical Reports Server (NTRS)

    Hoover, Doug N.

    1990-01-01

    Floating point computation presents a number of problems for formal verification. Should one treat the actual details of floating point operations, accept them as imprecisely defined, or ignore round-off error altogether and behave as if floating point operations were perfectly accurate? There is the further problem that a numerical algorithm usually only approximately computes some mathematical function, and we often do not know just how good the approximation is, even in the absence of round-off error. ORA has developed a theory of asymptotic correctness which allows one to verify floating point software with minimal entanglement in these problems. This theory and its implementation in the Ariel C verification system are described. The theory is illustrated using a simple program which finds a zero of a given function by bisection. This paper is presented in viewgraph form.
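
    The bisection example this record mentions can be sketched as follows (a generic Python sketch rather than the paper's verified C program). The comment states the standard convergence bound, which is exactly the kind of idealized property that round-off error perturbs and that asymptotic correctness reasons about.

```python
def bisect(f, a, b, tol=1e-12):
    """Find a zero of f in [a, b] by bisection; f(a) and f(b) must differ
    in sign. Each step halves the bracket, so after n steps the error is
    at most (b - a) / 2**n, ignoring round-off."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m                  # zero lies in the left half-bracket
        else:
            a, fa = m, f(m)        # zero lies in the right half-bracket
    return (a + b) / 2

root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)   # converges to sqrt(2)
```

    In exact arithmetic the returned value approaches the true zero as tol shrinks; with floating point, the achievable accuracy is bounded by the spacing of representable numbers near the root.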

  18. Modeling of Complex Coupled Fluid-Structure Interaction Systems in Arbitrary Water Depth

    DTIC Science & Technology

    2009-01-01

    basin. For the particle finite-element method (PFEM) near-field fluid model we completed: (4) the development of a fully-coupled fluid/flexible...method (PFEM)-based framework for the ALE-RANS solver [1]. We presented the theory of ALE-RANS with a k- turbulence closure model and several numerical...implemented by PFEM (Task (4)). In this work a universal wall function (UWF) is introduced and implemented to more accurately predict the boundary

  19. Ab initio theory of the N2V defect in diamond for quantum memory implementation

    NASA Astrophysics Data System (ADS)

    Udvarhelyi, Péter; Thiering, Gergő; Londero, Elisa; Gali, Adam

    2017-10-01

    The N2V defect in diamond is characterized by means of ab initio methods relying on density-functional-theory-calculated parameters of a Hubbard model Hamiltonian. It is shown that this approach appropriately describes the energy levels of the correlated excited states induced by this defect. By determining its critical magneto-optical parameters, we propose to realize a long-lived quantum memory with the N2V defect, i.e., the H3 color center in diamond.

  20. Matrix thermalization

    NASA Astrophysics Data System (ADS)

    Craps, Ben; Evnin, Oleg; Nguyen, Kévin

    2017-02-01

    Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.

  1. Perfect Undetectable Acoustic Device from Fabry-Pérot Resonances

    NASA Astrophysics Data System (ADS)

    Chen, Huanyang; Zhou, Yangyang; Zhou, Mengying; Xu, Lin; Liu, Qing Huo

    2018-02-01

    Transformation acoustics is a method to design novel acoustic devices, while the complexity of the material parameters hinders its progress. In this paper, we analytically present a three-dimensional perfect undetectable acoustic device from Fabry-Pérot resonances and confirm its functionality from Mie theory. Such a mechanism goes beyond the traditional transformation acoustics. In addition, such a reduced version can be realized by holey-structured metamaterials. Our theory paves a way to the implementation of three-dimensional transformation acoustic devices.

  2. Ab initio calculation of resonant Raman intensities of transition metal dichalcogenides

    NASA Astrophysics Data System (ADS)

    Miranda, Henrique; Reichardt, Sven; Molina-Sanchez, Alejandro; Wirtz, Ludger

    Raman spectroscopy is used to characterize optical and vibrational properties of materials. Its computational simulation is important for the interpretation of experimental results. Two common approaches are the bond polarizability model and density functional perturbation theory. However, both are known not to capture resonance effects. These resonances and quantum interference effects are important to correctly reproduce the intensities as a function of laser energy, as reported, e.g., for the case of multi-layer MoTe2. We present two fully ab initio approaches that overcome this limitation. In the first, we calculate finite-difference derivatives of the dielectric susceptibility with respect to the phonon displacements. In the second, we calculate electron-light and electron-phonon matrix elements from density functional theory and use them to evaluate expressions for the Raman intensity derived from time-dependent perturbation theory. These expressions are implemented in a computer code that performs the calculations as a post-processing step. We compare both methods and study the case of triple-layer MoTe2. Supported by the Luxembourg National Research Fund (FNR).
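
    The first approach amounts to differentiating the dielectric susceptibility numerically with respect to the phonon coordinate. A minimal sketch, using a hypothetical polynomial model for chi(u) in place of an actual DFT susceptibility:

```python
def raman_activity(chi, u0, du=1e-4):
    """Central finite-difference derivative d(chi)/du at phonon amplitude u0;
    |d(chi)/du|**2 is proportional to the (non-resonant) Raman intensity."""
    dchi = (chi(u0 + du) - chi(u0 - du)) / (2.0 * du)
    return dchi * dchi

# Hypothetical model susceptibility chi(u); a real calculation would take
# chi from DFT runs at geometries displaced along the phonon eigenvector.
model_chi = lambda u: 2.0 + 0.3 * u + 0.05 * u * u
activity = raman_activity(model_chi, 0.0)
```

The central difference is exact for this quadratic model; in practice the step du must balance truncation against the numerical noise of the underlying electronic-structure calculation.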

  3. Energy–density functional plus quasiparticle–phonon model theory as a powerful tool for nuclear structure and astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsoneva, N., E-mail: Nadia.Tsoneva@theo.physik.uni-giessen.de; Lenske, H.

    During the last decade, a theoretical method based on energy-density functional theory and the quasiparticle-phonon model, including up to three-phonon configurations, was developed. The main advantages of the method are that it incorporates a self-consistent mean-field and multi-configuration mixing, which are found to be of crucial importance for systematic investigations of nuclear low-energy excitations, pygmy and giant resonances in a unified way. In particular, the theoretical approach has proven very successful in the prediction of new modes of excitation, namely the pygmy quadrupole resonance, which has lately also been observed experimentally. Recently, our microscopically obtained dipole strength functions have been implemented in predictions of nucleon-capture reaction rates of astrophysical importance. A comparison to available experimental data is discussed.

  4. Axial postbuckling analysis of multilayer functionally graded composite nanoplates reinforced with GPLs based on nonlocal strain gradient theory

    NASA Astrophysics Data System (ADS)

    Sahmani, S.; Aghdam, M. M.

    2017-11-01

    In this paper, a new size-dependent inhomogeneous plate model is constructed to analyze the nonlinear buckling and postbuckling characteristics of multilayer functionally graded composite nanoplates reinforced with graphene platelet (GPL) nanofillers under axial compressive load. To this purpose, the nonlocal strain gradient theory of elasticity is implemented into a refined hyperbolic shear deformation plate theory. The mechanical properties of the multilayer graphene platelet-reinforced composite (GPLRC) nanoplates are evaluated based upon the Halpin-Tsai micromechanical scheme. The weight fraction of randomly dispersed GPLs either remains constant in each individual layer, which results in a U-GPLRC nanoplate, or changes layerwise in accordance with three different functionally graded patterns, which yield X-GPLRC, O-GPLRC and A-GPLRC nanoplates. Via a two-stepped perturbation technique, explicit analytical expressions for nonlocal strain gradient stability paths are established for layerwise functionally graded GPLRC nanoplates. It is demonstrated that both the nonlocal and strain gradient size dependencies are more significant for multilayer GPLRC nanoplates filled with GPL nanofillers of higher length-to-thickness and width-to-thickness ratios.
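
    The Halpin-Tsai estimate of a GPL-reinforced layer's modulus can be sketched as follows. This assumes the geometry factors xi_L = 2l/t and xi_T = 2w/t and the 3/8-5/8 longitudinal/transverse weighting commonly used for platelet fillers; the numerical values in the usage line are illustrative, not taken from the paper:

```python
def halpin_tsai_modulus(E_m, E_gpl, V_gpl, length, width, thickness):
    """Effective Young's modulus of a GPL-reinforced layer via the
    Halpin-Tsai scheme (sketch).  E_m, E_gpl: matrix and filler moduli;
    V_gpl: filler volume fraction; length/width/thickness: platelet geometry."""
    xi_L = 2.0 * length / thickness          # longitudinal geometry factor
    xi_T = 2.0 * width / thickness           # transverse geometry factor
    eta_L = (E_gpl / E_m - 1.0) / (E_gpl / E_m + xi_L)
    eta_T = (E_gpl / E_m - 1.0) / (E_gpl / E_m + xi_T)
    E_L = E_m * (1.0 + xi_L * eta_L * V_gpl) / (1.0 - eta_L * V_gpl)
    E_T = E_m * (1.0 + xi_T * eta_T * V_gpl) / (1.0 - eta_T * V_gpl)
    return 0.375 * E_L + 0.625 * E_T         # 3/8, 5/8 weighting

# Illustrative epoxy-like matrix with ~1 TPa graphene platelets
E_eff = halpin_tsai_modulus(E_m=3.0e9, E_gpl=1.01e12, V_gpl=0.01,
                            length=2.5e-6, width=1.5e-6, thickness=1.5e-9)
```

Evaluating this layer by layer with a layerwise-varying V_gpl produces the U-, X-, O- and A-GPLRC distributions described above.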

  5. Large-Scale Hybrid Density Functional Theory Calculations in the Condensed-Phase: Ab Initio Molecular Dynamics in the Isobaric-Isothermal Ensemble

    NASA Astrophysics Data System (ADS)

    Ko, Hsin-Yu; Santra, Biswajit; Distasio, Robert A., Jr.; Wu, Xifan; Car, Roberto

    Hybrid functionals are known to alleviate the self-interaction error in density functional theory (DFT) and provide a more accurate description of the electronic structure of molecules and materials. However, hybrid DFT in the condensed phase has a prohibitively high computational cost, which limits its applicability to large systems of interest. In this work, we present a general-purpose order-N implementation of hybrid DFT in the condensed phase using maximally localized Wannier functions; this implementation is optimized for massively parallel computing architectures. This algorithm is used to perform large-scale ab initio molecular dynamics simulations of liquid water, ice, and aqueous ionic solutions. We have performed simulations in the isothermal-isobaric ensemble to quantify the effects of exact exchange on the equilibrium density properties of water at different thermodynamic conditions. We find that the anomalous density difference between ice Ih and liquid water at ambient conditions as well as the enthalpy differences between the ice Ih, II, and III phases at the experimental triple point (238 K and 20 kbar) are significantly improved using hybrid DFT over previous estimates using the lower rungs of DFT. This work has been supported by the Department of Energy under Grants No. DE-FG02-05ER46201 and DE-SC0008626.

  6. Slater-type geminals in explicitly-correlated perturbation theory: application to n-alkanols and analysis of errors and basis-set requirements.

    PubMed

    Höfener, Sebastian; Bischoff, Florian A; Glöss, Andreas; Klopper, Wim

    2008-06-21

    In recent years, Slater-type geminals (STGs) have been used with great success to expand the first-order wave function in an explicitly-correlated perturbation theory. The present work reports on this theory's implementation in the framework of the Turbomole suite of programs. A formalism is presented for evaluating all of the necessary molecular two-electron integrals by means of the Obara-Saika recurrence relations, which can be applied when the STG is expressed as a linear combination of a small number (n) of Gaussians (STG-nG geminal basis). In the Turbomole implementation of the theory, density fitting is employed and a complementary auxiliary basis set (CABS) is used for the resolution-of-the-identity (RI) approximation of explicitly-correlated theory. By virtue of this RI approximation, the calculation of molecular three- and four-electron integrals is avoided. An approximation is invoked to avoid the two-electron integrals over the commutator between the operators of kinetic energy and the STG. This approximation consists of computing commutators between matrices in place of operators. Integrals over commutators between operators would have occurred if the theory had been formulated and implemented as originally proposed. The new implementation in Turbomole was tested by performing a series of calculations on rotational conformers of the alkanols n-propanol through n-pentanol. Basis-set requirements concerning the orbital basis, the auxiliary basis set for density fitting and the CABS were investigated. Furthermore, various (constrained) optimizations of the amplitudes of the explicitly-correlated double excitations were studied. These amplitudes can be optimized in orbital-variant and orbital-invariant manners, or they can be kept fixed at the values governed by the rational generator approach, that is, by the electron cusp conditions.
Electron-correlation effects beyond the level of second-order perturbation theory were accounted for by conventional coupled-cluster calculations with single, double and perturbative triple excitations [CCSD(T)]. The explicitly-correlated perturbation theory results were combined with CCSD(T) results and compared with literature data obtained by basis-set extrapolation.

  7. Real World Cognitive Multi-Tasking and Problem Solving: A Large Scale Cognitive Architecture Simulation Through High Performance Computing-Project Casie

    DTIC Science & Technology

    2008-03-01

    computational version of the CASIE architecture serves to demonstrate the functionality of our primary theories. However, implementation of several other...following facts. First, based on Theorem 3 and Theorem 5, the objective function is non-increasing under updating rule (6); second, by the criteria for...reassignment in updating rule (7), it is trivial to show that the objective function is non-increasing under updating rule (7). A Unified View to Graph

  8. Separated-pair independent particle model and the generalized Brillouin theorem: ab initio calculations on the dissociation of polyatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundberg, Kenneth Randall

    1976-01-01

    A method is developed to optimize the separated-pair independent particle (SPIP) wave function; it is a special case of the separated-pair theory obtained by using two-term natural expansions of the geminals. The orbitals are optimized by a theory based on the generalized Brillouin theorem and iterative configuration interaction (CI) calculations in the space of the SPIP function and its single excitations. The geminal expansion coefficients are optimized by serial 2 x 2 CI calculations. Formulas are derived for the matrix elements. An algorithm to implement the method is presented, and the work needed to evaluate the molecular integrals is discussed.

  9. Multiphoton ionization of many-electron atoms and highly-charged ions in intense laser fields: a relativistic time-dependent density functional theory approach

    NASA Astrophysics Data System (ADS)

    Tumakov, Dmitry A.; Telnov, Dmitry A.; Maltsev, Ilia A.; Plunien, Günter; Shabaev, Vladimir M.

    2017-10-01

    We develop an efficient numerical implementation of the relativistic time-dependent density functional theory (RTDDFT) to study multielectron highly-charged ions subject to intense linearly-polarized laser fields. The interaction with the electromagnetic field is described within the electric dipole approximation. The resulting time-dependent relativistic Kohn-Sham (RKS) equations possess an axial symmetry and are solved accurately and efficiently with the help of the time-dependent generalized pseudospectral method. As a case study, we calculate multiphoton ionization probabilities of the neutral argon atom and argon-like xenon ion. Relativistic effects are assessed by comparison of our present results with existing non-relativistic data.

  10. Canonical-ensemble extended Lagrangian Born-Oppenheimer molecular dynamics for the linear scaling density functional theory.

    PubMed

    Hirakawa, Teruo; Suzuki, Teppei; Bowler, David R; Miyazaki, Tsuyoshi

    2017-10-11

    We discuss the development and implementation of a constant temperature (NVT) molecular dynamics scheme that combines the Nosé-Hoover chain thermostat with the extended Lagrangian Born-Oppenheimer molecular dynamics (BOMD) scheme, using a linear scaling density functional theory (DFT) approach. An integration scheme for this canonical-ensemble extended Lagrangian BOMD is developed and discussed in the context of the Liouville operator formulation. Linear scaling DFT canonical-ensemble extended Lagrangian BOMD simulations are tested on bulk silicon and silicon carbide systems to evaluate our integration scheme. The results show that the conserved quantity remains stable with no systematic drift even in the presence of the thermostat.
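
    The conserved-quantity check reported above can be illustrated on a toy system. The following is a hedged sketch: velocity-Verlet integration of a unit-mass harmonic oscillator (standing in for the BOMD forces and without the thermostat), with the total energy monitored for systematic drift:

```python
def verlet_energy_trace(x, v, dt, steps):
    """Velocity-Verlet for a unit-mass harmonic oscillator (a toy stand-in
    for BOMD forces).  Returns the total energy after each step so the
    'conserved quantity' can be inspected for drift."""
    energies = []
    for _ in range(steps):
        a = -x                              # force/mass for V(x) = x**2 / 2
        x = x + v * dt + 0.5 * a * dt * dt  # position update
        a_new = -x
        v = v + 0.5 * (a + a_new) * dt      # velocity update with averaged force
        energies.append(0.5 * v * v + 0.5 * x * x)
    return energies

energies = verlet_energy_trace(x=1.0, v=0.0, dt=0.01, steps=10000)
drift = max(energies) - min(energies)  # bounded oscillation, no systematic growth
```

A symplectic integrator keeps this quantity bounded over long trajectories; the paper's test on bulk silicon and silicon carbide checks the analogous property for the thermostatted, extended-Lagrangian scheme.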

  11. Prototyping Visual Learning Analytics Guided by an Educational Theory Informed Goal

    ERIC Educational Resources Information Center

    Hillaire, Garron; Rappolt-Schlichtmann, Gabrielle; Ducharme, Kim

    2016-01-01

    Prototype work can support the creation of data visualizations throughout the research and development process through paper prototypes with sketching, designed prototypes with graphic design tools, and functional prototypes to explore how the implementation will work. One challenging aspect of data visualization work is coordinating the expertise…

  12. Teaching Function and Practice Thinking of Psychological Movies

    ERIC Educational Resources Information Center

    Wu, Weidong

    2010-01-01

    Psychology teaching was implemented with the help of excellent psychological movies, which not only helped to stimulate students' interest and make abstract theory concrete and visual, but also provided scenes similar to reality for students' learning, in an attempt to improve their learning achievement. However, as for the…

  13. The Holst spin foam model via cubulations

    NASA Astrophysics Data System (ADS)

    Baratin, Aristide; Flori, Cecilia; Thiemann, Thomas

    2012-10-01

    Spin foam models are an attempt at a covariant or path integral formulation of canonical loop quantum gravity. The construction of such models usually relies on the Plebanski formulation of general relativity as a constrained BF theory and is based on the discretization of the action on a simplicial triangulation, which may be viewed as an ultraviolet regulator. The triangulation dependence can be removed by means of group field theory techniques, which allows one to sum over all triangulations. The main tasks for these models are the correct quantum implementation of the Plebanski constraints, the existence of a semiclassical sector implementing additional ‘Regge-like’ constraints arising from simplicial triangulations and the definition of the physical inner product of loop quantum gravity via group field theory. Here we propose a new approach to tackle these issues stemming directly from the Holst action for general relativity, which is also a proper starting point for canonical loop quantum gravity. The discretization is performed by means of a ‘cubulation’ of the manifold rather than a triangulation. We give a direct interpretation of the resulting spin foam model as a generating functional for the n-point functions on the physical Hilbert space at finite regulator. This paper focuses on ideas and tasks to be performed before the model can be taken seriously. However, our analysis reveals some interesting features of this model: firstly, the structure of its amplitudes differs from the standard spin foam models. Secondly, the tetrad n-point functions admit a ‘Wick-like’ structure. Thirdly, the restriction to simple representations does not automatically occur—unless one makes use of the time gauge, just as in the classical theory.

  14. Cyclic density functional theory: A route to the first principles simulation of bending in nanostructures

    NASA Astrophysics Data System (ADS)

    Banerjee, Amartya S.; Suryanarayana, Phanish

    2016-11-01

    We formulate and implement Cyclic Density Functional Theory (Cyclic DFT) - a self-consistent first principles simulation method for nanostructures with cyclic symmetries. Using arguments based on Group Representation Theory, we rigorously demonstrate that the Kohn-Sham eigenvalue problem for such systems can be reduced to a fundamental domain (or cyclic unit cell) augmented with cyclic-Bloch boundary conditions. Analogously, the equations of electrostatics appearing in Kohn-Sham theory can be reduced to the fundamental domain augmented with cyclic boundary conditions. By making use of this symmetry cell reduction, we show that the electronic ground-state energy and the Hellmann-Feynman forces on the atoms can be calculated using quantities defined over the fundamental domain. We develop a symmetry-adapted finite-difference discretization scheme to obtain a fully functional numerical realization of the proposed approach. We verify that our formulation and implementation of Cyclic DFT is both accurate and efficient through selected examples. The connection of cyclic symmetries with uniform bending deformations provides an elegant route to the ab-initio study of bending in nanostructures using Cyclic DFT. As a demonstration of this capability, we simulate the uniform bending of a silicene nanoribbon and obtain its energy-curvature relationship from first principles. A self-consistent ab-initio simulation of this nature is unprecedented and well outside the scope of any other systematic first principles method in existence. Our simulations reveal that the bending stiffness of the silicene nanoribbon is intermediate between that of graphene and molybdenum disulphide - a trend which can be ascribed to the variation in effective thickness of these materials. We describe several future avenues and applications of Cyclic DFT, including its extension to the study of non-uniform bending deformations and its possible use in the study of the nanoscale flexoelectric effect.

  15. A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations.

    PubMed

    Davies, Philippa; Walker, Anne E; Grimshaw, Jeremy M

    2010-02-09

    There is growing interest in the use of cognitive, behavioural, and organisational theories in implementation research. However, the extent of use of theory in implementation research is uncertain. We conducted a systematic review of use of theory in 235 rigorous evaluations of guideline dissemination and implementation studies published between 1966 and 1998. Use of theory was classified according to type of use (explicitly theory based, some conceptual basis, and theoretical construct used) and stage of use (choice/design of intervention, process/mediators/moderators, and post hoc/explanation). Fifty-three of 235 studies (22.5%) were judged to have employed theories, including 14 studies that explicitly used theory. The majority of studies (n = 42) used only one theory; the maximum number of theories employed by any study was three. Twenty-five different theories were used. A small number of theories accounted for the majority of theory use including PRECEDE (Predisposing, Reinforcing, and Enabling Constructs in Educational Diagnosis and Evaluation), diffusion of innovations, information overload and social marketing (academic detailing). There was poor justification of choice of intervention and use of theory in implementation research in the identified studies until at least 1998. Future research should explicitly identify the justification for the interventions. Greater use of explicit theory to understand barriers, design interventions, and explore mediating pathways and moderators is needed to advance the science of implementation research.

  16. Implementation of a Quality Improvement Process Aimed to Deliver Higher-Value Physical Therapy for Patients With Low Back Pain: Case Report.

    PubMed

    Karlen, Emily; McCathie, Becky

    2015-12-01

    The current state of health care demands higher-value care. Due to many barriers, clinicians routinely do not implement evidence-based care even though it is known to improve quality and reduce cost of care. The purpose of this case report is to describe a theory-based, multitactic implementation of a quality improvement process aimed to deliver higher-value physical therapy for patients with low back pain. Patients were treated from January 2010 through December 2014 in 1 of 32 outpatient physical therapy clinics within an academic health care system. Data were examined from 47,755 patients (mean age=50.3 years) entering outpatient physical therapy for management of nonspecific low back pain, with or without radicular pain. Development and implementation tactics were constructed from adult learning and change management theory to enhance adherence to best practice care among 130 physical therapists. A quality improvement team implemented 4 tactics: establish care delivery expectations, facilitate peer-led clinical and operational teams, foster a learning environment focused on meeting a population's needs, and continuously collect and analyze outcomes data. Physical therapy utilization and change in functional disability were measured to assess relative cost and quality of care. Secondarily, charge data assessed change in physical therapists' application of evidence-based care. Implementation of a quality improvement process was measured by year-over-year improved clinical outcomes, decreased utilization, and increased adherence to evidence-based physical therapy, which was associated with higher-value care. When adult learning and change management theory are combined in quality improvement efforts, common barriers to implementing evidence-based care can be overcome, creating an environment supportive of delivering higher-value physical therapy for patients with low back pain. © 2015 American Physical Therapy Association.

  17. Ab initio quantum chemistry: methodology and applications.

    PubMed

    Friesner, Richard A

    2005-05-10

    This Perspective provides an overview of state-of-the-art ab initio quantum chemical methodology and applications. The methods that are discussed include coupled cluster theory, localized second-order Møller-Plesset perturbation theory, multireference perturbation approaches, and density functional theory. The accuracy of each approach for key chemical properties is summarized, and the computational performance is analyzed, emphasizing significant advances in algorithms and implementation over the past decade. Incorporation of a condensed-phase environment by means of mixed quantum mechanical/molecular mechanics or self-consistent reaction field techniques is presented. A wide range of illustrative applications, focusing on materials science and biology, is discussed briefly.

  18. A theory of organizational readiness for change

    PubMed Central

    Weiner, Bryan J

    2009-01-01

    Background Change management experts have emphasized the importance of establishing organizational readiness for change and recommended various strategies for creating it. Although the advice seems reasonable, the scientific basis for it is limited. Unlike individual readiness for change, organizational readiness for change has not been subject to extensive theoretical development or empirical study. In this article, I conceptually define organizational readiness for change and develop a theory of its determinants and outcomes. I focus on the organizational level of analysis because many promising approaches to improving healthcare delivery entail collective behavior change in the form of systems redesign--that is, multiple, simultaneous changes in staffing, work flow, decision making, communication, and reward systems. Discussion Organizational readiness for change is a multi-level, multi-faceted construct. As an organization-level construct, readiness for change refers to organizational members' shared resolve to implement a change (change commitment) and shared belief in their collective capability to do so (change efficacy). Organizational readiness for change varies as a function of how much organizational members value the change and how favorably they appraise three key determinants of implementation capability: task demands, resource availability, and situational factors. When organizational readiness for change is high, organizational members are more likely to initiate change, exert greater effort, exhibit greater persistence, and display more cooperative behavior. The result is more effective implementation. Summary The theory described in this article treats organizational readiness as a shared psychological state in which organizational members feel committed to implementing an organizational change and confident in their collective abilities to do so. 
This way of thinking about organizational readiness is best suited for examining organizational changes where collective behavior change is necessary in order to effectively implement the change and, in some instances, for the change to produce anticipated benefits. Testing the theory would require further measurement development and careful sampling decisions. The theory offers a means of reconciling the structural and psychological views of organizational readiness found in the literature. Further, the theory suggests the possibility that the strategies that change management experts recommend are equifinal. That is, there is no 'one best way' to increase organizational readiness for change. PMID:19840381

  19. GPU-Accelerated Large-Scale Electronic Structure Theory on Titan with a First-Principles All-Electron Code

    NASA Astrophysics Data System (ADS)

    Huhn, William Paul; Lange, Björn; Yu, Victor; Blum, Volker; Lee, Seyong; Yoon, Mina

    Density-functional theory has been well established as the dominant quantum-mechanical computational method in the materials community. Large accurate simulations become very challenging on small to mid-scale computers and require high-performance compute platforms to succeed. GPU acceleration is one promising approach. In this talk, we present a first implementation of all-electron density-functional theory in the FHI-aims code for massively parallel GPU-based platforms. Special attention is paid to the update of the density and to the integration of the Hamiltonian and overlap matrices, realized in a domain decomposition scheme on non-uniform grids. The initial implementation scales well across nodes on ORNL's Titan Cray XK7 supercomputer (8 to 64 nodes, 16 MPI ranks/node) and shows an overall runtime speed-up of 1.4x from utilization of the K20X Tesla GPUs on each Titan node, with the charge-density update showing a speed-up of 2x. Further acceleration opportunities will be discussed. Work supported by the LDRD Program of ORNL managed by UT-Battelle, LLC, for the U.S. DOE and by the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.

  20. Nuclear magnetic resonance spin-spin coupling constants from coupled perturbed density functional theory

    NASA Astrophysics Data System (ADS)

    Sychrovský, Vladimír; Gräfenstein, Jürgen; Cremer, Dieter

    2000-09-01

    For the first time, a complete implementation of coupled perturbed density functional theory (CPDFT) for the calculation of NMR spin-spin coupling constants (SSCCs) with pure and hybrid DFT is presented. By applying this method to several hydrides, hydrocarbons, and molecules with multiple bonds, the performance of DFT for the calculation of SSCCs is analyzed in dependence of the XC functional used. The importance of electron correlation effects is demonstrated and it is shown that the hybrid functional B3LYP leads to the best accuracy of calculated SSCCs. Also, CPDFT is compared with sum-over-states (SOS) DFT where it turns out that the former method is superior to the latter because it explicitly considers the dependence of the Kohn-Sham operator on the perturbed orbitals in DFT when calculating SSCCs. The four different coupling mechanisms contributing to the SSCC are discussed in connection with the electronic structure of the molecule.

  1. A comparison of design variables for control theory based airfoil optimization

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony

    1995-01-01

    This paper describes the implementation of optimization techniques based on control theory for airfoil design. In our previous work in the area it was shown that control theory could be employed to devise effective optimization procedures for two-dimensional profiles by using either the potential flow or the Euler equations with either a conformal mapping or a general coordinate system. We have also explored three-dimensional extensions of these formulations recently. The goal of our present work is to demonstrate the versatility of the control theory approach by designing airfoils using both Hicks-Henne functions and B-spline control points as design variables. The research also demonstrates that the parameterization of the design space is an open question in aerodynamic design.
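
    The Hicks-Henne design variables mentioned above are smooth "bump" basis functions added to a baseline profile, whose amplitudes become the optimizer's unknowns. A minimal sketch of the standard sin^3 form, with the exponent chosen to place the peak (values in the usage line are illustrative):

```python
import math

def hicks_henne_bump(x, peak, power=3.0):
    """Hicks-Henne bump on [0, 1] with its maximum at x = peak:
    b(x) = sin(pi * x**e)**power, where e = ln(0.5) / ln(peak)."""
    e = math.log(0.5) / math.log(peak)
    return math.sin(math.pi * x ** e) ** power

def perturbed_surface(x, baseline, amplitudes, peaks):
    """Baseline airfoil ordinate plus a weighted sum of bumps; the
    amplitudes are the design variables handed to the optimizer."""
    return baseline + sum(a * hicks_henne_bump(x, p)
                          for a, p in zip(amplitudes, peaks))

# Two bumps peaking at 30% and 70% chord, evaluated at x = 0.3
y = perturbed_surface(0.3, 0.05, [0.002, -0.001], [0.3, 0.7])
```

B-spline control points play the same role with a different parameterization, which is exactly the open question about the design space that the paper examines.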

  2. Microwave photonic filters with negative coefficients based on phase inversion in an electro-optic modulator.

    PubMed

    Capmany, José; Pastor, Daniel; Martinez, Alfonso; Ortega, Beatriz; Sales, Salvador

    2003-08-15

    We report on a novel technical approach to the implementation of photonic rf filters that is based on the pi phase inversion that an rf modulating signal undergoes in an electro-optic Mach-Zehnder modulator, depending on whether the positive or the negative linear slope of the modulator's transfer function is employed. Experimental evidence is provided of the implementation of filters with negative coefficients that shows excellent agreement with results predicted by the theory.
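
    A two-tap delay-line filter illustrates why negative coefficients matter: they suppress the baseband (DC) resonance that filters with all-positive taps necessarily exhibit. A hedged numerical sketch (parameter values are illustrative, not from the experiment):

```python
import cmath, math

def filter_response(coeffs, delay_T, freq):
    """|H(f)| of a tapped delay-line filter, H(f) = sum_k a_k * exp(-j*2*pi*f*k*T)."""
    return abs(sum(a * cmath.exp(-2j * math.pi * freq * k * delay_T)
                   for k, a in enumerate(coeffs)))

T = 1e-9               # 1 ns tap spacing -> 1 GHz free spectral range
taps = [1.0, -1.0]     # one negative tap, e.g. realized via the pi-shifted slope
dc_gain = filter_response(taps, T, 0.0)    # notch at DC
peak = filter_response(taps, T, 0.5e9)     # passband maximum at half the FSR
```

With both taps positive the response would instead peak at DC; the optically implemented sign flip is what moves the passband away from baseband.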

  3. Models for twistable elastic polymers in Brownian dynamics, and their implementation for LAMMPS.

    PubMed

    Brackley, C A; Morozov, A N; Marenduzzo, D

    2014-04-07

    An elastic rod model for semi-flexible polymers is presented. Theory for a continuum rod is reviewed, and it is shown that a popular discretised model used in numerical simulations gives the correct continuum limit. Correlation functions relating to both bending and twisting of the rod are derived for both continuous and discrete cases, and results are compared with numerical simulations. Finally, two possible implementations of the discretised model in the multi-purpose molecular dynamics software package LAMMPS are described.
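
    The bending correlation function referred to above can be computed directly from a discretised chain of bead positions. A minimal 2-D sketch (not the LAMMPS implementation), checked on a circular arc where successive bond vectors rotate by a constant angle:

```python
import math

def tangent_correlation(positions, n):
    """Average tangent-tangent correlation <t_i . t_(i+n)> along a discrete
    chain of 2-D bead positions -- the bending correlation function of a
    discretised elastic rod."""
    tangents = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        norm = math.hypot(x1 - x0, y1 - y0)
        tangents.append(((x1 - x0) / norm, (y1 - y0) / norm))
    dots = [tangents[i][0] * tangents[i + n][0] + tangents[i][1] * tangents[i + n][1]
            for i in range(len(tangents) - n)]
    return sum(dots) / len(dots)

# Beads on a circular arc: each bond vector rotates by exactly dtheta,
# so the correlation at separation n is cos(n * dtheta).
dtheta = 0.1
arc = [(math.cos(k * dtheta), math.sin(k * dtheta)) for k in range(50)]
c5 = tangent_correlation(arc, 5)
```

For a thermally fluctuating semi-flexible chain the same estimator decays roughly as exp(-n*a/l_p), which is how a persistence length is extracted from simulation trajectories.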

  4. Classifying Infrastructure in an Urban Battlespace Using Thermal IR Signatures

    DTIC Science & Technology

    2006-11-01

    Huntsville, Alabama for sharing their ATLAS data for Atlanta. REFERENCES: Bentz, D. P. (2000). A Computer Model to Predict the Surface Temperature… The explicit scheme is subject to the stability criterion Δt ≤ Δx²/(2α) (Eq. 10). 2.2 Implementing the Model: Bentz uses a 1-D finite difference grid with a varying number of nodes. The nodes are equally… and rooftops were modeled as a function of time and environmental conditions using 1-D heat transfer theory. The model was implemented in MATLAB.
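The 1-D finite-difference heat model referenced above, with its explicit stability criterion Δt ≤ Δx²/(2α), can be sketched as a forward-time centered-space (FTCS) update. The original is in MATLAB; this Python paraphrase uses illustrative material parameters, not values from the report.

```python
import numpy as np

def ftcs_step(T, alpha, dx, dt):
    """One explicit (FTCS) finite-difference step of the 1-D heat equation
    dT/dt = alpha * d2T/dx2, with fixed (Dirichlet) end temperatures."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    return Tn

alpha = 1e-6                     # thermal diffusivity, m^2/s (concrete-like)
dx = 0.01                        # 1 cm grid spacing
dt = 0.4 * dx**2 / (2 * alpha)   # safely inside dt <= dx^2 / (2*alpha)

T = np.full(51, 20.0)            # wall initially at 20 C
T[0] = 50.0                      # heated surface held at 50 C
for _ in range(1000):
    T = ftcs_step(T, alpha, dx, dt)

assert T[0] == 50.0 and 20.0 < T[1] < 50.0   # heat diffuses inward
```

Violating the criterion (e.g. `dt` twice the limit) makes the update amplify grid-scale oscillations instead of smoothing them.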

  5. Electronic Structure and Transport in Solids from First Principles

    NASA Astrophysics Data System (ADS)

    Mustafa, Jamal Ibrahim

    The focus of this dissertation is the determination of the electronic structure and transport properties of solids. We first review some of the theory and computational methodology used in the calculation of electronic structure and materials properties. Throughout the dissertation, we make extensive use of state-of-the-art software packages that implement density functional theory, density functional perturbation theory, and the GW approximation, in addition to specialized methods for interpolating matrix elements for extremely accurate results. The first application of the computational framework introduced is the determination of band offsets in semiconductor heterojunctions using a theory of quantum dipoles at the interface. This method is applied to the case of a heterojunction formed between a new metastable phase of silicon, with a rhombohedral structure, and cubic silicon. Next, we introduce a novel method for the construction of localized Wannier functions, which we have named the optimized projection functions method (OPFM). We illustrate the method on a variety of systems and find that it can reliably construct localized Wannier functions with minimal user intervention. We further develop the OPFM to investigate a class of materials called topological insulators, which are insulating in the bulk but have conductive surface states. These properties are a result of a nontrivial topology in their band structure, which has interesting effects on the character of the Wannier functions. In the last sections of the main text, the noble metals are studied in great detail, including their electronic properties and carrier dynamics. In particular, we investigate the Fermi surface properties of the noble metals, specifically electron-phonon scattering lifetimes, and subsequently the transport properties determined by carriers on the Fermi surface. To achieve this, a novel sampling technique is developed, with wide applicability to transport calculations. 
Additionally, the generation and transport of hot carriers is studied extensively. The distribution of hot carriers generated from the decay of plasmons is explored over a range of energies, and the transport properties, particularly the lifetimes and mean free paths, of the hot carriers are determined. Lastly, appendices detailing the implementation of the algorithms developed in this work are presented, along with a useful derivation of the electron-plasmon matrix elements.

  6. A comparison of different methods to implement higher order derivatives of density functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Dam, Hubertus J.J.

    Density functional theory is the dominant approach in electronic structure methods today. To calculate properties, higher-order derivatives of the density functionals are required. These derivatives might be implemented manually, by automatic differentiation, or by symbolic algebra programs. Different authors have cited different reasons for using the particular method of their choice. This paper presents work where all three approaches were used, and the strengths and weaknesses of each approach are considered. It is found that all three methods produce code that is sufficiently performant for practical applications, despite the fact that our symbolic-algebra-generated code and our automatic differentiation code still have scope for significant optimization. The automatic differentiation approach is the best option for producing readable and maintainable code.
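Forward-mode automatic differentiation, one of the three approaches compared, can be sketched with a minimal dual-number class applied to the Slater/LDA exchange energy density e_x(ρ) = -C_x ρ^(4/3). This is an illustration of the technique, not the code evaluated in the paper.

```python
from math import pi

class Dual:
    """Minimal forward-mode dual number: a value and its first derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def __pow__(self, p):                 # rho**p for a plain float exponent
        return Dual(self.val ** p, p * self.val ** (p - 1) * self.der)

CX = (3.0 / 4.0) * (3.0 / pi) ** (1.0 / 3.0)

def lda_exchange(rho):
    """Slater/LDA exchange energy density, e_x = -Cx * rho^(4/3)."""
    return -CX * rho ** (4.0 / 3.0)

rho = 0.3
e = lda_exchange(Dual(rho, 1.0))          # seed d(rho)/d(rho) = 1
vx_ad = e.der                             # automatic first derivative
vx_exact = -(4.0 / 3.0) * CX * rho ** (1.0 / 3.0)
assert abs(vx_ad - vx_exact) < 1e-12
```

Higher derivatives (the kernels needed for response properties) follow the same pattern with higher-order dual or Taylor arithmetic.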

  7. Electronic Coupling Calculations for Bridge-Mediated Charge Transfer Using Constrained Density Functional Theory (CDFT) and Effective Hamiltonian Approaches at the Density Functional Theory (DFT) and Fragment-Orbital Density Functional Tight Binding (FODFTB) Level

    DOE PAGES

    Gillet, Natacha; Berstis, Laura; Wu, Xiaojing; ...

    2016-09-09

    In this paper, four methods to calculate charge transfer integrals in the context of bridge-mediated electron transfer are tested. These methods are based on density functional theory (DFT). We consider two perturbative Green's function effective Hamiltonian methods (first, at the DFT level of theory, using localized molecular orbitals; second, applying a tight-binding DFT approach, using fragment orbitals) and two constrained DFT implementations with either plane-wave or local basis sets. To assess the performance of the methods for through-bond (TB)-dominated or through-space (TS)-dominated transfer, different sets of molecules are considered. For through-bond electron transfer (ET), several molecules that were originally synthesized by Paddon-Row and co-workers for the deduction of electronic coupling values from photoemission and electron transmission spectroscopies, are analyzed. The tested methodologies prove to be successful in reproducing experimental data, the exponential distance decay constant and the superbridge effects arising from interference among ET pathways. For through-space ET, dedicated π-stacked systems with heterocyclopentadiene molecules were created and analyzed on the basis of electronic coupling dependence on donor-acceptor distance, structure of the bridge, and ET barrier height. The inexpensive fragment-orbital density functional tight binding (FODFTB) method gives similar results to constrained density functional theory (CDFT) and both reproduce the expected exponential decay of the coupling with donor-acceptor distances and the number of bridging units. Finally, these four approaches appear to give reliable results for both TB and TS ET and present a good alternative to expensive ab initio methodologies for large systems involving long-range charge transfers.

  8. Electronic Coupling Calculations for Bridge-Mediated Charge Transfer Using Constrained Density Functional Theory (CDFT) and Effective Hamiltonian Approaches at the Density Functional Theory (DFT) and Fragment-Orbital Density Functional Tight Binding (FODFTB) Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillet, Natacha; Berstis, Laura; Wu, Xiaojing

    In this paper, four methods to calculate charge transfer integrals in the context of bridge-mediated electron transfer are tested. These methods are based on density functional theory (DFT). We consider two perturbative Green's function effective Hamiltonian methods (first, at the DFT level of theory, using localized molecular orbitals; second, applying a tight-binding DFT approach, using fragment orbitals) and two constrained DFT implementations with either plane-wave or local basis sets. To assess the performance of the methods for through-bond (TB)-dominated or through-space (TS)-dominated transfer, different sets of molecules are considered. For through-bond electron transfer (ET), several molecules that were originally synthesized by Paddon-Row and co-workers for the deduction of electronic coupling values from photoemission and electron transmission spectroscopies, are analyzed. The tested methodologies prove to be successful in reproducing experimental data, the exponential distance decay constant and the superbridge effects arising from interference among ET pathways. For through-space ET, dedicated π-stacked systems with heterocyclopentadiene molecules were created and analyzed on the basis of electronic coupling dependence on donor-acceptor distance, structure of the bridge, and ET barrier height. The inexpensive fragment-orbital density functional tight binding (FODFTB) method gives similar results to constrained density functional theory (CDFT) and both reproduce the expected exponential decay of the coupling with donor-acceptor distances and the number of bridging units. Finally, these four approaches appear to give reliable results for both TB and TS ET and present a good alternative to expensive ab initio methodologies for large systems involving long-range charge transfers.

  9. Electronic Coupling Calculations for Bridge-Mediated Charge Transfer Using Constrained Density Functional Theory (CDFT) and Effective Hamiltonian Approaches at the Density Functional Theory (DFT) and Fragment-Orbital Density Functional Tight Binding (FODFTB) Level.

    PubMed

    Gillet, Natacha; Berstis, Laura; Wu, Xiaojing; Gajdos, Fruzsina; Heck, Alexander; de la Lande, Aurélien; Blumberger, Jochen; Elstner, Marcus

    2016-10-11

    In this article, four methods to calculate charge transfer integrals in the context of bridge-mediated electron transfer are tested. These methods are based on density functional theory (DFT). We consider two perturbative Green's function effective Hamiltonian methods (first, at the DFT level of theory, using localized molecular orbitals; second, applying a tight-binding DFT approach, using fragment orbitals) and two constrained DFT implementations with either plane-wave or local basis sets. To assess the performance of the methods for through-bond (TB)-dominated or through-space (TS)-dominated transfer, different sets of molecules are considered. For through-bond electron transfer (ET), several molecules that were originally synthesized by Paddon-Row and co-workers for the deduction of electronic coupling values from photoemission and electron transmission spectroscopies, are analyzed. The tested methodologies prove to be successful in reproducing experimental data, the exponential distance decay constant and the superbridge effects arising from interference among ET pathways. For through-space ET, dedicated π-stacked systems with heterocyclopentadiene molecules were created and analyzed on the basis of electronic coupling dependence on donor-acceptor distance, structure of the bridge, and ET barrier height. The inexpensive fragment-orbital density functional tight binding (FODFTB) method gives similar results to constrained density functional theory (CDFT) and both reproduce the expected exponential decay of the coupling with donor-acceptor distances and the number of bridging units. These four approaches appear to give reliable results for both TB and TS ET and present a good alternative to expensive ab initio methodologies for large systems involving long-range charge transfers.
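The exponential distance decay these studies test for can be sketched as follows: given couplings for an increasing number of bridge units, a linear fit of ln|H| versus bridge length recovers the decay constant β. The coupling values below are synthetic, chosen only to illustrate the fit.

```python
import numpy as np

# Hypothetical couplings (arbitrary units) for 1..5 bridge units, decaying
# as |H(n)| = H0 * exp(-beta * n) with ~1% multiplicative noise.
n = np.arange(1, 6)
noise = 1 + 0.02 * np.array([0.3, -0.5, 0.1, 0.4, -0.2])
H = 0.5 * np.exp(-0.9 * n) * noise

# ln|H| = ln(H0) - beta*n, so a straight-line fit recovers beta.
slope, intercept = np.polyfit(n, np.log(np.abs(H)), 1)
beta = -slope
assert abs(beta - 0.9) < 0.01
```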

  10. Grounding a new information technology implementation framework in behavioral science: a systematic analysis of the literature on IT use.

    PubMed

    Kukafka, Rita; Johnson, Stephen B; Linfante, Allison; Allegrante, John P

    2003-06-01

    Many interventions to improve the success of information technology (IT) implementations are grounded in behavioral science, using theories and models to identify conditions and determinants of successful use. However, each model in the IT literature has evolved to address specific theoretical problems of particular disciplinary concerns, and each model has been tested and has evolved using, in most cases, a more or less restricted set of IT implementation procedures. Functionally, this limits the perspective for taking into account the multiple factors at the individual, group, and organizational levels that influence use behavior. While a rich body of literature has emerged, employing prominent models such as the Technology Acceptance Model, Social-Cognitive Theory, and Diffusion of Innovation Theory, the complexity of defining a suitable multi-level intervention has largely been overlooked. A gap exists between the implementation of IT and the integration of theories and models that can be utilized to develop multi-level approaches to identify factors that impede usage behavior. We present a novel framework that is intended to guide synthesis of more than one theoretical perspective for the purpose of planning multi-level interventions to enhance IT use. This integrative framework is adapted from PRECEDE/PROCEED, a conceptual framework used by health planners in hundreds of published studies to direct interventions that account for the multiple determinants of behavior. Since we claim that the literature on IT use behavior does not now include a multi-level approach, we undertook a systematic literature analysis to confirm this assertion. Our framework facilitated organizing this literature synthesis and our analysis was aimed at determining if the IT implementation approaches in the published literature were characterized by an approach that considered at least two levels of IT usage determinants. 
We found that while 61% of studies mentioned or referred to theory, none considered two or more levels. In other words, although the researchers employ behavioral theory, they omit two fundamental propositions: (1) IT usage is influenced by multiple factors and (2) interventions must be multi-dimensional. Our literature synthesis may provide additional insight into the reason for high failure rates associated with underutilized systems, and underscores the need to move beyond the current dominant approach that employs a single model to guide IT implementation plans that aim to address factors associated with IT acceptance and subsequent positive use behavior.

  11. Control theory based airfoil design for potential flow and a finite volume discretization

    NASA Technical Reports Server (NTRS)

    Reuther, J.; Jameson, A.

    1994-01-01

    This paper describes the implementation of optimization techniques based on control theory for airfoil design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for two-dimensional profiles in which the shape is determined by a conformal transformation from a unit circle, and the control is the mapping function. The goal of our present work is to develop a method which does not depend on conformal mapping, so that it can be extended to treat three-dimensional problems. Therefore, we have developed a method which can address arbitrary geometric shapes through the use of a finite volume method to discretize the potential flow equation. Here the control law serves to provide computationally inexpensive gradient information to a standard numerical optimization method. Results are presented, where both target speed distributions and minimum drag are used as objective functions.
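The control-theory (adjoint) gradient idea above can be sketched on a toy problem: if the "flow" is a linear system A(α)u = b and the cost is target matching, a single extra adjoint solve yields dJ/dα, independent of the number of design variables. Everything here is a hypothetical stand-in for the potential-flow discretization, used only to show the mechanics.

```python
import numpy as np

# Toy state equation A(alpha) u = b and cost J = 1/2 ||u - u_t||^2.
n = 5
rng = np.random.default_rng(0)
b = rng.random(n)
u_t = rng.random(n)
A0 = np.eye(n) * 3.0 + 0.1 * rng.random((n, n))
dA = 0.05 * rng.random((n, n))        # dA/dalpha (constant here)

def solve_state(alpha):
    return np.linalg.solve(A0 + alpha * dA, b)

def grad_adjoint(alpha):
    """dJ/dalpha = -lambda^T (dA/dalpha) u, with A^T lambda = u - u_t."""
    u = solve_state(alpha)
    lam = np.linalg.solve((A0 + alpha * dA).T, u - u_t)   # adjoint solve
    return -lam @ (dA @ u)

# Verify against a central finite difference.
alpha, eps = 0.7, 1e-6
J = lambda a: 0.5 * np.sum((solve_state(a) - u_t) ** 2)
g_fd = (J(alpha + eps) - J(alpha - eps)) / (2 * eps)
assert abs(grad_adjoint(alpha) - g_fd) < 1e-6
```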

  12. Numerical implementation of multiple peeling theory and its application to spider web anchorages.

    PubMed

    Brely, Lucas; Bosia, Federico; Pugno, Nicola M

    2015-02-06

    Adhesion of spider web anchorages has been studied in recent years, including the specific functionalities achieved through different architectures. To better understand the delamination mechanisms of these and other biological or artificial fibrillar adhesives, and how their adhesion can be optimized, we develop a novel numerical model to simulate the multiple peeling of structures with arbitrary branching and adhesion angles, including complex architectures. The numerical model is based on a recently developed multiple peeling theory, which extends the energy-based single peeling theory of Kendall, and can be applied to arbitrarily complex structures. In particular, we numerically show that a multiple peeling problem can be treated as the superposition of single peeling configurations even for complex structures. Finally, we apply the developed numerical approach to study spider web anchorages, showing how their function is achieved through optimal geometrical configurations.
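Kendall's single-peeling theory, which the multiple-peeling model above extends, has a closed form that is easy to sketch: the energy balance f²/(2dE) + f(1 − cosθ) = R gives the peel force per unit width f as the positive root of a quadratic in the peel angle θ. The material parameters below are illustrative, not values from the paper.

```python
import numpy as np

def kendall_peel_force(theta, R, E, d):
    """Peel force per unit width from Kendall's single-tape energy balance,
    f^2/(2*d*E) + f*(1 - cos(theta)) - R = 0 (elastic tape); positive root."""
    c = 1.0 - np.cos(theta)
    return d * E * (-c + np.sqrt(c * c + 2.0 * R / (d * E)))

# Illustrative values: thin compliant tape on a substrate.
E, d, R = 1e6, 1e-4, 10.0         # Pa, m, J/m^2
for th in np.radians([10, 45, 90, 170]):
    assert kendall_peel_force(th, R, E, d) > 0

# Force falls with peel angle, and for a nearly rigid tape it approaches
# the classical limit f -> R / (1 - cos theta).
f90 = kendall_peel_force(np.pi / 2, R, E, d)
assert f90 < R / (1.0 - np.cos(np.pi / 2))
```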

  13. Numerical implementation of multiple peeling theory and its application to spider web anchorages

    PubMed Central

    Brely, Lucas; Bosia, Federico; Pugno, Nicola M.

    2015-01-01

    Adhesion of spider web anchorages has been studied in recent years, including the specific functionalities achieved through different architectures. To better understand the delamination mechanisms of these and other biological or artificial fibrillar adhesives, and how their adhesion can be optimized, we develop a novel numerical model to simulate the multiple peeling of structures with arbitrary branching and adhesion angles, including complex architectures. The numerical model is based on a recently developed multiple peeling theory, which extends the energy-based single peeling theory of Kendall, and can be applied to arbitrarily complex structures. In particular, we numerically show that a multiple peeling problem can be treated as the superposition of single peeling configurations even for complex structures. Finally, we apply the developed numerical approach to study spider web anchorages, showing how their function is achieved through optimal geometrical configurations. PMID:25657835

  14. Solvatochromic shifts from coupled-cluster theory embedded in density functional theory

    NASA Astrophysics Data System (ADS)

    Höfener, Sebastian; Gomes, André Severo Pereira; Visscher, Lucas

    2013-09-01

    Building on the framework recently reported for determining general response properties for frozen-density embedding [S. Höfener, A. S. P. Gomes, and L. Visscher, J. Chem. Phys. 136, 044104 (2012); doi: 10.1063/1.3675845], in this work we report a first implementation of an embedded coupled-cluster in density-functional theory (CC-in-DFT) scheme for electronic excitations, where only the response of the active subsystem is taken into account. The formalism is applied to the calculation of coupled-cluster excitation energies of water and uracil in aqueous solution. We find that the CC-in-DFT results are in good agreement with reference calculations and experimental results. The accuracy of the calculations is mainly sensitive to factors influencing the correlation treatment (basis set quality, truncation of the cluster operator) and to the embedding treatment of the ground state (choice of density functionals). This allows for efficient approximations at the excited-state calculation step without compromising the accuracy. This approximate scheme makes it possible to use a first-principles approach to investigate environment effects with specific interactions at the coupled-cluster level of theory at a cost comparable to that of calculations of the individual subsystems in vacuum.

  15. Temporal self-regulation theory: a neurobiologically informed model for physical activity behavior

    PubMed Central

    Hall, Peter A.; Fong, Geoffrey T.

    2015-01-01

    Dominant explanatory models for physical activity behavior are limited by the exclusion of several important components, including temporal dynamics, ecological forces, and neurobiological factors. The latter may be a critical omission, given the relevance of several aspects of cognitive function for the self-regulatory processes that are likely required for consistent implementation of physical activity behavior in everyday life. This narrative review introduces temporal self-regulation theory (TST; Hall and Fong, 2007, 2013) as a new explanatory model for physical activity behavior. Important features of the model include consideration of the default status of the physical activity behavior, as well as the disproportionate influence of temporally proximal behavioral contingencies. Most importantly, the TST model proposes positive feedback loops linking executive function (EF) and the performance of physical activity behavior. Specifically, those with relatively stronger executive control (and optimized brain structures supporting it, such as the dorsolateral prefrontal cortex (PFC)) are able to implement physical activity with more consistency than others, which in turn serves to strengthen the executive control network itself. The TST model has the potential to explain everyday variants of incidental physical activity, sport-related excellence via capacity for deliberate practice, and variability in the propensity to schedule and implement exercise routines. PMID:25859196

  16. Three Decades of Implementation Research in Higher Education: Limitations and Prospects of Theory Development

    ERIC Educational Resources Information Center

    Kohoutek, Jan

    2013-01-01

    The article adopts a comparative approach to review three periods of theory development in research into higher education policy implementation. Given the conceptual affinity between Cerych and Sabatier's 1986 seminal study into higher education policy implementation and public policy implementation theory, the field of public policy is chosen for…

  17. Grid-Based Projector Augmented Wave (GPAW) Implementation of Quantum Mechanics/Molecular Mechanics (QM/MM) Electrostatic Embedding and Application to a Solvated Diplatinum Complex.

    PubMed

    Dohn, A O; Jónsson, E Ö; Levi, G; Mortensen, J J; Lopez-Acevedo, O; Thygesen, K S; Jacobsen, K W; Ulstrup, J; Henriksen, N E; Møller, K B; Jónsson, H

    2017-12-12

    A multiscale density functional theory-quantum mechanics/molecular mechanics (DFT-QM/MM) scheme is presented, based on an efficient electrostatic coupling between the electronic density obtained from a grid-based projector augmented wave (GPAW) implementation of density functional theory and a classical potential energy function. The scheme is implemented in a general fashion and can be used with various choices for the descriptions of the QM or MM regions. Tests on H2O clusters, ranging from dimer to decamer, show that no systematic energy errors are introduced by the coupling that exceed the differences in the QM and MM descriptions. Over 1 ns of liquid water Born-Oppenheimer QM/MM molecular dynamics (MD) is sampled by combining 10 parallel simulations, showing consistent liquid water structure over the QM/MM border. The method is applied in extensive parallel MD simulations of an aqueous solution of the diplatinum [Pt2(P2O5H2)4]4- complex (PtPOP), spanning a total time period of roughly half a nanosecond. An average Pt-Pt distance deviating only 0.01 Å from experimental results, and a ground-state Pt-Pt oscillation frequency deviating by <2% from experimental results, were obtained. The simulations highlight a remarkable harmonicity of the Pt-Pt oscillation, while also showing clear signs of Pt-H hydrogen bonding and directional coordination of water molecules along the Pt-Pt axis of the complex.
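The core of such electrostatic embedding, a QM density on a real-space grid interacting with classical point charges, can be sketched as a direct quadrature. This is a toy illustration of the coupling term, not the GPAW implementation; the density, grid, and charge are all made up.

```python
import numpy as np

# Electrostatic QM/MM coupling E = -sum_i q_i * integral n(r)/|r - R_i| dr
# (atomic units, electron density n taken positive), evaluated on a grid.
L, N = 10.0, 32
axis = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
dV = (L / N) ** 3

# Toy Gaussian "electron density" at the origin, normalised to one electron.
n = np.exp(-(X**2 + Y**2 + Z**2))
n /= n.sum() * dV

charges = [(0.5, np.array([4.0, 0.0, 0.0]))]   # one MM point charge, +0.5 e

E = 0.0
for q, R in charges:
    r = np.sqrt((X - R[0])**2 + (Y - R[1])**2 + (Z - R[2])**2)
    E += -q * np.sum(n / r) * dV

# A spherical density seen from well outside behaves as a point charge,
# so the coupling approaches -q/d = -0.125 here.
assert abs(E - (-0.5 / 4.0)) < 1e-2
```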

  18. Four-component relativistic calculations in solution with the polarizable continuum model of solvation: theory, implementation, and application to the group 16 dihydrides H2X (X = O, S, Se, Te, Po).

    PubMed

    Remigio, Roberto Di; Bast, Radovan; Frediani, Luca; Saue, Trond

    2015-05-28

    We present a formulation of four-component relativistic self-consistent field (SCF) theory for a molecular solute described within the framework of the polarizable continuum model (PCM) for solvation. The linear response function for a four-component PCM-SCF state is also derived, as well as the explicit form of the additional contributions to the first-order response equations. The implementation of such a four-component PCM-SCF model, as carried out in a development version of the DIRAC program package, is documented. In particular, we present the newly developed application programming interface PCMSolver used in the actual implementation with DIRAC. To demonstrate the applicability of the approach, we present and analyze calculations of solvation effects on the geometries, electric dipole moments, and static electric dipole polarizabilities for the group 16 dihydrides H2X (X = O, S, Se, Te, Po).

  19. Rationale for switching to nonlocal functionals in density functional theory

    NASA Astrophysics Data System (ADS)

    Lazić, P.; Atodiresei, N.; Caciuc, V.; Brako, R.; Gumhalter, B.; Blügel, S.

    2012-10-01

    Density functional theory (DFT) has been steadily improving over the past few decades, becoming the standard tool for electronic structure calculations. The early local functionals (LDA) were eventually replaced by more accurate semilocal functionals (GGA) which are in use today. A major persisting drawback is the lack of the nonlocal correlation which is at the core of dispersive (van der Waals) forces, so that a large and important class of systems remains outside the scope of DFT. The vdW-DF correlation functional of Langreth and Lundqvist, published in 2004, was the first nonlocal functional which could be easily implemented. Beyond expectations, the nonlocal functional has brought significant improvement to systems that were believed not to be sensitive to nonlocal correlations. In this paper, we use the example of graphene nanodomes growing on the Ir(111) surface, where with an increase of the size of the graphene islands the character of the bonding changes from strong chemisorption towards almost pure physisorption. We demonstrate how the seamless character of the vdW-DF functionals makes it possible to treat all regimes self-consistently, proving to be a systematic and consistent improvement of DFT regardless of the nature of bonding. We also discuss the typical surface science example of CO adsorption on (111) surfaces of metals, which shows that the nonlocal correlation may also be crucial for strongly chemisorbed systems. We briefly discuss open questions, in particular the choice of the most appropriate exchange part of the functional. As the vdW-DF begins to appear implemented self-consistently in a number of popular DFT codes, with numerical costs close to the GGA calculations, we draw the attention of the DFT community to the advantages and benefits of the adoption of this new class of functionals.
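The structure of a nonlocal correlation functional, a double integral of the density against a two-point kernel, can be sketched in 1-D. The kernel below is a placeholder chosen only to show the form (the true vdW-DF kernel also depends on the densities and their gradients); note that for a translation-invariant kernel the inner integral is a convolution, which is why practical implementations reach near-GGA cost.

```python
import numpy as np

# Toy 1-D nonlocal energy E_nl = 1/2 * double-integral n(x) phi(x-x') n(x').
x = np.linspace(-10, 10, 401)
dx = x[1] - x[0]
n = np.exp(-x**2)                        # toy density
phi = lambda r: -np.exp(-np.abs(r))      # placeholder attractive kernel

# Direct O(N^2) double sum.
K = phi(x[:, None] - x[None, :])
E_direct = 0.5 * n @ K @ n * dx * dx

# Same quantity via a convolution for the inner integral.
conv = np.convolve(n, phi(x), mode="same") * dx
E_conv = 0.5 * np.sum(n * conv) * dx
assert abs(E_direct - E_conv) < 1e-10
```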

  20. Rationale for switching to nonlocal functionals in density functional theory.

    PubMed

    Lazić, P; Atodiresei, N; Caciuc, V; Brako, R; Gumhalter, B; Blügel, S

    2012-10-24

    Density functional theory (DFT) has been steadily improving over the past few decades, becoming the standard tool for electronic structure calculations. The early local functionals (LDA) were eventually replaced by more accurate semilocal functionals (GGA) which are in use today. A major persisting drawback is the lack of the nonlocal correlation which is at the core of dispersive (van der Waals) forces, so that a large and important class of systems remains outside the scope of DFT. The vdW-DF correlation functional of Langreth and Lundqvist, published in 2004, was the first nonlocal functional which could be easily implemented. Beyond expectations, the nonlocal functional has brought significant improvement to systems that were believed not to be sensitive to nonlocal correlations. In this paper, we use the example of graphene nanodomes growing on the Ir(111) surface, where with an increase of the size of the graphene islands the character of the bonding changes from strong chemisorption towards almost pure physisorption. We demonstrate how the seamless character of the vdW-DF functionals makes it possible to treat all regimes self-consistently, proving to be a systematic and consistent improvement of DFT regardless of the nature of bonding. We also discuss the typical surface science example of CO adsorption on (111) surfaces of metals, which shows that the nonlocal correlation may also be crucial for strongly chemisorbed systems. We briefly discuss open questions, in particular the choice of the most appropriate exchange part of the functional. As the vdW-DF begins to appear implemented self-consistently in a number of popular DFT codes, with numerical costs close to the GGA calculations, we draw the attention of the DFT community to the advantages and benefits of the adoption of this new class of functionals.

  1. Learning from doing: the case for combining normalisation process theory and participatory learning and action research methodology for primary healthcare implementation research.

    PubMed

    de Brún, Tomas; O'Reilly-de Brún, Mary; O'Donnell, Catherine A; MacFarlane, Anne

    2016-08-03

    The implementation of research findings is not a straightforward matter. There are substantive and recognised gaps in the process of translating research findings into practice and policy. In order to overcome some of these translational difficulties, a number of strategies have been proposed for researchers. These include greater use of theoretical approaches in research focused on implementation, and use of a wider range of research methods appropriate to policy questions and the wider social context in which they are placed. However, questions remain about how to combine theory and method in implementation research. In this paper, we respond to these proposals. Focussing on a contemporary social theory, Normalisation Process Theory, and a participatory research methodology, Participatory Learning and Action, we discuss the potential of their combined use for implementation research. We note ways in which Normalisation Process Theory and Participatory Learning and Action are congruent and may therefore be used as heuristic devices to explore, better understand and support implementation. We also provide examples of their use in our own research programme about community involvement in primary healthcare. Normalisation Process Theory alone has, to date, offered useful explanations for the success or otherwise of implementation projects post-implementation. We argue that Normalisation Process Theory can also be used to prospectively support implementation journeys. Furthermore, Normalisation Process Theory and Participatory Learning and Action can be used together so that interventions to support implementation work are devised and enacted with the expertise of key stakeholders. 
We propose that the specific combination of this theory and methodology possesses the potential, because of their combined heuristic force, to offer a more effective means of supporting implementation projects than either one might do on its own, and of providing deeper understandings of implementation contexts, rather than merely describing change.

  2. Implementing communication and decision-making interventions directed at goals of care: a theory-led scoping review.

    PubMed

    Cummings, Amanda; Lund, Susi; Campling, Natasha; May, Carl R; Richardson, Alison; Myall, Michelle

    2017-10-06

    To identify the factors that promote and inhibit the implementation of interventions that improve communication and decision-making directed at goals of care in the event of acute clinical deterioration. A scoping review was undertaken based on the methodological framework of Arksey and O'Malley for conducting this type of review. Searches were carried out in Medline and Cumulative Index to Nursing and Allied Health Literature (CINAHL) to identify peer-reviewed papers and in Google to identify grey literature. Searches were limited to those published in the English language from 2000 onwards. Inclusion and exclusion criteria were applied, and only papers that had a specific focus on implementation in practice were selected. Data extracted were treated as qualitative and subjected to directed content analysis. A theory-informed coding framework using Normalisation Process Theory (NPT) was applied to characterise and explain implementation processes. Searches identified 2619 citations, 43 of which met the inclusion criteria. Analysis generated six themes fundamental to successful implementation of goals of care interventions: (1) input into development; (2) key clinical proponents; (3) training and education; (4) intervention workability and functionality; (5) setting and context; and (6) perceived value and appraisal. A broad and diverse literature focusing on implementation of goals of care interventions was identified. Our review recognised these interventions as both complex and contentious in nature, making their incorporation into routine clinical practice dependent on a number of factors. Implementing such interventions presents challenges at individual, organisational and systems levels, which make them difficult to introduce and embed. 
We have identified a series of factors that influence successful implementation, and our analysis has distilled key learning points, conceptualised as a set of propositions, that we consider relevant to implementing other complex and contentious interventions. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. Active vibration control of functionally graded beams with piezoelectric layers based on higher order shear deformation theory

    NASA Astrophysics Data System (ADS)

    Bendine, K.; Boukhoulda, F. B.; Nouari, M.; Satla, Z.

    2016-12-01

    This paper reports on a study of active vibration control of functionally graded beams with upper and lower surface-bonded piezoelectric layers. The model is based on higher-order shear deformation theory and implemented using the finite element method (FEM). The properties of the functionally graded beam (FGB) are graded along the thickness direction. The piezoelectric actuator provides a damping effect on the FGB by means of a velocity feedback control algorithm. A Matlab program has been developed for the FGB model and compared with ANSYS APDL. Using Newmark's method, numerical solutions are obtained for the dynamic equations of the FGB with piezoelectric layers. Numerical results show the effects of the constituent volume fraction and the influence of the feedback control gain on the frequency and dynamic response of FGBs.
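
    The time integration named here, Newmark's method, is easy to illustrate outside the paper's FEM setting. The sketch below is a minimal, hypothetical Newmark-beta integrator for a single-degree-of-freedom system m·u'' + c·u' + k·u = f(t); the function name and signature are our own, not the authors' code:

```python
import numpy as np

def newmark_sdof(m, c, k, f, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Newmark-beta integration of m*u'' + c*u' + k*u = f(t).

    Defaults (gamma=1/2, beta=1/4) give the unconditionally stable
    average-acceleration scheme common in structural dynamics."""
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m                  # initial acceleration
    keff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
    us = [u]
    for n in range(1, nsteps + 1):
        t = n * dt
        # effective load assembled from the previous state
        p = (f(t)
             + m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
             + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
        u_new = p / keff
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        us.append(u)
    return np.array(us)
```

    With gamma = 1/2 the scheme introduces no numerical damping, so an undamped oscillator keeps its amplitude; active damping then enters only through the feedback term in c or f(t).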

  4. Neural-like computing with populations of superparamagnetic basis functions.

    PubMed

    Mizrahi, Alice; Hirtzlin, Tifenn; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Grollier, Julie; Querlioz, Damien

    2018-04-18

    In neuroscience, population coding theory demonstrates that neural assemblies can achieve fault-tolerant information processing. Mapped to nanoelectronics, this strategy could allow for reliable computing with scaled-down, noisy, imperfect devices. Doing so requires that the population components form a set of basis functions in terms of their response functions to inputs, offering a physical substrate for computing. Such a population can be implemented with CMOS technology, but the corresponding circuits have high area or energy requirements. Here, we show that nanoscale magnetic tunnel junctions can instead be assembled to meet these requirements. We demonstrate experimentally that a population of nine junctions can implement a basis set of functions, providing the data to achieve, for example, the generation of cursive letters. We design hybrid magnetic-CMOS systems based on interlinked populations of junctions and show that they can learn to realize non-linear variability-resilient transformations with a low imprint area and low power.
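
    The population-coding idea, a set of nonlinear response functions whose outputs are combined by a learned linear readout, can be sketched in a few lines. The toy model below (random sigmoidal units fitted to a sine target by least squares) is purely illustrative and is not the magnetic-tunnel-junction system of the paper; all names and parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# a "population" of nine sigmoidal units with random thresholds and slopes
n_units = 9
thresholds = rng.uniform(0.0, 1.0, n_units)
slopes = rng.uniform(5.0, 15.0, n_units)

def population_response(x):
    """Per-unit sigmoidal responses to inputs x in [0, 1]; shape (len(x), n_units)."""
    return 1.0 / (1.0 + np.exp(-slopes * (x[:, None] - thresholds)))

# learn a linear readout that combines the unit responses into a target function
x = np.linspace(0.0, 1.0, 200)
target = np.sin(2 * np.pi * x)
R = np.column_stack([population_response(x), np.ones_like(x)])  # bias column added
weights, *_ = np.linalg.lstsq(R, target, rcond=None)
fit = R @ weights
rmse = np.sqrt(np.mean((fit - target) ** 2))
```

    The point of the exercise is that the individual units need not be precise or identical; only the readout weights are trained, which is what makes the scheme tolerant of device variability.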

  5. Use of implementation theory: a focus on PARIHS.

    PubMed

    Ullrich, Philip M; Sahay, Anju; Stetler, Cheryl B

    2014-02-01

    Limited understanding and application of theory in implementation research contributes to variable effectiveness of implementation studies. Better understanding of direct experiences with theory could improve implementation research and the potency of interventions. This study was a conceptual exercise aimed at characterizing experiences with and applications of the Promoting Action on Research Implementation in Health Services (PARIHS) framework. This was a structured, qualitative study involving document reviews and interviews used to answer the following overarching questions about nine implementation research centers: Why and how was PARIHS used? What strengths and weaknesses were identified for PARIHS? PARIHS was being used for varied purposes, at varied levels, in varied ways, and to a varying extent within and across centers. Lack of implementation theory use in investigators' early years was common. Variability in the nature of theory use was attributable to characteristics of the centers, individual investigators, and features of PARIHS. Strengths and weaknesses of the PARIHS framework were identified. The study provides information to researchers and theorists about the use of one well-known implementation framework. The information suggests areas for improvements in PARIHS as well as theory use in general, and should assist in the development of theory-based programs of research. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.

  6. Super's Career Stages and the Decision to Change Careers.

    ERIC Educational Resources Information Center

    Smart, Roslyn; Peterson, Candida

    1997-01-01

    Australians (n=226) in one of four stages of a second career (contemplating, choosing a field, implementing, change completed) were compared with 81 nonchangers. Job satisfaction varied as a function of stage. Results supported Super's theory that career changers cycle through the full set of career stages a second time. (SK)

  7. Roles High School Principals Play in Establishing a Successful Character Education Initiative

    ERIC Educational Resources Information Center

    Francom, Jacob A.

    2016-01-01

    Principal leadership is crucial to the success of a high school character education initiative. The purpose of this qualitative grounded theory research was to identify the roles that high school principals play in developing, implementing, and sustaining a high functioning character education program. Data were collected through interviews and…

  8. TQM in Higher Education: What Does the Literature Say?

    ERIC Educational Resources Information Center

    Hertzler, Elizabeth

    The implementation of Total Quality Management (TQM) in an organization implies a fundamental change in the way that organization functions. Therefore an examination of the adoption of the TQM philosophy necessitates a review of the most significant and latest literature on change theory and on the impact of organizational culture on change, as…

  9. Analytical second derivatives of excited-state energy within the time-dependent density functional theory coupled with a conductor-like polarizable continuum model.

    PubMed

    Liu, Jie; Liang, WanZhen

    2013-01-14

    This work extends our previous works [J. Liu and W. Z. Liang, J. Chem. Phys. 135, 014113 (2011); J. Liu and W. Z. Liang, J. Chem. Phys. 135, 184111 (2011)] on the analytical excited-state Hessian within the framework of time-dependent density functional theory (TDDFT) to couple with a conductor-like polarizable continuum model (CPCM). The formalism, implementation, and application of analytical first and second energy derivatives of the TDDFT/CPCM excited state with respect to nuclear and electric perturbations are presented. Their performance is demonstrated by calculations of excitation energies, excited-state geometries, and harmonic vibrational frequencies for a number of benchmark systems. The calculated results are in good agreement with the corresponding experimental data or other theoretical calculations, indicating the reliability of the current computer implementation of the developed algorithms. We then present preliminary applications: the resonant Raman spectrum of 4-hydroxybenzylidene-2,3-dimethyl-imidazolinone in ethanol solution and the infrared spectra of the ground and excited states of 9-fluorenone in methanol solution.
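
    The last step of such calculations, harmonic vibrational frequencies from an analytic Hessian, is simple to illustrate: diagonalize the mass-weighted Hessian and take square roots of its non-negative eigenvalues. The sketch below uses unit-free toy numbers for a 1D diatomic and is not the TDDFT/CPCM machinery of the paper:

```python
import numpy as np

def harmonic_frequencies(hessian, masses):
    """Harmonic frequencies (in the Hessian's natural units) from the
    eigenvalues of the mass-weighted Hessian."""
    invsqrt_m = 1.0 / np.sqrt(masses)
    mwh = hessian * np.outer(invsqrt_m, invsqrt_m)   # M^(-1/2) H M^(-1/2)
    eigvals = np.linalg.eigvalsh(mwh)
    # tiny negative eigenvalues (translations/rotations) are clipped to zero
    return np.sqrt(np.clip(eigvals, 0.0, None))

# 1D diatomic: one spring k between masses m1, m2 -> omega = sqrt(k/mu)
k, m1, m2 = 1.0, 1.0, 1.0
H = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
freqs = harmonic_frequencies(H, np.array([m1, m2]))
```

    For equal unit masses the reduced mass is 1/2, so the single vibrational mode comes out at sqrt(2); the zero eigenvalue is the free translation of the pair.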

  10. Electronic coupling matrix elements from charge constrained density functional theory calculations using a plane wave basis set

    NASA Astrophysics Data System (ADS)

    Oberhofer, Harald; Blumberger, Jochen

    2010-12-01

    We present a plane wave basis set implementation for the calculation of electronic coupling matrix elements of electron transfer reactions within the framework of constrained density functional theory (CDFT). Following the work of Wu and Van Voorhis [J. Chem. Phys. 125, 164105 (2006)], the diabatic wavefunctions are approximated by the Kohn-Sham determinants obtained from CDFT calculations, and the coupling matrix element calculated by an efficient integration scheme. Our results for intermolecular electron transfer in small systems agree very well with high-level ab initio calculations based on generalized Mulliken-Hush theory, and with previous local basis set CDFT calculations. The effect of thermal fluctuations on the coupling matrix element is demonstrated for intramolecular electron transfer in the tetrathiafulvalene-diquinone (Q-TTF-Q-) anion. Sampling the electronic coupling along density functional based molecular dynamics trajectories, we find that thermal fluctuations, in particular the slow bending motion of the molecule, can lead to changes in the instantaneous electron transfer rate by more than an order of magnitude. The thermal average, ⟨|H_ab|²⟩^(1/2) = 6.7 mH, is significantly higher than the value obtained for the minimum-energy structure, |H_ab| = 3.8 mH. While CDFT in combination with generalized gradient approximation (GGA) functionals describes the intermolecular electron transfer in the studied systems well, exact exchange is required for Q-TTF-Q- in order to obtain coupling matrix elements in agreement with experiment (3.9 mH). The implementation presented opens up the possibility to compute electronic coupling matrix elements for extended systems where donor, acceptor, and the environment are treated at the quantum mechanical (QM) level.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu; Phanish, Deepa

    We present an Augmented Lagrangian formulation and its real-space implementation for non-periodic Orbital-Free Density Functional Theory (OF-DFT) calculations. In particular, we rewrite the constrained minimization problem of OF-DFT as a sequence of minimization problems without any constraint, thereby making it amenable to powerful unconstrained optimization algorithms. Further, we develop a parallel implementation of this approach for the Thomas–Fermi–von Weizsacker (TFW) kinetic energy functional in the framework of higher-order finite-differences and the conjugate gradient method. With this implementation, we establish that the Augmented Lagrangian approach is highly competitive compared to the penalty and Lagrange multiplier methods. Additionally, we show that higher-order finite-differences represent a computationally efficient discretization for performing OF-DFT simulations. Overall, we demonstrate that the proposed formulation and implementation are both efficient and robust by studying selected examples, including systems consisting of thousands of atoms. We validate the accuracy of the computed energies and forces by comparing them with those obtained by existing plane-wave methods.
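
    The augmented-Lagrangian idea, replacing one constrained minimization by a sequence of unconstrained ones with a multiplier update, can be sketched on a toy problem. The example below (a quadratic objective with one linear equality constraint, inner loop solved by plain gradient descent) is only a schematic of the approach, not the OF-DFT implementation; all names and constants are assumptions:

```python
import numpy as np

def augmented_lagrangian(f_grad, g, g_grad, x0, mu=10.0, outer=20, inner=500, lr=1e-2):
    """Solve min f(x) subject to g(x) = 0 via a sequence of unconstrained
    minimizations of L(x) = f(x) + lam*g(x) + (mu/2)*g(x)**2."""
    x, lam = np.asarray(x0, float), 0.0
    for _ in range(outer):
        for _ in range(inner):   # inner unconstrained solve (gradient descent)
            grad = f_grad(x) + (lam + mu * g(x)) * g_grad(x)
            x = x - lr * grad
        lam += mu * g(x)         # first-order multiplier update
    return x

# toy problem: min x1^2 + 2*x2^2  s.t.  x1 + x2 = 1  (solution: x = (2/3, 1/3))
f_grad = lambda x: np.array([2 * x[0], 4 * x[1]])
g = lambda x: x[0] + x[1] - 1.0
g_grad = lambda x: np.array([1.0, 1.0])
x = augmented_lagrangian(f_grad, g, g_grad, [0.0, 0.0])
```

    The multiplier update is what distinguishes this from a pure penalty method: as lam approaches the true Lagrange multiplier, the constraint is satisfied without driving mu to infinity, which keeps the unconstrained subproblems well-conditioned.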

  12. Models, Strategies, and Tools: Theory in Implementing Evidence-Based Findings into Health Care Practice

    PubMed Central

    Sales, Anne; Smith, Jeffrey; Curran, Geoffrey; Kochevar, Laura

    2006-01-01

    This paper presents a case for careful consideration of theory in planning to implement evidence-based practices into clinical care. As described, theory should be tightly linked to strategic planning through careful choice or creation of an implementation framework. Strategies should be linked to specific interventions and/or intervention components to be implemented, and the choice of tools should match the interventions and overall strategy, linking back to the original theory and framework. The thesis advanced is that in most studies where there is an attempt to implement planned change in clinical processes, theory is used loosely. An example of linking theory to intervention design is presented from a Mental Health Quality Enhancement Research Initiative effort to increase appropriate use of antipsychotic medication among patients with schizophrenia in the Veterans Health Administration. PMID:16637960

  13. Models, strategies, and tools. Theory in implementing evidence-based findings into health care practice.

    PubMed

    Sales, Anne; Smith, Jeffrey; Curran, Geoffrey; Kochevar, Laura

    2006-02-01

    This paper presents a case for careful consideration of theory in planning to implement evidence-based practices into clinical care. As described, theory should be tightly linked to strategic planning through careful choice or creation of an implementation framework. Strategies should be linked to specific interventions and/or intervention components to be implemented, and the choice of tools should match the interventions and overall strategy, linking back to the original theory and framework. The thesis advanced is that in most studies where there is an attempt to implement planned change in clinical processes, theory is used loosely. An example of linking theory to intervention design is presented from a Mental Health Quality Enhancement Research Initiative effort to increase appropriate use of antipsychotic medication among patients with schizophrenia in the Veterans Health Administration.

  14. A functional approach to emotion in autonomous systems.

    PubMed

    Sanz, Ricardo; Hernández, Carlos; Gómez, Jaime; Hernando, Adolfo

    2010-01-01

    The construction of fully effective systems seems to pass through the proper exploitation of goal-centric self-evaluative capabilities that let the system teleologically self-manage. Emotions seem to provide this kind of functionality to biological systems, and hence the interest in emotion for function sustainment in artificial systems performing in changing and uncertain environments; far beyond the media hullabaloo of displaying human-like emotion-laden faces in robots. This chapter provides a brief analysis of the scientific theories of emotion and presents an engineering approach for developing technology for robust autonomy by implementing functionality inspired by that of biological emotions.

  15. Using Policy Attributes Theory to Examine Comprehensive School Reform Implementation in Two Title I Middle Schools

    ERIC Educational Resources Information Center

    Patterson, Jean A.; Campbell, J. K.; Johnson, Dawn M.; Marx, Gina; Whitener, Mark

    2013-01-01

    Findings from a qualitative study of two Title I middle schools that were in their second year of implementing an externally developed Comprehensive School Reform (CSR) model are presented. Policy attributes theory was used as a framework for examining implementation. The theory argues fidelity of implementation of a CSR is strongest when it is…

  16. STM contrast of a CO dimer on a Cu(1 1 1) surface: a wave-function analysis.

    PubMed

    Gustafsson, Alexander; Paulsson, Magnus

    2017-12-20

    We present a method used to intuitively interpret the scanning tunneling microscopy (STM) contrast by investigating individual wave functions originating from the substrate and tip side. We use localized basis orbital density functional theory, and propagate the wave functions into the vacuum region at a real-space grid, including averaging over the lateral reciprocal space. Optimization by means of the method of Lagrange multipliers is implemented to perform a unitary transformation of the wave functions in the middle of the vacuum region. The method enables (i) reduction of the number of contributing tip-substrate wave function combinations used in the corresponding transmission matrix, and (ii) to bundle up wave functions with similar symmetry in the lateral plane, so that (iii) an intuitive understanding of the STM contrast can be achieved. The theory is applied to a CO dimer adsorbed on a Cu(1 1 1) surface scanned by a single-atom Cu tip, whose STM image is discussed in detail by the outlined method.

  17. STM contrast of a CO dimer on a Cu(1 1 1) surface: a wave-function analysis

    NASA Astrophysics Data System (ADS)

    Gustafsson, Alexander; Paulsson, Magnus

    2017-12-01

    We present a method used to intuitively interpret the scanning tunneling microscopy (STM) contrast by investigating individual wave functions originating from the substrate and tip side. We use localized basis orbital density functional theory, and propagate the wave functions into the vacuum region at a real-space grid, including averaging over the lateral reciprocal space. Optimization by means of the method of Lagrange multipliers is implemented to perform a unitary transformation of the wave functions in the middle of the vacuum region. The method enables (i) reduction of the number of contributing tip-substrate wave function combinations used in the corresponding transmission matrix, and (ii) to bundle up wave functions with similar symmetry in the lateral plane, so that (iii) an intuitive understanding of the STM contrast can be achieved. The theory is applied to a CO dimer adsorbed on a Cu(1 1 1) surface scanned by a single-atom Cu tip, whose STM image is discussed in detail by the outlined method.

  18. PyR@TE 2: A Python tool for computing RGEs at two-loop

    NASA Astrophysics Data System (ADS)

    Lyonnet, F.; Schienbein, I.

    2017-04-01

    Renormalization group equations are an essential tool for the description of theories across different energy scales. Even though their expressions at two-loop for an arbitrary gauge field theory have been known for more than thirty years, deriving the full set of equations for a given model by hand is very challenging and prone to errors. To tackle this issue, we introduced in Lyonnet et al. (2014) a Python tool called PyR@TE: Python Renormalization group equations @ Two-loop for Everyone. With PyR@TE, it is easy to implement a given Lagrangian and derive the complete set of two-loop RGEs for all the parameters of the theory. In this paper, we present the new version of this code, PyR@TE 2, which brings many new features; in particular, it incorporates kinetic mixing when several U(1) gauge groups are involved. In addition, the group theory part has been greatly improved, as we introduce a new Python module dubbed PyLie that deals with all the group-theoretical aspects required for the calculation of the RGEs as well as providing very useful model-building capabilities. This allows the use of any irreducible representation of the SU(n), SO(2n) and SO(2n + 1) groups. Furthermore, it is now possible to implement terms in the Lagrangian involving fields which can be contracted into gauge singlets in more than one way. As a byproduct, results for a popular model (SM + complex triplet) for which, to our knowledge, the complete set of two-loop RGEs has not been calculated before are presented in this paper. Finally, the two-loop RGEs for the anomalous dimension of the scalar and fermion fields have been implemented as well. It is now possible to export the coupled system of beta functions into a numerical C++ function, leading to a considerable speed-up in solving them.
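
    A beta function exported this way is just the right-hand side of an ODE in t = ln(mu). As a hedged illustration of that workflow (generic one-loop gauge-coupling running with the textbook coefficient b0 = 11 - 2nf/3, not output generated by PyR@TE itself), the numerical solution might look like:

```python
import numpy as np

def run_coupling(g0, t_end, beta, steps=1000):
    """Integrate dg/dt = beta(g), t = ln(mu/mu0), with classic RK4."""
    g, h = g0, t_end / steps
    for _ in range(steps):
        k1 = beta(g)
        k2 = beta(g + 0.5 * h * k1)
        k3 = beta(g + 0.5 * h * k2)
        k4 = beta(g + h * k3)
        g += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return g

b0 = 7.0                                      # one-loop QCD: 11 - 2*nf/3 with nf = 6
beta_1l = lambda g: -b0 * g**3 / (16 * np.pi**2)
g_uv = run_coupling(1.2, 5.0, beta_1l)        # run five e-folds up in scale
```

    At one loop the result can be checked against the closed form 1/g(t)^2 = 1/g0^2 + 2*b0*t/(16*pi^2); at two loops and with several mixed couplings, a numerical solver like this is the practical route.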

  19. A functional-dependencies-based Bayesian networks learning method and its application in a mobile commerce system.

    PubMed

    Liao, Stephen Shaoyi; Wang, Huai Qing; Li, Qiu Dan; Liu, Wei Yi

    2006-06-01

    This paper presents a new method for learning Bayesian networks from functional dependencies (FD) and third normal form (3NF) tables in relational databases. The method sets up a linkage between the theory of relational databases and probabilistic reasoning models, which is interesting and useful especially when data are incomplete and inaccurate. The effectiveness and practicability of the proposed method is demonstrated by its implementation in a mobile commerce system.
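
    The core of the linkage, reading candidate directed edges off a set of functional dependencies, can be sketched as follows. The relation and FD set here are hypothetical, and real structure learning (as in the paper) must still orient edges consistently, break cycles, and estimate the conditional probabilities:

```python
# hypothetical FDs for an orders table: each FD X -> y yields edges x -> y for x in X
fds = [({"order_id"}, "customer_id"),
       ({"customer_id"}, "city"),
       ({"city"}, "region")]

def fd_edges(fds):
    """Directed edges of a candidate Bayesian-network structure derived
    from functional dependencies (lhs attribute set, rhs attribute)."""
    edges = set()
    for lhs, rhs in fds:
        for attr in lhs:
            edges.add((attr, rhs))
    return edges

edges = fd_edges(fds)
```

    A determinant attribute becomes a parent of the attribute it determines; because an FD is deterministic, the corresponding conditional probability table is degenerate, which is exactly what makes FDs useful prior knowledge when data are incomplete.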

  20. Density functional calculations of multiphonon capture cross sections at defects in semiconductors

    NASA Astrophysics Data System (ADS)

    Barmparis, Georgios D.; Puzyrev, Yevgeniy S.; Zhang, X.-G.; Pantelides, Sokrates T.

    2014-03-01

    The theory of electron capture cross sections by multiphonon processes in semiconductors has a long and controversial history. Here we present a comprehensive theory and describe its implementation for realistic calculations. The Born-Oppenheimer and the Franck-Condon approximations are employed. The transition probability of an incoming electron is written as a product of an instantaneous electronic transition in the initial defect configuration and the line shape function (LSF) that describes the multiphonon processes that lead to lattice relaxation. The electronic matrix elements are calculated using the Projector Augmented Wave (PAW) method, which yields the true wave functions while still employing a plane-wave basis. The LSF is calculated by employing a Monte Carlo method and the real phonon modes of the defect, calculated using density functional theory in the PAW scheme. Initial results of the capture cross section for a prototype system, namely a triply hydrogenated vacancy in Si, are presented. The results are relevant for modeling device degradation by hot electron effects. This work is supported in part by the Samsung Advanced Institute of Technology (SAIT)'s Global Research Outreach (GRO) Program and by the LDRD program at ORNL.

  1. Density functional theory study of structural, electronic, and thermal properties of Pt, Pd, Rh, Ir, Os and PtPdX (X = Ir, Os, and Rh) alloys

    NASA Astrophysics Data System (ADS)

    Ahmed, Shabbir; Zafar, Muhammad; Shakil, M.; Choudhary, M. A.

    2016-03-01

    The structural, electronic, mechanical, and thermal properties of Pt, Pd, Rh, Ir, Os metals and their alloys PtPdX (X = Ir, Os and Rh) are studied systematically using ab initio density functional theory. The ground-state properties such as the lattice constant and bulk modulus are calculated to find the equilibrium atomic positions for stable alloys. The electronic band structure and density of states are calculated to study the electronic behavior of the metals upon alloy formation. The electronic properties substantiate the metallic behavior of all studied materials. First-principles density functional perturbation theory, as implemented within the quasi-harmonic approximation, is used for the calculations of thermal properties. We have calculated thermal properties such as the Debye temperature, vibrational energy, entropy and constant-volume specific heat. The calculated properties are compared with previously reported experimental and theoretical data for the metals and are found to be in good agreement. Calculated results for the alloys could not be compared because no data are available in the literature for such alloy compositions.

  2. Aspects of perturbation theory in quantum mechanics: the BenderWu MATHEMATICA® package

    NASA Astrophysics Data System (ADS)

    Sulejmanpasic, Tin; Ünsal, Mithat

    2018-07-01

    We discuss a general setup which allows the study of the perturbation theory of an arbitrary, locally harmonic 1D quantum mechanical potential as well as its multi-variable (many-body) generalization. The latter may form a prototype for regularized quantum field theory. We first generalize the method of Bender and Wu, and derive exact recursion relations which allow the determination of the perturbative wave-function and energy corrections to an arbitrary order, at least in principle. For 1D systems, we implement these equations in an easy-to-use MATHEMATICA® package we call BenderWu. Our package enables quick home-computer computation of high orders of perturbation theory (about 100 orders in 10-30 s, and 250 orders in 1-2 h) and enables practical study of a large class of problems in Quantum Mechanics. We have two hopes concerning the BenderWu package. One is that, due to resurgence, a large amount of non-perturbative information, such as non-perturbative energies and wave-functions (e.g. WKB wave functions), can in principle be extracted from the perturbative data. We also hope that the package may be used as a teaching tool, providing an effective bridge between perturbation theory and non-perturbative physics in textbooks. Finally, we show that for the multi-variable case, the recursion relation acquires a geometric character, and has a structure which allows parallelization to computer clusters.
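
    For a finite-dimensional H = H0 + V with diagonal H0, the same kind of order-by-order recursion can be written down directly from (H0 - E0)|psi_n> = (E1 - V)|psi_{n-1}> + sum_{k=2..n} E_k |psi_{n-k}> together with E_n = <0|V|psi_{n-1}>. The sketch below is a generic Rayleigh-Schrödinger recursion in Python, not the BenderWu package itself; the function name and interface are assumptions:

```python
import numpy as np

def rspt(H0_diag, V, order, i=0):
    """Nondegenerate Rayleigh-Schrodinger corrections E_1..E_order for state i.

    H0 is diagonal; intermediate normalization <i|psi_n> = 0 (n >= 1) is used."""
    H0_diag = np.asarray(H0_diag, float)
    V = np.asarray(V, float)
    E0 = H0_diag[i]
    denom = H0_diag - E0
    denom[i] = np.inf                 # reduced resolvent: reference component -> 0
    psi = [np.eye(len(H0_diag))[i]]   # psi_0 = |i>
    E = [E0]
    for n in range(1, order + 1):
        E.append(V[i] @ psi[n - 1])           # E_n = <i|V|psi_{n-1}>
        rhs = E[1] * psi[n - 1] - V @ psi[n - 1]
        for k in range(2, n + 1):
            rhs = rhs + E[k] * psi[n - k]
        psi.append(rhs / denom)               # invert (H0 - E0) off the reference state
    return E[1:]
```

    For a two-level check with H0 = diag(0, 1) and off-diagonal coupling g, the exact ground energy (1 - sqrt(1 + 4g^2))/2 expands as -g^2 + g^4 - ..., and the recursion reproduces those coefficients order by order.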

  3. Second-order perturbation theory with a density matrix renormalization group self-consistent field reference function: theory and application to the study of chromium dimer.

    PubMed

    Kurashige, Yuki; Yanai, Takeshi

    2011-09-07

    We present a second-order perturbation theory based on a density matrix renormalization group self-consistent field (DMRG-SCF) reference function. The method reproduces the solution of the complete active space with second-order perturbation theory (CASPT2) when the DMRG reference function is represented by a sufficiently large number of renormalized many-body basis states, and is therefore named the DMRG-CASPT2 method. The DMRG-SCF is able to describe non-dynamical correlation with a large active space that is insurmountable for the conventional CASSCF method, while the second-order perturbation theory provides an efficient description of dynamical correlation effects. The capability of our implementation is demonstrated by an application to the potential energy curve of the chromium dimer, which is one of the most demanding multireference systems and requires the best electronic-structure treatment of non-dynamical and dynamical correlation as well as large basis sets. The DMRG-CASPT2/cc-pwCV5Z calculations were performed with a large (3d double-shell) active space consisting of 28 orbitals. Our approach using the large DMRG reference addressed the problems of why the dissociation energy is largely overestimated by CASPT2 with the small active space consisting of 12 orbitals (3d4s), and why it is oversensitive to the choice of the zeroth-order Hamiltonian. © 2011 American Institute of Physics.

  4. From the Orbital Implementation of the Kinetic Theory to the Polarization Propagator Method in the Study of Energy Deposition Problems

    NASA Astrophysics Data System (ADS)

    Cabrera-Trujillo, R.; Cruz, S. A.; Soullard, J.

    The energy deposited by swift atomic-ion projectiles when colliding with a given target material has been a topic of special scientific interest for the last century due to the variety of applications of ion beams in modern materials technology as well as in medical physics. In this work, we summarize our contributions in this field as a consequence of fruitful discussions and enlightening ideas put forward by one of the main protagonists in stopping power theory during the last three decades: Jens Oddershede. Our review, mainly motivated by Jens' work, evolves from the extension of the orbital implementation of the kinetic theory of stopping through the orbital local plasma approximation and its use in studies of orbital and total mean excitation energies for atomic and molecular stopping, to the advances on generalized oscillator strengths and sum rules in the study of stopping cross sections. Finally, as a tribute to Jens' work on the orbital implementation of the kinetic theory of stopping, we present new results on the use of the Thomas-Fermi-Dirac-Weizsäcker density functional for the calculation of orbital and total atomic mean excitation energies. The results are applied to free atoms, and an extension is made to confined atoms, taking Si as an example, whereby target pressure effects on stopping are derived. Hence, evidence of the far-reaching yield of Jens' ideas is given.

  5. Higher-order finite-difference formulation of periodic Orbital-free Density Functional Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Swarnava; Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu

    2016-02-15

    We present a real-space formulation and higher-order finite-difference implementation of periodic Orbital-free Density Functional Theory (OF-DFT). Specifically, utilizing a local reformulation of the electrostatic and kernel terms, we develop a generalized framework for performing OF-DFT simulations with different variants of the electronic kinetic energy. In particular, we propose a self-consistent field (SCF) type fixed-point method for calculations involving linear-response kinetic energy functionals. In this framework, evaluation of both the electronic ground-state and forces on the nuclei are amenable to computations that scale linearly with the number of atoms. We develop a parallel implementation of this formulation using the finite-difference discretization. We demonstrate that higher-order finite-differences can achieve relatively large convergence rates with respect to mesh-size in both the energies and forces. Additionally, we establish that the fixed-point iteration converges rapidly, and that it can be further accelerated using extrapolation techniques like Anderson's mixing. We validate the accuracy of the results by comparing the energies and forces with plane-wave methods for selected examples, including the vacancy formation energy in Aluminum. Overall, the suitability of the proposed formulation for scalable high performance computing makes it an attractive choice for large-scale OF-DFT calculations consisting of thousands of atoms.
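
    The acceleration mentioned at the end, Anderson mixing, extrapolates a fixed-point iteration x = g(x) by combining a short history of residuals. A generic dense-vector sketch (our own minimal version, not the paper's finite-difference SCF code) is:

```python
import numpy as np

def anderson_fixed_point(g, x0, m=5, tol=1e-12, maxit=100):
    """Solve x = g(x) by Anderson-accelerated fixed-point iteration
    with a history window of m residuals."""
    x = np.atleast_1d(np.asarray(x0, float))
    X, F = [], []                                     # histories: g-values, residuals
    for _ in range(maxit):
        fx = g(x)
        f = fx - x                                    # current residual
        if np.linalg.norm(f) < tol:
            return x
        X.append(fx); F.append(f)
        X, F = X[-m:], F[-m:]
        if len(F) > 1:
            dF = np.array(F[1:]) - np.array(F[:-1])   # residual differences
            # least-squares coefficients minimizing the linearized residual
            gamma, *_ = np.linalg.lstsq(dF.T, f, rcond=None)
            dG = np.array(X[1:]) - np.array(X[:-1])
            x = fx - dG.T @ gamma                     # extrapolated iterate
        else:
            x = fx                                    # plain fixed-point step
    return x
```

    In the SCF setting, x would be the discretized density (or potential) and g one SCF cycle; the small least-squares solve is what distinguishes Anderson mixing from simple linear mixing.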

  6. Data Processing And Machine Learning Methods For Multi-Modal Operator State Classification Systems

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan A.

    2015-01-01

    This document is intended as an introduction to a set of common signal processing learning methods that may be used in the software portion of a functional crew state monitoring system. This includes overviews of both the theory of the methods involved, as well as examples of implementation. Practical considerations are discussed for implementing modular, flexible, and scalable processing and classification software for a multi-modal, multi-channel monitoring system. Example source code is also given for all of the discussed processing and classification methods.

  7. Development of Fast and Reliable Free-Energy Density Functional Methods for Simulations of Dense Plasmas from Cold- to Hot-Temperature Regimes

    NASA Astrophysics Data System (ADS)

    Karasiev, V. V.

    2017-10-01

    Free-energy density functional theory (DFT) is one of the standard tools in high-energy-density physics used to determine the fundamental properties of dense plasmas, especially in the cold and warm regimes where quantum effects are essential. DFT is usually implemented via the orbital-dependent Kohn-Sham (KS) procedure. There are two challenges with the conventional implementation: (1) the KS computational cost becomes prohibitively expensive at high temperatures; and (2) ground-state exchange-correlation (XC) functionals do not take into account XC thermal effects. This talk will address both challenges and report details of the formal development of a new generalized gradient approximation (GGA) XC free-energy functional which bridges the low-temperature (ground-state) and high-temperature (plasma) limits. Recent progress on the development of functionals for orbital-free DFT as a way to address the second challenge will also be discussed. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  8. Using organization theory to understand the determinants of effective implementation of worksite health promotion programs.

    PubMed

    Weiner, Bryan J; Lewis, Megan A; Linnan, Laura A

    2009-04-01

    The field of worksite health promotion has moved toward the development and testing of comprehensive programs that target health behaviors with interventions operating at multiple levels of influence. Yet, observational and process evaluation studies indicate that such programs are challenging for worksites to implement effectively. Research has identified several organizational factors that promote or inhibit effective implementation of comprehensive worksite health promotion programs. However, no integrated theory of implementation has emerged from this research. This article describes a theory of the organizational determinants of effective implementation of comprehensive worksite health promotion programs. The model is adapted from theory and research on the implementation of complex innovations in manufacturing, education and health care settings. The article uses the Working Well Trial to illustrate the model's theoretical constructs. Although the article focuses on comprehensive worksite health promotion programs, the conceptual model may also apply to other types of complex health promotion programs. The result is an organization-level theory of the determinants of effective implementation of worksite health promotion programs.

  9. Final Technical Report for DE-SC0001878 [Theory and Simulation of Defects in Oxide Materials]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chelikowsky, James R.

    2014-04-14

    We explored a wide variety of oxide materials and related problems, including materials at the nanoscale and generic problems associated with oxide materials, such as the development of more efficient computational tools to examine these materials. We developed and implemented methods to understand the optical and structural properties of oxides. For ground-state properties, our work is predominantly based on pseudopotentials and density functional theory (DFT), including new functionals and going beyond the local density approximation (LDA): LDA+U. To study excited-state properties (quasiparticle and optical excitations), we use time-dependent density functional theory, the GW approach, and GW plus Bethe-Salpeter equation (GW-BSE) methods based on a many-body Green function approach. Our work focused on the structural, electronic, optical and magnetic properties of defects (such as oxygen vacancies) in hafnium oxide, titanium oxide (both bulk and clusters) and related materials. We calculated the quasiparticle defect states and charge transition levels of oxygen vacancies in monoclinic hafnia. We presented a milestone G0W0 study of two of the crystalline phases of dye-sensitized TiO{sub 2} clusters. We employed hybrid density functional theory to examine the electronic structure of sexithiophene/ZnO interfaces. To identify the possible effect of epitaxial strain on stabilization of the ferromagnetic state of LaCoO{sub 3} (LCO), we compared the total energy of the magnetic and nonmagnetic states of the strained theoretical bulk structure.

  10. Practicing Technology Implementation: The Case of an Enterprise System

    ERIC Educational Resources Information Center

    Awazu, Yukika

    2013-01-01

    Drawing on four theories of practice--Communities of Practice (CoP), Bourdieu's theory of practice, Pickering's mangle of practice, and Actor Network Theory (ANT), the study provides an in-depth understanding about technology implementation practice. Analysis of an Enterprise System implementation project in a software manufacturing…

  11. Modelling machine ensembles with discrete event dynamical system theory

    NASA Technical Reports Server (NTRS)

    Hunter, Dan

    1990-01-01

    Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks for a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or implementing a feedback DEDS controller (closed-loop control).
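
    A local model as enumerated above is essentially a timed finite automaton: states, an event alphabet, an initial state, a partial transition function, and per-event durations. A minimal sketch in Python (all names and values here are illustrative, not from the paper):

```python
# Minimal sketch of a DEDS local model: a set of states, an event alphabet,
# an initial state, a partial transition map, and the time each event takes.
# Hypothetical toy example, not the paper's formalism verbatim.

class LocalModel:
    def __init__(self, states, events, initial, transitions, durations):
        self.states = states              # set of system states
        self.events = events              # event alphabet
        self.state = initial              # current state (starts at initial)
        self.transitions = transitions    # partial map: (state, event) -> next state
        self.durations = durations        # map: event -> time required
        self.clock = 0.0                  # elapsed time

    def fire(self, event):
        """Apply an event; (state, event) pairs outside the partial map are rejected."""
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"event {event!r} not enabled in state {self.state!r}")
        self.state = self.transitions[key]
        self.clock += self.durations[event]
        return self.state

# A toy submachine that grasps and releases a part
robot = LocalModel(
    states={"idle", "holding"},
    events={"grasp", "release"},
    initial="idle",
    transitions={("idle", "grasp"): "holding", ("holding", "release"): "idle"},
    durations={"grasp": 2.0, "release": 1.0},
)
robot.fire("grasp")
robot.fire("release")
print(robot.state, robot.clock)  # idle 3.0
```

    A global model would compose several such local models and restrict their interleavings to enforce the inter-submachine constraints.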

  12. Aspects Topologiques de la Theorie des Champs et leurs Applications

    NASA Astrophysics Data System (ADS)

    Caenepeel, Didier

    This thesis is dedicated to the study of various topological aspects of field theory and is divided into three parts. In two space dimensions the possibility of fractional statistics can be implemented by adding an appropriate "fictitious" electric charge and magnetic flux to each particle (after which they are known as anyons). Since the statistical interaction is rather difficult to handle, a mean-field approximation is used in order to describe a gas of anyons. We derive a criterion for the validity of this approximation using the inherent feature of parity violation in the scattering of anyons. We use this new method in various examples of anyons and show both analytically and numerically that the approximation is justified if the statistical interaction is weak, and that it must be weaker for boson-based than for fermion-based anyons. Chern-Simons theories give an elegant implementation of anyonic properties in field theories, which permits the emergence of new mechanisms for anyon superconductivity. Since it is reasonable to think that superconductivity is a low-energy phenomenon, we have been interested in non-relativistic C-S systems. We present the scalar field effective potential for non-relativistic matter coupled to both Abelian and non-Abelian C-S gauge fields. We perform the calculations using functional methods in background fields. Finally, we compute the scalar effective potential in various gauges and treat divergences with various regularization schemes. In three space dimensions, a generalization of Chern-Simons theory may be achieved by introducing an antisymmetric tensor gauge field. We use these theories, called B wedge F theories, to present an alternative to the Higgs mechanism to generate masses for non-Abelian gauge fields. The initial Lagrangian is composed of a fermion with current-current and dipole-dipole type self-interactions minimally coupled to non-Abelian gauge fields.
The mass generation occurs upon the fermionic functional integration. We show that by suitably adjusting the coupling constants the effective theory contains massive non-Abelian gauge fields without any residual scalars or other degrees of freedom.

  13. Implementation of cardiovascular disease prevention in primary health care: enhancing understanding using normalisation process theory.

    PubMed

    Volker, Nerida; Williams, Lauren T; Davey, Rachel C; Cochrane, Thomas; Clancy, Tanya

    2017-02-24

    The reorientation of primary health care towards prevention is fundamental to addressing the rising burden of chronic disease. However, in Australia, cardiovascular disease prevention practice in primary health care is not generally consistent with existing guidelines. The Model for Prevention study was a whole-of-system cardiovascular disease prevention intervention, one component being enhanced lifestyle modification support and the addition of a health coaching service in the general practice setting. To determine the feasibility of translating intervention outcomes into real-world practice, implementation work done by stakeholders was examined using Normalisation Process Theory as a framework. Data were collected through interviews with 40 intervention participants, including general practitioners, practice nurses, practice managers, lifestyle advisors and patients. Data analysis was informed by normalisation process theory constructs. Stakeholders were in agreement that, while prevention is a key function of general practice, it was not their usual work. There were varying levels of engagement with the intervention by practice staff due to staff interest, capacity and turnover, but most staff reconfigured their work for required activities. The Lifestyle Advisors believed staff had varied levels of interest in, and understanding of, their service, but most staff felt their role was useful. Patients expanded their existing relationships with their general practice, and most achieved their lifestyle modification goals. While the study highlighted the complex nature of the change required, many of the new or enhanced processes implemented as part of the intervention could be scaled up to improve the systems approach to prevention. Overcoming the barriers to change, such as the perception of CVD prevention as a 'hard sell', will rely on improving the value proposition for all stakeholders. 
The study provided a detailed understanding of the work required to implement a complex cardiovascular disease prevention intervention within general practice. The findings highlighted the need for multiple strategies that engage all stakeholders. Normalisation process theory was a useful framework for guiding change implementation.

  14. The Myers-Briggs Type Indicator and the Teaching-Learning Process.

    ERIC Educational Resources Information Center

    McCaulley, Mary H.

    The Myers-Briggs Type Indicator (MBTI) was developed specifically to make possible the implementation of Carl Jung's theory of type and is concerned mainly with conscious elements of the personality. It assumes that to function well, an individual must have a well-developed system for perception and a well-developed system for making decisions or…

  15. Lord's Wald Test for Detecting DIF in Multidimensional IRT Models: A Comparison of Two Estimation Approaches

    ERIC Educational Resources Information Center

    Lee, Soo; Suh, Youngsuk

    2018-01-01

    Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…

  16. Multiresolution quantum chemistry in multiwavelet bases: excited states from time-dependent Hartree–Fock and density functional theory via linear response

    DOE PAGES

    Yanai, Takeshi; Fann, George I.; Beylkin, Gregory; ...

    2015-02-25

    We present a multiresolution analysis (MRA) approach to the fully numerical solution of time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) with the Tamm–Dancoff (TD) approximation. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited-state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied to calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales, ranging from short-range valence excitations to long-range Rydberg-type ones, are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations the excited states are correctly bound.
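
    The BSH reformulation referred to above follows the standard multiresolution pattern: for a bound state with energy E < 0, the equation (-(1/2)∇² + V)ψ = Eψ is rewritten as the integral equation

```latex
\psi(\mathbf{r})
  \;=\;
  -2\int G_\mu(\mathbf{r}-\mathbf{r}')\,V(\mathbf{r}')\,\psi(\mathbf{r}')\,\mathrm{d}\mathbf{r}',
\qquad
G_\mu(\mathbf{r}) \;=\; \frac{e^{-\mu|\mathbf{r}|}}{4\pi|\mathbf{r}|},
\qquad
\mu \;=\; \sqrt{-2E},
```

    which is solved by iterated application of the BSH kernel; the linear-response equations take an analogous form with shifted energies, which is why no virtual orbitals are needed.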

  17. Toward a General Research Process for Using Dubin's Theory Building Model

    ERIC Educational Resources Information Center

    Holton, Elwood F.; Lowe, Janis S.

    2007-01-01

    Dubin developed a widely used methodology for theory building, which describes the components of the theory building process. Unfortunately, he does not define a research process for implementing his theory building model. This article proposes a seven-step general research process for implementing Dubin's theory building model. An example of a…

  18. Corrigendum: First principles calculation of field emission from nanostructures using time-dependent density functional theory: A simplified approach

    NASA Astrophysics Data System (ADS)

    Tawfik, Sherif A.; El-Sheikh, S. M.; Salem, N. M.

    2016-09-01

    Recently we have become aware that the description of the quantum wave functions in Sec. 2.1 is incorrect. In the published version of the paper, we have stated that the states are expanded in terms of plane waves. However, the correct description of the quantum states in the context of the real space implementation (using the Octopus code) is that states are represented by discrete points in a real space grid.

  19. The journey from forensic to predictive materials science using density functional theory

    DOE PAGES

    Schultz, Peter A.

    2017-09-12

    Approximate methods for electronic structure, implemented in sophisticated computer codes and married to ever-more powerful computing platforms, have become invaluable in chemistry and materials science. The maturing and consolidation of quantum chemistry codes since the 1980s, based upon explicitly correlated electronic wave functions, has made them a staple of modern molecular chemistry. The impact of first-principles electronic structure in physics and materials science, by contrast, lagged owing to the extra formal and computational demands of bulk calculations.

  20. The journey from forensic to predictive materials science using density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, Peter A.

    Approximate methods for electronic structure, implemented in sophisticated computer codes and married to ever-more powerful computing platforms, have become invaluable in chemistry and materials science. The maturing and consolidation of quantum chemistry codes since the 1980s, based upon explicitly correlated electronic wave functions, has made them a staple of modern molecular chemistry. The impact of first-principles electronic structure in physics and materials science, by contrast, lagged owing to the extra formal and computational demands of bulk calculations.

  1. Time dependent density functional calculation of plasmon response in clusters

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Zhang, Feng-Shou; Suraud, Eric

    2003-02-01

    We have introduced a theoretical scheme for the efficient description of the optical response of a cluster based on the time-dependent density functional theory. The practical implementation is done by means of the fully fledged time-dependent local density approximation scheme, which is solved directly in the time domain without any linearization. As an example we consider the simple Na2 cluster and compute its surface plasmon photoabsorption cross section, which is in good agreement with the experiments.
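
    The time-domain scheme extracts the photoabsorption spectrum from the Fourier transform of the induced dipole signal after a weak perturbation. A toy sketch of that post-processing step (the dipole signal here is synthetic with an assumed resonance; in a real calculation it comes from propagating the Kohn-Sham orbitals):

```python
# Toy sketch: recover an absorption peak from a time-domain dipole signal,
# as in real-time TDDFT post-processing. The signal is fabricated (one
# damped mode at omega0); only the Fourier-analysis step is illustrated.
import numpy as np

dt, nsteps = 0.05, 8192
t = dt * np.arange(nsteps)
omega0, gamma = 2.0, 0.02            # assumed resonance frequency and damping
dipole = np.exp(-gamma * t) * np.sin(omega0 * t)

# Magnitude of the windowed Fourier transform ~ absorption strength
spectrum = np.abs(np.fft.rfft(dipole * np.hanning(nsteps)))
omega = 2 * np.pi * np.fft.rfftfreq(nsteps, d=dt)   # angular frequencies

peak = omega[np.argmax(spectrum)]
print(f"peak at omega = {peak:.3f}")  # lies near omega0
```

    In the fully fledged scheme the cross section follows from the imaginary part of the dynamic polarizability rather than this bare magnitude, but the peak-extraction logic is the same.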

  2. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science

    PubMed Central

    Damschroder, Laura J; Aron, David C; Keith, Rosalind E; Kirsh, Susan R; Alexander, Jeffery A; Lowery, Julie C

    2009-01-01

    Background Many interventions found to be effective in health services research studies fail to translate into meaningful patient care outcomes across multiple contexts. Health services researchers recognize the need to evaluate not only summative outcomes but also formative outcomes to assess the extent to which implementation is effective in a specific setting, prolongs sustainability, and promotes dissemination into other settings. Many implementation theories have been published to help promote effective implementation. However, they overlap considerably in the constructs included in individual theories, and a comparison of theories reveals that each is missing important constructs included in other theories. In addition, terminology and definitions are not consistent across theories. We describe the Consolidated Framework For Implementation Research (CFIR) that offers an overarching typology to promote implementation theory development and verification about what works where and why across multiple contexts. Methods We used a snowball sampling approach to identify published theories that were evaluated to identify constructs based on strength of conceptual or empirical support for influence on implementation, consistency in definitions, alignment with our own findings, and potential for measurement. We combined constructs across published theories that had different labels but were redundant or overlapping in definition, and we parsed apart constructs that conflated underlying concepts. Results The CFIR is composed of five major domains: intervention characteristics, outer setting, inner setting, characteristics of the individuals involved, and the process of implementation. 
Eight constructs were identified related to the intervention (e.g., evidence strength and quality), four constructs were identified related to outer setting (e.g., patient needs and resources), 12 constructs were identified related to inner setting (e.g., culture, leadership engagement), five constructs were identified related to individual characteristics, and eight constructs were identified related to process (e.g., plan, evaluate, and reflect). We present explicit definitions for each construct. Conclusion The CFIR provides a pragmatic structure for approaching complex, interacting, multi-level, and transient states of constructs in the real world by embracing, consolidating, and unifying key constructs from published implementation theories. It can be used to guide formative evaluations and build the implementation knowledge base across multiple studies and settings. PMID:19664226

  3. The Importance of Three-Body Interactions in Molecular Dynamics Simulations of Water with the Fragment Molecular Orbital Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pruitt, Spencer R.; Nakata, Hiroya; Nagata, Takeshi

    2016-04-12

    The analytic first derivative with respect to nuclear coordinates is formulated and implemented in the framework of the three-body fragment molecular orbital (FMO) method. The gradient has been derived and implemented for restricted Hartree-Fock, second-order Møller-Plesset perturbation, and density functional theories. The importance of the three-body fully analytic gradient is illustrated through the failure of the two-body FMO method during molecular dynamics simulations of a small water cluster. The parallel implementation of the fragment molecular orbital method, its parallel efficiency, and its scalability on the Blue Gene/Q architecture up to 262,144 CPU cores are also discussed.
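
    The two- versus three-body distinction can be illustrated with the generic many-body expansion of the total energy over fragments (a schematic toy model with made-up numbers, not the FMO working equations):

```python
# Schematic many-body expansion over fragments A, B, C. We fabricate
# fragment energies containing a genuine three-body coupling and show that
# the two-body truncation misses exactly that term, while the three-body
# expansion is exact for a trimer by construction. Toy values throughout.
from itertools import combinations

E1 = {"A": -1.0, "B": -1.2, "C": -0.9}                          # monomer energies
pair = {("A", "B"): -0.10, ("A", "C"): -0.05, ("B", "C"): -0.08}  # 2-body couplings
triple = -0.03                                                   # genuine 3-body term

def E(frags):
    """'Exact' energy of a set of fragments in this toy model."""
    e = sum(E1[f] for f in frags)
    e += sum(v for k, v in pair.items() if set(k) <= set(frags))
    if set(frags) == {"A", "B", "C"}:
        e += triple
    return e

frags = ["A", "B", "C"]
mono = sum(E1.values())
two_body = sum(E((i, j)) - E1[i] - E1[j] for i, j in combinations(frags, 2))
e2 = mono + two_body                       # two-body FMO-like estimate
e3 = e2 + (E(frags) - mono - two_body)     # add the three-body correction

print("2-body error:", round(E(frags) - e2, 6))   # equals the 3-body coupling
```

    In a molecular dynamics run, an error of this kind in the gradient accumulates step by step, which is the failure mode of the two-body method noted above.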

  4. Design, implementation, and extension of thermal invisibility cloaks

    NASA Astrophysics Data System (ADS)

    Zhang, Youming; Xu, Hongyi; Zhang, Baile

    2015-05-01

    A thermal invisibility cloak, as inspired by optical invisibility cloaks, is a device that can steer the conductive heat flux around an isolated object without changing the ambient temperature distribution, so that the object is "invisible" to the external thermal environment. While designs of thermal invisibility cloaks inherit earlier theories from optical cloaks, the uniqueness of heat diffusion leads to more achievable implementations. Thermal invisibility cloaks, as well as variations including thermal concentrators, rotators, and illusion devices, have the potential to be applied in thermal management, sensing and imaging. Here, we review the current knowledge of thermal invisibility cloaks in terms of their design and implementation in cloaking studies, and their extension to other functional devices.

  5. Use of the Equity Implementation Model to Review Clinical System Implementation Efforts

    PubMed Central

    Lauer, Thomas W.; Joshi, Kailash; Browdy, Thomas

    2000-01-01

    This paper presents the equity implementation model (EIM) in the context of a case that describes the implementation of a medical scheduling system. The model is based on equity theory, a well-established theory in the social sciences that has been tested in hundreds of experimental and field studies. The predictions of equity theory have been supported in organizational, societal, family, and other social settings. Thus, the EIM helps provide a theory-based understanding for collecting and reviewing users' reactions to, and acceptance or rejection of, a new technology or system. The case study (implementation of a patient scheduling and appointment setting system in a large health maintenance organization) illustrates how the EIM can be used to examine users' reactions to the implementation of a new system. PMID:10641966

  6. Further Evidence That Sleep Deprivation Effects and the Vigilance Decrement Are Functionally Equivalent: Comment on Altmann (2018).

    PubMed

    Gunzelmann, Glenn; Veksler, Bella

    2018-03-01

    Veksler and Gunzelmann (2018) argue that the vigilance decrement and the deleterious effects of sleep loss reflect functionally equivalent degradations in cognitive processing and performance. Our account is implemented in a cognitive architecture, where these factors produce breakdowns in goal-directed cognitive processing that we refer to as microlapses. Altmann (2018) raises a number of challenges to microlapses as a unified account of these deficits. Under scrutiny, however, the challenges do little to discredit the theory or conclusions in the original paper. In our response, we address the most serious challenges. In so doing, we provide additional support for the theory and mechanisms, and we highlight opportunities for extending their explanatory breadth. Copyright © 2018 Cognitive Science Society, Inc.

  7. AuNx stabilization with interstitial nitrogen atoms: A Density Functional Theory Study

    NASA Astrophysics Data System (ADS)

    Quintero, J. H.; Gonzalez-Hernandez, R.; Ospina, R.; Mariño, A.

    2017-06-01

    4d and 5d series transition-metal nitrides have been studied intensively in recent years following the experimental production of AuN, PtN and CuN. In this paper, we used density functional theory (DFT) with a pseudopotential plane-wave method to study the incorporation of nitrogen atoms into the face-centered cubic (fcc) lattice of gold (Au). First, we took the fcc structure of gold and gradually located the nitrogen atoms in tetrahedral (TH) and octahedral (OH) interstitial sites. AuN stabilized in: 2OH (30%), 4OH and 4TH (50%), 4OH - 2TH (close to the wurtzite structure) and 6TH (60%). This leads us to think that AuN behaves like a transition-metal nitride, since the nitrogen atoms look for tetrahedral sites.

  8. Large-Scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation

    DTIC Science & Technology

    2016-08-10

    AFRL-AFOSR-JP-TR-2016-0073. Large-scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation. …performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been…

  9. Taking Root: a grounded theory on evidence-based nursing implementation in China.

    PubMed

    Cheng, L; Broome, M E; Feng, S; Hu, Y

    2018-06-01

    Evidence-based nursing is widely recognized as the critical foundation for quality care. To develop a middle-range theory on the process of evidence-based nursing implementation in the Chinese context. A grounded theory study using unstructured in-depth individual interviews was conducted with 56 participants who were involved in 24 evidence-based nursing implementation projects in Mainland China from September 2015 to September 2016. A middle-range grounded theory of 'Taking Root' was developed. The theory describes the evidence implementation process as consisting of four components (driving forces, process, outcome, sustainment/regression), three approaches (top-down, bottom-up and outside-in), four implementation strategies (patient-centred, nurses at the heart of change, reaching agreement, collaboration) and two patterns (transformational and adaptive implementation). Certain perspectives may not have been captured, as the retrospective nature of the interviewing technique did not allow for 'real-time' assessment of the actual implementation process. The transferability of the findings requires further exploration, as few participants with negative experiences were recruited. This is the first study to explore the evidence-based implementation process, strategies, approaches and patterns in the Chinese nursing practice context to inform international nursing and health policymaking. The theory of Taking Root described various approaches to evidence implementation and how the implementation can be transformational for the nurses and the setting in which they work. Nursing educators, managers and researchers should work together to improve nurses' readiness for evidence implementation. Healthcare systems need to optimize internal mechanisms and external collaborations to promote nursing practice in line with evidence and achieve clinical outcomes and sustainability. © 2017 International Council of Nurses.

  10. Improved treatment of exact exchange in Quantum ESPRESSO

    DOE PAGES

    Barnes, Taylor A.; Kurth, Thorsten; Carrier, Pierre; ...

    2017-01-18

    Here, we present an algorithm and implementation for the parallel computation of exact exchange in Quantum ESPRESSO (QE) that exhibits greatly improved strong scaling. QE is an open-source software package for electronic structure calculations using plane wave density functional theory, and supports the use of local, semi-local, and hybrid DFT functionals. Wider application of hybrid functionals is desirable for the improved simulation of electronic band energy alignments and thermodynamic properties, but the computational complexity of evaluating the exact exchange potential limits the practical application of hybrid functionals to large systems and requires efficient implementations. We demonstrate that existing implementations of hybrid DFT that utilize a single data structure for both the local and exact exchange regions of the code are significantly limited in the degree of parallelization achievable. We present a band-pair parallelization approach, in which the calculation of exact exchange is parallelized and evaluated independently from the parallelization of the remainder of the calculation, with the wavefunction data being efficiently transformed on-the-fly into a form that is optimal for each part of the calculation. For a 64 water molecule supercell, our new algorithm reduces the overall time to solution by nearly an order of magnitude.
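
    The band-pair idea — distribute the (i, j) orbital pairs of the exchange double sum over workers independently of how the rest of the calculation is parallelized — can be sketched as a load-partitioning exercise (purely illustrative; this is not the QE algorithm or data layout):

```python
# Schematic band-pair parallelization of an exact-exchange-like double sum
# K = sum over band pairs (i, j) of k(i, j). We flatten the pairs, deal them
# round-robin to "ranks", and sum the partial results; the answer matches
# the serial sum regardless of the partitioning. Illustrative only.
from itertools import combinations_with_replacement

nbands, nranks = 8, 4

def k(i, j):
    """Stand-in for the cost of an exchange integral between bands i and j."""
    return 1.0 / (1 + i + j)

pairs = list(combinations_with_replacement(range(nbands), 2))
partial = [
    sum(k(i, j) for n, (i, j) in enumerate(pairs) if n % nranks == r)
    for r in range(nranks)
]

serial = sum(k(i, j) for i, j in pairs)
assert abs(sum(partial) - serial) < 1e-12
print(len(pairs), "pairs over", nranks, "ranks")
```

    The point of the decoupling is that the pair distribution can be chosen for exchange-evaluation load balance alone, without constraining the parallel layout of the local part of the calculation.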

  11. Improved treatment of exact exchange in Quantum ESPRESSO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, Taylor A.; Kurth, Thorsten; Carrier, Pierre

    Here, we present an algorithm and implementation for the parallel computation of exact exchange in Quantum ESPRESSO (QE) that exhibits greatly improved strong scaling. QE is an open-source software package for electronic structure calculations using plane wave density functional theory, and supports the use of local, semi-local, and hybrid DFT functionals. Wider application of hybrid functionals is desirable for the improved simulation of electronic band energy alignments and thermodynamic properties, but the computational complexity of evaluating the exact exchange potential limits the practical application of hybrid functionals to large systems and requires efficient implementations. We demonstrate that existing implementations of hybrid DFT that utilize a single data structure for both the local and exact exchange regions of the code are significantly limited in the degree of parallelization achievable. We present a band-pair parallelization approach, in which the calculation of exact exchange is parallelized and evaluated independently from the parallelization of the remainder of the calculation, with the wavefunction data being efficiently transformed on-the-fly into a form that is optimal for each part of the calculation. For a 64 water molecule supercell, our new algorithm reduces the overall time to solution by nearly an order of magnitude.

  12. 3D GIS spatial operation based on extended Euler operators

    NASA Astrophysics Data System (ADS)

    Xu, Hongbo; Lu, Guonian; Sheng, Yehua; Zhou, Liangchen; Guo, Fei; Shang, Zuoyan; Wang, Jing

    2008-10-01

    At present, implementations of 3D spatial operations are tied to particular data structures, lack universality, and cannot handle non-manifold cases. The ISO/DIS 19107 standard presents only the definitions of Boolean operators and set operators for topological relationship queries, and OGC GeoXACML gives formal definitions for several set functions without implementation detail. To address these problems, this paper builds on the mathematical foundation of cell-complex theory, supported by a non-manifold data structure and drawing on research in non-manifold geometric modeling. First, following the non-manifold Euler-Poincaré formula, we construct six extended Euler operators and their inverse operators for creating, updating and deleting 3D spatial elements, together with several pairs of supplementary Euler operators that facilitate advanced functions. Second, we recast the topological-element operation sequences of the Boolean and set operations, as well as the set functions defined in GeoXACML, as combinations of extended Euler operators, which separates the upper-level functions from the lower-level data structure. Finally, we develop an underground 3D GIS prototype system, which validates the practicability and credibility of the extended Euler operators for 3D GIS presented in this paper.
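
    For the manifold special case, the flavor of Euler operators can be conveyed with a counts-only sketch: each operator changes the vertex/edge/face/shell counts while preserving the Euler–Poincaré invariant V − E + F = 2S (genus zero, no interior loops). The operator names below follow the classical solid-modeling convention; the paper's six extended operators additionally handle non-manifold configurations, which this sketch does not:

```python
# Counts-only sketch of classical (manifold) Euler operators. Each operator
# updates the vertex/edge/face/shell counts and must preserve the
# Euler-Poincare invariant V - E + F = 2S for genus-0 solids without holes.
# Illustrative only; the paper's extended operators cover non-manifold cases.

class EulerModel:
    def __init__(self):
        self.V = self.E = self.F = self.S = 0

    def _check(self):
        assert self.V - self.E + self.F == 2 * self.S, "invariant violated"

    def mvfs(self):   # make vertex, face, shell (start a new solid)
        self.V += 1; self.F += 1; self.S += 1; self._check()

    def mev(self):    # make edge and vertex
        self.E += 1; self.V += 1; self._check()

    def mef(self):    # make edge and face
        self.E += 1; self.F += 1; self._check()

# Build the counts of a cube: 1 mvfs, 7 mev (-> 8 vertices), 5 mef (-> 6 faces)
m = EulerModel()
m.mvfs()
for _ in range(7):
    m.mev()
for _ in range(5):
    m.mef()
print(m.V, m.E, m.F, m.S)  # 8 12 6 1
```

    Because every operator preserves the invariant, any sequence of them yields a topologically consistent model, which is what makes such operators a safe basis for create/update/delete primitives.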

  13. Improvements on non-equilibrium and transport Green function techniques: The next-generation TRANSIESTA

    NASA Astrophysics Data System (ADS)

    Papior, Nick; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads

    2017-03-01

    We present novel methods implemented within the non-equilibrium Green function (NEGF) code TRANSIESTA based on density functional theory (DFT). Our flexible, next-generation DFT-NEGF code handles devices with one or multiple electrodes (Ne ≥ 1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable matrix inversion, performance-critical pivoting, and hybrid parallelization. Additionally, a generic NEGF "post-processing" code (TBTRANS/PHTRANS) for electron and phonon transport is presented with several novelties such as Hamiltonian interpolations, Ne ≥ 1 electrode capability, bond-currents, a generalized interface for user-defined tight-binding transport, transmission projection using eigenstates of a projected Hamiltonian, and fast inversion algorithms for large-scale simulations easily exceeding 10{sup 6} atoms on workstation computers. The new features of both codes are demonstrated and benchmarked for relevant test systems.
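
    The NEGF transmission machinery such codes implement can be illustrated on the smallest possible example: a single site coupled to two semi-infinite 1D tight-binding leads, where T(E) = Tr[Γ_L G Γ_R G†] equals 1 inside the band. This is a generic textbook sketch, not TRANSIESTA's DFT-based code path:

```python
# Minimal NEGF transmission: single-site device between two semi-infinite
# 1D tight-binding leads (hopping t, on-site energy 0). For |E| < 2|t| a
# perfect chain transmits T(E) = 1. Textbook sketch, illustrative only.
import numpy as np

t = 1.0    # lead hopping
E = 0.7    # energy inside the band (|E| < 2t)

# Retarded surface Green's function of a semi-infinite 1D chain
g_surf = (E - 1j * np.sqrt(4 * t**2 - E**2)) / (2 * t**2)
sigma = t**2 * g_surf                   # lead self-energy (same for L and R)
gamma = 1j * (sigma - np.conj(sigma))   # broadening: Gamma = i(Sigma - Sigma^dag)

G = 1.0 / (E - sigma - sigma)           # device Green's function (1x1 "matrix")
T = (gamma * abs(G)**2 * gamma).real    # Tr[Gamma_L G Gamma_R G^dag]
print(round(T, 6))  # 1.0
```

    Real codes replace the scalar device by the DFT Hamiltonian of the scattering region and the analytic self-energy by recursively computed electrode self-energies, but the transmission formula is the same.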

  14. Implicit solvation model for density-functional study of nanocrystal surfaces and reaction pathways

    NASA Astrophysics Data System (ADS)

    Mathew, Kiran; Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Arias, T. A.; Hennig, Richard G.

    2014-02-01

    Solid-liquid interfaces are at the heart of many modern-day technologies and provide a challenge to many materials simulation methods. A realistic first-principles computational study of such systems entails the inclusion of solvent effects. In this work, we implement an implicit solvation model that has a firm theoretical foundation into the widely used density-functional code Vienna Ab initio Simulation Package (VASP). The implicit solvation model follows the framework of joint density functional theory. We describe the framework, our algorithm and implementation, and benchmarks for small molecular systems. We apply the solvation model to study the surface energies of different facets of semiconducting and metallic nanocrystals and the SN2 reaction pathway. We find that solvation reduces the surface energies of the nanocrystals, especially the semiconducting ones, and increases the energy barrier of the SN2 reaction.

  15. A Sparse Self-Consistent Field Algorithm and Its Parallel Implementation: Application to Density-Functional-Based Tight Binding.

    PubMed

    Scemama, Anthony; Renon, Nicolas; Rapacioli, Mathias

    2014-06-10

    We present an algorithm and its parallel implementation for solving a self-consistent problem as encountered in Hartree-Fock or density functional theory. The algorithm takes advantage of the sparsity of matrices through the use of local molecular orbitals. The implementation allows one to efficiently exploit modern symmetric multiprocessing (SMP) computer architectures. As a first application, the algorithm is used within the density-functional-based tight binding method, for which most of the computational time is spent in the linear algebra routines (diagonalization of the Fock/Kohn-Sham matrix). We show that with this algorithm (i) single point calculations on very large systems (millions of atoms) can be performed on large SMP machines, (ii) calculations involving intermediate-size systems (1,000-100,000 atoms) are also strongly accelerated and can run efficiently on standard servers, and (iii) the error on the total energy due to the use of a cutoff in the molecular orbital coefficients can be controlled such that it remains smaller than the SCF convergence criterion.

  16. Item Response Theory and Health Outcomes Measurement in the 21st Century

    PubMed Central

    Hays, Ron D.; Morales, Leo S.; Reise, Steve P.

    2006-01-01

    Item response theory (IRT) has a number of potential advantages over classical test theory in assessing self-reported health outcomes. IRT models yield invariant item and latent trait estimates (within a linear transformation), standard errors conditional on trait level, and trait estimates anchored to item content. IRT also facilitates evaluation of differential item functioning, inclusion of items with different response formats in the same scale, and assessment of person fit and is ideally suited for implementing computer adaptive testing. Finally, IRT methods can be helpful in developing better health outcome measures and in assessing change over time. These issues are reviewed, along with a discussion of some of the methodological and practical challenges in applying IRT methods. PMID:10982088
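
    The invariance and trait-conditional standard errors described above rest on parametric item response functions; a minimal sketch of the two-parameter logistic (2PL) model, with illustrative parameter values, is:

```python
import math

# Hedged sketch of the two-parameter logistic (2PL) IRT model: the
# probability of endorsing item i given latent trait theta.  Parameter
# names follow common IRT usage (a = discrimination, b = difficulty);
# the numeric values below are illustrative only.
def p_correct(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A higher trait level yields a higher endorsement probability
assert p_correct(2.0, a=1.5, b=0.0) > p_correct(0.0, a=1.5, b=0.0)
# At theta == b the endorsement probability is exactly 0.5
assert abs(p_correct(1.0, a=1.2, b=1.0) - 0.5) < 1e-12
```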

  17. Calculation of phonon dispersion relation using new correlation functional

    NASA Astrophysics Data System (ADS)

    Jitropas, Ukrit; Hsu, Chung-Hao

    2017-06-01

    To extend the use of the Local Density Approximation (LDA), a new analytical correlation functional is introduced. Correlation energy is an essential ingredient within density functional theory and is used to determine the ground-state energy and other properties, including the phonon dispersion relation. Except in the high- and low-density limits, the general expression for the correlation energy is unknown, so an approximation is required. The accuracy of the modeled system depends on the quality of the correlation-energy approximation. Typical correlation functionals used in the LDA, such as Vosko-Wilk-Nusair (VWN) and Perdew-Wang (PW), were obtained by parameterizing the near-exact quantum Monte Carlo data of Ceperley and Alder. These functionals take complicated forms and are inconvenient to implement. Alternatively, the recently published Chachiyo correlation functional provides results comparable to those of the much more complicated functionals. In addition, it offers more predictive power, being based on a first-principles derivation rather than on fitting. Nevertheless, the performance of the Chachiyo formula for calculating the phonon dispersion relation (a key to the thermal properties of materials) has not yet been tested. Here, the implementation of the new correlation functional to calculate the phonon dispersion relation is initiated. Its accuracy and validity will be explored.
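
    As an illustration, the Chachiyo functional for the spin-unpolarized uniform electron gas has the closed form ε_c(r_s) = a ln(1 + b/r_s + b/r_s²). The sketch below is a minimal Python rendering; the fitted coefficient b is the commonly quoted value and should be verified against the original paper:

```python
import math

# Sketch of the Chachiyo correlation functional for the spin-unpolarized
# uniform electron gas, eps_c(r_s) = a * ln(1 + b/r_s + b/r_s**2).
# The prefactor a is fixed by the high-density limit; the value of b is
# the commonly quoted fit (verify against the original publication).
A = (math.log(2.0) - 1.0) / (2.0 * math.pi ** 2)
B = 20.4562557

def eps_c(rs):
    """Correlation energy per electron (hartree) at Wigner-Seitz radius rs."""
    return A * math.log(1.0 + B / rs + B / rs ** 2)

# Correlation energy is negative and weakens at low density (large rs)
assert eps_c(1.0) < 0.0
assert eps_c(10.0) > eps_c(1.0)
```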

  18. Mapping the conduction band edge density of states of γ-In2Se3 by diffuse reflectance spectra

    NASA Astrophysics Data System (ADS)

    Kumar, Pradeep; Vedeshwar, Agnikumar G.

    2018-03-01

    It is demonstrated that the measured diffuse reflectance spectra of γ-In2Se3 can be used to map the conduction band edge density of states through Kubelka-Munk analysis. The Kubelka-Munk function derived from the measured spectra almost mimics the calculated density of states in the vicinity of the conduction band edge. The density of states was calculated using a first-principles approach yielding the structural, electronic, and optical properties. The calculations were carried out with various functionals, and only the Tran-Blaha modified Becke-Johnson (TB-mBJ) results agree most closely with the experimental band gap. The electronic and optical properties were calculated using the FP-LAPW + lo approach based on the density functional theory formalism, implementing only the TB-mBJ functional. The electron and hole effective masses have been calculated as me* = 0.25 m0 and mh* = 1.11 m0, respectively. The optical properties clearly indicate the anisotropic nature of γ-In2Se3.
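
    The Kubelka-Munk transform at the heart of the analysis is simple enough to sketch; this illustrative snippet assumes the standard form F(R) = (1 − R)²/(2R), which relates diffuse reflectance to the ratio of absorption to scattering coefficients:

```python
# Hedged sketch of the Kubelka-Munk transform: diffuse reflectance R
# (0 < R <= 1) is mapped to F(R) = (1 - R)^2 / (2 R), a quantity
# proportional to the absorption-to-scattering ratio.
def kubelka_munk(R):
    return (1.0 - R) ** 2 / (2.0 * R)

# Stronger absorption (lower reflectance) gives a larger F(R)
assert kubelka_munk(0.2) > kubelka_munk(0.8)
# Perfect reflectance implies no net absorption
assert kubelka_munk(1.0) == 0.0
```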

  19. The MONET code for the evaluation of the dose in hadrontherapy

    NASA Astrophysics Data System (ADS)

    Embriaco, A.

    2018-01-01

    MONET is a code for the computation of the 3D dose distribution of protons in water. For the lateral profile, MONET is based on the Molière theory of multiple Coulomb scattering. To also take nuclear interactions into account, we add to this theory a Cauchy-Lorentz function whose two parameters are obtained by a fit to a FLUKA simulation. We have implemented the Papoulis algorithm for the passage from the projected to the 2D lateral distribution. For the longitudinal profile, we have implemented a new calculation of the energy loss that is in good agreement with simulations. Straggling is included by convolving the energy loss with a Gaussian function. To complete the longitudinal profile, the nuclear contributions are also included using a linear parametrization. The total dose profile is calculated on a 3D mesh by evaluating the 2D lateral distribution at each depth and scaling it to the value of the energy deposition. We have compared MONET with FLUKA in two cases: a single Gaussian beam and a lateral scan. In both cases, we obtain good agreement for different proton energies in water.
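
    The straggling step described above amounts to a convolution with a Gaussian; the following toy sketch (not the MONET code itself, with made-up profile values) illustrates the idea on a discretized depth-dose curve:

```python
import math

# Illustrative sketch of energy-loss straggling as described in the
# abstract: a depth-dose profile is convolved with a Gaussian kernel.
# The profile, grid spacing dz and width sigma are made-up toy values.
def gaussian(x, sigma):
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def convolve(profile, dz, sigma):
    """Discrete convolution of `profile` (sampled at spacing dz) with a Gaussian."""
    n = len(profile)
    return [sum(profile[j] * gaussian((i - j) * dz, sigma) * dz for j in range(n))
            for i in range(n)]

dose = [0.0] * 20 + [1.0] + [0.0] * 20        # sharp peak at depth index 20
smeared = convolve(dose, dz=0.1, sigma=0.3)

assert max(smeared) < max(dose)               # the peak is broadened and lowered
assert abs(sum(smeared) - sum(dose)) < 1e-3   # total deposited dose is conserved
```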

  20. The molecular gradient using the divide-expand-consolidate resolution of the identity second-order Møller-Plesset perturbation theory: The DEC-RI-MP2 gradient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bykov, Dmytro; Kristensen, Kasper; Kjærgaard, Thomas

    We report an implementation of the molecular gradient using the divide-expand-consolidate resolution of the identity second-order Møller-Plesset perturbation theory (DEC-RI-MP2). The new DEC-RI-MP2 gradient method combines the precision control as well as the linear-scaling and massively parallel features of the DEC scheme with efficient evaluations of the gradient contributions using the RI approximation. We further demonstrate that the DEC-RI-MP2 gradient method is capable of calculating molecular gradients for very large molecular systems. A test set of supramolecular complexes containing up to 158 atoms and 1960 contracted basis functions has been employed to demonstrate the general applicability of the DEC-RI-MP2 method and to analyze the errors of the DEC approximation. Moreover, the test set contains molecules of complicated electronic structures and is thus deliberately chosen to stress test the DEC-RI-MP2 gradient implementation. Additionally, as a showcase example the full molecular gradient for insulin (787 atoms and 7604 contracted basis functions) has been evaluated.

  1. Characterising an implementation intervention in terms of behaviour change techniques and theory: the 'Sepsis Six' clinical care bundle.

    PubMed

    Steinmo, Siri; Fuller, Christopher; Stone, Sheldon P; Michie, Susan

    2015-08-08

    Sepsis is a major cause of death from infection, with a mortality rate of 36 %. This can be halved by implementing the 'Sepsis Six' evidence-based care bundle within 1 h of presentation. A UK audit has shown that median implementation rates are 27-47 % and interventions to improve this have demonstrated minimal effects. In order to develop more effective implementation interventions, it is helpful to obtain detailed characterisations of current interventions and to draw on behavioural theory to identify mechanisms of change. The aim of this study was to illustrate this process by using the Behaviour Change Wheel; Behaviour Change Technique (BCT) Taxonomy; Capability, Opportunity, Motivation model of behaviour; and Theoretical Domains Framework to characterise the content and theoretical mechanisms of action of an existing intervention to implement Sepsis Six. Data came from documentary, interview and observational analyses of intervention delivery in several wards of a UK hospital. A broad description of the intervention was created using the Template for Intervention Description and Replication framework. Content was specified in terms of (i) component BCTs using the BCT Taxonomy and (ii) intervention functions using the Behaviour Change Wheel. Mechanisms of action were specified using the Capability, Opportunity, Motivation model and the Theoretical Domains Framework. The intervention consisted of 19 BCTs, with eight identified using all three data sources. The BCTs were delivered via seven functions of the Behaviour Change Wheel, with four ('education', 'enablement', 'training' and 'environmental restructuring') supported by the three data sources. The most frequent mechanisms of action were reflective motivation (especially 'beliefs about consequences' and 'beliefs about capabilities') and psychological capability (especially 'knowledge'). The intervention consisted of a wide range of BCTs targeting a wide range of mechanisms of action. 
This study demonstrates the utility of the Behaviour Change Wheel, the BCT Taxonomy and the Theoretical Domains Framework, tools recognised for providing guidance for intervention design, for characterising an existing intervention to implement evidence-based care.

  2. Divide-and-conquer density functional theory on hierarchical real-space grids: Parallel implementation and applications

    NASA Astrophysics Data System (ADS)

    Shimojo, Fuyuki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2008-02-01

    A linear-scaling algorithm based on a divide-and-conquer (DC) scheme has been designed to perform large-scale molecular-dynamics (MD) simulations, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). Electronic wave functions are represented on a real-space grid, which is augmented with a coarse multigrid to accelerate the convergence of iterative solutions and with adaptive fine grids around atoms to accurately calculate ionic pseudopotentials. Spatial decomposition is employed to implement the hierarchical-grid DC-DFT algorithm on massively parallel computers. The largest benchmark tests include an 11.8×10^6-atom (1.04×10^12 electronic degrees of freedom) calculation on 131 072 IBM BlueGene/L processors. The DC-DFT algorithm has well-defined parameters to control the data locality, with which the solutions converge rapidly. Also, the total energy is well conserved during the MD simulation. We perform first-principles MD simulations based on the DC-DFT algorithm, in which the large accessible system sizes yield excellent agreement with x-ray scattering measurements for the pair-distribution function of liquid Rb and allow the description of low-frequency vibrational modes of graphene. The band gap of a CdSe nanorod calculated by the DC-DFT algorithm agrees well with available conventional DFT results. With the DC-DFT algorithm, the band gap is calculated for larger system sizes until the result reaches its asymptotic value.

  3. Combining the GW formalism with the polarizable continuum model: A state-specific non-equilibrium approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchemin, Ivan; Jacquemin, Denis

    We have implemented the polarizable continuum model within the framework of the many-body Green's function GW formalism for the calculation of electron addition and removal energies in solution. The present formalism includes both ground-state and non-equilibrium polarization effects. In addition, the polarization energies are state-specific, allowing one to obtain the bath-induced renormalisation energy of all occupied and virtual energy levels. Our implementation is validated by comparisons with ΔSCF calculations performed at both the density functional theory and coupled-cluster singles and doubles levels for solvated nucleobases. The present study opens the way to GW and Bethe-Salpeter calculations in disordered condensed phases of interest in organic optoelectronics, wet chemistry, and biology.

  4. Prolegomena to the field

    NASA Astrophysics Data System (ADS)

    Chen, Su Shing; Caulfield, H. John

    1994-03-01

    Adaptive Computing, as opposed to Classical Computing, is emerging as a field that is the culmination of more than 40 years of work in various scientific and technological areas, including cybernetics, neural networks, pattern recognition networks, learning machines, self-reproducing automata, genetic algorithms, fuzzy logics, probabilistic logics, chaos, electronics, optics, and quantum devices. This volume of "Critical Reviews on Adaptive Computing: Mathematics, Electronics, and Optics" is intended as a synergistic approach to this emerging field. Many researchers in these areas are working on important results. However, we have not seen a general effort to summarize and synthesize these results in theory as well as in implementation. In order to reach a higher level of synergism, we propose Adaptive Computing as the field that comprises the above-mentioned computational paradigms and their various realizations. The field should include both the Theory (or Mathematics) and the Implementation. Our emphasis is on the interplay of Theory and Implementation. This interplay, an adaptive process itself, is the only "holistic" way to advance our understanding and realization of brain-like computation. We feel that a theory without implementation tends to become unrealistic and "out of touch" with reality, while an implementation without theory runs the risk of being superficial and obsolete.

  5. Massive-scale gene co-expression network construction and robustness testing using random matrix theory.

    PubMed

    Gibson, Scott M; Ficklin, Stephen P; Isaacson, Sven; Luo, Feng; Feltus, Frank A; Smith, Melissa C

    2013-01-01

    The study of gene relationships and their effect on biological function and phenotype is a focal point in systems biology. Gene co-expression networks built using microarray expression profiles are one technique for discovering and interpreting gene relationships. A knowledge-independent thresholding technique, such as Random Matrix Theory (RMT), is useful for identifying meaningful relationships. Highly connected genes in the thresholded network are then grouped into modules that provide insight into their collective functionality. While it has been shown that co-expression networks are biologically relevant, it has not been determined to what extent any given network is functionally robust given perturbations in the input sample set. Such a test requires hundreds of networks, and hence a tool to rapidly construct them. To examine the functional robustness of networks with varying input, we enhanced an existing RMT implementation for improved scalability and tested the functional robustness of human (Homo sapiens), rice (Oryza sativa) and budding yeast (Saccharomyces cerevisiae) networks. We demonstrate a dramatic decrease in network construction time and computational requirements and show that, despite some variation in global properties between networks, functional similarity remains high. Moreover, the biological function captured by co-expression networks thresholded by RMT is highly robust.

  6. Simulation of surface processes

    PubMed Central

    Jónsson, Hannes

    2011-01-01

    Computer simulations of surface processes can reveal unexpected insight regarding atomic-scale structure and transitions. Here, the strengths and weaknesses of some commonly used approaches are reviewed as well as promising avenues for improvements. The electronic degrees of freedom are usually described by gradient-dependent functionals within Kohn–Sham density functional theory. Although this level of theory has been remarkably successful in numerous studies, several important problems require a more accurate theoretical description. It is important to develop new tools to make it possible to study, for example, localized defect states and band gaps in large and complex systems. Preliminary results presented here show that orbital density-dependent functionals provide a promising avenue, but they require the development of new numerical methods and substantial changes to codes designed for Kohn–Sham density functional theory. The nuclear degrees of freedom can, in most cases, be described by the classical equations of motion; however, they still pose a significant challenge, because the time scale of interesting transitions, which typically involve substantial free energy barriers, is much longer than the time scale of vibrations—often 10 orders of magnitude. Therefore, simulation of diffusion, structural annealing, and chemical reactions cannot be achieved with direct simulation of the classical dynamics. Alternative approaches are needed. One such approach is transition state theory as implemented in the adaptive kinetic Monte Carlo algorithm, which, thus far, has relied on the harmonic approximation but could be extended and made applicable to systems with rougher energy landscape and transitions through quantum mechanical tunneling. PMID:21199939

  7. A trunk ranging system based on binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Zhao, Xixuan; Kan, Jiangming

    2017-07-01

    Trunk ranging is an essential function for autonomous forestry robots. Traditional trunk ranging systems based on personal computers are not convenient in practical applications. This paper examines the implementation of a trunk ranging system based on binocular vision theory via TI's DaVinci DM37x system. The system is smaller and more reliable than one implemented using a personal computer. It calculates three-dimensional information from the images acquired by the binocular cameras, producing the targeting and ranging results. The experimental results show that the measurement error is small and the system design is feasible for autonomous forestry robots.
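
    The depth computation underlying binocular ranging follows the standard triangulation relation Z = f·B/d; a minimal sketch with illustrative (not the paper's) camera parameters:

```python
# Minimal sketch of binocular triangulation: depth Z follows from the
# focal length f (pixels), camera baseline B (metres) and disparity d
# (pixels) via Z = f * B / d.  All numeric values are illustrative only.
def depth_from_disparity(f_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# A trunk seen with 40 px disparity by cameras 0.12 m apart, f = 800 px
z = depth_from_disparity(800.0, 0.12, 40.0)
assert abs(z - 2.4) < 1e-9                        # 2.4 m away
# A larger disparity means a nearer object
assert depth_from_disparity(800.0, 0.12, 80.0) < z
```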

  8. Development of a computer technique for the prediction of transport aircraft flight profile sonic boom signatures. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Coen, Peter G.

    1991-01-01

    A new computer technique for the analysis of transport aircraft sonic boom signature characteristics was developed. This new technique, based on linear theory methods, combines the previously separate equivalent area and F function development with a signature propagation method using a single geometry description. The new technique was implemented in a stand-alone computer program and was incorporated into an aircraft performance analysis program. Through these implementations, both configuration designers and performance analysts are given new capabilities to rapidly analyze an aircraft's sonic boom characteristics throughout the flight envelope.

  9. Theory of L -edge spectroscopy of strongly correlated systems

    NASA Astrophysics Data System (ADS)

    Lüder, Johann; Schött, Johan; Brena, Barbara; Haverkort, Maurits W.; Thunström, Patrik; Eriksson, Olle; Sanyal, Biplab; Di Marco, Igor; Kvashnin, Yaroslav O.

    2017-12-01

    X-ray absorption spectroscopy measured at the L edge of transition metals (TMs) is a powerful element-selective tool providing direct information about the correlation effects in the 3d states. The theoretical modeling of the 2p → 3d excitation process remains challenging for contemporary ab initio electronic structure techniques, due to strong core-hole and multiplet effects influencing the spectra. In this work, we present a realization of the method combining density-functional theory with multiplet ligand field theory, proposed in Haverkort et al. [Phys. Rev. B 85, 165113 (2012), 10.1103/PhysRevB.85.165113]. In this approach, a single-impurity Anderson model (SIAM) is constructed, with almost all parameters obtained from first principles, and then solved to obtain the spectra. In our implementation, we adopt the language of dynamical mean-field theory and utilize the local density of states and the hybridization function, projected onto the TM 3d states, in order to construct the SIAM. The developed computational scheme is applied to calculate the L-edge spectra of several TM monoxides. A very good agreement between theory and experiment is found for all studied systems. The effects of core-hole relaxation and hybridization discretization, possible extensions of the method, as well as its limitations are discussed.

  10. Scalar relativistic computations of nuclear magnetic shielding and g-shifts with the zeroth-order regular approximation and range-separated hybrid density functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aquino, Fredy W.; Govind, Niranjan; Autschbach, Jochen

    2011-10-01

    Density functional theory (DFT) calculations of NMR chemical shifts and molecular g-tensors with Gaussian-type orbitals are implemented via second-order energy derivatives within the scalar relativistic zeroth-order regular approximation (ZORA) framework. Nonhybrid functionals, standard (global) hybrids, and range-separated (Coulomb-attenuated, long-range corrected) hybrid functionals are tested. Origin invariance of the results is ensured by the use of gauge-including atomic orbital (GIAO) basis functions. The new implementation in the NWChem quantum chemistry package is verified by calculations of nuclear shielding constants for the heavy atoms in HX (X = F, Cl, Br, I, At) and H2X (X = O, S, Se, Te, Po), and Te chemical shifts in a number of tellurium compounds. The basis-set and functional dependence of g-shifts is investigated for 14 radicals with light and heavy atoms. The problem of accurately predicting F NMR shielding in UF6-nCln, n = 1 to 6, is revisited. The results are sensitive to approximations in the density functionals, indicating a delicate balance of DFT self-interaction vs. correlation. For the uranium halides, the results with the range-separated functionals are mixed.

  11. Finite-Temperature Relativistic Time-Blocking Approximation for Nuclear Strength Functions

    NASA Astrophysics Data System (ADS)

    Wibowo, Herlik; Litvinova, Elena

    2017-09-01

    This work presents an extension of the relativistic nuclear field theory (RNFT) developed throughout the last decade as an approach to the nuclear many-body problem, based on the QHD meson-nucleon Lagrangian and relativistic field theory. The unique feature of RNFT is a consistent connection of the high-energy scale of heavy mesons, the medium-energy range of the pion, and the low-energy domain of emergent collective vibrations (phonons). RNFT has demonstrated a very good performance in various nuclear structure calculations across the nuclear chart and, in particular, provides a consistent input for the description of the two phases of r-process nucleosynthesis: neutron capture and beta decay. The inclusion of finite-temperature effects presented here allows for an extension of the method to highly excited compound nuclei. The covariant response theory in the relativistic time-blocking approximation (RTBA) is generalized for thermal effects, adapting the Matsubara Green's function formalism to the RNFT framework. The finite-temperature RTBA is implemented numerically to calculate multipole strength functions in medium-mass and heavy nuclei. The obtained results will be discussed in comparison to available experimental data and in the context of possible consequences for astrophysics.

  12. Application of simplified Complexity Theory concepts for healthcare social systems to explain the implementation of evidence into practice.

    PubMed

    Chandler, Jacqueline; Rycroft-Malone, Jo; Hawkes, Claire; Noyes, Jane

    2016-02-01

    To examine the application of core concepts from Complexity Theory to explain the findings from a process evaluation undertaken in a trial evaluating implementation strategies for recommendations about reducing surgical fasting times. The proliferation of evidence-based guidance requires a greater focus on its implementation. Theory is required to explain the complex processes across the multiple healthcare organizational levels. This social healthcare context involves the interaction between professionals, patients and the organizational systems in care delivery. Complexity Theory may provide an explanatory framework to explain the complexities inherent in implementation in social healthcare contexts. A secondary thematic analysis of qualitative process evaluation data informed by Complexity Theory. Seminal texts applying Complexity Theory to the social context were annotated, key concepts extracted and core Complexity Theory concepts identified. These core concepts were applied as a theoretical lens to provide an explanation of themes from a process evaluation of a trial evaluating the implementation of strategies to reduce surgical fasting times. The sampled substantive texts provided a representative spread of the theoretical development and application of Complexity Theory from the late 1990s to 2013 in social science, healthcare, management and philosophy. Five core Complexity Theory concepts were extracted: 'self-organization', 'interaction', 'emergence', 'system history' and 'temporality'. Application of these concepts suggests that routine surgical fasting practice is habituated in the social healthcare system and therefore cannot easily be reversed. A reduction in fasting times requires an incentivised new approach to emerge in the surgical system's priority of completing the operating list. The application of Complexity Theory provides a useful explanation for resistance to changing fasting practice.
Its utility in implementation research warrants further attention and evaluation. © 2015 John Wiley & Sons Ltd.

  13. Adiabatic corrections to density functional theory energies and wave functions.

    PubMed

    Mohallem, José R; Coura, Thiago de O; Diniz, Leonardo G; de Castro, Gustavo; Assafrão, Denise; Heine, Thomas

    2008-09-25

    The adiabatic finite-nuclear-mass correction (FNMC) to the electronic energies and wave functions of atoms and molecules is formulated for density-functional theory and implemented in the deMon code. The approach is tested for a series of local and gradient-corrected density functionals, using MP2 results and diagonal Born-Oppenheimer corrections from the literature for comparison. In the evaluation of absolute energy corrections of nonorganic molecules, the LDA PZ81 functional works surprisingly better than the others. For organic molecules the GGA BLYP functional has the best performance. FNMC with GGA functionals, mainly BLYP, shows a good performance in the evaluation of relative corrections, except for nonorganic molecules containing H atoms. The PW86 functional stands out with the best evaluation of the barrier to linearity of H2O and the isotopic dipole moment of HDO. In general, DFT functionals display an accuracy superior to what is commonly believed, and because the corrections are based on a change in the electronic kinetic energy, they are ranked here in a new, appropriate way. The approach is applied to obtain the adiabatic correction for the full atomization of the alkanes C(n)H(2n+2), n = 4-10. The barrier of 1 mHartree is approached for adiabatic corrections, justifying its insertion into DFT.

  14. Alternative separation of exchange and correlation energies in multi-configuration range-separated density-functional theory.

    PubMed

    Stoyanova, Alexandrina; Teale, Andrew M; Toulouse, Julien; Helgaker, Trygve; Fromager, Emmanuel

    2013-10-07

    The alternative separation of exchange and correlation energies proposed by Toulouse et al. [Theor. Chem. Acc. 114, 305 (2005)] is explored in the context of multi-configuration range-separated density-functional theory. The new decomposition of the short-range exchange-correlation energy relies on the auxiliary long-range interacting wavefunction rather than the Kohn-Sham (KS) determinant. The advantage, relative to the traditional KS decomposition, is that the wavefunction part of the energy is now computed with the regular (fully interacting) Hamiltonian. One potential drawback is that, because of double counting, the wavefunction used to compute the energy cannot be obtained by minimizing the energy expression with respect to the wavefunction parameters. The problem is overcome by using short-range optimized effective potentials (OEPs). The resulting combination of OEP techniques with wavefunction theory has been investigated in this work, at the Hartree-Fock (HF) and multi-configuration self-consistent-field (MCSCF) levels. In the HF case, an analytical expression for the energy gradient has been derived and implemented. Calculations have been performed within the short-range local density approximation on H2, N2, Li2, and H2O. Significant improvements in binding energies are obtained with the new decomposition of the short-range energy. The importance of optimizing the short-range OEP at the MCSCF level when static correlation becomes significant has also been demonstrated for H2, using a finite-difference gradient. The implementation of the analytical gradient for MCSCF wavefunctions is currently in progress.
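
    The range separation on which such methods rest splits the Coulomb interaction with the error function, 1/r = erf(μr)/r + erfc(μr)/r; a minimal numerical sketch (the value of μ below is chosen arbitrarily for illustration):

```python
import math

# Sketch of range separation: the Coulomb interaction 1/r is split into
# a long-range part erf(mu*r)/r and a short-range part erfc(mu*r)/r.
# The range-separation parameter mu is illustrative, not a method value.
def long_range(r, mu):
    return math.erf(mu * r) / r

def short_range(r, mu):
    return math.erfc(mu * r) / r

r, mu = 1.7, 0.4
# The two pieces sum exactly back to the full Coulomb interaction
assert abs(long_range(r, mu) + short_range(r, mu) - 1.0 / r) < 1e-12
# The short-range part decays much faster than 1/r at large distances
assert short_range(10.0, mu) < 0.01 * (1.0 / 10.0)
```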

  15. The value of theory in programmes to implement clinical guidelines: Insights from a retrospective mixed-methods evaluation of a programme to increase adherence to national guidelines for chronic disease in primary care

    PubMed Central

    Sheringham, Jessica; Solmi, Francesca; Ariti, Cono; Baim-Lance, Abigail; Morris, Steve; Fulop, Naomi J.

    2017-01-01

    Background Programmes have had limited success in improving guideline adherence for chronic disease. Use of theory is recommended but is often absent in programmes conducted in ‘real-world’ rather than research settings. Materials and methods This mixed-methods study tested a retrospective theory-based approach to evaluate a ‘real-world’ programme in primary care to improve adherence to national guidelines for chronic obstructive pulmonary disease (COPD). Qualitative data, comprising analysis of documents generated throughout the programme (n>300), in-depth interviews with planners (clinicians, managers and improvement experts involved in devising, planning, and implementing the programme, n = 14) and providers (practice clinicians, n = 14) were used to construct programme theories, experiences of implementation and contextual factors influencing care. Quantitative analyses comprised controlled before-and-after analyses to test ‘early’ and evolved’ programme theories with comparators grounded in each theory. ‘Early’ theory predicted the programme would reduce emergency hospital admissions (EHA). It was tested using national analysis of standardized borough-level EHA rates between programme and comparator boroughs. ‘Evolved’ theory predicted practices with higher programme participation would increase guideline adherence and reduce EHA and costs. It was tested using a difference-in-differences analysis with linked primary and secondary care data to compare changes in diagnosis, management, EHA and costs, over time and by programme participation. Results Contrary to programme planners’ predictions in ‘early’ and ‘evolved’ programme theories, admissions did not change following the programme. However, consistent with ‘evolved’ theory, higher guideline adoption occurred in practices with greater programme participation. 
Conclusions Retrospectively constructing theories based on the ideas of programme planners can enable evaluators to address some limitations encountered when evaluating programmes without a theoretical base. Prospectively articulating theory aided by existing models and mid-range implementation theories may strengthen guideline adoption efforts by prompting planners to scrutinise implementation methods. Benefits of deriving programme theory, with or without the aid of mid-range implementation theories, however, may be limited when the evidence underpinning guidelines is flawed. PMID:28328942

  16. Self-Determination Theory and Motivational Interviewing: Complementary Models to Elicit Voluntary Engagement by Partner-Abusive Men

    PubMed Central

    NEIGHBORS, CLAYTON; WALKER, DENISE D.; ROFFMAN, ROGER A.; MBILINYI, LYUNGAI F.; EDLESON, JEFFREY L.

    2012-01-01

    Research examining intimate partner violence (IPV) has lacked a comprehensive theoretical framework for understanding and treating behavior. The authors propose two complementary models, a treatment approach (Motivational Interviewing, MI) informed by a theory (Self-Determination Theory; SDT), as a way of integrating existing knowledge and suggesting new directions in intervening early with IPV perpetrators. MI is a client-centered clinical intervention intended to assist in strengthening motivation to change and has been widely implemented in the substance abuse literature. SDT is a theory that focuses on internal versus external motivation and considers elements that impact optimal functioning and psychological well-being. These elements include psychological needs, integration of behavioral regulations, and contextual influences on motivation. Each of these aspects of SDT is described in detail and in the context of IPV etiology and intervention using motivational interviewing. PMID:22593609

  17. Implementation of Complexity Analyzing Based on Additional Effect

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Li, Na; Liang, Yanhong; Liu, Fang

    According to Complexity Theory, complexity arises in a system when a functional requirement is not satisfied. Several studies have addressed Complexity Theory within the framework of Axiomatic Design, but they focus on reducing complexity and none addresses a method for analyzing the complexity in the system. This paper therefore puts forth a method for analyzing complexity that seeks to fill this gap. To ground the discussion, two concepts are introduced: ideal effect and additional effect. The resulting method of analyzing complexity based on additional effect combines Complexity Theory with the Theory of Inventive Problem Solving (TRIZ), and helps designers analyze complexity through additional effects. A case study shows the application of the process.

  18. Communication: Density functional theory embedding with the orthogonality constrained basis set expansion procedure

    NASA Astrophysics Data System (ADS)

    Culpitt, Tanner; Brorsen, Kurt R.; Hammes-Schiffer, Sharon

    2017-06-01

    Density functional theory (DFT) embedding approaches have generated considerable interest in the field of computational chemistry because they enable calculations on larger systems by treating subsystems at different levels of theory. To circumvent the calculation of the non-additive kinetic potential, various projector methods have been developed to ensure the orthogonality of molecular orbitals between subsystems. Herein the orthogonality constrained basis set expansion (OCBSE) procedure is implemented to enforce this subsystem orbital orthogonality without requiring a level shifting parameter. This scheme is a simple alternative to existing parameter-free projector-based schemes, such as the Huzinaga equation. The main advantage of the OCBSE procedure is that excellent convergence behavior is attained for DFT-in-DFT embedding without freezing any of the subsystem densities. For the three chemical systems studied, the level of accuracy is comparable to or higher than that obtained with the Huzinaga scheme with frozen subsystem densities. Allowing both the high-level and low-level DFT densities to respond to each other during DFT-in-DFT embedding calculations provides more flexibility and renders this approach more generally applicable to chemical systems. It could also be useful for future extensions to embedding approaches combining wavefunction theories and DFT.
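
    The subsystem orbital orthogonality that such projector-based schemes enforce can be illustrated with a minimal numerical sketch (not the OCBSE procedure itself; it assumes an orthonormal underlying basis, so the overlap matrix is the identity):

```python
import numpy as np

def orthogonalize_subsystem(C_A, C_B):
    """Project subsystem B's orbitals out of the span of subsystem A's
    occupied orbitals, then re-orthonormalize them via QR.
    Assumes an orthonormal underlying basis (overlap S = I)."""
    # Projector onto the orthogonal complement of span(C_A)
    P = np.eye(C_A.shape[0]) - C_A @ C_A.T
    C_B_proj = P @ C_B
    # Re-orthonormalize the projected orbitals
    Q, _ = np.linalg.qr(C_B_proj)
    return Q[:, :C_B.shape[1]]

rng = np.random.default_rng(0)
n = 10
C_A, _ = np.linalg.qr(rng.standard_normal((n, 3)))  # 3 occupied orbitals of A
C_B = rng.standard_normal((n, 2))                   # 2 trial orbitals of B
C_B_new = orthogonalize_subsystem(C_A, C_B)
print(np.max(np.abs(C_A.T @ C_B_new)))  # ~0: inter-subsystem orthogonality
```

    In a real embedding calculation this constraint is imposed self-consistently during the SCF cycles of each subsystem, rather than as a one-shot projection.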

  19. Ab Initio study on structural, electronic, magnetic and dielectric properties of LSNO within Density Functional Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Petersen, John; Bechstedt, Friedhelm; Furthmüller, Jürgen; Scolfaro, Luisa

    LSNO (La2-xSrxNiO4) is of great interest due to its colossal dielectric constant (CDC) and rich underlying physics. While being an antiferromagnetic insulator, localized holes are present in the form of stripes in the Ni-O planes which are commensurate with the inverse of the Sr concentration. The stripes are a manifestation of charge density waves with period approximately 1/x and spin density waves with period approximately 2/x. Here, the spin ground state is calculated via LSDA + U with the PAW method implemented in VASP. Crystal structure and the effective Hubbard U parameter are optimized before calculating ɛ∞ within the independent particle approximation. ɛ∞ and the full static dielectric constant (including the lattice polarizability) ɛ0 are calculated within Density Functional Perturbation Theory.

  20. Implementation of Magnetic Dipole Interaction in the Planewave-Basis Approach for Slab Systems

    NASA Astrophysics Data System (ADS)

    Oda, Tatsuki; Obata, Masao

    2018-06-01

    We implemented the magnetic dipole interaction (MDI) in a first-principles planewave-basis electronic structure calculation based on spin density functional theory. This implementation, employing the two-dimensional Ewald summation, enables us to obtain the total magnetic anisotropy energy of slab materials with contributions originating from both spin-orbit and magnetic dipole-dipole couplings on the same footing. The implementation was demonstrated using an iron square lattice. The result indicates that the magnetic anisotropy of the MDI is much less than that obtained from the atomic magnetic moment model due to the prolate quadrupole component of the spin magnetic moment density. We discuss the reduction in the anisotropy of the MDI in the case of modulation of the quadrupole component and the effect of magnetic field arising from the MDI on atomic scale.
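
    The shape-anisotropy effect of the MDI can be illustrated with a finite point-dipole sum on a square lattice (a toy sketch only: the paper uses the two-dimensional Ewald summation for the infinite slab and the full spin magnetic moment density, not point dipoles):

```python
import numpy as np

def dipole_energy(moment_dir, L=12, a=1.0):
    """Dipole-dipole energy per site, in units of mu0*m^2/(4*pi*a^3),
    for an L x L square lattice of identical point dipoles."""
    m = np.asarray(moment_dir, float)
    m /= np.linalg.norm(m)
    xs = np.arange(L) * a
    pos = np.array([(x, y, 0.0) for x in xs for y in xs])
    E = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = pos[j] - pos[i]
            d = np.linalg.norm(r)
            rhat = r / d
            E += (1.0 - 3.0 * (m @ rhat) ** 2) / d ** 3
    return E / len(pos)

E_inplane = dipole_energy([1, 0, 0])  # moments in the film plane
E_perp = dipole_energy([0, 0, 1])     # moments normal to the film
print(E_inplane, E_perp)  # in-plane is lower: the MDI favors in-plane moments
```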

  1. Hamiltonian lattice field theory: Computer calculations using variational methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zako, Robert L.

    1991-12-03

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems.
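
    The Rayleigh-Ritz step can be sketched on a toy problem (a one-dimensional harmonic oscillator in a small Gaussian basis, not the paper's lattice field theory): diagonalizing the Hamiltonian in a finite basis yields a variational upper bound on the ground-state energy.

```python
import numpy as np
from scipy.linalg import eigh

# Even Gaussian trial functions exp(-a*x^2) for H = -1/2 d^2/dx^2 + 1/2 x^2
# (hbar = m = omega = 1); the exact ground-state energy is 0.5.
alphas = np.array([0.2, 0.6, 1.5])
p = alphas[:, None] + alphas[None, :]
S = np.sqrt(np.pi / p)                            # overlap matrix
T = (alphas[:, None] * alphas[None, :] / p) * S   # kinetic energy matrix
V = S / (4.0 * p)                                 # potential 1/2 x^2
E, _ = eigh(T + V, S)  # generalized eigenproblem (Rayleigh-Ritz)
print(E[0])  # variational upper bound, close to 0.5
```

    Enlarging the basis lowers the bound monotonically toward the exact value, which is the sense in which errors can be made "arbitrarily small" by computing sufficiently long.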

  2. Geometric low-energy effective action in a doubled spacetime

    NASA Astrophysics Data System (ADS)

    Ma, Chen-Te; Pezzella, Franco

    2018-05-01

    The ten-dimensional supergravity theory is a geometric low-energy effective theory and the equations of motion for its fields can be obtained from string theory by computing β functions. With d compact dimensions, an O (d , d ; Z) geometric structure can be added to it, giving the supergravity theory with T-duality manifest. In this paper, this is constructed through the use of a suitable star product whose role is to implement the weak constraint on the fields and the gauge parameters in order to have a closed gauge symmetry algebra. The consistency of the action proposed here is based on the orthogonality of the momenta associated with fields in their triple star products in the cubic terms defined for d ≥ 1. This orthogonality holds also for an arbitrary number of star products of fields for d = 1. Finally, we extend our analysis to the double sigma model, non-commutative geometry and open string theory.

  3. Covalency in transition-metal oxides within all-electron dynamical mean-field theory

    NASA Astrophysics Data System (ADS)

    Haule, Kristjan; Birol, Turan; Kotliar, Gabriel

    2014-08-01

    A combination of dynamical mean field theory and density functional theory, as implemented by Haule et al. [Phys. Rev. B 81, 195107 (2010), 10.1103/PhysRevB.81.195107], is applied to both the early and late transition metal oxides. For a fixed value of the local Coulomb repulsion, without fine tuning, we obtain the main features of these series, such as the metallic character of SrVO3 and the insulating gaps of LaVO3, LaTiO3, and La2CuO4, which are in good agreement with experiment. This study highlights the importance of local physics and high energy hybridization in the screening of the Hubbard interaction and how different low energy behaviors can emerge from the unified treatment of the transition metal series.

  4. First Renormalized Parton Distribution Functions from Lattice QCD

    NASA Astrophysics Data System (ADS)

    Lin, Huey-Wen; LP3 Collaboration

    2017-09-01

    We present the first lattice-QCD results on the nonperturbatively renormalized parton distribution functions (PDFs). Using X.D. Ji's large-momentum effective theory (LaMET) framework, lattice-QCD hadron structure calculations are able to overcome the longstanding problem of determining the Bjorken-x dependence of PDFs. This has led to numerous additional theoretical works and exciting progress. In this talk, we will address a recent development that implements a step missing from prior lattice-QCD calculations: renormalization, its effects on the nucleon matrix elements, and the resultant changes to the calculated distributions.

  5. Implementation of Gravity Model to Estimation of Transportation Market Shares

    NASA Astrophysics Data System (ADS)

    Krata, Przemysław

    2010-03-01

    The theoretical considerations presented in the paper are inspired by market gravity models, an interesting approach to operations research on a market, with the emphasis on transportation market issues. The mathematical model of the relations between transportation companies and their customers applied in the course of the research is based on continuous-function characteristics, which enables the use of field-theory notions. The resultant vector-type utility function facilitates obtaining competitive-advantage areas for all transportation companies located on the considered transportation market.
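
    A minimal gravity-style market-share sketch (hypothetical numbers and a simple size/distance attraction; the paper's vector-type utility field is richer) shows the basic mechanics: each company's attraction at a customer location scales with its size and decays with distance, and normalized attractions give market shares.

```python
import numpy as np

# Hypothetical companies: positions and a crude "mass" (e.g., fleet capacity)
company_pos = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
company_size = np.array([4.0, 2.0, 1.0])
customer_pos = np.array([[1.0, 1.0], [9.0, 1.0], [5.0, 5.0]])
beta = 2.0  # distance-decay exponent

# Pairwise customer-company distances, then gravity attraction and shares
d = np.linalg.norm(customer_pos[:, None, :] - company_pos[None, :, :], axis=2)
attraction = company_size / d**beta
shares = attraction / attraction.sum(axis=1, keepdims=True)
print(shares.round(3))  # row i: predicted share of each company at customer i
```

    The "competitive advantage area" of a company is then the region of customer locations where its share is the largest entry in the row.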

  6. Computing motion using resistive networks

    NASA Technical Reports Server (NTRS)

    Koch, Christof; Luo, Jin; Mead, Carver; Hutchinson, James

    1988-01-01

    Recent developments in the theory of early vision are described which lead from the formulation of the motion problem as an ill-posed one to its solution by minimizing certain 'cost' functions. These cost or energy functions can be mapped onto simple analog and digital resistive networks. It is shown how the optical flow can be computed by injecting currents into resistive networks and recording the resulting stationary voltage distribution at each node. These networks can be implemented in CMOS VLSI circuits and represent plausible candidates for biological vision systems.
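
    The cost-to-network mapping can be sketched in one dimension (an illustrative smoothing problem, not the paper's optical-flow circuit): minimizing a quadratic data-plus-smoothness cost reduces to a linear system whose equations mirror Kirchhoff's current law, with data terms acting as current injections and the smoothness weight as a coupling conductance.

```python
import numpy as np

def smooth(d, lam=10.0):
    """Minimize sum_i (u_i - d_i)^2 + lam * sum_i (u_{i+1} - u_i)^2.
    The zero-gradient condition is a tridiagonal system -- the discrete
    analogue of node equations in a resistive ladder network."""
    n = len(d)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0
        if i > 0:
            A[i, i] += lam
            A[i, i - 1] -= lam
        if i < n - 1:
            A[i, i] += lam
            A[i, i + 1] -= lam
    return np.linalg.solve(A, d)  # node "voltages" = smoothed signal

rng = np.random.default_rng(1)
d = np.sin(np.linspace(0, np.pi, 50)) + 0.3 * rng.standard_normal(50)
u = smooth(d)
print(np.abs(np.diff(u)).max() < np.abs(np.diff(d)).max())  # True: smoother
```

    In the analog network the same stationary solution is reached physically, without an explicit matrix solve, which is the appeal of the VLSI implementation.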

  7. Molecular properties of excited electronic state: Formalism, implementation, and applications of analytical second energy derivatives within the framework of the time-dependent density functional theory/molecular mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Qiao; Liang, WanZhen, E-mail: liangwz@xmu.edu.cn; Liu, Jie

    2014-05-14

    This work extends our previous works [J. Liu and W. Z. Liang, J. Chem. Phys. 135, 014113 (2011); J. Liu and W. Z. Liang, J. Chem. Phys. 135, 184111 (2011)] on analytical excited-state energy Hessian within the framework of time-dependent density functional theory (TDDFT) to couple with molecular mechanics (MM). The formalism, implementation, and applications of analytical first and second energy derivatives of TDDFT/MM excited state with respect to the nuclear and electric perturbations are presented. Their performances are demonstrated by the calculations of adiabatic excitation energies, and excited-state geometries, harmonic vibrational frequencies, and infrared intensities for a number of benchmark systems. The consistent results with the full quantum mechanical method and other hybrid theoretical methods indicate the reliability of the current numerical implementation of developed algorithms. The computational accuracy and efficiency of the current analytical approach are also checked and the computational efficient strategies are suggested to speed up the calculations of complex systems with many MM degrees of freedom. Finally, we apply the current analytical approach in TDDFT/MM to a realistic system, a red fluorescent protein chromophore together with part of its nearby protein matrix. The calculated results indicate that the rearrangement of the hydrogen bond interactions between the chromophore and the protein matrix is responsible for the large Stokes shift.

  8. Monkeys choose as if maximizing utility compatible with basic principles of revealed preference theory

    PubMed Central

    Pastor-Bernier, Alexandre; Plott, Charles R.; Schultz, Wolfram

    2017-01-01

    Revealed preference theory provides axiomatic tools for assessing whether individuals make observable choices “as if” they are maximizing an underlying utility function. The theory evokes a tradeoff between goods whereby individuals improve themselves by trading one good for another good to obtain the best combination. Preferences revealed in these choices are modeled as curves of equal choice (indifference curves) and reflect an underlying process of optimization. These notions have far-reaching applications in consumer choice theory and impact the welfare of human and animal populations. However, they lack the empirical implementation in animals that would be required to establish a common biological basis. In a design using basic features of revealed preference theory, we measured in rhesus monkeys the frequency of repeated choices between bundles of two liquids. For various liquids, the animals’ choices were compatible with the notion of giving up a quantity of one good to gain one unit of another good while maintaining choice indifference, thereby implementing the concept of marginal rate of substitution. The indifference maps consisted of nonoverlapping, linear, convex, and occasionally concave curves with typically negative, but also sometimes positive, slopes depending on bundle composition. Out-of-sample predictions using homothetic polynomials validated the indifference curves. The animals’ preferences were internally consistent in satisfying transitivity. Change of option set size demonstrated choice optimality and satisfied the Weak Axiom of Revealed Preference (WARP). These data are consistent with a version of revealed preference theory in which preferences are stochastic; the monkeys behaved “as if” they had well-structured preferences and maximized utility. PMID:28202727

  9. Monkeys choose as if maximizing utility compatible with basic principles of revealed preference theory.

    PubMed

    Pastor-Bernier, Alexandre; Plott, Charles R; Schultz, Wolfram

    2017-03-07

    Revealed preference theory provides axiomatic tools for assessing whether individuals make observable choices "as if" they are maximizing an underlying utility function. The theory evokes a tradeoff between goods whereby individuals improve themselves by trading one good for another good to obtain the best combination. Preferences revealed in these choices are modeled as curves of equal choice (indifference curves) and reflect an underlying process of optimization. These notions have far-reaching applications in consumer choice theory and impact the welfare of human and animal populations. However, they lack the empirical implementation in animals that would be required to establish a common biological basis. In a design using basic features of revealed preference theory, we measured in rhesus monkeys the frequency of repeated choices between bundles of two liquids. For various liquids, the animals' choices were compatible with the notion of giving up a quantity of one good to gain one unit of another good while maintaining choice indifference, thereby implementing the concept of marginal rate of substitution. The indifference maps consisted of nonoverlapping, linear, convex, and occasionally concave curves with typically negative, but also sometimes positive, slopes depending on bundle composition. Out-of-sample predictions using homothetic polynomials validated the indifference curves. The animals' preferences were internally consistent in satisfying transitivity. Change of option set size demonstrated choice optimality and satisfied the Weak Axiom of Revealed Preference (WARP). These data are consistent with a version of revealed preference theory in which preferences are stochastic; the monkeys behaved "as if" they had well-structured preferences and maximized utility.
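
    The WARP consistency test described above can be sketched as a pairwise check over observed price-choice pairs (hypothetical data, not the monkey experiment): if bundle a was chosen when b was affordable, then a must not have been affordable in the situation where b was chosen.

```python
import numpy as np

def satisfies_warp(prices, bundles):
    """Check the Weak Axiom of Revealed Preference over observed choices.
    prices[i], bundles[i]: price vector and chosen bundle in situation i."""
    P, X = np.asarray(prices, float), np.asarray(bundles, float)
    n = len(P)
    for a in range(n):
        for b in range(n):
            if a == b or np.allclose(X[a], X[b]):
                continue
            # a directly revealed preferred to b: b was affordable when a chosen
            a_over_b = P[a] @ X[b] <= P[a] @ X[a]
            b_over_a = P[b] @ X[a] <= P[b] @ X[b]
            if a_over_b and b_over_a:
                return False  # mutual revealed preference violates WARP
    return True

# Two observations consistent with WARP ...
print(satisfies_warp([[1, 2], [2, 1]], [[4, 1], [1, 4]]))  # True
# ... and a violating pair (identical prices, different choices)
print(satisfies_warp([[1, 1], [1, 1]], [[3, 1], [1, 3]]))  # False
```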

  10. ECON-KG: A Code for Computation of Electrical Conductivity Using Density Functional Theory

    DTIC Science & Technology

    2017-10-01

    is presented. Details of the implementation and instructions for execution are presented, and an example calculation of the frequency-dependent ... shown to depend on carbon content,3 and electrical conductivity models have become a requirement for input into continuum-level simulations being ... The frequency-dependent electrical conductivity is computed as a weighted sum over k-points: σ(ω) = ∑_k W(k) σ_k(ω), (2) where W(k) is

  11. Numerical Nonlinear Robust Control with Applications to Humanoid Robots

    DTIC Science & Technology

    2015-07-01

    automatically. While optimization and optimal control theory have been widely applied in humanoid robot control, it is not without drawbacks. A blind... drawback of Galerkin-based approaches is the need to successively produce discrete forms, which is difficult to implement in practice. Related... universal function approximation ability, these approaches are not without drawbacks. In practice, while a single hidden layer neural network can

  12. How to fly an aircraft with control theory and splines

    NASA Technical Reports Server (NTRS)

    Karlsson, Anders

    1994-01-01

    When trying to fly an aircraft as smoothly as possible, it is a good idea to use the derivatives of the pilot command instead of the actual control. This idea was implemented with splines and control theory in a system that models an aircraft. Computer calculations in Matlab show that sufficiently smooth control signals cannot be obtained in this way, because the splines must approximate not only the test function but also its derivatives. Perfect tracking is achieved, but at the cost of very peaky control signals and accelerations.

  13. Towards a general theory of implementation

    PubMed Central

    2013-01-01

    Understanding and evaluating the implementation of complex interventions in practice is an important problem for healthcare managers and policy makers, and for patients and others who must operationalize them beyond formal clinical settings. It has been argued that this work should be founded on theory that provides a foundation for understanding, designing, predicting, and evaluating dynamic implementation processes. This paper sets out core constituents of a general theory of implementation, building on Normalization Process Theory and linking it to key constructs from recent work in sociology and psychology. These are informed by ideas about agency and its expression within social systems and fields, social and cognitive mechanisms, and collective action. This approach unites a number of contending perspectives in a way that makes possible a more comprehensive explanation of the implementation and embedding of new ways of thinking, enacting and organizing practice. PMID:23406398

  14. Towards a general theory of implementation.

    PubMed

    May, Carl

    2013-02-13

    Understanding and evaluating the implementation of complex interventions in practice is an important problem for healthcare managers and policy makers, and for patients and others who must operationalize them beyond formal clinical settings. It has been argued that this work should be founded on theory that provides a foundation for understanding, designing, predicting, and evaluating dynamic implementation processes. This paper sets out core constituents of a general theory of implementation, building on Normalization Process Theory and linking it to key constructs from recent work in sociology and psychology. These are informed by ideas about agency and its expression within social systems and fields, social and cognitive mechanisms, and collective action. This approach unites a number of contending perspectives in a way that makes possible a more comprehensive explanation of the implementation and embedding of new ways of thinking, enacting and organizing practice.

  15. Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T

    Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) working group has currently documented 29 failure scenarios. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain. From these five selected scenarios, we characterize them into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.

  16. Simulation of Near-Edge X-ray Absorption Fine Structure with Time-Dependent Equation-of-Motion Coupled-Cluster Theory.

    PubMed

    Nascimento, Daniel R; DePrince, A Eugene

    2017-07-06

    An explicitly time-dependent (TD) approach to equation-of-motion (EOM) coupled-cluster theory with single and double excitations (CCSD) is implemented for simulating near-edge X-ray absorption fine structure in molecular systems. The TD-EOM-CCSD absorption line shape function is given by the Fourier transform of the CCSD dipole autocorrelation function. We represent this transform by its Padé approximant, which provides converged spectra in much shorter simulation times than are required by the Fourier form. The result is a powerful framework for the blackbox simulation of broadband absorption spectra. K-edge X-ray absorption spectra for carbon, nitrogen, and oxygen in several small molecules are obtained from the real part of the absorption line shape function and are compared with experiment. The computed and experimentally obtained spectra are in good agreement; the mean unsigned error in the predicted peak positions is only 1.2 eV. We also explore the spectral signatures of protonation in these molecules.
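
    The Fourier form of the line shape can be sketched on a model autocorrelation function (a hypothetical two-mode damped signal, not a CCSD calculation; the Padé approximant used in the paper serves to converge exactly this transform in shorter simulation times):

```python
import numpy as np

# Model dipole autocorrelation: two damped oscillations at w = 2.0 and 3.5
dt, n = 0.05, 4096
t = np.arange(n) * dt
gamma = 0.05
c = (np.exp(-1j * 2.0 * t) + 0.5 * np.exp(-1j * 3.5 * t)) * np.exp(-gamma * t)

# Absorption line shape ~ real part of the half-Fourier transform of c(t)
w = np.linspace(0.0, 5.0, 2000)
spectrum = np.array([np.real(np.sum(c * np.exp(1j * wk * t)) * dt) for wk in w])
peak = w[np.argmax(spectrum)]
print(peak)  # dominant Lorentzian peak near w = 2.0
```

    The damping factor here mimics the finite propagation time; with the Padé representation, far fewer time steps are needed to resolve the same Lorentzian peaks.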

  17. Applying Organization Theory to Understanding the Adoption and Implementation of Accountable Care Organizations: Commentary.

    PubMed

    Shortell, Stephen M

    2016-12-01

    This commentary highlights the key arguments and contributions of institutional theory, transaction cost economics (TCE) theory, high reliability theory, and organizational learning theory to understanding the development and evolution of Accountable Care Organizations (ACOs). Institutional theory and TCE theory primarily emphasize the external influences shaping ACOs, while high reliability theory and organizational learning theory underscore the internal factors influencing ACO performance. A framework based on Implementation Science is proposed to consider the multiple perspectives on ACOs and, in particular, their ability to innovate to achieve desired cost, quality, and population health goals. © The Author(s) 2016.

  18. Large scale GW calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Govoni, Marco; Galli, Giulia

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single particle Green’s function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  19. Large Scale GW Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Govoni, Marco; Galli, Giulia

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm which takes advantage of separable expressions of both the single particle Green's function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. We applied the newly developed technique to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  20. Large scale GW calculations

    DOE PAGES

    Govoni, Marco; Galli, Giulia

    2015-01-12

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single particle Green’s function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  1. A time correlation function theory describing static field enhanced third order optical effects at interfaces.

    PubMed

    Neipert, Christine; Space, Brian

    2006-12-14

    Sum vibrational frequency spectroscopy, a second order optical process, is interface specific in the dipole approximation. At charged interfaces, there exists a static field, and as a direct consequence, the experimentally detected signal is a combination of enhanced second and static field induced third order contributions. There is significant evidence in the literature of the importance/relative magnitude of this third order contribution, but no previous molecularly detailed approach existed to separately calculate the second and third order contributions. Thus, for the first time, a molecularly detailed time correlation function theory is derived here that allows for the second and third order contributions to sum frequency vibrational spectra to be individually determined. Further, a practical, molecular dynamics based, implementation procedure for the derived correlation functions that describe the third order phenomenon is also presented. This approach includes a novel generalization of point atomic polarizability models to calculate the hyperpolarizability of a molecular system. The full system hyperpolarizability appears in the time correlation functions responsible for third order contributions in the presence of a static field.

  2. A theory-based approach to nursing shared governance.

    PubMed

    Joseph, M Lindell; Bogue, Richard J

    2016-01-01

    The discipline of nursing uses a general definition of shared governance. The discipline's lack of a specified theory with precepts and propositions contributes to persistent barriers in progress toward building evidence-based knowledge through systematic study. The purposes of this article were to describe the development and elements of a program theory approach for nursing shared governance implementation and to recommend further testing. Five studies using multiple methods are described using a structured framework. The studies led to the use of Lipsey's method of theory development for program implementation to develop a theory of shared governance for nursing. Nine competencies were verified to define nursing practice council effectiveness. Other findings reveal that nurse empowerment results from alignment between the competencies of self-directed work teams and the competencies of organizational leaders. Implementation of GEMS theory-based nursing shared governance can advance goals at the individual, unit, department, and organization level. Advancing professional nursing practice requires that nursing concepts are systematically studied and then formalized for implementation. This article describes the development of a theoretical foundation for the systematic study and implementation of nursing shared governance. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  3. Modern money theory and ecological tax reform: A functional finance approach to energy conservation

    NASA Astrophysics Data System (ADS)

    McConnell, Scott L. B.

    This dissertation contributes to heterodox economics by developing a theoretical and policy-relevant link that will promote the conservation of energy while driving the value of the domestic currency. The analysis relies upon the theoretical foundation of modern money theory and functional finance, which states that "taxes-drive-money": the value of a sovereign nation's currency is imputed through the acceptance by the sovereign nation of the currency in payment of taxation. This theoretical perspective lends itself to various public policy prescriptions, such as government employment policies or the employer of last resort (ELR), which has been discussed at length elsewhere (Wray 1998; Tcherneva 2007; Forstater 2003). This research contributes to this overall program by arguing that the basis for taxation under modern money theory allows public policy makers various alternatives regarding the make-up of the tax system in place. In particular, following functional finance, taxes do not have the sole purpose of paying for government spending, but rather drive the value of the currency and may be designed to perform other functions as well, such as penalizing socially undesirable behavior. The focus in this dissertation is on the amelioration of pollution and increasing energy conservation. The research question for this dissertation is this: what federally implemented tax would best serve the multiple criteria of 1) driving the value of the currency, 2) promoting energy conservation and 3) ameliorating income and wealth disparities inherent in a monetary production economy? This dissertation provides a suggestion for such a tax that would be part of a much larger overall policy program based upon the tenets of modern money theory and functional finance. Additionally, this research seeks to provide an important theoretical contribution to the emerging Post Keynesian and ecological economics dialog.

  4. From conflict management to reward-based decision making: actors and critics in primate medial frontal cortex.

    PubMed

    Silvetti, Massimo; Alexander, William; Verguts, Tom; Brown, Joshua W

    2014-10-01

    The role of the medial prefrontal cortex (mPFC) and especially the anterior cingulate cortex has been the subject of intense debate for the last decade. A number of theories have been proposed to account for its function. Broadly speaking, some emphasize cognitive control, whereas others emphasize value processing; specific theories concern reward processing, conflict detection, error monitoring, and volatility detection, among others. Here we survey and evaluate them relative to experimental results from neurophysiological, anatomical, and cognitive studies. We argue for a new conceptualization of mPFC, arising from recent computational modeling work. Based on reinforcement learning theory, these new models propose that mPFC is an Actor-Critic system. This system aims to predict future events, including rewards, to evaluate errors in those predictions, and, finally, to implement optimal skeletal-motor and visceromotor commands to obtain reward. This framework provides a comprehensive account of mPFC function, accounting for and predicting empirical results across different levels of analysis, including monkey neurophysiology, human ERP, human neuroimaging, and human behavior. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Psi4NumPy: An Interactive Quantum Chemistry Programming Environment for Reference Implementations and Rapid Development.

    PubMed

    Smith, Daniel G A; Burns, Lori A; Sirianni, Dominic A; Nascimento, Daniel R; Kumar, Ashutosh; James, Andrew M; Schriber, Jeffrey B; Zhang, Tianyuan; Zhang, Boyi; Abbott, Adam S; Berquist, Eric J; Lechner, Marvin H; Cunha, Leonardo A; Heide, Alexander G; Waldrop, Jonathan M; Takeshita, Tyler Y; Alenaizan, Asem; Neuhauser, Daniel; King, Rollin A; Simmonett, Andrew C; Turney, Justin M; Schaefer, Henry F; Evangelista, Francesco A; DePrince, A Eugene; Crawford, T Daniel; Patkowski, Konrad; Sherrill, C David

    2018-06-11

    Psi4NumPy demonstrates the use of efficient computational kernels from the open-source Psi4 program through the popular NumPy library for linear algebra in Python to facilitate the rapid development of clear, understandable Python computer code for new quantum chemical methods, while maintaining a relatively low execution time. Using these tools, reference implementations have been created for a number of methods, including self-consistent field (SCF), SCF response, many-body perturbation theory, coupled-cluster theory, configuration interaction, and symmetry-adapted perturbation theory. Furthermore, several reference codes have been integrated into Jupyter notebooks, allowing background, underlying theory, and formula information to be associated with the implementation. Psi4NumPy tools and associated reference implementations can lower the barrier for future development of quantum chemistry methods. These implementations also demonstrate the power of the hybrid C++/Python programming approach employed by the Psi4 program.
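In the spirit of Psi4NumPy's NumPy-based reference implementations, here is a minimal closed-shell SCF loop. The two-orbital integrals are fabricated toy values (orthonormal basis, so S = I), not Psi4 output, and the model is a sketch of the SCF algorithm rather than a production method.

```python
import numpy as np

# A minimal closed-shell SCF iteration in pure NumPy. The core
# Hamiltonian h and the separable two-electron integrals eri are
# made-up toy numbers chosen only so the loop converges quickly.
h = np.array([[-1.5, -0.2],
              [-0.2, -1.0]])               # toy core Hamiltonian
g = np.array([[0.8, 0.1],
              [0.1, 0.6]])
eri = np.einsum('pq,rs->pqrs', g, g)       # separable toy ERIs

nocc = 1                                   # 2 electrons -> 1 occupied MO
D = np.zeros_like(h)
energies = []
for _ in range(100):
    J = np.einsum('pqrs,rs->pq', eri, D)   # Coulomb matrix
    K = np.einsum('prqs,rs->pq', eri, D)   # exchange matrix
    F = h + J - 0.5 * K                    # closed-shell Fock matrix
    energies.append(0.5 * np.sum(D * (h + F)))
    if len(energies) > 1 and abs(energies[-1] - energies[-2]) < 1e-12:
        break                              # energy converged
    eps, C = np.linalg.eigh(F)             # MO energies and coefficients
    Cocc = C[:, :nocc]
    D = 2.0 * Cocc @ Cocc.T                # rebuild the density matrix
```

Each einsum line maps directly onto the textbook formula, which is the pedagogical point the Psi4NumPy abstract makes about readable reference code.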

  6. Approaching the basis set limit for DFT calculations using an environment-adapted minimal basis with perturbation theory: Formulation, proof of concept, and a pilot implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe

    2016-07-28

    Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.

  7. Implementation of density functional theory method on object-oriented programming (C++) to calculate energy band structure using the projector augmented wave (PAW)

    NASA Astrophysics Data System (ADS)

    Alfianto, E.; Rusydi, F.; Aisyah, N. D.; Fadilla, R. N.; Dipojono, H. K.; Martoprawiro, M. A.

    2017-05-01

    This study implemented the DFT method in the C++ programming language following object-oriented programming rules (expressive software). The use of expressive software yields a simple program structure that closely mirrors the mathematical formulas, which will make it easier for the scientific community to develop the software further. We validate our software by calculating the energy band structures of silicon, carbon, and germanium in the FCC structure using the Projector Augmented Wave (PAW) method, then compare the results to Quantum ESPRESSO calculations. This study shows that the accuracy of the software is 85% relative to Quantum ESPRESSO.
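To show the kind of eigenvalue problem a band-structure code solves, here is a 1D plane-wave toy for a cosine potential. This is a hedged sketch (the parameters V0, a, and nG are invented), not the authors' C++ PAW implementation.

```python
import numpy as np

# 1D plane-wave band structure for V(x) = V0*cos(2*pi*x/a), atomic units.
# The Hamiltonian in the plane-wave basis is diagonal kinetic energy plus
# V0/2 coupling between G-vectors differing by 2*pi/a.
def bands_1d(k, V0=2.0, a=1.0, nG=11):
    G = 2.0 * np.pi / a * np.arange(-(nG // 2), nG // 2 + 1)
    H = np.diag(0.5 * (k + G) ** 2)        # kinetic energy, diagonal in G
    for i in range(nG - 1):                # cosine couples neighbouring G
        H[i, i + 1] = H[i + 1, i] = V0 / 2.0
    return np.linalg.eigvalsh(H)           # sorted band energies at this k

kpts = np.linspace(-np.pi, np.pi, 21)      # first Brillouin zone for a = 1
bands = np.array([bands_1d(k)[:4] for k in kpts])   # four lowest bands
```

At the zone boundary the lowest two bands split by roughly V0, the nearly-free-electron gap; a real PAW calculation generalizes this diagonalization to 3D with pseudized atomic potentials.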

  8. Life Review: Implementation, Theory, Research, and Therapy

    ERIC Educational Resources Information Center

    Haber, David

    2006-01-01

    A selective literature review of publications on life review generated ideas on implementation, theory, research, and therapy. The review begins by differentiating life review from reminiscence, and summarizing ways to conduct a life review. A dozen theories that have been influenced by the life review technique are presented, with a focus placed…

  9. Flexoelectricity in ATiO3 (A = Sr, Ba, Pb) perovskite oxide superlattices from density functional theory

    NASA Astrophysics Data System (ADS)

    Plymill, Austin; Xu, Haixuan

    2018-04-01

    Flexoelectric coefficients for several bulk and superlattice perovskite systems are determined using a direct approach from first principles density functional theory calculations. A strong enhancement in the longitudinal flexoelectric coefficient has been observed in the 1SrTiO3/1PbTiO3 superlattice with alternating single atomic layers of SrTiO3 and PbTiO3. It was found that atomistic displacement, charge response under strain, and interfaces affect the flexoelectric properties of perovskite superlattice systems. These factors can be used to tune this effect in dielectrics. It was further found that the calculated Born effective charge for an ion under the influence of strain can differ significantly from the bulk value. These insights can be used to help search for more effective flexoelectric materials to be implemented in electromechanical devices.

  10. Open-ended recursive calculation of single residues of response functions for perturbation-dependent basis sets.

    PubMed

    Friese, Daniel H; Ringholm, Magnus; Gao, Bin; Ruud, Kenneth

    2015-10-13

    We present theory, implementation, and applications of a recursive scheme for the calculation of single residues of response functions that can treat perturbations that affect the basis set. This scheme enables the calculation of nonlinear light absorption properties to arbitrary order for other perturbations than an electric field. We apply this scheme for the first treatment of two-photon circular dichroism (TPCD) using London orbitals at the Hartree-Fock level of theory. In general, TPCD calculations suffer from the problem of origin dependence, which has so far been solved by using the velocity gauge for the electric dipole operator. This work now enables comparison of results from London orbital and velocity gauge based TPCD calculations. We find that the results from the two approaches both exhibit strong basis set dependence but that they are very similar with respect to their basis set convergence.

  11. Lattice dynamics of Ru2FeX (X = Si, Ge) Full Heusler alloys

    NASA Astrophysics Data System (ADS)

    Rizwan, M.; Afaq, A.; Aneeza, A.

    2018-05-01

    In the present work, the lattice dynamics of Ru2FeX (X = Si, Ge) full Heusler alloys are investigated using density functional theory (DFT) within the generalized gradient approximation (GGA) in a plane wave basis, with norm-conserving pseudopotentials. Phonon dispersion curves and phonon densities of states are obtained using the first-principles linear response approach of density functional perturbation theory (DFPT) as implemented in the Quantum ESPRESSO code. The phonon dispersion curves indicate that, for both Heusler alloys, there are no imaginary phonons in the whole Brillouin zone, confirming the dynamical stability of these alloys in the L21-type structure. There is considerable overlap between the acoustic and optical phonon modes, predicting that no phonon band gap exists in the dispersion curves of the alloys. The same result is shown by the phonon density of states curves for both Heusler alloys. The reststrahlen band for Ru2FeSi is found to be smaller than that for Ru2FeGe.

  12. Two-photon absorption cross sections within equation-of-motion coupled-cluster formalism using resolution-of-the-identity and Cholesky decomposition representations: Theory, implementation, and benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nanda, Kaushik D.; Krylov, Anna I.

    The equation-of-motion coupled-cluster (EOM-CC) methods provide a robust description of electronically excited states and their properties. Here, we present a formalism for two-photon absorption (2PA) cross sections for equation-of-motion CC wave functions for excitation energies with single and double substitutions (EOM-EE-CCSD). Rather than the response theory formulation, we employ the expectation-value approach, which is commonly used within EOM-CC, configuration interaction, and algebraic diagrammatic construction frameworks. In addition to the canonical implementation, we also exploit resolution-of-the-identity (RI) and Cholesky decomposition (CD) representations of the electron-repulsion integrals to reduce memory requirements and to increase parallel efficiency. The new methods are benchmarked against the CCSD and CC3 response theories for several small molecules. We found that the expectation-value 2PA cross sections are within 5% of the quadratic response CCSD values. The RI and CD approximations lead to small errors relative to the canonical implementation (less than 4%) while affording computational savings. RI/CD successfully address the well-known issue of large basis set requirements for 2PA cross section calculations. The capabilities of the new code are illustrated by calculations of the 2PA cross sections for model chromophores of the photoactive yellow and green fluorescent proteins.
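The Cholesky-decomposition idea can be illustrated compactly: the ERI tensor, reshaped into a positive semidefinite matrix M, is replaced by a truncated pivoted Cholesky factor L with M ≈ L Lᵀ, trading a little accuracy for a much smaller representation. The sketch below uses synthetic low-rank data, not actual integrals or the authors' code.

```python
import numpy as np

# Truncated pivoted Cholesky decomposition of a PSD matrix: stop once
# the largest remaining diagonal residual falls below tol.
def pivoted_cholesky(M, tol=1e-6):
    n = M.shape[0]
    L = []
    d = np.diag(M).astype(float).copy()    # diagonal of the residual
    while d.max() > tol:
        p = int(np.argmax(d))              # pivot: largest residual
        col = (M[:, p] - sum(l * l[p] for l in L)) / np.sqrt(d[p])
        L.append(col)
        d -= col ** 2
    return np.array(L).T                   # shape (n, rank)

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
M = A @ A.T                                # synthetic PSD matrix, rank 5
L = pivoted_cholesky(M)                    # recovers a rank-5 factor
```

For real electron-repulsion integrals the rank needed for a given tolerance grows only modestly with system size, which is the source of the memory savings the abstract reports.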

  13. Open-Ended Recursive Approach for the Calculation of Multiphoton Absorption Matrix Elements

    PubMed Central

    2015-01-01

    We present an implementation of single residues for response functions to arbitrary order using a recursive approach. Explicit expressions in terms of density-matrix-based response theory for the single residues of the linear, quadratic, cubic, and quartic response functions are also presented. These residues correspond to one-, two-, three- and four-photon transition matrix elements. The newly developed code is used to calculate the one-, two-, three- and four-photon absorption cross sections of para-nitroaniline and para-nitroaminostilbene, making this the first treatment of four-photon absorption in the framework of response theory. We find that the calculated multiphoton absorption cross sections are not very sensitive to the size of the basis set as long as a reasonably large basis set with diffuse functions is used. The choice of exchange–correlation functional, however, significantly affects the calculated cross sections of both charge-transfer transitions and other transitions, in particular, for the larger para-nitroaminostilbene molecule. We therefore recommend the use of a range-separated exchange–correlation functional in combination with the augmented correlation-consistent double-ζ basis set aug-cc-pVDZ for the calculation of multiphoton absorption properties. PMID:25821415

  14. Proof-Term Synthesis on Dependent-Type Systems via Explicit Substitutions

    DTIC Science & Technology

    1999-11-01

    oriented functional language OCaml , in about 50 lines. We have also implemented a higher-order unification algorithm for ground expressions. The soundness... OCaml , and it is electronically available by contacting the author. The underlying theory of the method proposed here is the An^-calculus. We believe...CORNES, Conception d’un langage de haut niveau de representation de preuves: recurrence par filtrage de motifs, unification en presence de types

  15. Research on Network Defense Strategy Based on Honey Pot Technology

    NASA Astrophysics Data System (ADS)

    Hong, Jianchao; Hua, Ying

    2018-03-01

    As a new active-defense network security technology, the honeypot has become a very effective and practical method for decoying attackers. The thesis discusses the theory, structure, characteristics, design, and implementation of honeypots in detail. Addressing the development of means of attack, it puts forward a network defense technique based on honeypot technology and constructs a virtual honeypot to demonstrate the honeypot's functions.
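A honeypot is, at its core, a decoy listener that records who probes it. The following minimal sketch accepts TCP connections, logs the peer address, and returns a fake service banner; the banner text and log format are invented for this illustration and are not from the thesis.

```python
import socket
import threading

# Minimal decoy listener: log each connecting peer, send a fake banner.
events = []

def honeypot(server_sock):
    while True:
        try:
            conn, addr = server_sock.accept()
        except OSError:                    # listener closed: shut down
            return
        events.append(addr)                # record who probed the decoy
        conn.sendall(b"220 decoy-ftp ready\r\n")   # fake service banner
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))              # OS picks an ephemeral port
server.listen(5)
port = server.getsockname()[1]
threading.Thread(target=honeypot, args=(server,), daemon=True).start()

# Simulate an attacker probing the decoy service.
client = socket.create_connection(("127.0.0.1", port))
banner = client.recv(64)
client.close()
server.close()
```

A production honeypot adds protocol emulation, interaction depth controls, and alerting, but the accept-log-respond loop above is the structural kernel.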

  16. Survey of digital filtering

    NASA Technical Reports Server (NTRS)

    Nagle, H. T., Jr.

    1972-01-01

    A three part survey is made of the state-of-the-art in digital filtering. Part one presents background material including sampled data transformations and the discrete Fourier transform. Part two, digital filter theory, gives an in-depth coverage of filter categories, transfer function synthesis, quantization and other nonlinear errors, filter structures and computer aided design. Part three presents hardware mechanization techniques. Implementations by general purpose, mini-, and special-purpose computers are presented.
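The two basic filter families covered in part two of the survey can be sketched in a few lines: a nonrecursive (FIR) moving average and a recursive (IIR) first-order low-pass y[n] = a·x[n] + (1−a)·y[n−1]. The tap count and smoothing coefficient below are example values, not taken from the survey.

```python
import numpy as np

# FIR: convolve the input with a finite impulse response (here a
# 5-tap moving average, DC gain exactly 1).
def fir_moving_average(x, taps=5):
    h = np.full(taps, 1.0 / taps)
    return np.convolve(x, h, mode='full')[:len(x)]

# IIR: a first-order recursive low-pass; the feedback term gives an
# infinitely long impulse response from only two coefficients.
def iir_lowpass(x, a=0.2):
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = a * x[n] + (1.0 - a) * (y[n - 1] if n else 0.0)
    return y

x = np.ones(100)                           # unit step input
y_fir = fir_moving_average(x)              # settles after taps-1 samples
y_iir = iir_lowpass(x)                     # approaches 1 exponentially
```

The contrast illustrates the trade-off the survey discusses: FIR filters are unconditionally stable and linear-phase, while IIR filters achieve sharp responses with far fewer coefficients at the cost of feedback-related quantization effects.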

  17. Massive-Scale Gene Co-Expression Network Construction and Robustness Testing Using Random Matrix Theory

    PubMed Central

    Isaacson, Sven; Luo, Feng; Feltus, Frank A.; Smith, Melissa C.

    2013-01-01

    The study of gene relationships and their effect on biological function and phenotype is a focal point in systems biology. Gene co-expression networks built using microarray expression profiles are one technique for discovering and interpreting gene relationships. A knowledge-independent thresholding technique, such as Random Matrix Theory (RMT), is useful for identifying meaningful relationships. Highly connected genes in the thresholded network are then grouped into modules that provide insight into their collective functionality. While it has been shown that co-expression networks are biologically relevant, it has not been determined to what extent any given network is functionally robust given perturbations in the input sample set. For such a test, hundreds of networks are needed and hence a tool to rapidly construct these networks. To examine functional robustness of networks with varying input, we enhanced an existing RMT implementation for improved scalability and tested functional robustness of human (Homo sapiens), rice (Oryza sativa) and budding yeast (Saccharomyces cerevisiae). We demonstrate dramatic decrease in network construction time and computational requirements and show that despite some variation in global properties between networks, functional similarity remains high. Moreover, the biological function captured by co-expression networks thresholded by RMT is highly robust. PMID:23409071
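A compact sketch of the knowledge-independent thresholding step: correlations below a cutoff are zeroed, and the eigenvalue spacing statistics of the thresholded matrix are what an RMT test inspects (Poisson for noise-like structure versus Wigner-Dyson for correlated structure). The expression data, gene count, and the 0.7 cutoff below are synthetic placeholders, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
expr = rng.standard_normal((50, 20))       # 50 genes x 20 samples (synthetic)
corr = np.corrcoef(expr)                   # gene-gene Pearson correlations
thresholded = np.where(np.abs(corr) >= 0.7, corr, 0.0)  # hard threshold
np.fill_diagonal(thresholded, 1.0)

# RMT inspects the nearest-neighbour spacings of the eigenvalue spectrum
eigvals = np.linalg.eigvalsh(thresholded)
spacings = np.diff(np.sort(eigvals))
```

An RMT-based pipeline repeats this at increasing cutoffs and picks the threshold where the spacing distribution transitions toward Poisson statistics, which is the knowledge-independent criterion the abstract refers to.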

  18. Testing the nonlocal kinetic energy functional of an inhomogeneous, two-dimensional degenerate Fermi gas within the average density approximation

    NASA Astrophysics Data System (ADS)

    Towers, J.; van Zyl, B. P.; Kirkby, W.

    2015-08-01

    In a recent paper [B. P. van Zyl et al., Phys. Rev. A 89, 022503 (2014), 10.1103/PhysRevA.89.022503], the average density approximation (ADA) was implemented to develop a parameter-free, nonlocal kinetic energy functional to be used in the orbital-free density functional theory of an inhomogeneous, two-dimensional (2D) Fermi gas. In this work, we provide a detailed comparison of self-consistent calculations within the ADA with the exact results of the Kohn-Sham density functional theory and the elementary Thomas-Fermi (TF) approximation. We demonstrate that the ADA for the 2D kinetic energy functional works very well under a wide variety of confinement potentials, even for relatively small particle numbers. Remarkably, the TF approximation for the kinetic energy functional, without any gradient corrections, also yields good agreement with the exact kinetic energy for all confining potentials considered, although at the expense of the spatial and kinetic energy densities exhibiting poor pointwise agreement, particularly near the TF radius. Our findings illustrate that the ADA kinetic energy functional yields accurate results for both the local and global equilibrium properties of an inhomogeneous 2D Fermi gas, without the need for any fitting parameters.
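The Thomas-Fermi piece of the comparison has a closed form in 2D: T_TF[n] = (πħ²/2m) ∫ n(r)² d²r for a spin-1/2 gas. The sketch below evaluates it on a grid (ħ = m = 1) for an illustrative Gaussian density profile, which is an assumption of this example rather than one of the paper's confinement potentials.

```python
import numpy as np

# 2D Thomas-Fermi kinetic energy functional, hbar = m = 1:
# T_TF[n] = (pi/2) * integral of n(r)^2 over the plane.
def tf_kinetic_energy_2d(n, dx):
    return 0.5 * np.pi * np.sum(n ** 2) * dx * dx

L, N = 10.0, 200
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
density = np.exp(-(X ** 2 + Y ** 2))       # illustrative Gaussian density
E1 = tf_kinetic_energy_2d(density, dx)
E2 = tf_kinetic_energy_2d(2.0 * density, dx)   # scaling check: n -> 2n
```

Since the functional is quadratic in n, doubling the density quadruples the energy, and for this Gaussian the grid value reproduces the analytic result π²/4.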

  19. Implementation of highly parallel and large scale GW calculations within the OpenAtom software

    NASA Astrophysics Data System (ADS)

    Ismail-Beigi, Sohrab

    The need to describe electronic excitations with better accuracy than provided by band structures produced by Density Functional Theory (DFT) has been a long-term enterprise for the computational condensed matter and materials theory communities. In some cases, appropriate theoretical frameworks have existed for some time but have been difficult to apply widely due to computational cost. For example, the GW approximation incorporates a great deal of important non-local and dynamical electronic interaction effects but has been too computationally expensive for routine use in large materials simulations. OpenAtom is an open source massively parallel ab initio density functional software package based on plane waves and pseudopotentials (http://charm.cs.uiuc.edu/OpenAtom/) that takes advantage of the Charm++ parallel framework. At present, it is developed via a three-way collaboration, funded by an NSF SI2-SSI grant (ACI-1339804), between Yale (Ismail-Beigi), IBM T. J. Watson (Glenn Martyna) and the University of Illinois at Urbana-Champaign (Laxmikant Kale). We will describe the project and our current approach towards implementing large scale GW calculations with OpenAtom. Potential applications of large scale parallel GW software for problems involving electronic excitations in semiconductor and/or metal oxide systems will also be pointed out.

  20. Computational Implementation of a Thermodynamically Based Work Potential Model For Progressive Microdamage and Transverse Cracking in Fiber-Reinforced Laminates

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Waas, Anthony M.; Bednarcyk, Brett A.; Collier, Craig S.

    2012-01-01

    A continuum-level, dual internal state variable, thermodynamically based work potential model, Schapery Theory, is used to capture the effects of two matrix damage mechanisms in a fiber-reinforced laminated composite: microdamage and transverse cracking. Matrix microdamage accrues primarily in the form of shear microcracks between the fibers of the composite, whereas larger transverse matrix cracks typically span the thickness of a lamina and run parallel to the fibers. Schapery Theory uses the energy potential required to advance structural changes associated with the damage mechanisms to govern damage growth through a set of internal state variables. These state variables are used to quantify the stiffness degradation resulting from damage growth. The transverse and shear stiffness of the lamina are related to the internal state variables through a set of measurable damage functions. Additionally, the damage variables for a given strain state can be calculated from a set of evolution equations. These evolution equations and damage functions are implemented into the finite element method and used to govern the constitutive response of the material points in the model. Additionally, an axial failure criterion is included in the model. The response of a center-notched, buffer-strip-stiffened panel subjected to uniaxial tension is investigated and results are compared to experiment.

  1. Research into display sharing techniques for distributed computing environments

    NASA Technical Reports Server (NTRS)

    Hugg, Steven B.; Fitzgerald, Paul F., Jr.; Rosson, Nina Y.; Johns, Stephen R.

    1990-01-01

    The X-based Display Sharing solution for distributed computing environments is described. The Display Sharing prototype includes the base functionality for telecast and display copy requirements. Since the prototype implementation is modular and the system design provides flexibility for Mission Control Center Upgrade (MCCU) operational considerations, the prototype implementation can be the baseline for a production Display Sharing implementation. To facilitate the process, the following discussions are presented: theory of operation; system architecture; using the prototype; software description; research tools; prototype evaluation; and outstanding issues. The prototype is based on the concept of a dedicated central host performing the majority of the Display Sharing processing, allowing minimal impact on each individual workstation. Each workstation participating in Display Sharing hosts programs to facilitate the user's access to Display Sharing as host machine.

  2. Homeostatic theory of obesity

    PubMed Central

    2015-01-01

    Health is regulated by homeostasis, a property of all living things. Homeostasis maintains equilibrium at set-points using feedback loops for optimum functioning of the organism. Imbalances in homeostasis causing overweight and obesity are evident in more than 1 billion people. In a new theory, homeostatic obesity imbalance is attributed to a hypothesized ‘Circle of Discontent’, a system of feedback loops linking weight gain, body dissatisfaction, negative affect and over-consumption. The Circle of Discontent theory is consistent with an extensive evidence base. A four-armed strategy to halt the obesity epidemic consists of (1) putting a stop to victim-blaming, stigma and discrimination; (2) devalorizing the thin-ideal; (3) reducing consumption of energy-dense, low-nutrient foods and drinks; and (4) improving access to plant-based diets. If fully implemented, interventions designed to restore homeostasis have the potential to halt the obesity epidemic. PMID:28070357

  3. Computing decay rates for new physics theories with FEYNRULES and MADGRAPH 5_AMC@NLO

    NASA Astrophysics Data System (ADS)

    Alwall, Johan; Duhr, Claude; Fuks, Benjamin; Mattelaer, Olivier; Öztürk, Deniz Gizem; Shen, Chia-Hsien

    2015-12-01

    We present new features of the FEYNRULES and MADGRAPH 5_AMC@NLO programs for the automatic computation of decay widths that consistently include channels of arbitrary final-state multiplicity. The implementations are generic enough so that they can be used in the framework of any quantum field theory, possibly including higher-dimensional operators. We extend at the same time the conventions of the Universal FEYNRULES Output (or UFO) format to include decay tables and information on the total widths. We finally provide a set of representative examples of the usage of the new functions of the different codes in the framework of the Standard Model, the Higgs Effective Field Theory, the Strongly Interacting Light Higgs model and the Minimal Supersymmetric Standard Model and compare the results to available literature and programs for validation purposes.

  4. Navigating Through Chaos: Charge Nurses and Patient Safety.

    PubMed

    Cathro, Heather

    2016-04-01

    The aim of this study was to explore actions and the processes charge nurses (CNs) implement to keep patients safe and generate an emerging theory to inform CN job descriptions, orientation, and training to promote patient safety in practice. Healthcare workers must provide a safe environment for patients. CNs are the frontline leaders on most hospital units and can function as gatekeepers for safe patient care. This grounded theory study utilized purposive sampling of CNs on medical-surgical units in a 400-bed metropolitan hospital. Data collection consisted of 11 interviews and 6 observations. The emerging theory was navigating through chaos: CNs balancing multiple roles, maintaining a watchful eye, and working with and leading the healthcare team to keep patients safe. CNs have knowledge of patients, staff, and complex healthcare environments, putting them in opportune positions to influence patient safety.

  5. cDF Theory Software for mesoscopic modeling of equilibrium and transport phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-12-01

    The approach is based on classical Density Functional Theory (cDFT) coupled with the Poisson-Nernst-Planck (PNP) transport kinetics model and a quantum mechanical description of short-range interactions and elementary transport processes. The model we proposed and implemented is fully atomistic, taking into account pairwise short-range and many-body long-range interactions. But in contrast to standard molecular dynamics (MD) simulations, where long-range many-body interactions are evaluated as a sum of pair-wise atom-atom contributions, we include them analytically based on well-established theories of electrostatic and excluded volume interactions in multicomponent systems. This feature of the PNP/cDFT approach allows us to reach well beyond the length-scales accessible to MD simulations, while retaining the essential physics of interatomic interactions from first principles and in a parameter-free fashion.

  6. SurfKin: an ab initio kinetic code for modeling surface reactions.

    PubMed

    Le, Thong Nguyen-Minh; Liu, Bin; Huynh, Lam K

    2014-10-05

    In this article, we describe a C/C++ program called SurfKin (Surface Kinetics) to construct microkinetic mechanisms for modeling gas-surface reactions. Thermodynamic properties of reaction species are estimated based on density functional theory calculations and statistical mechanics. Rate constants for elementary steps (including adsorption, desorption, and chemical reactions on surfaces) are calculated using the classical collision theory and transition state theory. Methane decomposition and water-gas shift reaction on Ni(111) surface were chosen as test cases to validate the code implementations. The good agreement with literature data suggests this is a powerful tool to facilitate the analysis of complex reactions on surfaces, and thus it helps to effectively construct detailed microkinetic mechanisms for such surface reactions. SurfKin also opens a possibility for designing nanoscale model catalysts. Copyright © 2014 Wiley Periodicals, Inc.
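The transition-state-theory rate constants mentioned in the abstract take the Eyring form k(T) = (k_B T/h)(Q_TS/Q_R) exp(−E_a/k_B T). The sketch below sets the partition-function ratio to 1 and uses illustrative barrier heights; the values are assumptions for the example, not results from SurfKin or the paper.

```python
import math

KB = 1.380649e-23      # Boltzmann constant, J/K
H = 6.62607015e-34     # Planck constant, J*s
NA = 6.02214076e23     # Avogadro constant, 1/mol

# Eyring-form TST rate constant; q_ratio stands in for Q_TS/Q_R,
# which a real microkinetic code builds from DFT frequencies.
def tst_rate(T, Ea_kJ_per_mol, q_ratio=1.0):
    Ea = Ea_kJ_per_mol * 1e3 / NA          # barrier in J per molecule
    return (KB * T / H) * q_ratio * math.exp(-Ea / (KB * T))

k_low = tst_rate(500.0, 80.0)    # 80 kJ/mol barrier at 500 K
k_high = tst_rate(500.0, 120.0)  # higher barrier, much slower rate
```

A microkinetic mechanism is then assembled by writing one such rate constant per elementary step (adsorption, desorption, surface reaction) and integrating the coupled rate equations.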

  7. Analyzing implementation dynamics using theory-driven evaluation principles: lessons learnt from a South African centralized chronic dispensing model.

    PubMed

    Magadzire, Bvudzai Priscilla; Marchal, Bruno; Mathys, Tania; Laing, Richard O; Ward, Kim

    2017-12-04

    Centralized dispensing of essential medicines is one of South Africa's strategies to address the shortage of pharmacists, reduce patients' waiting times and reduce over-crowding at public sector healthcare facilities. This article reports findings of an evaluation of the Chronic Dispensing Unit (CDU) in one province. The objectives of this process evaluation were to: (1) compare what was planned versus the actual implementation and (2) establish the causal elements and contextual factors influencing implementation. This qualitative study employed key informant interviews with the intervention's implementers (clinicians, managers and the service provider) [N = 40], and a review of policy and program documents. Data were thematically analyzed by identifying the main influences shaping the implementation process. Theory-driven evaluation principles were applied as a theoretical framework to explain implementation dynamics. The overall participants' response about the CDU was positive and the majority of informants concurred that the establishment of the CDU to dispense large volumes of medicines is a beneficial strategy to address healthcare barriers because mechanical functions are automated and distribution of medicines much quicker. However, implementation was influenced by the context and discrepancies between planned activities and actual implementation were noted. Procurement inefficiencies at central level caused medicine stock-outs and affected CDU activities. At the frontline, actors were aware of the CDU's implementation guidelines regarding patient selection, prescription validity and management of non-collected medicines but these were adapted to accommodate practical realities and to meet performance targets attached to the intervention. Implementation success was a result of a combination of 'hardware' (e.g. training, policies, implementation support and appropriate infrastructure) and 'software' (e.g. ownership, cooperation between healthcare practitioners and trust) factors. This study shows that health system interventions have unpredictable paths of implementation. Discrepancies between planned and actual implementation reinforce findings in existing literature suggesting that while tools and defined operating procedures are necessary for any intervention, their successful application depends crucially on the context and environment in which implementation occurs. We anticipate that this evaluation will stimulate wider thinking about the implementation of similar models in low- and middle-income countries.

  8. Factorization and resummation of Higgs boson differential distributions in soft-collinear effective theory

    NASA Astrophysics Data System (ADS)

    Mantry, Sonny; Petriello, Frank

    2010-05-01

    We derive a factorization theorem for the Higgs boson transverse momentum (pT) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for mh≫pT≫ΛQCD, where mh denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the pT scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the pT-scale physics simplifies the implementation of higher order radiative corrections in αs(pT). We derive formulas for factorization in both momentum and impact parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in pT/mh and ΛQCD/pT can be systematically derived. We perform multiple consistency checks on our factorization theorem including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-pT resummation.
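
    As a schematic illustration (not the paper's exact operator definitions or convolution variables, which are given in the article), the factorized structure described above can be sketched as:

```latex
% Schematic structure of the factorization theorem for m_h >> p_T >> Lambda_QCD:
% a hard function H, two impact-parameter beam functions B_n and B_{\bar n},
% and an inverse soft function S^{-1}, convolved at the p_T scale.
\frac{d^2\sigma}{dp_T^2\, dY}
  \;\sim\; H\!\left(m_h,\mu\right)\,
  \bigl[\, B_n \otimes B_{\bar n} \otimes S^{-1} \,\bigr]\!\left(p_T, Y; \mu\right)
```

    Large logarithms of m_h/p_T are then summed by running each function from its natural scale to the common scale μ with its renormalization group equation.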

  9. Using planned adaptation to implement evidence-based programs with new populations.

    PubMed

    Lee, Shawna J; Altschul, Inna; Mowbray, Carol T

    2008-06-01

    The Interactive Systems Framework (ISF) for Dissemination and Implementation (Wandersman et al. 2008) elaborates the functions and structures that move evidence-based programs (EBPs) from research to practice. Inherent in that process is the tension between implementing programs with fidelity and the need to tailor programs to fit the target population. We propose Planned Adaptation as one approach to resolve this tension, with the goal of guiding practitioners in adapting EBPs so that they maintain core components of program theory while taking into account the needs of particular populations. Planned Adaptation is a form of capacity building within the Prevention Support System that provides a framework to guide practitioners in adapting programs while encouraging researchers to provide information relevant to adaptation as a critical aspect of dissemination research, with the goal of promoting wider dissemination and better implementation of EBPs. We illustrate Planned Adaptation using the JOBS Program (Caplan et al. 1989), which was developed for recently laid-off, working- and middle-class workers and subsequently implemented with welfare recipients.

  10. Identifying Barriers in Implementing Outcomes-Based Assessment Program Review: A Grounded Theory Analysis

    ERIC Educational Resources Information Center

    Bresciani, Marilee J.

    2011-01-01

    The purpose of this grounded theory study was to identify the typical barriers encountered by faculty and administrators when implementing outcomes-based assessment program review. An analysis of interviews with faculty and administrators at nine institutions revealed a theory that faculty and administrators' promotion, tenure (if applicable),…

  11. Knowledge-for-Action Theories in Evaluation: Knowledge Utilization, Diffusion, Implementation, Transfer, and Translation

    ERIC Educational Resources Information Center

    Ottoson, Judith M.

    2009-01-01

    Five knowledge-for-action theories are summarized and compared in this chapter for their evaluation implications: knowledge utilization, diffusion, implementation, transfer, and translation. Usually dispersed across multiple fields and disciplines, these theories are gathered here for a common focus on knowledge and change. Knowledge in some form…

  12. Structural predictions for Correlated Electron Materials Using the Functional Dynamical Mean Field Theory Approach

    NASA Astrophysics Data System (ADS)

    Haule, Kristjan

    2018-04-01

    Dynamical Mean Field Theory (DMFT), in combination with band structure methods, has been able to address the rich physics of correlated materials, such as fluctuating local moments, spin and orbital fluctuations, atomic multiplet physics, and band formation, on an equal footing. It is increasingly recognized that a more predictive ab initio theory of correlated systems must also address the feedback effect of the correlated electronic structure on the ionic positions, as the metal-insulator transition is almost always accompanied by considerable structural distortions. We will review a recently developed merger of Density Functional Theory (DFT) and DMFT, dubbed DFT + embedded DMFT (DFT+eDMFT), which successfully addresses this challenge. It is based on the stationary Luttinger-Ward functional to minimize the numerical error, subtracts the exact double-counting of DFT and DMFT, and implements self-consistent forces on all atoms in the unit cell. In a few examples, we will also show how the method elucidated the important feedback effect of correlations on the crystal structure of rare-earth nickelates to explain the mechanism of the metal-insulator transition. The method showed that this feedback effect is also essential to understanding the dynamic stability of the high-temperature body-centered cubic phase of elemental iron, and in particular it predicted a strong enhancement of the electron-phonon coupling over DFT values in FeSe, which was very recently verified by a pioneering time-domain experiment.

  13. Pragmatic hydraulic theory predicts stomatal responses to climatic water deficits.

    PubMed

    Sperry, John S; Wang, Yujie; Wolfe, Brett T; Mackay, D Scott; Anderegg, William R L; McDowell, Nate G; Pockman, William T

    2016-11-01

    Ecosystem models have difficulty predicting plant drought responses, partially from uncertainty in the stomatal response to water deficits in soil and atmosphere. We evaluate a 'supply-demand' theory for water-limited stomatal behavior that avoids the typical scaffold of empirical response functions. The premise is that canopy water demand is regulated in proportion to the threat to supply posed by xylem cavitation and soil drying. The theory was implemented in a trait-based soil-plant-atmosphere model. The model predicted canopy transpiration (E), canopy diffusive conductance (G), and canopy xylem pressure (P_canopy) from soil water potential (P_soil) and vapor pressure deficit (D). Modeled responses to D and P_soil were consistent with empirical response functions, but the controlling parameters were hydraulic traits rather than fitted coefficients. Maximum hydraulic and diffusive conductances and vulnerability to loss of hydraulic conductance dictated stomatal sensitivity and hence the iso- to anisohydric spectrum of regulation. The model matched wide fluctuations in G and P_canopy across nine data sets from seasonally dry tropical forest and piñon-juniper woodland with < 26% mean error. Promising initial performance suggests the theory could be useful in improving ecosystem models. Better understanding of the variation in hydraulic properties along the root-stem-leaf continuum will simplify parameterization. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
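
    The supply-demand idea can be caricatured in a few lines. The Weibull vulnerability curve, the parameter values, and the hard cap on demand below are illustrative assumptions for a sketch, not the authors' actual trait-based model, which regulates demand continuously:

```python
import numpy as np

def conductance(P, k_max=5.0, b=2.0, c=3.0):
    # Assumed Weibull vulnerability curve: hydraulic conductance declines
    # as xylem pressure P (MPa, negative) becomes more negative.
    return k_max * np.exp(-((-P) / b) ** c)

def supply(P_canopy, P_soil=-0.5):
    # Steady-state transpiration the soil-plant continuum can sustain:
    # E(P_canopy) = integral of k(P) dP from P_canopy up to P_soil.
    P = np.linspace(P_canopy, P_soil, 2000)
    k = conductance(P)
    return float(np.sum(0.5 * (k[1:] + k[:-1]) * np.diff(P)))  # trapezoid rule

# The supply curve saturates as P_canopy -> -inf: the hydraulic limit E_crit.
E_crit = supply(-10.0)

def regulated_E(D, G_max=0.3, safety=0.8):
    # Toy regulation rule: evaporative demand G_max * D is capped at a fixed
    # fraction of E_crit (a crude stand-in for proportional regulation).
    return min(G_max * D, safety * E_crit)
```

    The cap reproduces the qualitative behavior: transpiration tracks demand at low vapor pressure deficit and saturates as the hydraulic limit is approached.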

  14. Experiment-specific cosmic microwave background calculations made easier - Approximation formula for smoothed delta T/T windows

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.

    1993-01-01

    Simple, easy-to-implement elementary-function approximations are introduced for the spectral window functions needed in calculating model predictions of the cosmic microwave background (CMB) anisotropy. These approximations allow the investigator to obtain model delta T/T predictions in terms of single integrals over the power spectrum of cosmological perturbations, avoiding the need for additional integrations. The high accuracy of these approximations is demonstrated here for CDM theory-based calculations of the expected delta T/T signal in several experiments searching for the CMB anisotropy.
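
    The flavor of such a calculation can be illustrated with a toy example; the Gaussian window and the flat spectrum below are generic stand-ins, not Gorski's experiment-specific formulas:

```python
import numpy as np

def window(ell, sigma=0.01):
    # Toy elementary-function window: Gaussian beam smoothing, a common
    # closed-form stand-in for an experiment-specific spectral window.
    return np.exp(-ell * (ell + 1.0) * sigma**2)

def c_ell(ell, amp=1e-10):
    # Toy scale-invariant anisotropy spectrum: ell*(ell+1)*C_ell = 2*pi*amp.
    return 2.0 * np.pi * amp / (ell * (ell + 1.0))

ells = np.arange(2, 2001, dtype=float)
# With the window in closed form, the model delta T/T variance collapses
# to a single sum (integral) over the spectrum:
var = np.sum((2.0 * ells + 1.0) / (4.0 * np.pi) * c_ell(ells) * window(ells))
rms = np.sqrt(var)
```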

  15. Effects of two-temperature parameter and thermal nonlocal parameter on transient responses of a half-space subjected to ramp-type heating

    NASA Astrophysics Data System (ADS)

    Xue, Zhang-Na; Yu, Ya-Jun; Tian, Xiao-Geng

    2017-07-01

    Based upon coupled thermoelasticity and the Green-Lindsay theory, new governing equations of two-temperature thermoelastic theory with a thermal nonlocal parameter are formulated. To model thermal loading of a half-space surface more realistically, a linear temperature ramping function is adopted. Laplace transform techniques are used to obtain the general analytical solutions in the Laplace domain, and inverse Laplace transforms based on Fourier expansion techniques are implemented numerically to obtain the solutions in the time domain. Specific attention is paid to the effects of the thermal nonlocal parameter, ramping time, and two-temperature parameter on the distributions of temperature, displacement, and stress.
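
    A Fourier-expansion inversion of this general type can be sketched as follows (a Dubner-Abate/Durbin-style series; the parameter choices and the test transform are illustrative, not the paper's):

```python
import numpy as np

def invert_laplace(F, t, T=20.0, a=0.25, N=2000):
    # Fourier-series numerical inversion of a Laplace transform F(s):
    #   f(t) ~ (e^{a t}/T) * [ F(a)/2 + sum_k Re(F(a + i k pi/T) e^{i k pi t/T}) ]
    # valid for 0 < t < 2T, with a chosen right of all singularities of F.
    k = np.arange(1, N + 1)
    s = a + 1j * np.pi * k / T
    terms = np.real(F(s) * np.exp(1j * np.pi * k * t / T))
    return np.exp(a * t) / T * (0.5 * np.real(F(a)) + np.sum(terms))

# Check against a transform pair with a known inverse: 1/(s+1) <-> exp(-t).
approx = invert_laplace(lambda s: 1.0 / (s + 1.0), t=1.0)
```

    In practice the free parameters (T, a, N) trade discretization error against truncation error, which is why ramping-function solutions are usually validated against limiting cases.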

  16. Pressure calculation in hybrid particle-field simulations

    NASA Astrophysics Data System (ADS)

    Milano, Giuseppe; Kawakatsu, Toshihiro

    2010-12-01

    In the framework of a recently developed scheme for hybrid particle-field simulation techniques, in which self-consistent field (SCF) theory and particle models (molecular dynamics) are combined [J. Chem. Phys. 130, 214106 (2009)], we developed a general formulation for the calculation of the instantaneous pressure and stress tensor. The expressions have been derived from the statistical mechanical definition of the pressure, starting from the expression for the free energy functional in the SCF theory. An implementation of the derived formulation suitable for hybrid particle-field molecular dynamics-self-consistent field simulations is described. A series of test simulations on model systems is reported, comparing the calculated pressure with that obtained from standard molecular dynamics simulations based on pair potentials.
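
    For orientation, the familiar particle-side virial route to the instantaneous pressure (the baseline the paper compares against, not its SCF field-theoretic expression) looks like:

```python
import numpy as np

def virial_pressure(positions, forces, kT, volume):
    # Instantaneous pressure from the standard particle virial:
    #   P = N*kT/V + (1/(3V)) * sum_i r_i . f_i
    n_particles = len(positions)
    virial = float(np.sum(positions * forces))
    return n_particles * kT / volume + virial / (3.0 * volume)

# Ideal-gas sanity check: with all forces zero the virial term vanishes
# and the pressure reduces to N*kT/V.
positions = np.random.rand(100, 3) * 10.0
pressure = virial_pressure(positions, np.zeros_like(positions),
                           kT=1.0, volume=1000.0)
```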

  17. Nodal aberration theory applied to freeform surfaces

    NASA Astrophysics Data System (ADS)

    Fuerschbach, Kyle; Rolland, Jannick P.; Thompson, Kevin P.

    2014-12-01

    When new three-dimensional packages are developed for imaging optical systems, the rotational symmetry of the optical system is often broken, changing its imaging behavior and degrading optical performance. One method to restore performance is to use freeform optical surfaces that directly compensate the aberrations introduced by tilting and decentering the optical surfaces. To effectively optimize the shape of a freeform surface to restore optical functionality, it is helpful to understand the aberrations the surface may induce. Using nodal aberration theory, the aberration fields induced by a freeform surface in an optical system are explored. These theoretical predictions are experimentally validated with the design and implementation of an aberration-generating telescope.

  18. Microscopic theory of nuclear fission: a review

    NASA Astrophysics Data System (ADS)

    Schunck, N.; Robledo, L. M.

    2016-11-01

    This article reviews how nuclear fission is described within nuclear density functional theory. A distinction should be made between spontaneous fission, where half-lives are the main observables and quantum tunnelling the essential concept, and induced fission, where the focus is on fragment properties and explicitly time-dependent approaches are often invoked. Overall, the cornerstone of the density functional theory approach to fission is the energy density functional formalism. The basic tenets of this method, including some well-known tools such as the Hartree-Fock-Bogoliubov (HFB) theory, effective two-body nuclear potentials such as the Skyrme and Gogny force, finite-temperature extensions and beyond mean-field corrections, are presented succinctly. The energy density functional approach is often combined with the hypothesis that the time-scale of the large amplitude collective motion driving the system to fission is slow compared to typical time-scales of nucleons inside the nucleus. In practice, this hypothesis of adiabaticity is implemented by introducing (a few) collective variables and mapping out the many-body Schrödinger equation into a collective Schrödinger-like equation for the nuclear wave-packet. The region of the collective space where the system transitions from one nucleus to two (or more) fragments defines what are called the scission configurations. The inertia tensor that enters the kinetic energy term of the collective Schrödinger-like equation is one of the most essential ingredients of the theory, since it includes the response of the system to small changes in the collective variables. For this reason, the two main approximations used to compute this inertia tensor, the adiabatic time-dependent HFB and the generator coordinate method, are presented in detail, both in their general formulation and in their most common approximations. 
    The collective inertia tensor also enters the Wentzel-Kramers-Brillouin (WKB) formula used to extract spontaneous fission half-lives from multi-dimensional quantum tunnelling probabilities. (For the sake of completeness, other approaches to tunnelling based on functional integrals are also briefly discussed, although there are very few applications.) It is also an important component of some of the time-dependent methods that have been used in fission studies. Concerning the latter, both the semi-classical approaches to time-dependent nuclear dynamics and more microscopic theories involving explicit quantum many-body methods are presented. One of the hallmarks of the microscopic theory of fission is the tremendous amount of computing needed for practical applications. In particular, the successful implementation of the theories presented in this article requires a very precise numerical resolution of the HFB equations for large values of the collective variables. This aspect is often overlooked, and several sections are devoted to discussing the resolution of the HFB equations, especially in the context of very deformed nuclear shapes. In particular, the numerical precision and iterative methods employed to obtain the HFB solution are documented in detail. Finally, a selection of the most recent and representative results obtained for both spontaneous and induced fission is presented, with the goal of emphasizing the coherence of the microscopic approaches employed. Although impressive progress has been achieved over the last two decades to understand fission microscopically, much work remains to be done. Several possible lines of research are outlined in the conclusion.
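
    The WKB step mentioned above can be illustrated with a one-dimensional toy model (constant collective inertia, arbitrary units, hbar = 1; real applications use the multi-dimensional inertia tensor along a least-action path):

```python
import numpy as np

def wkb_action(q, V, M, E):
    # Collective action S = integral sqrt(2 M (V - E)) dq over the
    # classically forbidden region V(q) > E (constant inertia M here).
    integrand = np.sqrt(np.clip(2.0 * M * (V - E), 0.0, None))
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(q)))

q = np.linspace(0.0, 2.0, 2001)            # collective coordinate (arb. units)
V = 8.0 * q * (2.0 - q)                    # toy inverted-parabola barrier, height 8
S = wkb_action(q, V, M=1.0, E=2.0)
P_tunnel = 1.0 / (1.0 + np.exp(2.0 * S))   # Hill-Wheeler-type penetrability
# A spontaneous-fission half-life then follows as ln(2) / (n * P_tunnel)
# for an assumed assault frequency n (everything here is in toy units).
```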

  19. Microscopic Theory of Nuclear Fission: A Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunck, N.; Robledo, L. M.

    This paper reviews how nuclear fission is described within nuclear density functional theory. A distinction should be made between spontaneous fission, where half-lives are the main observables and quantum tunnelling the essential concept, and induced fission, where the focus is on fragment properties and explicitly time-dependent approaches are often invoked. Overall, the cornerstone of the density functional theory approach to fission is the energy density functional formalism. The basic tenets of this method, including some well-known tools such as the Hartree–Fock–Bogoliubov (HFB) theory, effective two-body nuclear potentials such as the Skyrme and Gogny force, finite-temperature extensions and beyond mean-field corrections, are presented succinctly. The energy density functional approach is often combined with the hypothesis that the time-scale of the large amplitude collective motion driving the system to fission is slow compared to typical time-scales of nucleons inside the nucleus. In practice, this hypothesis of adiabaticity is implemented by introducing (a few) collective variables and mapping out the many-body Schrödinger equation into a collective Schrödinger-like equation for the nuclear wave-packet. The region of the collective space where the system transitions from one nucleus to two (or more) fragments defines what are called the scission configurations. The inertia tensor that enters the kinetic energy term of the collective Schrödinger-like equation is one of the most essential ingredients of the theory, since it includes the response of the system to small changes in the collective variables. For this reason, the two main approximations used to compute this inertia tensor, the adiabatic time-dependent HFB and the generator coordinate method, are presented in detail, both in their general formulation and in their most common approximations. 
    The collective inertia tensor also enters the Wentzel–Kramers–Brillouin (WKB) formula used to extract spontaneous fission half-lives from multi-dimensional quantum tunnelling probabilities. (For the sake of completeness, other approaches to tunnelling based on functional integrals are also briefly discussed, although there are very few applications.) It is also an important component of some of the time-dependent methods that have been used in fission studies. Concerning the latter, both the semi-classical approaches to time-dependent nuclear dynamics and more microscopic theories involving explicit quantum many-body methods are presented. One of the hallmarks of the microscopic theory of fission is the tremendous amount of computing needed for practical applications. In particular, the successful implementation of the theories presented in this article requires a very precise numerical resolution of the HFB equations for large values of the collective variables. This aspect is often overlooked, and several sections are devoted to discussing the resolution of the HFB equations, especially in the context of very deformed nuclear shapes. In particular, the numerical precision and iterative methods employed to obtain the HFB solution are documented in detail. Finally, a selection of the most recent and representative results obtained for both spontaneous and induced fission is presented, with the goal of emphasizing the coherence of the microscopic approaches employed. Although impressive progress has been achieved over the last two decades to understand fission microscopically, much work remains to be done. Several possible lines of research are outlined in the conclusion.

  20. Microscopic Theory of Nuclear Fission: A Review

    DOE PAGES

    Schunck, N.; Robledo, L. M.

    2016-10-11

    This paper reviews how nuclear fission is described within nuclear density functional theory. A distinction should be made between spontaneous fission, where half-lives are the main observables and quantum tunnelling the essential concept, and induced fission, where the focus is on fragment properties and explicitly time-dependent approaches are often invoked. Overall, the cornerstone of the density functional theory approach to fission is the energy density functional formalism. The basic tenets of this method, including some well-known tools such as the Hartree–Fock–Bogoliubov (HFB) theory, effective two-body nuclear potentials such as the Skyrme and Gogny force, finite-temperature extensions and beyond mean-field corrections, are presented succinctly. The energy density functional approach is often combined with the hypothesis that the time-scale of the large amplitude collective motion driving the system to fission is slow compared to typical time-scales of nucleons inside the nucleus. In practice, this hypothesis of adiabaticity is implemented by introducing (a few) collective variables and mapping out the many-body Schrödinger equation into a collective Schrödinger-like equation for the nuclear wave-packet. The region of the collective space where the system transitions from one nucleus to two (or more) fragments defines what are called the scission configurations. The inertia tensor that enters the kinetic energy term of the collective Schrödinger-like equation is one of the most essential ingredients of the theory, since it includes the response of the system to small changes in the collective variables. For this reason, the two main approximations used to compute this inertia tensor, the adiabatic time-dependent HFB and the generator coordinate method, are presented in detail, both in their general formulation and in their most common approximations. 
    The collective inertia tensor also enters the Wentzel–Kramers–Brillouin (WKB) formula used to extract spontaneous fission half-lives from multi-dimensional quantum tunnelling probabilities. (For the sake of completeness, other approaches to tunnelling based on functional integrals are also briefly discussed, although there are very few applications.) It is also an important component of some of the time-dependent methods that have been used in fission studies. Concerning the latter, both the semi-classical approaches to time-dependent nuclear dynamics and more microscopic theories involving explicit quantum many-body methods are presented. One of the hallmarks of the microscopic theory of fission is the tremendous amount of computing needed for practical applications. In particular, the successful implementation of the theories presented in this article requires a very precise numerical resolution of the HFB equations for large values of the collective variables. This aspect is often overlooked, and several sections are devoted to discussing the resolution of the HFB equations, especially in the context of very deformed nuclear shapes. In particular, the numerical precision and iterative methods employed to obtain the HFB solution are documented in detail. Finally, a selection of the most recent and representative results obtained for both spontaneous and induced fission is presented, with the goal of emphasizing the coherence of the microscopic approaches employed. Although impressive progress has been achieved over the last two decades to understand fission microscopically, much work remains to be done. Several possible lines of research are outlined in the conclusion.

  1. Microscopic theory of nuclear fission: a review.

    PubMed

    Schunck, N; Robledo, L M

    2016-11-01

    This article reviews how nuclear fission is described within nuclear density functional theory. A distinction should be made between spontaneous fission, where half-lives are the main observables and quantum tunnelling the essential concept, and induced fission, where the focus is on fragment properties and explicitly time-dependent approaches are often invoked. Overall, the cornerstone of the density functional theory approach to fission is the energy density functional formalism. The basic tenets of this method, including some well-known tools such as the Hartree-Fock-Bogoliubov (HFB) theory, effective two-body nuclear potentials such as the Skyrme and Gogny force, finite-temperature extensions and beyond mean-field corrections, are presented succinctly. The energy density functional approach is often combined with the hypothesis that the time-scale of the large amplitude collective motion driving the system to fission is slow compared to typical time-scales of nucleons inside the nucleus. In practice, this hypothesis of adiabaticity is implemented by introducing (a few) collective variables and mapping out the many-body Schrödinger equation into a collective Schrödinger-like equation for the nuclear wave-packet. The region of the collective space where the system transitions from one nucleus to two (or more) fragments defines what are called the scission configurations. The inertia tensor that enters the kinetic energy term of the collective Schrödinger-like equation is one of the most essential ingredients of the theory, since it includes the response of the system to small changes in the collective variables. For this reason, the two main approximations used to compute this inertia tensor, the adiabatic time-dependent HFB and the generator coordinate method, are presented in detail, both in their general formulation and in their most common approximations. 
    The collective inertia tensor also enters the Wentzel-Kramers-Brillouin (WKB) formula used to extract spontaneous fission half-lives from multi-dimensional quantum tunnelling probabilities. (For the sake of completeness, other approaches to tunnelling based on functional integrals are also briefly discussed, although there are very few applications.) It is also an important component of some of the time-dependent methods that have been used in fission studies. Concerning the latter, both the semi-classical approaches to time-dependent nuclear dynamics and more microscopic theories involving explicit quantum many-body methods are presented. One of the hallmarks of the microscopic theory of fission is the tremendous amount of computing needed for practical applications. In particular, the successful implementation of the theories presented in this article requires a very precise numerical resolution of the HFB equations for large values of the collective variables. This aspect is often overlooked, and several sections are devoted to discussing the resolution of the HFB equations, especially in the context of very deformed nuclear shapes. In particular, the numerical precision and iterative methods employed to obtain the HFB solution are documented in detail. Finally, a selection of the most recent and representative results obtained for both spontaneous and induced fission is presented, with the goal of emphasizing the coherence of the microscopic approaches employed. Although impressive progress has been achieved over the last two decades to understand fission microscopically, much work remains to be done. Several possible lines of research are outlined in the conclusion.

  2. Stable and unstable roots of ion temperature gradient driven mode using curvature modified plasma dispersion functions

    NASA Astrophysics Data System (ADS)

    Gültekin, Ö.; Gürcan, Ö. D.

    2018-02-01

    The basic, local kinetic theory of the ion temperature gradient driven (ITG) mode with adiabatic electrons is reconsidered. Standard unstable, purely oscillating, and damped solutions of the local dispersion relation are obtained using a bracketing technique based on the argument principle. This method requires computing the plasma dielectric function and its derivatives, which are implemented here using modified plasma dispersion functions with curvature and their derivatives, and allows bracketing and following the zeros of the plasma dielectric function, which correspond to different roots of the ITG dispersion relation. We provide an open-source implementation of the derivatives of the modified plasma dispersion functions with curvature used in this formulation. Studying the local ITG dispersion, we find that near the instability threshold the unstable branch is rather asymmetric, with oscillating solutions towards lower wave numbers (i.e. drift waves) and damped solutions towards higher wave numbers. This suggests that a process akin to an inverse cascade, by coupling to the oscillating branch towards lower wave numbers, may play a role in the nonlinear evolution of the ITG near the instability threshold. The algorithm is also used to estimate the linear wave diffusion for the marginally stable ITG mode.
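
    The argument-principle root-counting idea behind such a bracketing technique can be demonstrated on a simple analytic function standing in for the plasma dielectric function:

```python
import numpy as np

def count_zeros(f, df, center=0.0 + 0.0j, radius=1.0, n=4000):
    # Argument principle: (1/(2*pi*i)) * contour integral of f'(z)/f(z)
    # equals the number of zeros of f inside the contour, provided f is
    # analytic there and has no zeros on the contour itself.
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / n)
    integral = np.sum(df(z) / f(z) * dz)
    return int(round((integral / (2j * np.pi)).real))

# f(z) = z^2 + 1 has zeros at +/- i; only z = i lies inside |z - i| < 1.
n_roots = count_zeros(lambda z: z**2 + 1.0, lambda z: 2.0 * z,
                      center=1j, radius=1.0)
```

    Repeatedly bisecting a region that is known to contain a zero then brackets each root, after which a local root-finder can follow it as parameters vary.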

  3. An integral-factorized implementation of the driven similarity renormalization group second-order multireference perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hannon, Kevin P.; Li, Chenyang; Evangelista, Francesco A., E-mail: francesco.evangelista@emory.edu

    2016-05-28

    We report an efficient implementation of a second-order multireference perturbation theory based on the driven similarity renormalization group (DSRG-MRPT2) [C. Li and F. A. Evangelista, J. Chem. Theory Comput. 11, 2097 (2015)]. Our implementation employs factorized two-electron integrals to avoid storage of large four-index intermediates. It also exploits the block structure of the reference density matrices to reduce the computational cost to that of second-order Møller–Plesset perturbation theory. Our new DSRG-MRPT2 implementation is benchmarked on ten naphthyne isomers using basis sets up to quintuple-ζ quality. We find that the singlet-triplet splittings (Δ_ST) of the naphthyne isomers depend strongly on the equilibrium structures. For a consistent set of geometries, the Δ_ST values predicted by the DSRG-MRPT2 are in good agreement with those computed by the reduced multireference coupled cluster theory with singles, doubles, and perturbative triples.
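
    The storage-saving idea behind factorized two-electron integrals can be sketched with a toy positive semi-definite matrix standing in for the ERI tensor in compound indices (production codes obtain the three-index factors from density fitting or a Cholesky decomposition of the true integrals):

```python
import numpy as np

# Toy "ERI-like" matrix: (pq|rs) viewed as a PSD matrix in the compound
# indices pq and rs, built as A A^T so it is PSD by construction.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
eri = A @ A.T

# Factorize (pq|rs) ~ sum_Q B[pq,Q] * B[rs,Q] via a Cholesky decomposition;
# only the three-index factor B need ever be stored.
B = np.linalg.cholesky(eri + 1e-12 * np.eye(n))
reconstruction = B @ B.T
err = float(np.max(np.abs(reconstruction - eri)))
```

    For rank-deficient or numerically truncated factorizations, B has fewer columns than rows, which is where the memory savings come from in practice.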

  4. Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids.

    PubMed

    Aradi, Bálint; Niklasson, Anders M N; Frauenheim, Thomas

    2015-07-14

    A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born-Oppenheimer molecular dynamics. For systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can be applied to a broad range of problems in materials science, chemistry, and biology.
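
    The extended-Lagrangian idea can be caricatured with a scalar auxiliary variable shadowing a slowly varying "SCF" target; the curvature κ = ω²dt² and the target function below are illustrative choices, not DFTB+ internals:

```python
import numpy as np

def xl_propagate(rho_of_t, dt=0.01, kappa=2.0, steps=500):
    # Verlet-style update of an auxiliary variable n that shadows the
    # self-consistent solution rho(t) without re-converging it each step:
    #   n_{k+1} = 2 n_k - n_{k-1} + kappa * (rho_k - n_k),  kappa = w^2 dt^2
    n_prev = n = rho_of_t(0.0)
    worst = 0.0
    for k in range(steps):
        rho = rho_of_t(k * dt)           # stand-in for one cheap SCF estimate
        n_prev, n = n, 2.0 * n - n_prev + kappa * (rho - n)
        if k > 100:                      # skip the initial transient
            worst = max(worst, abs(n - rho_of_t((k + 1) * dt)))
    return worst

tracking_error = xl_propagate(lambda t: np.sin(0.5 * t))
```

    Because the auxiliary variable is propagated by a time-reversible integrator rather than iterated to convergence, long-term energy drift is suppressed even with a single diagonalization per step.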

  5. Extended Lagrangian Excited State Molecular Dynamics

    DOE PAGES

    Bjorgaard, Josiah August; Sheppard, Daniel Glen; Tretiak, Sergei; ...

    2018-01-09

    In this work, an extended Lagrangian framework for excited state molecular dynamics (XL-ESMD) using time-dependent self-consistent field theory is proposed. The formulation is a generalization of the extended Lagrangian formulations for ground state Born–Oppenheimer molecular dynamics [Phys. Rev. Lett. 100, 123004 (2008)]. The theory is implemented, demonstrated, and evaluated using a time-dependent semiempirical model, though it should be generally applicable to ab initio theory. The simulations show enhanced energy stability and a significantly reduced computational cost associated with the iterative solutions of both the ground state and the electronically excited states. Relaxed convergence criteria can therefore be used both for the self-consistent ground state optimization and for the iterative subspace diagonalization of the random phase approximation matrix used to calculate the excited state transitions. In conclusion, the XL-ESMD approach is expected to enable numerically efficient excited state molecular dynamics for such methods as time-dependent Hartree–Fock (TD-HF), Configuration Interaction Singles (CIS), and time-dependent density functional theory (TD-DFT).

  6. Damage Based Analysis (DBA) - Theory, Derivation and Practical Application Using Both an Acceleration and Pseudo Velocity Approach

    NASA Technical Reports Server (NTRS)

    Grillo, Vince

    2017-01-01

    The objective of this presentation is to give a brief overview of the theory behind the (DBA) method, an overview of the derivation and a practical application of the theory using the Python computer language. The Theory and Derivation will use both Acceleration and Pseudo Velocity methods to derive a series of equations for processing by Python. We will take the results and compare both Acceleration and Pseudo Velocity methods and discuss implementation of the Python functions. Also, we will discuss the efficiency of the methods and the amount of computer time required for the solution. In conclusion, (DBA) offers a powerful method to evaluate the amount of energy imparted into a system in the form of both Amplitude and Duration during qualification testing and flight environments. Many forms of steady state and transient vibratory motion can be characterized using this technique. (DBA) provides a more robust alternative to traditional methods such as Power Spectral Density (PSD) using a maximax approach.
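As a rough illustration of the maximax idea this presentation builds on, the sketch below drives a single-degree-of-freedom oscillator with a half-sine base-acceleration pulse and records the maximax absolute acceleration and the pseudo-velocity (natural frequency times peak relative displacement). The pulse amplitude, duration, and Q are hypothetical; this is a generic shock-response-spectrum style calculation, not the presentation's actual Python code.

```python
import math

def maximax_sdof(fn, amp=100.0, t_pulse=0.011, q_factor=10.0, dt=1e-5, t_end=0.5):
    """Maximax response of a base-excited SDOF oscillator with natural frequency fn (Hz)."""
    wn = 2.0 * math.pi * fn
    zeta = 1.0 / (2.0 * q_factor)
    z = zdot = 0.0                 # relative displacement and velocity
    peak_acc = peak_z = 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        a_base = amp * math.sin(math.pi * t / t_pulse) if t < t_pulse else 0.0
        zddot = -a_base - 2.0 * zeta * wn * zdot - wn * wn * z
        zdot += dt * zddot         # semi-implicit Euler, stable for dt*wn << 1
        z += dt * zdot
        peak_z = max(peak_z, abs(z))
        peak_acc = max(peak_acc, abs(zddot + a_base))   # absolute acceleration of the mass
    return peak_acc, wn * peak_z   # (maximax acceleration, pseudo-velocity)

for fn in (2.0, 20.0, 200.0):
    acc, pv = maximax_sdof(fn)
    print(f"{fn:6.1f} Hz  acc {acc:8.2f}  pv {pv:7.3f}")
```

At high natural frequency the maximax acceleration approaches the input peak, while at low natural frequency the pseudo-velocity approaches the velocity change of the pulse, the two asymptotes the acceleration and pseudo-velocity views emphasize.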

  7. Damage Based Analysis (DBA): Theory, Derivation and Practical Application - Using Both an Acceleration and Pseudo-Velocity Approach

    NASA Technical Reports Server (NTRS)

    Grillo, Vince

    2016-01-01

    The objective of this presentation is to give a brief overview of the theory behind the (DBA) method, an overview of the derivation and a practical application of the theory using the Python computer language. The Theory and Derivation will use both Acceleration and Pseudo Velocity methods to derive a series of equations for processing by Python. We will take the results and compare both Acceleration and Pseudo Velocity methods and discuss implementation of the Python functions. Also, we will discuss the efficiency of the methods and the amount of computer time required for the solution. In conclusion, (DBA) offers a powerful method to evaluate the amount of energy imparted into a system in the form of both Amplitude and Duration during qualification testing and flight environments. Many forms of steady state and transient vibratory motion can be characterized using this technique. (DBA) provides a more robust alternative to traditional methods such as Power Spectral Density (PSD) using a Maximax approach.

  8. Extended Lagrangian Excited State Molecular Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bjorgaard, Josiah August; Sheppard, Daniel Glen; Tretiak, Sergei

    In this work, an extended Lagrangian framework for excited state molecular dynamics (XL-ESMD) using time-dependent self-consistent field theory is proposed. The formulation is a generalization of the extended Lagrangian formulations for ground state Born–Oppenheimer molecular dynamics [Phys. Rev. Lett. 2008 100, 123004]. The theory is implemented, demonstrated, and evaluated using a time-dependent semiempirical model, though it should be generally applicable to ab initio theory. The simulations show enhanced energy stability and a significantly reduced computational cost associated with the iterative solutions of both the ground state and the electronically excited states. Relaxed convergence criteria can therefore be used both for the self-consistent ground state optimization and for the iterative subspace diagonalization of the random phase approximation matrix used to calculate the excited state transitions. In conclusion, the XL-ESMD approach is expected to enable numerically efficient excited state molecular dynamics for such methods as time-dependent Hartree–Fock (TD-HF), Configuration Interaction Singles (CIS), and time-dependent density functional theory (TD-DFT).

  9. Extended Lagrangian Excited State Molecular Dynamics.

    PubMed

    Bjorgaard, J A; Sheppard, D; Tretiak, S; Niklasson, A M N

    2018-02-13

    An extended Lagrangian framework for excited state molecular dynamics (XL-ESMD) using time-dependent self-consistent field theory is proposed. The formulation is a generalization of the extended Lagrangian formulations for ground state Born-Oppenheimer molecular dynamics [Phys. Rev. Lett. 2008 100, 123004]. The theory is implemented, demonstrated, and evaluated using a time-dependent semiempirical model, though it should be generally applicable to ab initio theory. The simulations show enhanced energy stability and a significantly reduced computational cost associated with the iterative solutions of both the ground state and the electronically excited states. Relaxed convergence criteria can therefore be used both for the self-consistent ground state optimization and for the iterative subspace diagonalization of the random phase approximation matrix used to calculate the excited state transitions. The XL-ESMD approach is expected to enable numerically efficient excited state molecular dynamics for such methods as time-dependent Hartree-Fock (TD-HF), Configuration Interaction Singles (CIS), and time-dependent density functional theory (TD-DFT).

  10. Understanding women's mammography intentions: a theory-based investigation.

    PubMed

    Naito, Mikako; O'Callaghan, Frances V; Morrissey, Shirley

    2009-01-01

    The present study compared the utility of two models (the Theory of Planned Behavior and Protection Motivation Theory) in identifying factors associated with intentions to undertake screening mammography, before and after an intervention. The comparison was made between the unique components of the two models. The effect of including implementation intentions was also investigated. Two hundred and fifty-one women aged 37 to 69 years completed questionnaires at baseline and following the delivery of a standard (control) or a protection motivation theory-based informational intervention. Hierarchical multiple regressions indicated that theory of planned behavior variables were associated with mammography intentions. Results also showed that inclusion of implementation intention in the model significantly increased the association with mammography intentions. The findings suggest that future interventions aiming to increase screening mammography participation should focus on the theory of planned behavior variables and that implementation intention should also be targeted.

  11. Pattern activation/recognition theory of mind

    PubMed Central

    du Castel, Bertrand

    2015-01-01

    In his 2012 book How to Create a Mind, Ray Kurzweil defines a “Pattern Recognition Theory of Mind” that states that the brain uses millions of pattern recognizers, plus modules to check, organize, and augment them. In this article, I further the theory to go beyond pattern recognition and include also pattern activation, thus encompassing both sensory and motor functions. In addition, I treat checking, organizing, and augmentation as patterns of patterns instead of separate modules, therefore handling them the same as patterns in general. Henceforth I put forward a unified theory I call “Pattern Activation/Recognition Theory of Mind.” While the original theory was based on hierarchical hidden Markov models, this evolution is based on their precursor: stochastic grammars. I demonstrate that a class of self-describing stochastic grammars allows for unifying pattern activation, recognition, organization, consistency checking, metaphor, and learning, into a single theory that expresses patterns throughout. I have implemented the model as a probabilistic programming language specialized in activation/recognition grammatical and neural operations. I use this prototype to compute and present diagrams for each stochastic grammar and corresponding neural circuit. I then discuss the theory as it relates to artificial network developments, common coding, neural reuse, and unity of mind, concluding by proposing potential paths to validation. PMID:26236228

  12. Pattern activation/recognition theory of mind.

    PubMed

    du Castel, Bertrand

    2015-01-01

    In his 2012 book How to Create a Mind, Ray Kurzweil defines a "Pattern Recognition Theory of Mind" that states that the brain uses millions of pattern recognizers, plus modules to check, organize, and augment them. In this article, I further the theory to go beyond pattern recognition and include also pattern activation, thus encompassing both sensory and motor functions. In addition, I treat checking, organizing, and augmentation as patterns of patterns instead of separate modules, therefore handling them the same as patterns in general. Henceforth I put forward a unified theory I call "Pattern Activation/Recognition Theory of Mind." While the original theory was based on hierarchical hidden Markov models, this evolution is based on their precursor: stochastic grammars. I demonstrate that a class of self-describing stochastic grammars allows for unifying pattern activation, recognition, organization, consistency checking, metaphor, and learning, into a single theory that expresses patterns throughout. I have implemented the model as a probabilistic programming language specialized in activation/recognition grammatical and neural operations. I use this prototype to compute and present diagrams for each stochastic grammar and corresponding neural circuit. I then discuss the theory as it relates to artificial network developments, common coding, neural reuse, and unity of mind, concluding by proposing potential paths to validation.

  13. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package.

    PubMed

    Womack, James C; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-28

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.
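The kinetic energy density these functionals depend on is τ(r) = ½ Σᵢ |∇ψᵢ(r)|². Below is a minimal grid evaluation for a single normalized 1D Gaussian orbital (a toy check of the grid machinery only, unrelated to ONETEP's localized-orbital formalism; the orbital and grid are assumptions). For ψ(x) ∝ exp(-αx²) the integral of τ is α/2 in atomic units.

```python
import numpy as np

alpha = 0.8
x = np.linspace(-10.0, 10.0, 4001)
h = x[1] - x[0]
# normalized 1D Gaussian orbital: integral of psi**2 is 1
psi = (2.0 * alpha / np.pi) ** 0.25 * np.exp(-alpha * x**2)

tau = 0.5 * np.gradient(psi, h) ** 2   # kinetic energy density tau(x)
T = np.sum(tau) * h                    # integrates to <T> = alpha/2 for this orbital
print(T, alpha / 2.0)
```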

  14. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package

    NASA Astrophysics Data System (ADS)

    Womack, James C.; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-01

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.

  15. Design of Digital Learning Material on Social-Psychological Theories for Nutrition Behavior Research

    ERIC Educational Resources Information Center

    Busstra, Maria C.; De Graaf, Cees; Hartog, Rob

    2007-01-01

    This article describes the design, implementation and evaluation of digital learning material on the social-psychological Theory of Planned Behavior (TPB) and its use in nutrition behavior research. The design is based on guidelines derived from theories on instructional design. The major component of the design challenge is to implement three…

  16. Design, Implementation, and Lessons Learned from a Digital Storytelling Project in an Undergraduate Health Promotion Theory Course

    ERIC Educational Resources Information Center

    Rimando, Marylen; Smalley, K. Bryant; Warren, Jacob C.

    2015-01-01

    This article describes the design, implementation and lessons learned from a digital storytelling project in a health promotion theory course. From 2011-2012, 195 health promotion majors completed a digital storytelling project at a Midwestern university. The instructor observed students' understanding of theories and models. This article adds to…

  17. Using Implementation and Program Theory to Examine Communication Strategies in National Wildlife Federation's Backyard Wildlife Habitat Program

    ERIC Educational Resources Information Center

    Palmer, Dain; Dann, Shari L.

    2004-01-01

    Our evaluative approach used implementation theory and program theory, adapted from Weiss (1998) to examine communication processes and results for a national wildlife habitat stewardship education program. Using a mail survey of 1427 participants certified in National Wildlife Federation's (NWF) Backyard Wildlife Habitat (BWH) program and a study…

  18. A quasi-Newton approach to optimization problems with probability density constraints. [problem solving in mathematical programming

    NASA Technical Reports Server (NTRS)

    Tapia, R. A.; Vanrooy, D. L.

    1976-01-01

    A quasi-Newton method is presented for minimizing a nonlinear function while constraining the variables to be nonnegative and sum to one. The nonnegativity constraints were eliminated by working with the squares of the variables and the resulting problem was solved using Tapia's general theory of quasi-Newton methods for constrained optimization. A user's guide for a computer program implementing this algorithm is provided.
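The change of variables the abstract describes can be sketched directly: writing x_i = y_i² makes nonnegativity automatic and turns the simplex constraint Σx_i = 1 into the unit sphere ‖y‖ = 1. The example below uses a simple quadratic objective and plain projected gradient descent in place of the paper's quasi-Newton update (both the objective and the solver are illustrative assumptions).

```python
import numpy as np

# Minimize f(x) = ||x - c||**2 subject to x_i >= 0 and sum(x) = 1,
# via the substitution x_i = y_i**2 with ||y|| = 1.
c = np.array([0.6, 0.3, 0.3])          # example target off the simplex

def grad_f(x):
    return 2.0 * (x - c)

y = np.ones(3) / np.sqrt(3.0)          # feasible start: x = (1/3, 1/3, 1/3)
for _ in range(500):
    x = y * y
    g = 2.0 * y * grad_f(x)            # chain rule: df/dy_i = 2 y_i df/dx_i
    g -= (g @ y) * y                   # project gradient onto the sphere's tangent
    y -= 0.1 * g
    y /= np.linalg.norm(y)             # retract back onto ||y|| = 1

x = y * y
print(np.round(x, 3))                  # close to the simplex projection of c
```

The constraints hold by construction at every iterate, which is exactly the point of the squared-variable substitution.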

  19. A simple design of an artificial electromagnetic black hole

    NASA Astrophysics Data System (ADS)

    Lu, Wanli; Jin, JunFeng; Lin, Zhifang; Chen, Huanyang

    2010-09-01

    We conduct a rigorous study on the properties of an artificial electromagnetic black hole for transverse magnetic modes. A multilayered structure of such a black hole is then proposed as a reduced variety for easy experimental implementations. An actual design of composite materials based on the effective medium theory is given with only five kinds of real isotropic materials. The finite element method confirms the functionality of such a simple design.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehtomäki, Jouko; Makkonen, Ilja; Harju, Ari

    We present a computational scheme for orbital-free density functional theory (OFDFT) that simultaneously provides access to all-electron values and preserves the OFDFT linear scaling as a function of the system size. Using the projector augmented-wave method (PAW) in combination with real-space methods, we overcome some obstacles faced by other available implementation schemes. Specifically, the advantages of using the PAW method are twofold. First, PAW reproduces all-electron values, offering freedom in adjusting the convergence parameters, and the atomic setups allow tuning the numerical accuracy per element. Second, PAW can provide a solution to some of the convergence problems exhibited in other OFDFT implementations based on Kohn-Sham (KS) codes. Using PAW and real-space methods, our orbital-free results agree with the reference all-electron values with a mean absolute error of 10 meV and the number of iterations required by the self-consistent cycle is comparable to the KS method. The comparison of all-electron and pseudopotential bulk modulus and lattice constant reveals an enormous difference, demonstrating that in order to assess the performance of OFDFT functionals it is necessary to use implementations that obtain all-electron values. The proposed combination of methods is the most promising route currently available. We finally show that a parametrized kinetic energy functional can give lattice constants and bulk moduli comparable in accuracy to those obtained by the KS PBE method, exemplified with the case of diamond.
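For orientation, the simplest kinetic-energy functional used in orbital-free DFT is the Thomas-Fermi form T_TF[n] = C_F ∫ n^(5/3) d³r with C_F = (3/10)(3π²)^(2/3). The toy below evaluates it on a real-space grid for a Gaussian test density and compares with the closed-form value (the grid parameters and density are assumptions for illustration, not the paper's PAW setup).

```python
import numpy as np

C_F = 0.3 * (3.0 * np.pi**2) ** (2.0 / 3.0)

def t_tf(n, h):
    """Riemann-sum integral of C_F * n**(5/3) on a uniform grid with spacing h."""
    return C_F * np.sum(n ** (5.0 / 3.0)) * h**3

# Gaussian test density with N electrons: n(r) = N (a/pi)^{3/2} exp(-a r^2)
N, a, L, pts = 2.0, 1.0, 4.0, 64
x = np.linspace(-L, L, pts, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
n = N * (a / np.pi) ** 1.5 * np.exp(-a * (X**2 + Y**2 + Z**2))
h = x[1] - x[0]

numeric = t_tf(n, h)
# closed form: C_F * N^{5/3} * (3/5)^{3/2} * a / pi
analytic = C_F * N ** (5.0 / 3.0) * (3.0 / 5.0) ** 1.5 * a / np.pi
print(numeric, analytic)   # agree closely on this grid
```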

  1. Full-potential multiple scattering theory with space-filling cells for bound and continuum states.

    PubMed

    Hatada, Keisuke; Hayakawa, Kuniko; Benfatto, Maurizio; Natoli, Calogero R

    2010-05-12

    We present a rigorous derivation of a real-space full-potential multiple scattering theory (FP-MST) that is free from the drawbacks that up to now have impaired its development (in particular the need to expand cell shape functions in spherical harmonics and rectangular matrices), valid both for continuum and bound states, under conditions for space partitioning that are not excessively restrictive and easily implemented. In this connection we give a new scheme to generate local basis functions for the truncated potential cells that is simple, fast, efficient, valid for any shape of the cell and reduces to the minimum the number of spherical harmonics in the expansion of the scattering wavefunction. The method also avoids the need for saturating 'internal sums' due to the re-expansion of the spherical Hankel functions around another point in space (usually another cell center). Thus this approach provides a straightforward extension of MST in the muffin-tin (MT) approximation, with only one truncation parameter given by the classical relation l_max = kR_b, where k is the electron wavevector (either in the excited or ground state of the system under consideration) and R_b is the radius of the bounding sphere of the scattering cell. Moreover, the scattering path operator of the theory can be found in terms of an absolutely convergent procedure in the l_max → ∞ limit. Consequently, this feature provides a firm ground for the use of FP-MST as a viable method for electronic structure calculations and makes possible the computation of x-ray spectroscopies, notably photo-electron diffraction, absorption and anomalous scattering among others, with the ease and versatility of the corresponding MT theory. Some numerical applications of the theory are presented, both for continuum and bound states.
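The quoted truncation rule l_max = kR_b is easy to evaluate: in atomic units k = √(2E) for kinetic energy E, and the basis per cell then grows as (l_max + 1)² spherical harmonics. The energies and bounding radius below are illustrative numbers, not values from the paper.

```python
import math

def l_max(E_hartree, R_bohr):
    """Truncation parameter l_max = k * R_b with k = sqrt(2E) in atomic units."""
    k = math.sqrt(2.0 * E_hartree)
    return math.ceil(k * R_bohr)

# basis size per cell grows as (l_max + 1)**2 with photoelectron energy
for E in (1.0, 10.0, 40.0):
    lm = l_max(E, R_bohr=2.5)
    print(E, lm, (lm + 1) ** 2)
```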

  2. Realistic nurse-led policy implementation, optimization and evaluation: novel methodological exemplar.

    PubMed

    Noyes, Jane; Lewis, Mary; Bennett, Virginia; Widdas, David; Brombley, Karen

    2014-01-01

    To report the first large-scale realistic nurse-led implementation, optimization and evaluation of a complex children's continuing-care policy. Health policies are increasingly complex, involve multiple Government departments and frequently fail to translate into better patient outcomes. Realist methods have not yet been adapted for policy implementation. Research methodology - Evaluation using theory-based realist methods for policy implementation. An expert group developed the policy and supporting tools. Implementation and evaluation design integrated diffusion of innovation theory with multiple case study and adapted realist principles. Practitioners in 12 English sites worked with Consultant Nurse implementers to manipulate the programme theory and logic of new decision-support tools and care pathway to optimize local implementation. Methods included key-stakeholder interviews, developing practical diffusion of innovation processes using key-opinion leaders and active facilitation strategies and a mini-community of practice. New and existing processes and outcomes were compared for 137 children during 2007-2008. Realist principles were successfully adapted to a shorter policy implementation and evaluation time frame. Important new implementation success factors included facilitated implementation that enabled 'real-time' manipulation of programme logic and local context to best-fit evolving theories of what worked; using local experiential opinion to change supporting tools to more realistically align with local context and what worked; and having sufficient existing local infrastructure to support implementation. Ten mechanisms explained implementation success and differences in outcomes between new and existing processes. Realistic policy implementation methods have advantages over top-down approaches, especially where clinical expertise is low and unlikely to diffuse innovations 'naturally' without facilitated implementation and local optimization. © 2013 John Wiley & Sons Ltd.

  3. A theory-informed approach to mental health care capacity building for pharmacists.

    PubMed

    Murphy, Andrea L; Gardner, David M; Kutcher, Stan P; Martin-Misener, Ruth

    2014-01-01

    Pharmacists are knowledgeable, accessible health care professionals who can provide services that improve outcomes in mental health care. Various challenges and opportunities can exist in pharmacy practice to hinder or support pharmacists' efforts. We used a theory-informed approach to development and implementation of a capacity-building program to enhance pharmacists' roles in mental health care. Theories and frameworks including the Consolidated Framework for Implementation Research, the Theoretical Domains Framework, and the Behaviour Change Wheel were used to inform the conceptualization, development, and implementation of a capacity-building program to enhance pharmacists' roles in mental health care. The More Than Meds program was developed and implemented through an iterative process. The main program components included: an education and training day; use of a train-the-trainer approach from partnerships with pharmacists and people with lived experience of mental illness; development of a community of practice through email communications, a website, and a newsletter; and use of educational outreach delivered by pharmacists. Theories and frameworks used throughout the program's development and implementation facilitated a means to conceptualize the component parts of the program as well as its overall presence as a whole from inception through evolution in implementation. Using theoretical foundations for the program enabled critical consideration and understanding of issues related to trialability and adaptability of the program. Theory was essential to the underlying development and implementation of a capacity-building program for enhancing services by pharmacists for people with lived experience of mental illness. Lessons learned from the development and implementation of this program are informing current research and evolution of the program.

  4. Teacher Agency and Professional Learning: Rethinking Fidelity of Implementation as Multiplicities of Enactment

    ERIC Educational Resources Information Center

    Buxton, Cory A.; Allexsaht-Snider, Martha; Kayumova, Shakhnoza; Aghasaleh, Rouhollah; Choi, Youn-Jeng; Cohen, Allan

    2015-01-01

    In this paper we use practice theory, with its focus on the interplay of structure and agency, to theorize about teacher engagement in professional learning and teacher enactment of pedagogical practices as an alternative to framing implementation research in terms of program adherence and fidelity of implementation. Practice theory allowed us to…

  5. Introductory Molecular Orbital Theory: An Honors General Chemistry Computational Lab as Implemented Using Three-Dimensional Modeling Software

    ERIC Educational Resources Information Center

    Ruddick, Kristie R.; Parrill, Abby L.; Petersen, Richard L.

    2012-01-01

    In this study, a computational molecular orbital theory experiment was implemented in a first-semester honors general chemistry course. Students used the GAMESS (General Atomic and Molecular Electronic Structure System) quantum mechanical software (as implemented in ChemBio3D) to optimize the geometry for various small molecules. Extended Huckel…

  6. First-principles quantum transport method for disordered nanoelectronics: Disorder-averaged transmission, shot noise, and device-to-device variability

    NASA Astrophysics Data System (ADS)

    Yan, Jiawei; Wang, Shizhuo; Xia, Ke; Ke, Youqi

    2017-03-01

    Because disorders are inevitable in realistic nanodevices, the capability to quantitatively simulate the disorder effects on electron transport is indispensable for quantum transport theory. Here, we report a unified and effective first-principles quantum transport method for analyzing effects of chemical or substitutional disorder on transport properties of nanoelectronics, including averaged transmission coefficient, shot noise, and disorder-induced device-to-device variability. All our theoretical formulations and numerical implementations are worked out within the framework of the tight-binding linear muffin tin orbital method. In this method, we carry out the electronic structure calculation with the density functional theory, treat the nonequilibrium statistics by the nonequilibrium Green's function method, and include the effects of multiple impurity scattering with the generalized nonequilibrium vertex correction (NVC) method in coherent potential approximation (CPA). The generalized NVC equations are solved from first principles to obtain various disorder-averaged two-Green's-function correlators. This method provides a unified way to obtain different disorder-averaged transport properties of disordered nanoelectronics from first principles. To test our implementation, we apply the method to investigate the shot noise in the disordered copper conductor, and find all our results for different disorder concentrations approach a universal Fano factor 1/3. As the second test, we calculate the device-to-device variability in the spin-dependent transport through the disordered Cu/Co interface and find the conductance fluctuation is very large in the minority spin channel and negligible in the majority spin channel. Our results agree well with experimental measurements and other theories. In both applications, we show the generalized nonequilibrium vertex corrections play a determinant role in electron transport simulation. Our results demonstrate the effectiveness of the first-principles generalized CPA-NVC for atomistic analysis of disordered nanoelectronics, extending the capability of quantum transport simulation.
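The universal Fano factor 1/3 quoted above can be checked against the Dorokhov bimodal distribution of transmission eigenvalues for a diffusive conductor, in which channels are uniformly dense in a parameter x with T = 1/cosh²x, and F = ΣT(1−T)/ΣT → 1/3. This is a textbook toy check, not the paper's first-principles CPA-NVC calculation.

```python
import numpy as np

# Dorokhov distribution of a diffusive conductor: channels uniformly dense in x
x = np.linspace(0.0, 10.0, 200_000)
T = 1.0 / np.cosh(x) ** 2          # transmission eigenvalues

# Fano factor F = sum T(1-T) / sum T
fano = np.sum(T * (1.0 - T)) / np.sum(T)
print(fano)                        # approximately 1/3
```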

  7. Health Information Technologies-Academic and Commercial Evaluation (HIT-ACE) methodology: description and application to clinical feedback systems.

    PubMed

    Lyon, Aaron R; Lewis, Cara C; Melvin, Abigail; Boyd, Meredith; Nicodimos, Semret; Liu, Freda F; Jungbluth, Nathaniel

    2016-09-22

    Health information technologies (HIT) have become nearly ubiquitous in the contemporary healthcare landscape, but information about HIT development, functionality, and implementation readiness is frequently siloed. Theory-driven methods of compiling, evaluating, and integrating information from the academic and commercial sectors are necessary to guide stakeholder decision-making surrounding HIT adoption and to develop pragmatic HIT research agendas. This article presents the Health Information Technologies-Academic and Commercial Evaluation (HIT-ACE) methodology, a structured, theory-driven method for compiling and evaluating information from multiple sectors. As an example demonstration of the methodology, we apply HIT-ACE to mental and behavioral health measurement feedback systems (MFS). MFS are a specific class of HIT that support the implementation of routine outcome monitoring, an evidence-based practice. HIT-ACE is guided by theories and frameworks related to user-centered design and implementation science. The methodology involves four phases: (1) coding academic and commercial materials, (2) developer/purveyor interviews, (3) linking putative implementation mechanisms to HIT capabilities, and (4) experimental testing of capabilities and mechanisms. In the current demonstration, phase 1 included a systematic process to identify MFS in mental and behavioral health using academic literature and commercial websites. Using user-centered design, implementation science, and feedback frameworks, the HIT-ACE coding system was developed, piloted, and used to review each identified system for the presence of 38 capabilities and 18 additional characteristics via a consensus coding process. Bibliometric data were also collected to examine the representation of the systems in the scientific literature. As an example, results are presented for the application of HIT-ACE phase 1 to MFS wherein 49 separate MFS were identified, reflecting a diverse array of characteristics and capabilities. Preliminary findings demonstrate the utility of HIT-ACE to represent the scope and diversity of a given class of HIT beyond what can be identified in the academic literature. Phase 2 data collection is expected to confirm and expand the information presented and phases 3 and 4 will provide more nuanced information about the impact of specific HIT capabilities. In all, HIT-ACE is expected to support adoption decisions and additional HIT development and implementation research.

  8. It's just the right thing to do: Conceptualizing a theory of change for a school food and beverage sales environment intervention and implications for implementation evaluation.

    PubMed

    Levay, Adrienne V; Chapman, Gwen E; Seed, Barbara; Wittman, Hannah

    2018-04-30

    School food environments are the target of nutrition interventions and evaluations across the globe. Yet little work to date has articulated the importance of developing a theory of change upon which to base evaluation of both implementation and outcomes. This paper takes an interpretive approach to develop a retrospective theory of change for an implementation evaluation of British Columbia's school food and beverage sales Guidelines. This study contributes broadly to a nuanced conceptualization of this type of public health intervention and provides a methodological contribution on how to develop a retrospective theory of change, with implications for effective evaluation. Data collection strategies included document analysis, semi-structured interviews with key stakeholders, and participant observation. Developing the logic model revealed that, despite the broad population health aims of the intervention, the main focus of implementation is to change the behaviors of the adults who create school food environments. Derived from the analysis and interpretation of the data, the emergent program theory rests on the assumption that if adults are responsibilized through information and education campaigns and provided with implementation tools, they will be 'convinced' to implement changes to school food environments to foster broader public health goals. These findings highlight the importance of assessing individual-level implementation indicators as well as the more often evaluated measures of food and beverage availability. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Advancing the Interdisciplinary Collaborative Health Team Model: Applying Democratic Professionalism, Implementation Science, and Therapeutic Alliance to Enact Social Justice Practice.

    PubMed

    Murphy, Nancy

    2015-01-01

    This essay reframes the interdisciplinary collaborative health team model by proposing the application of three foundational pillars, democratic professionalism, implementation science, and therapeutic alliance, to advance this practice. The aim was to address challenges to the model, enhance its functional capacity, and explicate and enact social justice practices that affect individual health outcomes while simultaneously addressing health inequities. The pillars are described, and examples from the author's dissertation research illustrate how they were used to bring about action. Related theories, models, and frameworks that have negotiation, capacity building, collaboration, and knowledge/task/power sharing as central concepts are presented under each of the pillars.

  10. Evolution of an experiential learning partnership in emergency management higher education.

    PubMed

    Knox, Claire Connolly; Harris, Alan S

    2016-01-01

    Experiential learning allows students to step outside the classroom and into a community setting to integrate theory with practice, while allowing the community partner to reach goals or address needs within their organization. Emergency Management and Homeland Security scholars recognize the importance, and support the increased implementation, of this pedagogical method in the higher education curriculum. Yet challenges to successful implementation exist including limited resources and time. This longitudinal study extends the literature by detailing the evolution of a partnership between a university and office of emergency management in which a functional exercise is strategically integrated into an undergraduate course. The manuscript concludes with a discussion of lessons learned from throughout the multiyear process.

  11. Efficient evaluation of nonlocal operators in density functional theory

    NASA Astrophysics Data System (ADS)

    Chen, Ying-Chih; Chen, Jing-Zhe; Michaud-Rioux, Vincent; Shi, Qing; Guo, Hong

    2018-02-01

    We present a method which combines plane waves (PW) and numerical atomic orbitals (NAO) to efficiently evaluate nonlocal operators in density functional theory with periodic boundary conditions. Nonlocal operators are first expanded using PW and then transformed to NAO, so that the problem of distance truncation is avoided. The general formalism is implemented using the hybrid functional HSE06, where the nonlocal operator is the exact exchange. Comparison of the electronic structures of a wide range of semiconductors to a pure PW scheme validates the accuracy of our method. Due to the locality of the NAO, and thus the sparsity of the matrix representations of the operators, the computational complexity of the method is asymptotically quadratic in the number of electrons. Finally, we apply the technique to investigate the electronic structure of the interface between single-layer black phosphorus and the high-κ dielectric material c-HfO2. We predict band offsets between the two materials of 1.29 eV and 2.18 eV for the valence and conduction band edges, respectively; such offsets are suitable for 2D field-effect transistor applications.

  12. Relativistic bound-state problem in the light-front Yukawa model

    NASA Astrophysics Data System (ADS)

    Głazek, Stanisław; Harindranath, Avaroth; Pinsky, Stephen; Shigemitsu, Junko; Wilson, Kenneth

    1993-02-01

    We study the renormalization problem on the light front for the two-fermion bound state in the (3+1)-dimensional Yukawa model, working within the lowest-order Tamm-Dancoff approximation. In addition to traditional mass and wave-function renormalization, new types of counterterms are required. These are nonlocal and involve arbitrary functions of the longitudinal momenta. Their appearance is consistent with general power-counting arguments on the light front. We estimate the "arbitrary function" in two ways: (1) by using perturbation theory as a guide and (2) by considering the asymptotic large transverse momentum behavior of the kernel in the bound-state equations. The latter method, as it is currently implemented, is applicable only to the helicity-zero sector of the theory. Because of triviality, in the Yukawa model one must retain a finite cutoff Λ in order to have a nonvanishing renormalized coupling. For the range of renormalized couplings (and cutoffs) allowed by triviality, one finds that the perturbative counterterm does a good job of eliminating cutoff dependence in the low-energy spectrum (masses << Λ).

  13. Vapor-liquid phase equilibria of water modelled by a Kim-Gordon potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maerzke, Katie A.; McGrath, M. J.; Kuo, I-F W.

    2009-09-07

    Gibbs ensemble Monte Carlo simulations were carried out to investigate the properties of a frozen-electron-density (or Kim-Gordon, KG) model of water along the vapor-liquid coexistence curve. Because of its theoretical basis, such a KG model provides for seamless coupling to Kohn-Sham density functional theory for use in mixed quantum mechanics/molecular mechanics (QM/MM) implementations. The Gibbs ensemble simulations indicate rather limited transferability of such a simple KG model to other state points. Specifically, a KG model that was parameterized by Barker and Sprik to the properties of liquid water at 300 K yields saturated vapor pressures and a critical temperature that are significantly under- and overestimated, respectively. We also present a comprehensive density functional theory study to assess the accuracy of two popular exchange-correlation functionals on the structure and density of liquid water at ambient conditions. This work was supported by the US Department of Energy Office of Basic Energy Science Chemical Sciences Program. Battelle operates Pacific Northwest National Laboratory for the US Department of Energy.

  14. Density-functional energy gaps of solids demystified

    NASA Astrophysics Data System (ADS)

    Perdew, John P.; Ruzsinszky, Adrienn

    2018-06-01

    The fundamental energy gap of a solid is a ground-state second energy difference. Can one find the fundamental gap from the gap in the band structure of Kohn-Sham density functional theory? An argument of Williams and von Barth (WB) from 1983 suggests that one can. In fact, self-consistent band-structure calculations within the local density approximation or the generalized gradient approximation (GGA) yield the fundamental gap within the same approximation for the energy. Such a calculation with the exact density functional would yield a band gap that still underestimates the fundamental gap, because the exact Kohn-Sham potential in a solid jumps up by an additive constant when one electron is added, and the WB argument does not take this effect into account. The WB argument has been extended recently to generalized Kohn-Sham theory, the simplest way to implement meta-GGAs and hybrid functionals self-consistently, with an exchange-correlation potential that is a non-multiplicative operator. Since this operator is continuous, the band gap is again the fundamental gap within the same approximation; and because the approximations are more realistic, so is the band gap. What approximations might be even more realistic?
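The "ground-state second energy difference" can be written out explicitly; the following is a restatement of the standard textbook definition, not a result specific to this record (E(N) is the ground-state energy of the N-electron solid, I the ionization energy, A the electron affinity):

```latex
E_g \;=\; I - A \;=\; \bigl[E(N-1)-E(N)\bigr] \;-\; \bigl[E(N)-E(N+1)\bigr]
     \;=\; E(N+1) + E(N-1) - 2\,E(N)
```

The exact Kohn-Sham band gap then falls short of E_g by precisely the additive jump (derivative discontinuity) of the potential discussed in the abstract.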

  15. Finite-strain large-deflection elastic-viscoplastic finite-element transient response analysis of structures

    NASA Technical Reports Server (NTRS)

    Rodal, J. J. A.; Witmer, E. A.

    1979-01-01

    A method of analysis for thin structures is developed that incorporates finite-strain, elastic-plastic, strain-hardening, time-dependent material behavior, is implemented with respect to a fixed configuration, and is consistently valid for finite strains and finite rotations. The theory is formulated systematically in a body-fixed system of convected coordinates with materially embedded vectors that deform in common with the continuum. Tensors are considered as linear vector functions, and use is made of the dyadic representation. The kinematics of a deformable continuum is treated in detail, precisely defining all quantities necessary for the analysis. The finite-strain theory developed gives much better predictions and agreement with experiment than does the traditional small-strain theory, at practically no additional cost. This represents a very significant advance in the capability for reliable prediction of nonlinear transient structural responses, including the reliable prediction of strains large enough to produce ductile metal rupture.

  16. A Study on the Necessity and Basic Mode of Implementing Cooperative Teaching in the Ideological and Political Theory Courses

    ERIC Educational Resources Information Center

    Zhang, Xiaoxia

    2012-01-01

    This paper explores the necessity and basic mode of implementing cooperative teaching in ideological and political theory courses. The practice of cooperative teaching in ideological and political theory courses is helpful for overcoming the deficiency of the traditional large-class teaching mode, and realizing complementary…

  17. You can lead a horse to water … what Self-Determination Theory can contribute to our understanding of clinical policy implementation.

    PubMed

    Smith, Geoffrey P; Williams, Theresa M

    2017-01-01

    There has been increasing reliance on policy directives as instruments for shaping clinical practice in health care, despite it being widely recognized that there is a significant translation gap between clinical policy and its implementation. Self-Determination Theory, a widely researched and empirically validated theory of human needs' fulfilment and motivation, offers a potentially valuable theoretical framework for understanding not only why the current policy environment has not led to the anticipated improvement in the quality and safety of clinical care but, importantly, also provides guidance about how organizations can create an environment that can nurture behavioural change in the workforce. We describe an alternative approach to clinical policy-making underpinned by Self-Determination Theory, which we believe has broad application for the science of clinical implementation theory.

  18. Implementation of multiple intelligences theory in the English language course syllabus at the University of Nis Medical School.

    PubMed

    Bakić-Mirić, Natasa

    2010-01-01

    The theory of multiple intelligences (MI) is considered an innovation in learning the English language because it helps students develop all eight intelligences, which represent the ways people understand the world around them, solve problems, and learn: verbal/linguistic, logical/mathematical, visual/spatial, bodily/kinaesthetic, musical/rhythmic, interpersonal, intrapersonal, and naturalist. By focusing on problem-solving activities, teachers who implement MI theory also encourage students not only to build on their existing language knowledge but also to learn new content and skills. The objective of this study was to determine the importance of implementing MI theory in the English language course syllabus at the University of Nis Medical School. We describe the ways in which MI theory was implemented in the syllabus, particularly in one lecture for junior-year students of pharmacy. The English language final exam results from February 2009, compared with the final exam results from June 2007 prior to the implementation of MI theory, showed the following: out of 80 junior-year students of pharmacy, 40 obtained grade 10 (outstanding), 16 obtained grade 9 (excellent), 11 obtained grade 8 (very good), 4 obtained grade 7 (good) and 9 obtained grade 6 (pass). No student failed. The implementation of MI theory in the English language course syllabus at the University of Nis Medical School has had a positive impact on learning the English language and has increased students' interest in language learning. Generally speaking, this theory offers a better understanding of students' intelligences and a greater appreciation of their strengths. It provides numerous opportunities for students to use and develop all eight intelligences, not just the few they excelled in prior to enrolling in a university or college.

  19. Geometric measures of large biomolecules: surface, volume, and pockets.

    PubMed

    Mach, Paul; Koehl, Patrice

    2011-11-15

    Geometry plays a major role in our attempts to understand the activity of large molecules. For example, surface area and volume are used to quantify the interactions between these molecules and the water surrounding them in implicit solvent models. In addition, the detection of pockets serves as a starting point for predictive studies of biomolecule-ligand interactions. The alpha shape theory provides an exact and robust method for computing these geometric measures, and several implementations of this theory are currently available. We show, however, that these implementations fail on very large macromolecular systems. These difficulties are not theoretical; rather, they are related to the architecture of current computers, which rely on the use of cache memory to speed up calculation. By rewriting the algorithms that implement the different steps of the alpha shape theory so that we enforce locality, we can remediate these cache problems; the corresponding code, UnionBall, has an apparent O(n) behavior over a large range of values of n (up to tens of millions), where n is the number of atoms. As an example, it takes 136 seconds with UnionBall to compute the contribution of each atom to the surface area and volume of a viral capsid with more than five million atoms on a commodity PC. UnionBall includes functions for analytically computing the surface area and volume of the intersection of two, three and four spheres, which are fully detailed in an appendix. UnionBall is available as open-source software. Copyright © 2011 Wiley Periodicals, Inc.
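The analytic two-sphere case mentioned above follows from the spherical-cap identity; the following is a minimal illustrative sketch (not the UnionBall API; the function name is hypothetical):

```python
import math

def union_surface_area(r1, r2, d):
    """Surface area of the union of two overlapping spheres.

    Each sphere loses a spherical cap of area 2*pi*r*h, where h is the
    height of the cap cut off by the plane of the intersection circle.
    Assumes proper overlap: |r1 - r2| < d < r1 + r2.
    """
    # Distance from center 1 to the plane of the intersection circle
    x1 = (d * d + r1 * r1 - r2 * r2) / (2.0 * d)
    x2 = d - x1
    h1 = r1 - x1          # buried cap height on sphere 1
    h2 = r2 - x2          # buried cap height on sphere 2
    a1 = 4.0 * math.pi * r1 * r1 - 2.0 * math.pi * r1 * h1
    a2 = 4.0 * math.pi * r2 * r2 - 2.0 * math.pi * r2 * h2
    return a1 + a2

# Two unit spheres at distance 1: each buried cap has height 1/2 and
# area pi, so the union surface is 2 * (4*pi - pi) = 6*pi.
print(union_surface_area(1.0, 1.0, 1.0))  # ≈ 18.8496
```

The three- and four-sphere intersections handled analytically by UnionBall build on the same inclusion-exclusion idea but require spherical-triangle terms not shown here.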

  20. Geometric Measures of Large Biomolecules: Surface, Volume and Pockets

    PubMed Central

    Mach, Paul; Koehl, Patrice

    2011-01-01

    Geometry plays a major role in our attempts to understand the activity of large molecules. For example, surface area and volume are used to quantify the interactions between these molecules and the water surrounding them in implicit solvent models. In addition, the detection of pockets serves as a starting point for predictive studies of biomolecule-ligand interactions. The alpha shape theory provides an exact and robust method for computing these geometric measures. Several implementations of this theory are currently available. We show however that these implementations fail on very large macromolecular systems. We show that these difficulties are not theoretical; rather, they are related to the architecture of current computers that rely on the use of cache memory to speed up calculation. By rewriting the algorithms that implement the different steps of the alpha shape theory such that we enforce locality, we show that we can remediate these cache problems; the corresponding code, UnionBall, has an apparent O(n) behavior over a large range of values of n (up to tens of millions), where n is the number of atoms. As an example, it takes 136 seconds with UnionBall to compute the contribution of each atom to the surface area and volume of a viral capsid with more than five million atoms on a commodity PC. UnionBall includes functions for computing the surface area and volume of the intersection of two, three and four spheres that are fully detailed in an appendix. UnionBall is available as open-source software. PMID:21823134

  1. A grounded theory model for reducing stigma in health professionals in Canada.

    PubMed

    Knaak, S; Patten, S

    2016-08-01

    The Mental Health Commission of Canada was formed as a national catalyst for improving the mental health system. One of its initiatives is Opening Minds (OM), whose mandate is to reduce mental health-related stigma. This article reports findings from a qualitative study on antistigma interventions for healthcare providers, which includes a process model articulating key stages and strategies for implementing successful antistigma programmes. The study employed a grounded theory methodology. Data collection involved in-depth interviews with programme stakeholders, direct observation of programmes, a review of programme documents, and qualitative feedback from programme participants. Analysis proceeded via the constant comparison method. A model was generated to visually present key findings. Twenty-three in-depth interviews were conducted representing 18 different programmes. Eight programmes were observed directly, 48 programme documents were reviewed, and data from 1812 programme participants were reviewed. The analysis led to a four-stage process model for implementing successful antistigma programmes targeting healthcare providers, informed by the basic social process 'targeting the roots of healthcare provider stigma'. The process model developed through this research may function as a tool to help guide the development and implementation of antistigma programmes in healthcare contexts. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Block-localized wavefunction (BLW) method at the density functional theory (DFT) level.

    PubMed

    Mo, Yirong; Song, Lingchun; Lin, Yuchun

    2007-08-30

    The block-localized wavefunction (BLW) approach is an ab initio valence bond (VB) method incorporating the efficiency of molecular orbital (MO) theory. It can generate the wavefunction for a resonance structure or diabatic state self-consistently by partitioning the overall electrons and primitive orbitals into several subgroups and expanding each block-localized molecular orbital in only one subspace. Although block-localized molecular orbitals in the same subspace are constrained to be orthogonal (a feature of MO theory), orbitals between different subspaces are generally nonorthogonal (a feature of VB theory). The BLW method is particularly useful in the quantification of the electron delocalization (resonance) effect within a molecule and the charge-transfer effect between molecules. In this paper, we extend the BLW method to the density functional theory (DFT) level and implement the BLW-DFT method in the quantum-chemistry software GAMESS. Test applications to the pi conjugation in the planar allyl radical and ions with the basis sets 6-31G(d), 6-31+G(d), 6-311+G(d,p), and cc-pVTZ show that the basis set dependency is insignificant. In addition, the BLW-DFT method can also be used to elucidate the nature of intermolecular interactions. Examples of pi-cation interactions and solute-solvent interactions are presented and discussed. By expressing each diabatic state with one BLW, the BLW method can be further used to study chemical reactions and electron-transfer processes whose potential energy surfaces are typically described by two or more diabatic states.

  3. Application of relativistic coupled-cluster theory to electron impact excitation of Mg+ in the plasma environment

    NASA Astrophysics Data System (ADS)

    Sharma, Lalita; Sahoo, Bijaya Kumar; Malkar, Pooja; Srivastava, Rajesh

    2018-01-01

    A relativistic coupled-cluster theory is implemented to study electron impact excitations of atomic species. As a test case, electron impact excitations of the 3s 2S1/2 - 3p 2P1/2,3/2 resonance transitions are investigated in the singly charged magnesium ion (Mg+) using this theory. The accuracy of the Mg+ wave functions is assessed by evaluating the attachment energies of the relevant states and comparing them with experimental values. The continuum wave function of the projectile electron is obtained by solving the Dirac equations, taking the distortion potential to be the static potential of the ground state of Mg+. The calculated electron impact excitation differential and total cross sections are found to be in very good agreement with the available measurements at various incident electron energies. Further, calculations are carried out in the plasma environment within the Debye-Hückel model framework, which could be useful in astrophysics. The influence of the plasma strength on the cross sections, as well as on the linear polarization of the photon emission in the 3p 2P3/2 - 3s 2S1/2 transition, is investigated for different incident electron energies.
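In the Debye-Hückel framework invoked above, the bare Coulomb interaction is replaced by an exponentially screened one, with the Debye length parameterizing the plasma strength. A minimal sketch in atomic units (values illustrative, not taken from the paper):

```python
import math

def coulomb(z, r):
    """Bare Coulomb potential of a charge z at distance r (atomic units)."""
    return -z / r

def debye_huckel(z, r, debye_length):
    """Debye-Hueckel screened potential: the plasma damps the bare
    interaction by a factor exp(-r / lambda_D)."""
    return -z / r * math.exp(-r / debye_length)

# At r equal to one Debye length, the interaction is reduced by a factor e.
z, r, lam = 1.0, 2.0, 2.0
print(debye_huckel(z, r, lam) / coulomb(z, r))  # ≈ 0.3679 (= 1/e)
```

Stronger plasmas mean shorter Debye lengths, hence faster damping of the potential felt by the projectile and target electrons, which is what shifts the cross sections in such calculations.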

  4. Children and Welfare Reform: What Policy Theories Are Being Implemented in States Where Most Poor Children Live?

    ERIC Educational Resources Information Center

    Johnson, Cathy Marie; Gais, Thomas Lewis; Lawrence, Catherine

    This paper revisits 1997-98 findings that indicated that during the first years of state implementation of Temporary Assistance for Needy Families (TANF), states were most likely to implement the environment theory, which claims that children benefit socially and psychologically from being part of a household in which caregivers have jobs, and…

  5. Density Functional Theory and Beyond for Band-Gap Screening: Performance for Transition-Metal Oxides and Dichalcogenides.

    PubMed

    Li, Wenqing; Walther, Christian F J; Kuc, Agnieszka; Heine, Thomas

    2013-07-09

    The performance of a wide variety of commonly used density functionals, as well as two screened hybrid functionals (HSE06 and TB-mBJ), in predicting the electronic structures of a large class of en vogue materials, such as metal oxides, chalcogenides, and nitrides, is discussed in terms of band gaps, band structures, and projected electronic densities of states. In contrast to GGA, conventional hybrid functionals, and GGA+U, both HSE06 and TB-mBJ are able to predict band gaps with an appreciable accuracy of 25% and thus allow the screening of various classes of transition-metal-based compounds, i.e., mixed or doped materials, at modest computational cost. The calculated electronic structures are largely unaffected by the choice of basis functions and software implementation, but may depend on the treatment of the core electrons.

  6. The Making of SPINdle

    NASA Astrophysics Data System (ADS)

    Lam, Ho-Pun; Governatori, Guido

    We present the design and implementation of SPINdle, an open-source Java-based defeasible logic reasoner capable of performing efficient and scalable reasoning on defeasible logic theories (including theories with over 1 million rules). The implementation covers both the standard and modal extensions to defeasible logic. It can be used as a standalone theory prover and can be embedded into applications as a defeasible logic rule engine. It allows users or agents to issue queries, on a given knowledge base or on a theory generated on the fly by other applications, and automatically produces the conclusions that follow. Theories can also be represented using XML.
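The core idea of defeasible reasoning can be illustrated with a deliberately tiny sketch: a conclusion holds defeasibly if some applicable rule supports it and every applicable rule for the opposite conclusion is overridden by a superiority relation. This is not SPINdle's API, and it omits strict rules and defeaters; it is a toy model of the basic mechanism only:

```python
# Each rule: (name, antecedents, consequent). 'superior' maps a rule name
# to the rule names it overrides. Facts are literals; '~p' negates 'p'.
def neg(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def defeasibly_provable(goal, facts, rules, superior):
    """True iff some rule for `goal` fires on `facts` and every firing
    rule for the negation of `goal` is overridden by a supporting rule."""
    support = [r for r in rules
               if r[2] == goal and all(a in facts for a in r[1])]
    attack = [r for r in rules
              if r[2] == neg(goal) and all(a in facts for a in r[1])]
    return any(all(a[0] in superior.get(s[0], ())
                   for a in attack)
               for s in support)

# Classic example: birds fly, but penguins (which are birds) do not;
# the penguin rule is declared superior to the bird rule.
rules = [('r1', ['bird'], 'flies'),
         ('r2', ['penguin'], '~flies')]
superior = {'r2': ('r1',)}
print(defeasibly_provable('flies', {'bird'}, rules, superior))              # True
print(defeasibly_provable('~flies', {'bird', 'penguin'}, rules, superior))  # True
print(defeasibly_provable('flies', {'bird', 'penguin'}, rules, superior))   # False
```

A production reasoner like SPINdle additionally handles strict rules, defeaters, chained derivations, and modal operators, and is engineered to scale to the million-rule theories the abstract mentions.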

  7. Combined Uncertainty and A-Posteriori Error Bound Estimates for CFD Calculations: Theory and Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    Simulation codes often utilize finite-dimensional approximation, resulting in numerical error. Examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, and uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.

  8. A Fuzzy Logic Optimal Control Law Solution to the CMMCA Tracking Problem

    DTIC Science & Technology

    1993-03-01

    or from a transfer function. Many times, however, the resulting algorithms are so complex as to be completely or essentially useless. Applications...implemented in a nearly real-time computer simulation. Located within the LQ framework are all the performance data for both the CMMCA and the CX...required nor desired. A more general and less exacting framework was used. In order to concentrate on the theory and problem solution, it was

  9. Theory and Implementation of a VLSI Stray Insensitive Switched Capacitor Composite Operational Amplifier

    DTIC Science & Technology

    1994-06-01

    to the simulations, we get a proof of correct concept that matches the mathematical foundation of the microchip. ...APPLICATIONS: A. WHERE AND... Subject terms: mathematical derivation of circuit transfer functions, composite operational amplifiers.

  10. Enlightened Multiscale Simulation of Biochemical Networks. Core Theory, Validating Experiments, and Implementation in Open Software

    DTIC Science & Technology

    2006-10-01

    organisms that can either be in the lysogenic (latent) or lytic (active) state. If, following its infection of E. coli, the λ-phage virus enters the...folded and unfolded proteins in the heat shock response system...robust stability of the model of heat shock in E. coli...stochastic reachability analysis, all in the context of two biologically motivated and functionally important systems: the heat shock response in E. coli and

  11. A hybrid method for evaluating enterprise architecture implementation.

    PubMed

    Nikpay, Fatemeh; Ahmad, Rodina; Yin Kia, Chiam

    2017-02-01

    Enterprise Architecture (EA) implementation evaluation provides a set of methods and practices for evaluating the EA implementation artefacts within an EA implementation project. Existing EA evaluation models fall short in considering all EA functions and processes, using structured methods in developing EA implementation, employing mature practices, and using appropriate metrics to achieve proper evaluation. The aim of this research is to develop a hybrid evaluation method that supports achieving the objectives of EA implementation. To attain this aim, the first step was to identify EA implementation evaluation practices, for which a Systematic Literature Review (SLR) was conducted. Second, the proposed hybrid method was developed based on the foundation and information extracted from the SLR, semi-structured interviews with EA practitioners, program theory evaluation, and Information Systems (IS) evaluation. Finally, the proposed method was validated by means of a case study and expert reviews. This research provides a suitable foundation for researchers who wish to extend and continue this research topic with further analysis and exploration, and for practitioners who would like to employ an effective and lightweight evaluation method for EA projects. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. The Cartridge Theory: a description of the functioning of horizontal subsurface flow constructed wetlands for wastewater treatment, based on modelling results.

    PubMed

    Samsó, Roger; García, Joan

    2014-03-01

    Despite the fact that horizontal subsurface flow constructed wetlands have been in operation for several decades, there is still no clear understanding of some of their most basic internal functioning patterns. To fill this knowledge gap, in this paper we present what we call "The Cartridge Theory". This theory was derived from simulation results obtained with the BIO_PORE model and explains the functioning of urban wastewater treatment wetlands based on the interaction between bacterial communities and the accumulated solids leading to clogging. We start by discussing changes applied to the biokinetic model implemented in BIO_PORE (CWM1) so that the growth of bacterial communities is consistent with a well-known population dynamics model. This discussion, combined with simulation results for a pilot wetland system, led to the introduction of "The Cartridge Theory", which states that the granular media of horizontal subsurface flow wetlands can be likened to a generic cartridge that is progressively consumed (clogged) with inert solids from inlet to outlet. Simulations also revealed that bacterial communities are unevenly distributed within the system and that their location is not static but changes over time, moving towards the outlet as a consequence of the progressive clogging of the granular media. According to these findings, the life-span of a constructed wetland corresponds to the time when its bacterial communities have been pushed so far towards the outlet that their biomass is no longer sufficient to remove the desired proportion of the influent pollutants. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Conformational study of glyoxal bis(amidinohydrazone) by ab initio methods

    NASA Astrophysics Data System (ADS)

    Mannfors, B.; Koskinen, J. T.; Pietilä, L.-O.

    1997-08-01

    We report the first ab initio molecular orbital study on the ground state of the endiamine tautomer of glyoxal bis(amidinohydrazone) (or glyoxal bis(guanylhydrazone), GBG) free base. The calculations were performed at the following levels of theory: Hartree-Fock, second-order Møller-Plesset perturbation theory, and density functional theory (B-LYP and B3-LYP), as implemented in the Gaussian 94 software. The standard basis set 6-31G(d) was found to be sufficient, and the default fine grid of Gaussian 94 was used in the density functional calculations. Molecular properties, such as optimized structures, total energies, and the electrostatic-potential-derived (CHELPG) atomic charges, were studied as functions of the C-C and N-N conformations. The lowest-energy conformation was found to be all-trans, in agreement with the experimental solid-state structure. The second conformer with respect to rotation around the central C-C bond was found to be the cis conformer, with an MP2//HF energy of 4.67 kcal/mol. For rotation around the N-N bond, the energy increased monotonically from the trans conformation to the cis conformation, the cis energy being very high, 22.01 kcal/mol (MP2//HF). The atomic charges were shown to be conformation dependent, and the bond charge increments, and especially their conformational changes, were found to be easily transferable between structurally related systems.
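Relative conformer energies of this size translate directly into Boltzmann populations. A quick illustrative estimate (the temperature choice and equal-degeneracy assumption are ours, not from the paper):

```python
import math

R = 0.0019872  # gas constant in kcal/(mol*K)

def boltzmann_ratio(delta_e_kcal, temp_k=298.15):
    """Population of a conformer relative to the global minimum,
    assuming equal degeneracies: exp(-dE / RT)."""
    return math.exp(-delta_e_kcal / (R * temp_k))

# cis C-C conformer, 4.67 kcal/mol above all-trans at room temperature
print(boltzmann_ratio(4.67))   # ~4e-4: essentially only all-trans is populated
```

The 22.01 kcal/mol cis N-N barrier gives a ratio smaller by many more orders of magnitude, consistent with the observed all-trans solid-state structure.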

  14. Factorization and resummation of Higgs boson differential distributions in soft-collinear effective theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mantry, Sonny; Petriello, Frank

    We derive a factorization theorem for the Higgs boson transverse momentum (p_T) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for m_h >> p_T >> Λ_QCD, where m_h denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the p_T scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the p_T-scale physics simplifies the implementation of higher-order radiative corrections in α_s(p_T). We derive formulas for factorization in both momentum and impact-parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in p_T/m_h and Λ_QCD/p_T can be systematically derived. We perform multiple consistency checks on our factorization theorem, including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-p_T resummation.

  15. Adapting SAFT-γ perturbation theory to site-based molecular dynamics simulation. II. Confined fluids and vapor-liquid interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghobadi, Ahmadreza F.; Elliott, J. Richard, E-mail: elliot1@uakron.edu

    2014-07-14

    In this work, a new classical density functional theory is developed for group-contribution equations of state (EOS). Details of implementation are demonstrated for the recently developed SAFT-γ WCA EOS and selected applications are studied for confined fluids and vapor-liquid interfaces. The acronym WCA (Weeks-Chandler-Andersen) refers to the characterization of the reference part of the third-order thermodynamic perturbation theory applied in formulating the EOS. SAFT-γ refers to the particular form of "statistical associating fluid theory" that is applied to the fused-sphere, heteronuclear, united-atom molecular models of interest. For the monomer term, the modified fundamental measure theory is extended to WCA spheres. A new chain functional is also introduced for fused and soft heteronuclear chains. The attractive interactions are taken into account by considering the structure of the fluid, thus elevating the theory beyond the mean-field approximation. The fluctuations of energy are also included via a non-local third-order perturbation theory. The theory includes resolution of the density profiles of individual groups such as CH2 and CH3 and satisfies stoichiometric constraints for the density profiles. New molecular simulations are conducted to demonstrate the accuracy of each Helmholtz free energy contribution in reproducing the microstructure of inhomogeneous systems at the united-atom level of coarse graining. At each stage, comparisons are made to assess where the present theory stands relative to the current state of the art for studying inhomogeneous fluids. Overall, it is shown that the characteristic features of real molecular fluids are captured both qualitatively and quantitatively. For example, the average pore density deviates ∼2% from simulation data for attractive pentadecane in a 2-nm slit pore. Another example is the surface tension of the ethane/heptane mixture, which deviates ∼1% from simulation data while the theory reproduces the excess accumulation of ethane at the interface.

  16. Multidimensional biochemical information processing of dynamical patterns

    NASA Astrophysics Data System (ADS)

    Hasegawa, Yoshihiko

    2018-02-01

    Cells receive signaling molecules by receptors and relay information via sensory networks so that they can respond properly depending on the type of signal. Recent studies have shown that cells can extract multidimensional information from dynamical concentration patterns of signaling molecules. We herein study how biochemical systems can process multidimensional information embedded in dynamical patterns. We model the decoding networks by linear response functions, and optimize the functions with the calculus of variations to maximize the mutual information between patterns and output. We find that, when the noise intensity is lower, decoders with different linear response functions, i.e., distinct decoders, can extract much information. However, when the noise intensity is higher, distinct decoders do not provide the maximum amount of information. This indicates that, when transmitting information by dynamical patterns, embedding information in multiple patterns is not optimal when the noise intensity is very large. Furthermore, we explore the biochemical implementations of these decoders using control theory and demonstrate that these decoders can be implemented biochemically through the modification of cascade-type networks, which are prevalent in actual signaling pathways.
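    The trade-off described above can be illustrated with a minimal linear-Gaussian channel (our simplification for illustration, not the paper's variational model): for a decoder output y = g·s + ξ with Gaussian signal and noise, the mutual information has a closed form and falls as the noise intensity grows.

    ```python
    import numpy as np

    # Illustrative sketch only: mutual information of a linear-Gaussian channel
    # y = g*s + xi, with s ~ N(0, signal_var) and xi ~ N(0, noise_var):
    #   I(S; Y) = 0.5 * log2(1 + g^2 * signal_var / noise_var)  [bits]
    def gaussian_channel_mi(gain, signal_var, noise_var):
        snr = gain**2 * signal_var / noise_var
        return 0.5 * np.log2(1.0 + snr)

    # Low noise: the decoder extracts much information; high noise: little.
    assert gaussian_channel_mi(1.0, 1.0, 0.01) > gaussian_channel_mi(1.0, 1.0, 10.0)
    ```

    The closed form makes the abstract's qualitative claim concrete: for a fixed signal variance, raising the noise intensity monotonically reduces the extractable information.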

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Kuang; Libisch, Florian; Carter, Emily A., E-mail: eac@princeton.edu

    We report a new implementation of the density functional embedding theory (DFET) in the VASP code, using the projector-augmented-wave (PAW) formalism. Newly developed algorithms allow us to efficiently perform optimized effective potential optimizations within PAW. The new algorithm generates robust and physically correct embedding potentials, as we verified using several test systems including a covalently bound molecule, a metal surface, and bulk semiconductors. We show that with the resulting embedding potential, embedded cluster models can reproduce the electronic structure of point defects in bulk semiconductors, thereby demonstrating the validity of DFET in semiconductors for the first time. Compared to our previous version, the new implementation of DFET within VASP affords use of all features of VASP (e.g., a systematic PAW library, a wide selection of functionals, a more flexible choice of U correction formalisms, and faster computational speed) with DFET. Furthermore, our results are fairly robust with respect to both plane-wave and Gaussian-type orbital basis sets in the embedded cluster calculations. This suggests that the density functional embedding method is potentially an accurate and efficient way to study properties of isolated defects in semiconductors.

  18. The development of acoustic experiments for off-campus teaching and learning

    NASA Astrophysics Data System (ADS)

    Wild, Graham; Swan, Geoff

    2011-05-01

    In this article, we show the implementation of a computer-based digital storage oscilloscope (DSO) and function generator (FG) using the computer's soundcard for off-campus acoustic experiments. The microphone input is used for the DSO, and a speaker jack is used as the FG. In an effort to reduce the cost of implementing the experiment, we examine software available for free online. A small number of applications were compared in terms of their interface and functionality, for both the DSO and the FG. The software was then used to investigate standing waves in pipes using the computer-based DSO. Standing wave theory taught in high school and in first-year physics is based on a one-dimensional model. With the use of the DSO's fast Fourier transform function, the experimental uncertainty alone was not sufficient to account for the difference observed between the measured and the calculated frequencies. Hence the original experiment was expanded to include the end correction effect. The DSO was also used for other simple acoustics experiments, in areas such as the physics of music.
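    The end correction mentioned above can be sketched numerically (an illustrative script, not from the article; the pipe geometry, speed of sound, and the ~0.3·d correction factor are textbook example values): for a pipe closed at one end, only odd harmonics appear, and the effective length grows by roughly 0.3 times the pipe diameter.

    ```python
    # Sketch: resonant frequencies of a pipe closed at one end, with and
    # without the end correction L_eff = L + 0.3*d (d = pipe diameter).
    # Odd harmonics only: f_n = (2n - 1) * v / (4 * L_eff).
    def resonances(length_m, diameter_m, n_modes=3, v_sound=343.0, end_corr=True):
        l_eff = length_m + (0.3 * diameter_m if end_corr else 0.0)
        return [(2 * n - 1) * v_sound / (4.0 * l_eff) for n in range(1, n_modes + 1)]
    ```

    Comparing the two settings shows why the one-dimensional model alone over-predicts the measured frequencies: the end correction lowers every resonance slightly, by more than typical FFT frequency resolution.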

  19. Multidimensional biochemical information processing of dynamical patterns.

    PubMed

    Hasegawa, Yoshihiko

    2018-02-01

    Cells receive signaling molecules by receptors and relay information via sensory networks so that they can respond properly depending on the type of signal. Recent studies have shown that cells can extract multidimensional information from dynamical concentration patterns of signaling molecules. We herein study how biochemical systems can process multidimensional information embedded in dynamical patterns. We model the decoding networks by linear response functions, and optimize the functions with the calculus of variations to maximize the mutual information between patterns and output. We find that, when the noise intensity is lower, decoders with different linear response functions, i.e., distinct decoders, can extract much information. However, when the noise intensity is higher, distinct decoders do not provide the maximum amount of information. This indicates that, when transmitting information by dynamical patterns, embedding information in multiple patterns is not optimal when the noise intensity is very large. Furthermore, we explore the biochemical implementations of these decoders using control theory and demonstrate that these decoders can be implemented biochemically through the modification of cascade-type networks, which are prevalent in actual signaling pathways.

  20. Sensitivity of Optimal Solutions to Control Problems for Second Order Evolution Subdifferential Inclusions.

    PubMed

    Bartosz, Krzysztof; Denkowski, Zdzisław; Kalita, Piotr

    In this paper the sensitivity of optimal solutions to control problems described by second order evolution subdifferential inclusions under perturbations of state relations and of cost functionals is investigated. First we establish a new existence result for a class of such inclusions. Then, based on the theory of sequential [Formula: see text]-convergence we recall the abstract scheme concerning convergence of minimal values and minimizers. The abstract scheme works provided we can establish two properties: the Kuratowski convergence of solution sets for the state relations and some complementary [Formula: see text]-convergence of the cost functionals. Then these two properties are implemented in the considered case.

  1. Self-Consistent Optimization of Excited States within Density-Functional Tight-Binding.

    PubMed

    Kowalczyk, Tim; Le, Khoa; Irle, Stephan

    2016-01-12

    We present an implementation of energies and gradients for the ΔDFTB method, an analogue of Δ-self-consistent-field density functional theory (ΔSCF) within density-functional tight-binding, for the lowest singlet excited state of closed-shell molecules. Benchmarks of ΔDFTB excitation energies, optimized geometries, Stokes shifts, and vibrational frequencies reveal that ΔDFTB provides a qualitatively correct description of changes in molecular geometries and vibrational frequencies due to excited-state relaxation. The accuracy of ΔDFTB Stokes shifts is comparable to that of ΔSCF-DFT, and ΔDFTB performs similarly to ΔSCF with the PBE functional for vertical excitation energies of larger chromophores where the need for efficient excited-state methods is most urgent. We provide some justification for the use of an excited-state reference density in the DFTB expansion of the electronic energy and demonstrate that ΔDFTB preserves many of the properties of its parent ΔSCF approach. This implementation fills an important gap in the extended framework of DFTB, where access to excited states has been limited to the time-dependent linear-response approach, and affords access to rapid exploration of a valuable class of excited-state potential energy surfaces.

  2. Semiempirical UNO-CAS and UNO-CI: method and applications in nanoelectronics.

    PubMed

    Dral, Pavlo O; Clark, Timothy

    2011-10-20

    Unrestricted Natural Orbital-Complete Active Space Configuration Interaction, abbreviated as UNO-CAS, has been implemented for NDDO-based semiempirical molecular-orbital (MO) theory. A computationally more economic technique, UNO-CIS, in which we use a configuration interaction (CI) calculation with only single excitations (CIS) to calculate excited states, has also been implemented and tested. The class of techniques in which unrestricted natural orbitals (UNOs) are used as the reference for CI calculations is denoted UNO-CI. Semiempirical UNO-CI gives good results for the optical band gaps of organic semiconductors such as polyynes and polyacenes, which are promising materials for nanoelectronics. The results of these semiempirical UNO-CI techniques are generally in better agreement with experiment than those obtained with the corresponding conventional semiempirical CI methods and comparable to or better than those obtained with far more computationally expensive methods such as time-dependent density-functional theory. We also show that symmetry breaking in semiempirical UHF calculations is very useful for predicting the diradical character of organic compounds in the singlet spin state.

  3. An improved local radial point interpolation method for transient heat conduction analysis

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang

    2013-06-01

    The smoothing thin plate spline (STPS) interpolation, using the penalty function method from optimization theory, is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and their derivatives can be satisfied so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM) because the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented in this paper to demonstrate the applicability and accuracy of the present approach in comparison with the traditional thin plate spline (TPS) radial basis functions.
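    The two-point time discretization used above can be sketched with a plain finite-difference stand-in for the meshless shape functions (an illustrative toy, not the paper's RPIM formulation): an implicit step for the 1-D heat equation u_t = α·u_xx.

    ```python
    import numpy as np

    # Illustrative sketch: one backward Euler (two-point difference) time step
    # for the 1-D heat equation on a uniform grid. The end points are treated
    # as Dirichlet nodes held at their current values; the interior solves
    # (1 + 2r) u_i - r u_{i-1} - r u_{i+1} = u_i^old with r = alpha*dt/dx^2.
    def backward_euler_step(u, alpha, dx, dt):
        n = len(u)
        r = alpha * dt / dx**2
        a = np.eye(n)                      # boundary rows stay identity
        for i in range(1, n - 1):
            a[i, i - 1] = -r
            a[i, i] = 1.0 + 2.0 * r
            a[i, i + 1] = -r
        return np.linalg.solve(a, np.asarray(u, dtype=float))
    ```

    Being implicit, this scheme is unconditionally stable, which is one reason simple two-point differencing is a common companion to meshless spatial discretizations.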

  4. Introducing Evidence Through Research "Push": Using Theory and Qualitative Methods.

    PubMed

    Morden, Andrew; Ong, Bie Nio; Brooks, Lauren; Jinks, Clare; Porcheret, Mark; Edwards, John J; Dziedzic, Krysia S

    2015-11-01

    A multitude of factors can influence the uptake and implementation of complex interventions in health care. A plethora of theories and frameworks recognize the need to establish relationships, understand organizational dynamics, address context and contingency, and engage key decision makers. Less attention is paid to how theories that emphasize relational contexts can actually be deployed to guide the implementation of an intervention. The purpose of the article is to demonstrate the potential role of qualitative research aligned with theory to inform complex interventions. We detail a study underpinned by theory and qualitative research that (a) ensured key actors made sense of the complex intervention at the earliest stage of adoption and (b) aided initial engagement with the intervention. We conclude that using theoretical approaches aligned with qualitative research can provide insights into the context and dynamics of health care settings that in turn can be used to aid intervention implementation. © The Author(s) 2015.

  5. Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aradi, Bálint; Niklasson, Anders M. N.; Frauenheim, Thomas

    A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born–Oppenheimer molecular dynamics. Furthermore, for systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can also be applied to a broad range of problems in materials science, chemistry, and biology.
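    The single-diagonalization idea can be caricatured with a scalar toy model (our schematic, not the DFTB+ implementation): an auxiliary electronic variable n is propagated with a Verlet-like step while being harmonically driven toward the current self-consistent density ρ, so no inner SCF loop is needed.

    ```python
    # Schematic scalar toy of an extended Lagrangian update (coupling kappa
    # absorbs the dt^2 factor): the auxiliary variable n oscillates around
    # and tracks the density rho instead of being re-converged each step.
    def xl_step(n_curr, n_prev, rho, kappa=2.0):
        return 2.0 * n_curr - n_prev + kappa * (rho - n_curr)
    ```

    A quick sanity check: if the auxiliary variable already sits on the density (n_curr = n_prev = ρ), the update leaves it there, which is the fixed point the harmonic driving is built around.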

  6. Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids

    DOE PAGES

    Aradi, Bálint; Niklasson, Anders M. N.; Frauenheim, Thomas

    2015-06-26

    A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born–Oppenheimer molecular dynamics. Furthermore, for systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can also be applied to a broad range of problems in materials science, chemistry, and biology.

  7. Regulation of C:N:P stoichiometry of microbes and soil organic matter by optimizing enzyme allocation: an omics-informed model study

    NASA Astrophysics Data System (ADS)

    Song, Y.; Yao, Q.; Wang, G.; Yang, X.; Mayes, M. A.

    2017-12-01

    Increasing evidence indicates that soil organic matter (SOM) decomposition and stabilization is a continuum process controlled both by microbial functions and by their interaction with minerals (known as the microbial efficiency-matrix stabilization (MEMS) theory). Our metagenomics analysis of soil samples from both P-deficit and P-fertilization sites in Panama has demonstrated that community-level enzyme functions can adapt to maximize the acquisition of limiting nutrients and minimize the energy demand for foraging (known as optimal foraging theory). This optimization scheme can mitigate the imbalance of the C/P ratio between soil substrate and the microbial community and relieve the P limitation on microbial carbon use efficiency over time. Dynamic allocation of multiple enzyme groups and their interaction with microbial/substrate stoichiometry has rarely been considered in biogeochemical models due to the difficulties in identifying microbial functional groups and quantifying the change in enzyme expression in response to soil nutrient availability. This study aims to represent the omics-informed optimal foraging theory in the Continuum Microbial ENzyme Decomposition model (CoMEND), which was developed to represent the continuum SOM decomposition process following the MEMS theory. The SOM pools in the model are classified based on soil chemical composition (i.e., carbohydrates, lignin, N-rich SOM and P-rich SOM) and the degree of SOM depolymerization. The enzyme functional groups for decomposition of each SOM pool and for N/P mineralization are identified by the relative composition of gene copy numbers. The responses of microbial activities and SOM decomposition to nutrient availability are simulated by optimizing the allocation of enzyme functional groups following optimal foraging theory. 
    The modeled dynamic enzyme allocation in response to P availability is evaluated against the metagenomics data measured from P-addition and P-deficit soil samples at the Panama sites. The implementation of dynamic enzyme allocation in response to nutrient availability in the CoMEND model enables us to capture the varying microbial C/P ratio and soil carbon dynamics in response to shifting nutrient constraints over time in tropical soils.

  8. Simulations of nanocrystals under pressure: combining electronic enthalpy and linear-scaling density-functional theory.

    PubMed

    Corsini, Niccolò R C; Greco, Andrea; Hine, Nicholas D M; Molteni, Carla; Haynes, Peter D

    2013-08-28

    We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett. 94, 145501 (2005)], it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures, and their persistence on depressurization is assessed.
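    The volume definition underlying the electronic enthalpy can be sketched on a real-space grid (a simplification with hypothetical names, not the code's actual construction): count the voxels where the density exceeds the chosen isovalue, scale by the voxel volume, and form E + pV.

    ```python
    import numpy as np

    # Sketch: "electronic volume" estimated as the grid volume enclosed by a
    # density isosurface, i.e. where the density exceeds the isovalue.
    def isosurface_volume(density, voxel_volume, isovalue):
        return np.count_nonzero(density > isovalue) * voxel_volume

    # Electronic enthalpy H = E + p * V(isovalue), the quantity minimized
    # when simulating a finite system under hydrostatic pressure.
    def electronic_enthalpy(energy, pressure, density, voxel_volume, isovalue):
        return energy + pressure * isosurface_volume(density, voxel_volume, isovalue)
    ```

    The isovalue plays the role of the calibration parameter discussed in the abstract: shifting it rescales the enclosed volume and hence the pV contribution.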

  9. Simulations of nanocrystals under pressure: Combining electronic enthalpy and linear-scaling density-functional theory

    NASA Astrophysics Data System (ADS)

    Corsini, Niccolò R. C.; Greco, Andrea; Hine, Nicholas D. M.; Molteni, Carla; Haynes, Peter D.

    2013-08-01

    We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett. 94, 145501 (2005)], 10.1103/PhysRevLett.94.145501, it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures, and their persistence on depressurization is assessed.

  10. Positive semidefinite tensor factorizations of the two-electron integral matrix for low-scaling ab initio electronic structure.

    PubMed

    Hoy, Erik P; Mazziotti, David A

    2015-08-14

    Tensor factorization of the 2-electron integral matrix is a well-known technique for reducing the computational scaling of ab initio electronic structure methods toward that of Hartree-Fock and density functional theories. The simplest factorization that maintains the positive semidefinite character of the 2-electron integral matrix is the Cholesky factorization. In this paper, we introduce a family of positive semidefinite factorizations that generalize the Cholesky factorization. Using an implementation of the factorization within the parametric 2-RDM method [D. A. Mazziotti, Phys. Rev. Lett. 101, 253002 (2008)], we study several inorganic molecules, alkane chains, and potential energy curves and find that this generalized factorization retains the accuracy and size extensivity of the Cholesky factorization, even in the presence of multi-reference correlation. The generalized family of positive semidefinite factorizations has potential applications to low-scaling ab initio electronic structure methods that treat electron correlation with a computational cost approaching that of the Hartree-Fock method or density functional theory.
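    The baseline factorization that the paper generalizes can be demonstrated directly (using a random positive definite stand-in, not actual two-electron integrals): a Cholesky decomposition writes a positive semidefinite matrix V as L·Lᵀ with L lower triangular.

    ```python
    import numpy as np

    # Sketch: Cholesky factorization of a symmetric positive definite matrix,
    # the simplest member of the positive semidefinite family named above.
    rng = np.random.default_rng(0)
    b = rng.standard_normal((6, 6))
    v = b @ b.T + 6.0 * np.eye(6)     # random SPD stand-in for the 2-e matrix
    cholesky_factor = np.linalg.cholesky(v)     # lower-triangular L
    assert np.allclose(cholesky_factor @ cholesky_factor.T, v)
    ```

    In electronic structure practice the factor is truncated at low rank, which is where the computational savings over the full integral tensor come from; the generalized factorizations of the paper keep the same positive semidefinite structure while relaxing the triangular form.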

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trevisanutto, Paolo E.; Vignale, Giovanni, E-mail: vignaleg@missouri.edu

    Ab initio electronic structure calculations of two-dimensional layered structures are typically performed using codes that were developed for three-dimensional structures, which are periodic in all three directions. The introduction of a periodicity in the third direction (perpendicular to the layer) is completely artificial and may lead in some cases to spurious results and to difficulties in treating the action of external fields. In this paper we develop a new approach, which is "native" to quasi-2D materials, making use of basis functions that are periodic in the plane, but atomic-like in the perpendicular direction. We show how some of the basic tools of ab initio electronic structure theory — density functional theory, the GW approximation and the Bethe-Salpeter equation — are implemented in the new basis. We argue that the new approach will be preferable to the conventional one in treating the peculiarities of layered materials, including the long range of the unscreened Coulomb interaction in insulators, and the effects of strain, corrugations, and external fields.

  12. Development and pilot-test of the Workplace Readiness Questionnaire, a theory-based instrument to measure small workplaces’ readiness to implement wellness programs

    PubMed Central

    Hannon, Peggy A.; Helfrich, Christian D.; Chan, K. Gary; Allen, Claire L.; Hammerback, Kristen; Kohn, Marlana J.; Parrish, Amanda T.; Weiner, Bryan J.; Harris, Jeffrey R.

    2016-01-01

    Purpose: To develop a theory-based questionnaire to assess readiness for change in small workplaces adopting wellness programs. Design: In developing our scale, we first tested items via "think-aloud" interviews. We tested the revised items in a cross-sectional quantitative telephone survey. Setting: Small workplaces (20–250 employees) in low-wage industries. Subjects: Decision-makers representing small workplaces in King County, Washington (think-aloud interviews, n=9) and the United States (telephone survey, n=201). Measures: We generated items for each construct in Weiner's theory of organizational readiness for change. We also measured workplace characteristics and current implementation of workplace wellness programs. Analysis: We assessed reliability by coefficient alpha for each of the readiness questionnaire subscales. We tested the association of all subscales with employers' current implementation of wellness policies, programs, and communications, and conducted a path analysis to test the associations in the theory of organizational readiness to change. Results: Each of the readiness subscales exhibited acceptable internal reliability (coefficient alpha range = .75–.88) and was positively associated with wellness program implementation (p < .05). The path analysis was consistent with the theory of organizational readiness to change, except change efficacy did not predict change-related effort. Conclusion: We developed a new questionnaire to assess small workplaces' readiness to adopt and implement evidence-based wellness programs. Our findings also provide empirical validation of Weiner's theory of readiness for change. PMID:26389975
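    The reliability statistic reported above, coefficient alpha, can be computed from a respondents-by-items matrix as follows (illustrative data shapes, not the study's survey data):

    ```python
    import numpy as np

    # Coefficient (Cronbach's) alpha for one subscale:
    #   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    # where k is the number of items and variances use ddof=1.
    def coefficient_alpha(items):
        items = np.asarray(items, dtype=float)    # shape (respondents, k)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)
    ```

    Values in the .75–.88 range reported for the readiness subscales indicate that the items within each subscale covary strongly relative to their individual spread.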

  13. Development and Pilot Test of the Workplace Readiness Questionnaire, a Theory-Based Instrument to Measure Small Workplaces' Readiness to Implement Wellness Programs.

    PubMed

    Hannon, Peggy A; Helfrich, Christian D; Chan, K Gary; Allen, Claire L; Hammerback, Kristen; Kohn, Marlana J; Parrish, Amanda T; Weiner, Bryan J; Harris, Jeffrey R

    2017-01-01

    To develop a theory-based questionnaire to assess readiness for change in small workplaces adopting wellness programs. In developing our scale, we first tested items via "think-aloud" interviews. We tested the revised items in a cross-sectional quantitative telephone survey. The study setting comprised small workplaces (20-250 employees) in low-wage industries. Decision-makers representing small workplaces in King County, Washington (think-aloud interviews, n = 9), and the United States (telephone survey, n = 201) served as study subjects. We generated items for each construct in Weiner's theory of organizational readiness for change. We also measured workplace characteristics and current implementation of workplace wellness programs. We assessed reliability by coefficient alpha for each of the readiness questionnaire subscales. We tested the association of all subscales with employers' current implementation of wellness policies, programs, and communications, and conducted a path analysis to test the associations in the theory of organizational readiness to change. Each of the readiness subscales exhibited acceptable internal reliability (coefficient alpha range, .75-.88) and was positively associated with wellness program implementation ( p < .05). The path analysis was consistent with the theory of organizational readiness to change, except change efficacy did not predict change-related effort. We developed a new questionnaire to assess small workplaces' readiness to adopt and implement evidence-based wellness programs. Our findings also provide empirical validation of Weiner's theory of readiness for change.

  14. The Optoelectronic Properties of Nanoparticles from First Principles Calculations

    NASA Astrophysics Data System (ADS)

    Brawand, Nicholas Peter

    The tunable optoelectronic properties of nanoparticles through the modification of their size, shape, and surface chemistry, make them promising platforms for numerous applications, including electronic and solar conversion devices. However, the rational design and optimization of nanostructured materials remain open challenges, e.g. due to difficulties in controlling and reproducing synthetic processes and in precise atomic-scale characterization. Hence, the need for accurate theoretical predictions, which can complement and help interpret experiments and provide insight into the underlying physical properties of nanostructured materials. This dissertation focuses on the development and application of first principles calculations to predict the optoelectronic properties of nanoparticles. Novel methods based on density functional theory are developed, implemented, and applied to predict both optical and charge transport properties. In particular, the generalization of dielectric dependent hybrid functionals to finite systems is introduced and shown to yield highly accurate electronic structure properties of molecules and nanoparticles, including photoemission and absorption properties. In addition, an implementation of constrained density functional theory is discussed, for the calculation of hopping transport in nanoparticle systems. The implementation was verified against literature results and compared against other methods used to compute transport properties, showing that some methods used in the literature give unphysical results for thermally disordered systems. Furthermore, the constrained density functional theory implementation was coupled to the self-consistent image charge method, making it possible to include image charge effects self-consistently when predicting charge transport properties of nanoparticles near interfaces. 
    The methods developed in this dissertation were then applied to study the optoelectronic and transport properties of specific systems, in particular silicon and lead chalcogenide nanoparticles. In the case of Si, blinking in oxidized Si nanoparticles was addressed. Si dangling bonds at the surface were found to introduce defect states which, depending on their charge and local stress conditions, may give rise to ON and OFF states responsible for exponential blinking statistics. We also investigated the engineering of band edge positions of nanoparticles through post-synthetic surface chemistry modification, with a focus on lead chalcogenides. In collaboration with experiment, we demonstrated how band edge positions of lead sulfide nanoparticles can be tuned by over 2.0 eV. We established a clear relationship between ligand dipole moments and nanoparticle band edge shifts which can be used to engineer nanoparticles for optoelectronic applications. Calculations of transport properties focused on charge transfer in silicon and lead chalcogenide nanoparticles. Si nanoparticles with deep defects and shallow impurities were investigated, showing that shallow defects may be more detrimental to charge transport than previously assumed. In the case of lead chalcogenide nanoparticles, hydrogen was found to form complexes with defects, which can be used to remove potentially detrimental charge traps in nanoparticle solids. The methods and results presented in this dissertation are expected to help guide the engineering of nanoparticles for future device applications.

  15. Plane-Wave Implementation and Performance of à-la-Carte Coulomb-Attenuated Exchange-Correlation Functionals for Predicting Optical Excitation Energies in Some Notorious Cases.

    PubMed

    Bircher, Martin P; Rothlisberger, Ursula

    2018-06-12

    Linear-response time-dependent density functional theory (LR-TD-DFT) has become a valuable tool in the calculation of excited states of molecules of various sizes. However, standard generalized-gradient approximation and hybrid exchange-correlation (xc) functionals often fail to correctly predict charge-transfer (CT) excitations with low orbital overlap, thus limiting the scope of the method. The Coulomb-attenuation method (CAM) in the form of the CAM-B3LYP functional has been shown to reliably remedy this problem in many CT systems, making accurate predictions possible. However, in spite of a rather consistent performance across different orbital overlap regimes, some pitfalls remain. Here, we present a fully flexible and adaptable implementation of the CAM for Γ-point calculations within the plane-wave pseudopotential molecular dynamics package CPMD and explore how customized xc functionals can improve the optical spectra of some notorious cases. We find that results obtained using plane waves agree well with those from all-electron calculations employing atom-centered bases, and that it is possible to construct a new Coulomb-attenuated xc functional based on simple considerations. We show that such a functional is able to outperform CAM-B3LYP in some cases, while retaining similar accuracy in systems where CAM-B3LYP performs well.

  16. Low-complexity piecewise-affine virtual sensors: theory and design

    NASA Astrophysics Data System (ADS)

    Rubagotti, Matteo; Poggi, Tomaso; Oliveri, Alberto; Pascucci, Carlo Alberto; Bemporad, Alberto; Storace, Marco

    2014-03-01

    This paper is focused on the theoretical development and the hardware implementation of low-complexity piecewise-affine direct virtual sensors for the estimation of unmeasured variables of interest of nonlinear systems. The direct virtual sensor is designed directly from measured inputs and outputs of the system and does not require a dynamical model. The proposed approach allows one to design estimators which mitigate the effect of the so-called 'curse of dimensionality' of simplicial piecewise-affine functions, and can be therefore applied to relatively high-order systems, enjoying convergence and optimality properties. An automatic toolchain is also presented to generate the VHDL code describing the digital circuit implementing the virtual sensor, starting from the set of measured input and output data. The proposed methodology is applied to generate an FPGA implementation of the virtual sensor for the estimation of vehicle lateral velocity, using a hardware-in-the-loop setting.
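    A direct virtual sensor of this kind is, at heart, a piecewise-affine map fitted to measured input/output pairs without any dynamical model. The sketch below illustrates the idea in the simplest possible setting, a 1-D "hat" (simplicial) basis on uniform knots fitted by least squares; the knot count and the tanh target are illustrative choices, not the paper's setup.

```python
import numpy as np

def hat_matrix(x, knots):
    """Evaluate the simplicial (hat) basis functions at the sample points x."""
    h = knots[1] - knots[0]
    return np.maximum(0.0, 1.0 - np.abs(np.atleast_1d(x)[:, None] - knots[None, :]) / h)

def fit_pwa(x, y, knots):
    """Least-squares fit of the piecewise-affine map's vertex values."""
    coef, *_ = np.linalg.lstsq(hat_matrix(x, knots), y, rcond=None)
    return coef

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 400)                        # measured inputs
y = np.tanh(3 * x) + 0.01 * rng.normal(size=400)   # measured outputs (noisy)
knots = np.linspace(-1, 1, 9)
coef = fit_pwa(x, y, knots)
y_hat = hat_matrix(x, knots) @ coef                # virtual-sensor estimate
max_err = np.max(np.abs(y_hat - y))
```

    In higher dimensions the hat basis becomes a simplicial partition whose vertex count grows exponentially with the input dimension, which is exactly the "curse of dimensionality" the paper's low-complexity estimators mitigate.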

  17. Model-based object classification using unification grammars and abstract representations

    NASA Astrophysics Data System (ADS)

    Liburdy, Kathleen A.; Schalkoff, Robert J.

    1993-04-01

    The design and implementation of a high level computer vision system which performs object classification is described. General object labelling and functional analysis require models of classes which display a wide range of geometric variations. A large representational gap exists between abstract criteria such as `graspable' and current geometric image descriptions. The vision system developed and described in this work addresses this problem and implements solutions based on a fusion of semantics, unification, and formal language theory. Object models are represented using unification grammars, which provide a framework for the integration of structure and semantics. A methodology for the derivation of symbolic image descriptions capable of interacting with the grammar-based models is described and implemented. A unification-based parser developed for this system achieves object classification by determining if the symbolic image description can be unified with the abstract criteria of an object model. Future research directions are indicated.

  18. Sequential and parallel image restoration: neural network implementations.

    PubMed

    Figueiredo, M T; Leitao, J N

    1994-01-01

    Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high-dimensional convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem into the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm supported on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
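    The convex problem underlying this setup can be sketched in a few lines: with a linear blur H, Gaussian noise, and (for brevity) a zero-mean Gaussian prior, the MAP objective is J(x) = ||Hx - y||² + λ||x||², and a parallel gradient update of all pixels converges for a small enough step. The 1-D blur, λ, and iteration count below are toy values, not the paper's.

```python
import numpy as np

n = 64
# moving-average blur as a banded matrix
H = (np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / 3.0
rng = np.random.default_rng(1)
x_true = np.zeros(n); x_true[20:40] = 1.0            # a box "image"
y = H @ x_true + 0.01 * rng.normal(size=n)
lam = 0.05

A = H.T @ H + lam * np.eye(n)        # gradient of J is proportional to A x - b
b = H.T @ y
step = 1.0 / np.linalg.norm(A, 2)    # step size below the Lipschitz bound
x = np.zeros(n)
for _ in range(500):
    x = x - step * (A @ x - b)       # all "neurons" (pixels) updated in parallel

x_star = np.linalg.solve(A, b)       # direct solution, for comparison
```

    The proposed networks play the role of this iteration, with the sequential schedule updating one element at a time instead of the whole vector.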

  19. The Excursion Set Theory of Halo Mass Functions, Halo Clustering, and Halo Growth

    NASA Astrophysics Data System (ADS)

    Zentner, Andrew R.

    I review the excursion set theory with particular attention toward applications to cold dark matter halo formation and growth, halo abundance, and halo clustering. After a brief introduction to notation and conventions, I begin by recounting the heuristic argument leading to the mass function of bound objects given by Press and Schechter. I then review the more formal derivation of the Press-Schechter halo mass function that makes use of excursion sets of the density field. The excursion set formalism is powerful and can be applied to numerous other problems. I review the excursion set formalism for describing both halo clustering and bias and the properties of void regions. As one of the most enduring legacies of the excursion set approach and one of its most common applications, I spend considerable time reviewing the excursion set theory of halo growth. This section of the review culminates with the description of two Monte Carlo methods for generating ensembles of halo mass accretion histories. In the last section, I emphasize that the standard excursion set approach is the result of several simplifying assumptions. Dropping these assumptions can lead to more faithful predictions and open excursion set theory to new applications. One such assumption is that the height of the barriers that define collapsed objects is a constant function of scale. I illustrate the implementation of the excursion set approach for barriers of arbitrary shape. One such application is the now well-known improvement of the excursion set mass function derived from the "moving" barrier for ellipsoidal collapse. I also emphasize that the statement that halo accretion histories are independent of halo environment in the excursion set approach is not a general prediction of the theory. It is a simplifying assumption. I review the method for constructing correlated random walks of the density field in the more general case. 
I construct a simple toy model to illustrate that excursion set theory (with a constant barrier height) makes a simple and general prediction for the relation between halo accretion histories and the large-scale environments of halos: regions of high density preferentially contain late-forming halos and conversely for regions of low density. I conclude with a brief discussion of the importance of this prediction relative to recent numerical studies of the environmental dependence of halo properties.
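    The central calculation reviewed above is easy to reproduce numerically: with a sharp k-space filter, the smoothed density executes an uncorrelated random walk in the variance S, and the fraction of walks that have crossed a constant barrier δ_c by S is erfc(δ_c/√(2S)), the Press-Schechter result including the famous factor of two. The step size and walk count below are illustrative, and the discrete walk slightly underestimates the continuum crossing fraction.

```python
import numpy as np
from math import erfc, sqrt

delta_c = 1.686                       # spherical-collapse barrier height
n_walks, n_steps, dS = 20000, 400, 0.02
rng = np.random.default_rng(2)
steps = rng.normal(scale=sqrt(dS), size=(n_walks, n_steps))
walks = np.cumsum(steps, axis=1)      # uncorrelated walks in the variance S

# fraction of walks that crossed the barrier at some S' <= S
frac_mc = (walks >= delta_c).any(axis=1).mean()
S = n_steps * dS
frac_exact = erfc(delta_c / sqrt(2.0 * S))   # analytic first-crossing fraction
```

    A moving barrier (e.g. for ellipsoidal collapse) only changes the comparison `walks >= delta_c` to a scale-dependent threshold, which is the generalization discussed in the review.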

  20. Interactive Sonification Exploring Emergent Behavior Applying Models for Biological Information and Listening

    PubMed Central

    Choi, Insook

    2018-01-01

    Sonification is an open-ended design task to construct sound informing a listener of data. Understanding application context is critical for shaping design requirements for data translation into sound. Sonification requires methodology to maintain reproducibility when data sources exhibit non-linear properties of self-organization and emergent behavior. This research formalizes interactive sonification in an extensible model to support reproducibility when data exhibits emergent behavior. In the absence of sonification theory, extensibility demonstrates relevant methods across case studies. The interactive sonification framework foregrounds three factors: reproducible system implementation for generating sonification; interactive mechanisms enhancing a listener's multisensory observations; and reproducible data from models that characterize emergent behavior. Supramodal attention research suggests interactive exploration with auditory feedback can generate context for recognizing irregular patterns and transient dynamics. The sonification framework provides circular causality as a signal pathway for modeling a listener interacting with emergent behavior. The extensible sonification model adopts a data acquisition pathway to formalize functional symmetry across three subsystems: Experimental Data Source, Sound Generation, and Guided Exploration. To differentiate time criticality and dimensionality of emerging dynamics, tuning functions are applied between subsystems to maintain scale and symmetry of concurrent processes and temporal dynamics. Tuning functions accommodate sonification design strategies that yield order parameter values to render emerging patterns discoverable as well as rehearsable, to reproduce desired instances for clinical listeners. Case studies are implemented with two computational models, Chua's circuit and Swarm Chemistry social agent simulation, generating data in real-time that exhibits emergent behavior. 
Heuristic Listening is introduced as an informal model of a listener's clinical attention to data sonification through multisensory interaction in a context of structured inquiry. Three methods are introduced to assess the proposed sonification framework: Listening Scenario classification, data flow Attunement, and Sonification Design Patterns to classify sound control. Case study implementations are assessed against these methods comparing levels of abstraction between experimental data and sound generation. Outcomes demonstrate the framework performance as a reference model for representing experimental implementations, also for identifying common sonification structures having different experimental implementations, identifying common functions implemented in different subsystems, and comparing impact of affordances across multiple implementations of listening scenarios. PMID:29755311

  1. Robotic comfort zones

    NASA Astrophysics Data System (ADS)

    Likhachev, Maxim; Arkin, Ronald C.

    2000-10-01

    The paper investigates how the psychological notion of comfort can be useful in the design of robotic systems. A review of the existing study of human comfort, especially regarding its presence in infants, is conducted with the goal being to determine the relevant characteristics for mapping it onto the robotics domain. Focus is placed on the identification of the salient features in the environment that affect the comfort level. Factors involved include current state familiarity, working conditions, the amount and location of available resources, etc. As part of our newly developed comfort function theory, the notion of an object as a psychological attachment for a robot is also introduced, as espoused in Bowlby's theory of attachment. The output space of the comfort function and its dependency on the comfort level are analyzed. The results of the derivation of this comfort function are then presented in terms of the impact they have on robotic behavior. Justification for the use of the comfort function in the domain of robotics is presented with relevance for real-world operations. Also, a transformation of the theoretical discussion into a mathematical framework suitable for implementation within a behavior-based control system is presented. The paper concludes with results of simulation studies and real robot experiments using the derived comfort function.

  2. A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions

    NASA Astrophysics Data System (ADS)

    Liang, Yihao; Xing, Xiangjun; Li, Yaohang

    2017-06-01

    In this work we present an efficient implementation of canonical Monte Carlo simulation for Coulomb many-body systems on graphics processing units (GPUs). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architecture and adopts the sequential updating scheme of the Metropolis algorithm. It makes no approximation in the computation of energy, and reaches a remarkable 440-fold speedup compared with the serial implementation on a CPU. We further use this method to simulate primitive-model electrolytes, and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
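    A minimal CPU sketch of the sequential Metropolis scheme that the paper ports to GPUs is shown below: one particle at a time, with the exact pairwise Coulomb energy of the moved particle recomputed for each trial. For brevity this toy uses bare 1/r with the minimum-image convention instead of a full Ewald sum, and omits the hard cores of a real primitive-model electrolyte; N, box size, temperature, and move width are illustrative.

```python
import numpy as np

def energy_of(i, pos, q, L):
    """Coulomb energy between particle i and all others (Gaussian units, toy)."""
    disp = pos - pos[i]
    disp -= L * np.round(disp / L)          # minimum-image convention
    d = np.linalg.norm(disp, axis=1)
    d[i] = np.inf                           # exclude self-interaction
    return np.sum(q[i] * q / d)

def metropolis_sweep(pos, q, beta, delta, L, rng):
    for i in range(len(pos)):               # sequential updating scheme
        old = pos[i].copy()
        e_old = energy_of(i, pos, q, L)
        pos[i] = (pos[i] + rng.uniform(-delta, delta, 3)) % L
        d_e = energy_of(i, pos, q, L) - e_old
        if rng.random() >= np.exp(min(0.0, -beta * d_e)):
            pos[i] = old                    # reject the trial move
    return pos

rng = np.random.default_rng(3)
N, L = 32, 10.0
q = np.tile([1.0, -1.0], N // 2)            # neutral 1:1 "electrolyte"
pos = rng.uniform(0, L, (N, 3))
for _ in range(50):
    pos = metropolis_sweep(pos, q, beta=1.0, delta=0.5, L=L, rng=rng)
```

    The GPU implementation parallelizes the per-particle energy sum over SIMD lanes while keeping exactly this particle-by-particle acceptance logic.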

  3. Velocity-gauge real-time TDDFT within a numerical atomic orbital basis set

    NASA Astrophysics Data System (ADS)

    Pemmaraju, C. D.; Vila, F. D.; Kas, J. J.; Sato, S. A.; Rehr, J. J.; Yabana, K.; Prendergast, David

    2018-05-01

    The interaction of laser fields with solid-state systems can be modeled efficiently within the velocity-gauge formalism of real-time time dependent density functional theory (RT-TDDFT). In this article, we discuss the implementation of the velocity-gauge RT-TDDFT equations for electron dynamics within a linear combination of atomic orbitals (LCAO) basis set framework. Numerical results obtained from our LCAO implementation, for the electronic response of periodic systems to both weak and intense laser fields, are compared to those obtained from established real-space grid and Full-Potential Linearized Augmented Planewave approaches. Potential applications of the LCAO based scheme in the context of extreme ultra-violet and soft X-ray spectroscopies involving core-electronic excitations are discussed.

  4. Design and Implementation of Mobile Learning System for Soldiers’ Vocational Skill Identification Based on Android

    NASA Astrophysics Data System (ADS)

    Ma, Jinqiang

    2017-09-01

    Vocational skill identification for soldiers further promotes the regularization of the fire brigade, in accordance with the "public security active forces soldiers professional skills identification implementation approach." Paper-based study materials are inconvenient for candidates, who need to be able to study on the move. This article therefore uses Android technology to develop a mobile learning app for the theory component of soldiers' vocational skill identification. The software provides complete learning functions, combining study with practice exercises so that soldiers can learn at any time, and it has good practical value for enhancing soldiers' professional competence.

  5. The evolution of sexes: A specific test of the disruptive selection theory.

    PubMed

    da Silva, Jack

    2018-01-01

    The disruptive selection theory of the evolution of anisogamy posits that the evolution of a larger body or greater organismal complexity selects for a larger zygote, which in turn selects for larger gametes. This may provide the opportunity for one mating type to produce more numerous, small gametes, forcing the other mating type to produce fewer, large gametes. Predictions common to this and related theories have been partially upheld. Here, a prediction specific to the disruptive selection theory is derived from a previously published game-theoretic model that represents the most complete description of the theory. The prediction, that the ratio of macrogamete to microgamete size should be above three for anisogamous species, is supported for the volvocine algae. A fully population genetic implementation of the model, involving mutation, genetic drift, and selection, is used to verify the game-theoretic approach and accurately simulates the evolution of gamete sizes in anisogamous species. This model was extended to include a locus for gamete motility and shows that oogamy should evolve whenever there is costly motility. The classic twofold cost of sex may be derived from the fitness functions of these models, showing that this cost is ultimately due to genetic conflict.

  6. The Evolution of an Interprofessional Shared Decision-Making Research Program: Reflective Case Study of an Emerging Paradigm

    PubMed Central

    Menear, Matthew; Stacey, Dawn; Brière, Nathalie; Légaré, France

    2016-01-01

    Introduction: Healthcare research increasingly focuses on interprofessional collaboration and on shared decision making, but knowledge gaps remain about effective strategies for implementing interprofessional collaboration and shared decision-making together in clinical practice. We used Kuhn’s theory of scientific revolutions to reflect on how an integrated interprofessional shared decision-making approach was developed and implemented over time. Methods: In 2007, an interdisciplinary team initiated a new research program to promote the implementation of an interprofessional shared decision-making approach in clinical settings. For this reflective case study, two new team members analyzed the team’s four projects, six research publications, one unpublished and two published protocols and organized them into recognizable phases according to Kuhn’s theory. Results: The merging of two young disciplines led to challenges characteristic of emerging paradigms. Implementation of interprofessional shared-decision making was hindered by a lack of conceptual clarity, a dearth of theories and models, little methodological guidance, and insufficient evaluation instruments. The team developed a new model, identified new tools, and engaged knowledge users in a theory-based approach to implementation. However, several unresolved challenges remain. Discussion: This reflective case study sheds light on the evolution of interdisciplinary team science. It offers new approaches to implementing emerging knowledge in the clinical context. PMID:28435417

  7. The Evolution of an Interprofessional Shared Decision-Making Research Program: Reflective Case Study of an Emerging Paradigm.

    PubMed

    Dogba, Maman Joyce; Menear, Matthew; Stacey, Dawn; Brière, Nathalie; Légaré, France

    2016-07-19

    Healthcare research increasingly focuses on interprofessional collaboration and on shared decision making, but knowledge gaps remain about effective strategies for implementing interprofessional collaboration and shared decision-making together in clinical practice. We used Kuhn's theory of scientific revolutions to reflect on how an integrated interprofessional shared decision-making approach was developed and implemented over time. In 2007, an interdisciplinary team initiated a new research program to promote the implementation of an interprofessional shared decision-making approach in clinical settings. For this reflective case study, two new team members analyzed the team's four projects, six research publications, one unpublished and two published protocols and organized them into recognizable phases according to Kuhn's theory. The merging of two young disciplines led to challenges characteristic of emerging paradigms. Implementation of interprofessional shared-decision making was hindered by a lack of conceptual clarity, a dearth of theories and models, little methodological guidance, and insufficient evaluation instruments. The team developed a new model, identified new tools, and engaged knowledge users in a theory-based approach to implementation. However, several unresolved challenges remain. This reflective case study sheds light on the evolution of interdisciplinary team science. It offers new approaches to implementing emerging knowledge in the clinical context.

  8. Sparse High Dimensional Models in Economics

    PubMed Central

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2010-01-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635
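    Of the penalized least-squares methods reviewed, the lasso is the canonical instance, and coordinate descent with the soft-thresholding operator is a standard way to solve it. The sketch below is a generic illustration, not the paper's own algorithm; the design matrix, sparsity pattern, and penalty level are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Minimize (1/2)||y - X b||^2 + lam * ||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    r = y.copy()                            # running residual y - X b
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]             # put coordinate j back into r
            b[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

rng = np.random.default_rng(4)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [3.0, -2.0, 1.5]     # sparse truth
y = X @ beta + 0.1 * rng.normal(size=n)
b_hat = lasso_cd(X, y, lam=5.0)
```

    The same structure carries over to penalized likelihood: only the residual update changes, while the thresholding step that produces exact zeros stays.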

  9. Sub-saturation matter in compact stars: Nuclear modelling in the framework of the extended Thomas-Fermi theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aymard, François; Gulminelli, Francesca; Margueron, Jérôme

    A recently introduced analytical model for the nuclear density profile [1] is implemented in the Extended Thomas-Fermi (ETF) energy density functional. This allows us to (i) shed new light on the issue of the sign of the surface symmetry energy in nuclear mass formulas and (ii) show the importance of the in-medium corrections to the nuclear cluster energies in thermodynamic conditions relevant for the description of core-collapse supernovae and the (proto-)neutron star crust.

  10. DFT applied to the study of carbon-doped zinc-blende (cubic) GaN

    NASA Astrophysics Data System (ADS)

    Espitia R, M. J.; Ortega-López, C.; Rodríguez Martínez, J. A.

    2016-08-01

    Employing first principles within the framework of density functional theory, the structural properties, electronic structure, and magnetism of C-doped zincblende (cubic) GaN were investigated. The calculations were carried out using the pseudopotential method as implemented in the Quantum ESPRESSO code. For the GaC0.0625N0.9375 concentration, a metallic behavior was found. This metallic property comes from the hybridization and polarization of the C-2p states and their neighboring N-2p and Ga-4p states.

  11. Investigation of structural stability and elastic properties of CrH and MnH: A first principles study

    NASA Astrophysics Data System (ADS)

    Kanagaprabha, S.; Rajeswarapalanichamy, R.; Sudhapriyanga, G.; Murugan, A.; Santhosh, M.; Iyakutti, K.

    2015-06-01

    The structural and mechanical properties of CrH and MnH are investigated using first principles calculation based on density functional theory as implemented in VASP code with generalized gradient approximation. The calculated ground state properties are in good agreement with previous experimental and other theoretical results. A structural phase transition from NaCl to NiAs phase at a pressure of 76 GPa is predicted for both CrH and MnH.

  12. Sub-saturation matter in compact stars: Nuclear modelling in the framework of the extended Thomas-Fermi theory

    NASA Astrophysics Data System (ADS)

    Aymard, François; Gulminelli, Francesca; Margueron, Jérôme

    2015-02-01

    A recently introduced analytical model for the nuclear density profile [1] is implemented in the Extended Thomas-Fermi (ETF) energy density functional. This allows us to (i) shed new light on the issue of the sign of the surface symmetry energy in nuclear mass formulas and (ii) show the importance of the in-medium corrections to the nuclear cluster energies in thermodynamic conditions relevant for the description of core-collapse supernovae and the (proto-)neutron star crust.

  13. Computer model of a reverberant and parallel circuit coupling

    NASA Astrophysics Data System (ADS)

    Kalil, Camila de Andrade; de Castro, Maria Clícia Stelling; Cortez, Célia Martins

    2017-11-01

    The objective of the present study was to deepen knowledge of the functioning of neural circuits by implementing a signal-transmission model, based on graph theory, in a small network of neurons composed of an interconnected reverberant and parallel circuit, in order to investigate the processing of the signals in each of them and the effects on the output of the network. For this, a program was developed in the C language and simulations were performed using neurophysiological data obtained from the literature.
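    The two circuit motifs can be sketched as a tiny weighted directed graph iterated in discrete time: an input node feeding two parallel branches that converge on an output, plus a reverberant loop in which the output re-excites one branch. All weights below are hypothetical (the original work is implemented in C with neurophysiological parameters); the point is only the qualitative behavior, decaying echoes produced by the feedback loop.

```python
import numpy as np

W = np.zeros((4, 4))        # W[i, j]: weight of the edge from node j to node i
W[1, 0] = W[2, 0] = 1.0     # parallel branches: node 0 -> nodes 1 and 2
W[3, 1] = W[3, 2] = 0.5     # both branches converge on output node 3
W[1, 3] = 0.3               # reverberation: output feeds back into node 1

x = np.array([1.0, 0.0, 0.0, 0.0])   # a single input pulse at t = 0
trace = []
for _ in range(20):
    x = np.tanh(W @ x)               # propagate one synaptic delay per step
    trace.append(x[3])               # record the output node
# the loop keeps re-exciting the output with echoes that decay
# because the loop gain (0.5 * 0.3) is below one
```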

  14. Computational Software to Fit Seismic Data Using Epidemic-Type Aftershock Sequence Models and Modeling Performance Comparisons

    NASA Astrophysics Data System (ADS)

    Chu, A.

    2016-12-01

    Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work implements three of the homogeneous ETAS models described in Ogata (1998). With a model's log-likelihood function, my software finds the Maximum-Likelihood Estimates (MLEs) of the model's parameters to estimate the homogeneous background rate and the temporal and spatial parameters that govern triggering effects. The EM algorithm is employed for its advantages of stability and robustness (Veen and Schoenberg, 2008). My work also presents comparisons among the three models in robustness, convergence speed, and implementations from theory to computing practice. Up-to-date regional seismic data of seismically active areas such as Southern California and Japan are used to demonstrate the comparisons. Data analysis has been done using the computer languages Java and R. Java has the advantages of being strongly typed and the ease of controlling memory resources, while R has the advantage of numerous available functions in statistical computing. Comparisons are also made between the two programming languages in convergence and stability, computational speed, and ease of implementation. Issues that may affect convergence, such as spatial shapes, are discussed.
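    The EM strategy cited above (Veen and Schoenberg, 2008) treats each event's triggering parent as a latent variable. The sketch below applies it to a purely temporal self-exciting process, with the Omori power-law kernel of ETAS replaced by an exponential kernel for brevity; the intensity is λ(t) = μ + Σ_{t_j < t} a·b·exp(−b(t − t_j)), and all rates are illustrative.

```python
import numpy as np

def hawkes_em(t, T, n_iter=100):
    """EM fit of a temporal Hawkes process with an exponential kernel (toy
    stand-in for the Omori power-law decay used by ETAS)."""
    n = len(t)
    mu, a, b = n / (2.0 * T), 0.3, 1.0          # crude initial guesses
    dt = t[:, None] - t[None, :]                # dt[i, j] = t_i - t_j
    tri = dt > 0                                # j can only trigger a later i
    for _ in range(n_iter):
        g = np.where(tri, a * b * np.exp(-b * np.where(tri, dt, 0.0)), 0.0)
        lam = mu + g.sum(axis=1)                # intensity at each event
        P = g / lam[:, None]                    # E-step: P[i, j] = Pr(j triggered i)
        mu = (mu / lam).sum() / T               # M-step: background rate
        a = P.sum() / n                         # branching ratio
        b = P.sum() / (P * dt).sum()            # kernel decay rate
    return mu, a, b

# simulate a Hawkes catalog by its branching (cluster) representation
rng = np.random.default_rng(5)
T, mu_true, a_true, b_true = 500.0, 0.5, 0.4, 2.0
events = list(rng.uniform(0, T, rng.poisson(mu_true * T)))
queue = list(events)
while queue:
    parent = queue.pop()
    for _ in range(rng.poisson(a_true)):        # offspring count per parent
        child = parent + rng.exponential(1.0 / b_true)
        if child < T:
            events.append(child); queue.append(child)
t = np.sort(np.array(events))
mu_hat, a_hat, b_hat = hawkes_em(t, T)
```

    The full spatial-temporal ETAS fit adds magnitude productivity and a spatial kernel, but the E-step/M-step alternation has the same shape.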

  15. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    PubMed

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
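    The analytical-solution approach can be illustrated for a small first-order scheme A ⇌ B → C: the scheme is a linear ODE dc/dt = Kc, solved in closed form via the eigendecomposition of the rate matrix. The rate constants below are hypothetical, and this sketch omits VisKin's cycle-constraint checks and fitting machinery.

```python
import numpy as np

# rate matrix for A <-> B -> C; columns sum to zero (mass conservation)
k1, km1, k2 = 2.0, 1.0, 0.5
K = np.array([[-k1,         km1,  0.0],
              [ k1, -(km1 + k2),  0.0],
              [ 0.0,         k2,  0.0]])

def concentrations(t, c0):
    """c(t) = V exp(W t) V^{-1} c0, the closed-form solution of dc/dt = K c."""
    w, V = np.linalg.eig(K)
    return (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V) @ c0).real

c0 = np.array([1.0, 0.0, 0.0])      # start with pure A
c_mid = concentrations(1.0, c0)
c_end = concentrations(50.0, c0)    # the irreversible sink C collects everything
```

    The eigenvalues of K are the observable relaxation rates, which is why first-order amplitudes and rates can be compared quickly across candidate schemes; higher-order kinetics require the numerical integration path mentioned in the abstract.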

  16. Partial information decomposition as a unified approach to the specification of neural goal functions.

    PubMed

    Wibral, Michael; Priesemann, Viola; Kay, Jim W; Lizier, Joseph T; Phillips, William A

    2017-03-01

    In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example for such a motif is the canonical microcircuit of six-layered neo-cortex, which is repeated across cortical areas, and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a 'goal function', of information processing implemented in this structure. By definition, such a goal function, if universal, cannot be cast in processing-domain specific language (e.g. 'edge filtering', 'working memory'). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon's mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a very recent extension of Shannon information theory, called partial information decomposition (PID). PID allows one to quantify the information that several inputs provide individually (unique information), redundantly (shared information) or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID makes it possible to compare these goal functions in a common framework, and also provides a versatile approach to design new goal functions from first principles. Building on this, we design and analyze a novel goal function, called 'coding with synergy', which builds on combining external input and prior knowledge in a synergistic manner. 
We suggest that this novel goal function may be highly useful in neural information processing. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
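    Why classical mutual information falls short of what PID provides can be shown with the standard XOR example: for Y = XOR(X1, X2) with uniform inputs, each input alone carries zero Shannon information about Y, yet together they determine it completely. The sketch below computes only these classical quantities; a full PID additionally requires a redundancy measure, which is beyond this toy.

```python
import numpy as np

def mutual_info(pxy):
    """I(X;Y) in bits from a joint probability table (rows: X, cols: Y)."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# joint tables under uniform inputs for Y = XOR(X1, X2)
p_x1_y = np.full((2, 2), 0.25)        # X1 is independent of Y
p_x2_y = np.full((2, 2), 0.25)        # X2 is independent of Y
p_x12_y = np.zeros((4, 2))            # but (X1, X2) jointly fixes Y
for k, (x1, x2) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    p_x12_y[k, x1 ^ x2] = 0.25

single_infos = (mutual_info(p_x1_y), mutual_info(p_x2_y))  # both zero bits
joint_info = mutual_info(p_x12_y)     # one full bit, carried only jointly
```

    PID labels this one bit "synergy"; pairwise mutual information alone cannot distinguish it from unique or redundant contributions in less extreme mechanisms.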

  17. A critical assessment of theories/models used in health communication for HIV/AIDS.

    PubMed

    Airhihenbuwa, C O; Obregon, R

    2000-01-01

    Most theories and models used to develop human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) communication are based on social psychology that emphasizes individualism. Researchers including communication and health scholars are now questioning the presumed global relevance of these models and thus the need to develop innovative theories and models that take into account regional contexts. In this paper, we discuss the commonly used theories and models in HIV/AIDS communication. Furthermore, we argue that the flaws in the application of the commonly used "classical" models in health communication are because of contextual differences in locations where these models are applied. That is to say that these theories and models are being applied in contexts for which they were not designed. For example, the differences in health behaviors are often the function of culture. Therefore, culture should be viewed for its strength and not always as a barrier. The metaphorical coupling of "culture" and "barrier" needs to be exposed, deconstructed, and reconstructed so that new, positive, cultural linkages can be forged. The HIV/AIDS pandemic has served as a flashpoint to either highlight the importance or deny the relevance of theories and models while at the same time addressing the importance of culture in the development and implementation of communication programs.

  18. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
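    The package itself is in R; purely as an illustration of the underlying idea, the following Python sketch extracts the distances between consecutive successes in a fixed-length sequence of Bernoulli outcomes. The distance convention used here (difference of success positions) is an assumption for illustration and may differ from the package's definition.

```python
def success_distances(outcomes):
    """Distances between consecutive successes (1 = success) in a
    fixed-length sequence of Bernoulli outcomes."""
    positions = [i for i, o in enumerate(outcomes) if o == 1]
    return [b - a for a, b in zip(positions, positions[1:])]

# Clustered successes produce an excess of short distances; evenly
# spread successes do not - the contrast the proposed test detects.
clustered = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1]
spread    = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
print(success_distances(clustered))  # [1, 1, 7, 1, 1]
print(success_distances(spread))     # [3, 3, 3]
```

    An over-representation test like the one the authors propose would then compare the observed count of short distances against the exact null distribution for complete randomness.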

  19. Hybrid density-functional calculations of phonons in LaCoO3

    NASA Astrophysics Data System (ADS)

    Gryaznov, Denis; Evarestov, Robert A.; Maier, Joachim

    2010-12-01

    Phonon frequencies at the Γ point in the nonmagnetic rhombohedral phase of LaCoO3 were calculated using density-functional theory with the hybrid exchange-correlation functional PBE0. The calculations involved a comparison of results for two types of basis functions commonly used in ab initio calculations, namely the plane-wave approach and the linear combination of atomic orbitals, as implemented in the VASP and CRYSTAL computer codes, respectively. Good qualitative agreement, and quantitative agreement within an error margin of less than 30%, was observed not only between the two formalisms but also between theoretical and experimental phonon frequencies. Moreover, the correlation between the phonon symmetries in the cubic and rhombohedral phases is discussed in detail on the basis of group-theoretical analysis. It is concluded that the hybrid PBE0 functional is able to predict correctly the phonon properties of LaCoO3.

  20. Short-range density functional correlation within the restricted active space CI method

    NASA Astrophysics Data System (ADS)

    Casanova, David

    2018-03-01

    In the present work, I introduce a hybrid wave function-density functional theory electronic structure method based on the range separation of the electron-electron Coulomb operator in order to recover dynamic electron correlations missed in the restricted active space configuration interaction (RASCI) methodology. The working equations and the computational algorithm for the implementation of the new approach, i.e., RAS-srDFT, are presented, and the method is tested in the calculation of excitation energies of organic molecules. The good performance of the RASCI wave function in combination with different short-range exchange-correlation functionals in the computation of relative energies represents a quantitative improvement with respect to the RASCI results and paves the path for the development of RAS-srDFT as a promising scheme in the computation of the ground and excited states where nondynamic and dynamic electron correlations are important.
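    Range separation of the Coulomb operator, on which RAS-srDFT rests, is most commonly realized with the error function. As a hedged sketch (the erf/erfc form is the standard choice in the srDFT literature, but whether this paper uses exactly this form, and the μ value below, are illustrative assumptions):

```python
from math import erf, erfc

def coulomb_split(r, mu):
    """Standard erf-based range separation of the Coulomb operator:
    1/r = erf(mu*r)/r (long range, wave-function part)
        + erfc(mu*r)/r (short range, density-functional part)."""
    long_range = erf(mu * r) / r
    short_range = erfc(mu * r) / r
    return long_range, short_range

mu = 0.4  # illustrative range-separation parameter (bohr^-1)
for r in (0.5, 1.0, 5.0):
    lr, sr = coulomb_split(r, mu)
    # The two pieces recombine exactly to the bare Coulomb operator.
    assert abs(lr + sr - 1.0 / r) < 1e-12
    print(r, lr, sr)
```

    The short-range piece is smooth and decays quickly, which is what makes it amenable to a (semi)local functional, while the long-range piece carries the slowly decaying tail treated by the RASCI wave function.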

  1. Reference hypernetted chain theory for ferrofluid bilayer: Distribution functions compared with Monte Carlo

    NASA Astrophysics Data System (ADS)

    Polyakov, Evgeny A.; Vorontsov-Velyaminov, Pavel N.

    2014-08-01

    Properties of a ferrofluid bilayer (modeled as a system of two planar layers separated by a distance h, each layer carrying a soft-sphere dipolar liquid) are calculated in the framework of inhomogeneous Ornstein-Zernike equations with the reference hypernetted chain closure (RHNC). The bridge functions are taken from a soft-sphere (1/r12) reference system in the pressure-consistent closure approximation. In order to make the RHNC problem tractable, the angular dependence of the correlation functions is expanded into special orthogonal polynomials according to Lado. The resulting equations are solved using the Newton-GMRES algorithm as implemented in the public-domain solver NITSOL. Orientational densities and pair distribution functions of dipoles are compared with Monte Carlo simulation results. A numerical algorithm for the Fourier-Hankel transform of any positive integer order on a uniform grid is presented.

  2. Ecosystem services: From theory to implementation

    PubMed Central

    Daily, Gretchen C.; Matson, Pamela A.

    2008-01-01

    Around the world, leaders are increasingly recognizing ecosystems as natural capital assets that supply life-support services of tremendous value. The challenge is to turn this recognition into incentives and institutions that will guide wise investments in natural capital, on a large scale. Advances are required on three key fronts, each featured here: the science of ecosystem production functions and service mapping; the design of appropriate finance, policy, and governance systems; and the art of implementing these in diverse biophysical and social contexts. Scientific understanding of ecosystem production functions is improving rapidly but remains a limiting factor in incorporating natural capital into decisions, via systems of national accounting and other mechanisms. Novel institutional structures are being established for a broad array of services and places, creating a need and opportunity for systematic assessment of their scope and limitations. Finally, it is clear that formal sharing of experience, and defining of priorities for future work, could greatly accelerate the rate of innovation and uptake of new approaches. PMID:18621697

  3. Manual control models of industrial management

    NASA Technical Reports Server (NTRS)

    Crossman, E. R. F. W.

    1972-01-01

    The industrial engineer is often required to design and implement control systems and organization for manufacturing and service facilities, to optimize quality, delivery, and yield, and minimize cost. Despite progress in computer science, most such systems still employ human operators and managers as real-time control elements. Manual control theory should therefore be applicable to at least some aspects of industrial system design and operations. Formulation of adequate model structures is an essential prerequisite to progress in this area, since real-world production systems invariably include multilevel and multiloop control and are implemented by time-shared human effort. A modular structure incorporating certain new types of functional elements has been developed. This forms the basis for analysis of an industrial process operation. In this case it appears that managerial controllers operate in a discrete predictive mode based on fast-time modelling, with sampling interval related to plant dynamics. Successive aggregation causes reduced response bandwidth and hence increased sampling interval as a function of level.

  4. PROFILE: Airfoil Geometry Manipulation and Display. User's Guide

    NASA Technical Reports Server (NTRS)

    Collins, Leslie; Saunders, David

    1997-01-01

    This report provides user information for program PROFILE, an aerodynamics design utility for plotting, tabulating, and manipulating airfoil profiles. A dozen main functions are available. The theory and implementation details for two of the more complex options are also presented. These are the REFINE option, for smoothing curvature in selected regions while retaining or seeking some specified thickness ratio, and the OPTIMIZE option, which seeks a specified curvature distribution. Use of programs QPLOT and BPLOT is also described, since all of the plots provided by PROFILE (airfoil coordinates, curvature distributions, pressure distributions) are achieved via the general-purpose QPLOT utility. BPLOT illustrates (again, via QPLOT) the shape functions used by two of PROFILE's options. These three utilities should be distributed as one package. They were designed and implemented for the Applied Aerodynamics Branch at NASA Ames Research Center, Moffett Field, California. They are all written in FORTRAN 77 and run on DEC and SGI systems under OpenVMS and IRIX.
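    PROFILE itself is FORTRAN 77; as a language-neutral sketch of the discrete curvature that options like REFINE and OPTIMIZE manipulate, the following Python snippet applies the standard parametric curvature formula k = (x'y'' − y'x'') / (x'² + y'²)^{3/2} with central differences (the sampling and tolerance here are illustrative, not taken from PROFILE):

```python
from math import cos, sin, pi

def curvature(xs, ys):
    """Discrete curvature at interior points of a curve given by
    coordinate lists, using central differences."""
    ks = []
    for i in range(1, len(xs) - 1):
        dx  = (xs[i + 1] - xs[i - 1]) / 2
        dy  = (ys[i + 1] - ys[i - 1]) / 2
        ddx = xs[i + 1] - 2 * xs[i] + xs[i - 1]
        ddy = ys[i + 1] - 2 * ys[i] + ys[i - 1]
        ks.append((dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5)
    return ks

# Sanity check on a circle of radius R, whose curvature is exactly 1/R.
R = 2.0
ts = [2 * pi * i / 200 for i in range(201)]
ks = curvature([R * cos(t) for t in ts], [R * sin(t) for t in ts])
print(max(abs(k - 1 / R) for k in ks))  # small discretization error
```

    A smoothing pass like REFINE would then adjust the ordinates so this curvature distribution varies gradually while a thickness constraint is held.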

  5. Ecosystem services: from theory to implementation.

    PubMed

    Daily, Gretchen C; Matson, Pamela A

    2008-07-15

    Around the world, leaders are increasingly recognizing ecosystems as natural capital assets that supply life-support services of tremendous value. The challenge is to turn this recognition into incentives and institutions that will guide wise investments in natural capital, on a large scale. Advances are required on three key fronts, each featured here: the science of ecosystem production functions and service mapping; the design of appropriate finance, policy, and governance systems; and the art of implementing these in diverse biophysical and social contexts. Scientific understanding of ecosystem production functions is improving rapidly but remains a limiting factor in incorporating natural capital into decisions, via systems of national accounting and other mechanisms. Novel institutional structures are being established for a broad array of services and places, creating a need and opportunity for systematic assessment of their scope and limitations. Finally, it is clear that formal sharing of experience, and defining of priorities for future work, could greatly accelerate the rate of innovation and uptake of new approaches.

  6. Implementing large-scale workforce change: learning from 55 pilot sites of allied health workforce redesign in Queensland, Australia

    PubMed Central

    2013-01-01

    Background Increasingly, health workforces are undergoing high-level ‘re-engineering’ to help them better meet the needs of the population, workforce and service delivery. Queensland Health implemented a large scale 5-year workforce redesign program across more than 13 health-care disciplines. This study synthesized the findings from this program to identify and codify mechanisms associated with successful workforce redesign to help inform other large workforce projects. Methods This study used Inductive Logic Reasoning (ILR), a process that uses logic models as the primary functional tool to develop theories of change, which are subsequently validated through proposition testing. Initial theories of change were developed from a systematic review of the literature and synthesized using a logic model. These theories of change were then developed into propositions and subsequently tested empirically against documentary, interview, and survey data from 55 projects in the workforce redesign program. Results Three overarching principles were identified that optimized successful workforce redesign: (1) drivers for change need to be close to practice; (2) contexts need to be supportive both at the local levels and legislatively; and (3) mechanisms should include appropriate engagement, resources to facilitate change management, governance, and support structures. Attendance to these factors was uniformly associated with success of individual projects. Conclusions ILR is a transparent and reproducible method for developing and testing theories of workforce change. Despite the heterogeneity of projects, professions, and approaches used, a consistent set of overarching principles underpinned success of workforce change interventions. These concepts have been operationalized into a workforce change checklist. PMID:24330616

  7. Edgeworth streaming model for redshift space distortions

    NASA Astrophysics Data System (ADS)

    Uhlemann, Cora; Kopp, Michael; Haugg, Thomas

    2015-09-01

    We derive the Edgeworth streaming model (ESM) for the redshift space correlation function starting from an arbitrary distribution function for biased tracers of dark matter by considering its two-point statistics and show that it reduces to the Gaussian streaming model (GSM) when neglecting non-Gaussianities. We test the accuracy of the GSM and ESM independent of perturbation theory using the Horizon Run 2 N-body halo catalog. While the monopole of the redshift space halo correlation function is well described by the GSM, higher multipoles improve upon including the leading order non-Gaussian correction in the ESM: the GSM quadrupole breaks down on scales below 30 Mpc/h whereas the ESM stays accurate to 2% within statistical errors down to 10 Mpc/h. To predict the scale-dependent functions entering the streaming model we employ convolution Lagrangian perturbation theory (CLPT) based on the dust model and local Lagrangian bias. Since dark matter halos carry an intrinsic length scale given by their Lagrangian radius, we extend CLPT to the coarse-grained dust model and consider two different smoothing approaches operating in Eulerian and Lagrangian space, respectively. The coarse graining in Eulerian space features modified fluid dynamics different from dust while the coarse graining in Lagrangian space is performed in the initial conditions with subsequent single-streaming dust dynamics, implemented by smoothing the initial power spectrum in the spirit of the truncated Zel'dovich approximation. Finally, we compare the predictions of the different coarse-grained models for the streaming model ingredients to N-body measurements and comment on the proper choice of both the tracer distribution function and the smoothing scale. Since the perturbative methods we considered are not yet accurate enough on small scales, the GSM is sufficient when applied to perturbation theory.

  8. Implementation of customized health information technology in diabetes self management programs.

    PubMed

    Alexander, Susan; Frith, Karen H; O'Keefe, Louise; Hennigan, Michael A

    2011-01-01

    The project was a nurse-led implementation of a software application, designed to combine clinical and demographic records for a diabetes education program, which would result in secure, long-term record storage. Clinical information systems may be prohibitively expensive for small practices and require extensive training for implementation. A review of the literature suggests that the use of simple, practice-based registries offer an economical method of monitoring the outcomes of diabetic patients. The database was designed using a common software application, Microsoft Access. The theory used to guide implementation and staff training was Rogers' Diffusion of Innovations theory (1995). Outcomes after a 3-month period included incorporation of 100% of new clinical and demographic patient records into the database and positive changes in staff attitudes regarding software applications used in diabetes self-management training. These objectives were met while keeping project costs under budgeted amounts. As a function of the clinical nurse specialist (CNS) researcher role, there is a need for CNSs to identify innovative and economical methods of data collection. The success of this nurse-led project reinforces suggestions in the literature for less costly methods of data maintenance in small practice settings. Ongoing utilization and enhancement have resulted in the creation of a robust database that could aid in the research of multiple clinical issues. Clinical nurse specialists can use existing evidence to guide and improve both their own practice and outcomes for patients and organizations. Further research regarding specific factors that predict efficient transition of informatics applications, how these factors vary according to practice settings, and the role of the CNS in implementation of such applications is needed.

  9. On computing special functions in marine engineering

    NASA Astrophysics Data System (ADS)

    Constantinescu, E.; Bogdan, M.

    2015-11-01

    Important modeling applications in marine engineering lead us to a special class of solutions of difficult differential equations with variable coefficients. In order to solve and implement such models (in wave theory, in acoustics, in hydrodynamics, in electromagnetic waves, but also in many other engineering fields), it is necessary to compute so-called special functions: Bessel functions, modified Bessel functions, spherical Bessel functions, and Hankel functions. The aim of this paper is to develop numerical solutions in Matlab for the above-mentioned special functions. Taking into account the main properties of Bessel and modified Bessel functions, we briefly present analytical solutions (where possible) in the form of series. In particular, we study the behavior of these special functions using Matlab facilities: numerical solutions and plotting. Finally, we compare the behavior of the special functions and point out further directions for investigating properties of Bessel and spherical Bessel functions. The asymptotic forms of Bessel functions and modified Bessel functions allow determination of important properties of these functions. The modified Bessel functions tend to look like decaying and growing exponentials.
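    The series solutions mentioned above are straightforward to evaluate numerically. The paper works in Matlab; as a self-contained illustration, here is a Python sketch of the order-0 Bessel and modified Bessel functions from their standard power series, J0(x) = Σ (−1)^k (x/2)^{2k} / (k!)² and I0(x) = Σ (x/2)^{2k} / (k!)² (truncated series, adequate for moderate arguments):

```python
from math import factorial

def bessel_j0(x, terms=30):
    """Bessel function of the first kind, order 0, via its power series."""
    return sum((-1) ** k * (x / 2) ** (2 * k) / factorial(k) ** 2
               for k in range(terms))

def bessel_i0(x, terms=30):
    """Modified Bessel function of the first kind, order 0."""
    return sum((x / 2) ** (2 * k) / factorial(k) ** 2
               for k in range(terms))

print(round(bessel_j0(1.0), 10))  # ≈ 0.7651976866
print(round(bessel_i0(1.0), 10))  # ≈ 1.2660658778
# As noted above: J0 decays and oscillates for large x, while I0 grows
# like an exponential, roughly e^x / sqrt(2*pi*x).
```

    For large arguments the truncated series loses accuracy through cancellation, which is exactly where the asymptotic forms discussed in the paper take over.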

  10. Microscopically based energy density functionals for nuclei using the density matrix expansion: Implementation and pre-optimization

    NASA Astrophysics Data System (ADS)

    Stoitsov, M.; Kortelainen, M.; Bogner, S. K.; Duguet, T.; Furnstahl, R. J.; Gebremariam, B.; Schunck, N.

    2010-11-01

    In a recent series of articles, Gebremariam, Bogner, and Duguet derived a microscopically based nuclear energy density functional by applying the density matrix expansion (DME) to the Hartree-Fock energy obtained from chiral effective field theory two- and three-nucleon interactions. Owing to the structure of the chiral interactions, each coupling in the DME functional is given as the sum of a coupling constant arising from zero-range contact interactions and a coupling function of the density arising from the finite-range pion exchanges. Because the contact contributions have essentially the same structure as those entering empirical Skyrme functionals, a microscopically guided Skyrme phenomenology has been suggested in which the contact terms in the DME functional are released for optimization to finite-density observables to capture short-range correlation energy contributions from beyond Hartree-Fock. The present article is the first attempt to assess the ability of the newly suggested DME functional, which has a much richer set of density dependencies than traditional Skyrme functionals, to generate sensible and stable results for nuclear applications. The results of the first proof-of-principle calculations are given, and numerous practical issues related to the implementation of the new functional in existing Skyrme codes are discussed. Using a restricted singular value decomposition optimization procedure, it is found that the new DME functional gives numerically stable results and exhibits a small but systematic reduction of our test χ2 function compared to standard Skyrme functionals, thus justifying its suitability for future global optimizations and large-scale calculations.

  11. The running coupling of the minimal sextet composite Higgs model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fodor, Zoltan; Holland, Kieran; Kuti, Julius

    We compute the renormalized running coupling of SU(3) gauge theory coupled to N f = 2 flavors of massless Dirac fermions in the 2-index-symmetric (sextet) representation. This model is of particular interest as a minimal realization of the strongly interacting composite Higgs scenario. A recently proposed finite volume gradient flow scheme is used. The calculations are performed at several lattice spacings with two different implementations of the gradient flow allowing for a controlled continuum extrapolation and particular attention is paid to estimating the systematic uncertainties. For small values of the renormalized coupling our results for the β-function agree with perturbation theory. For moderate couplings we observe a downward deviation relative to the 2-loop β-function but in the coupling range where the continuum extrapolation is fully under control we do not observe an infrared fixed point. The explored range includes the locations of the zero of the 3-loop and the 4-loop β-functions in the $\overline{MS}$ scheme. The absence of a non-trivial zero in the β-function in the explored range of the coupling is consistent with our earlier findings based on hadronic observables, the chiral condensate and the GMOR relation. The present work is the first to report continuum non-perturbative results for the sextet model.

  12. A Theory of Secondary Teachers' Adaptations When Implementing a Reading Intervention Program

    ERIC Educational Resources Information Center

    Leko, Melinda M.; Roberts, Carly A.; Pek, Yvonne

    2015-01-01

    This study examined the causes and consequences of secondary teachers' adaptations when implementing a research-based reading intervention program. Interview, observation, and artifact data were collected on five middle school intervention teachers, leading to a grounded theory composed of the core component, reconciliation through adaptation, and…

  13. Structure and properties of fullerene molecular crystals with linear-scaling van der Waals density functional theory

    NASA Astrophysics Data System (ADS)

    Mostofi, Arash; Andrinopoulos, Lampros; Hine, Nicholas

    2014-03-01

    Fullerene molecular crystals are of technological promise for their use in heterojunction photovoltaic cells. An improved theoretical understanding of their structure and properties would be a step towards the rational design of new devices. Simulations based on density-functional theory (DFT) are invaluable for developing such insight, but standard semi-local functionals do not capture the important inter-molecular van der Waals (vdW) interactions in fullerene crystals. Furthermore the computational cost associated with the large unit cells needed are at the limit or beyond the capabilities of traditional DFT methods. In this work we overcome these limitations by using our implementation of a number of vdW-DFs in the ONETEP linear-scaling DFT code to study the structural properties of C60 molecular crystals. Powder neutron diffraction shows that the low-temperature Pa-3 phase is orientationally ordered with individual C60 units rotated around the [111] direction. We fully explore the energy landscape associated with the rotation angle and find two stable structures that are energetically very close, one of which corresponds to the experimentally observed structure. We further consider the effect of orientational disorder in very large supercells of thousands of atoms.

  14. First-principles study of paraelectric and ferroelectric CsH2PO4 including dispersion forces: Stability and related vibrational, dielectric, and elastic properties

    NASA Astrophysics Data System (ADS)

    Van Troeye, Benoit; van Setten, Michiel Jan; Giantomassi, Matteo; Torrent, Marc; Rignanese, Gian-Marco; Gonze, Xavier

    2017-01-01

    Using density functional theory (DFT) and density functional perturbation theory (DFPT), we investigate the stability and response functions of CsH2PO4 , a ferroelectric material at low temperature. This material cannot be described properly by the usual (semi)local approximations within DFT. The long-range e--e- correlation needs to be properly taken into account, using, for instance, Grimme's DFT-D methods, as investigated in this work. We find that DFT-D3(BJ) performs the best for the members of the dihydrogenated alkali phosphate family (KH2PO4 , RbH2PO4 , CsH2PO4 ), leading to experimental lattice parameters reproduced with an average deviation of 0.5%. With these DFT-D methods, the structural, dielectric, vibrational, and mechanical properties of CsH2PO4 are globally in excellent agreement with the available experiments (<2 % MAPE for Raman-active phonons). Our study suggests the possible existence of a new low-temperature phase of CsH2PO4 , not yet reported experimentally. Finally, we report the implementation of DFT-D contributions to elastic constants within DFPT.

  15. Tensor Algebra Library for NVidia Graphics Processing Units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liakh, Dmitry

    This is a general purpose math library implementing basic tensor algebra operations on NVidia GPU accelerators. This software is a tensor algebra library that can perform basic tensor algebra operations, including tensor contractions, tensor products, tensor additions, etc., on NVidia GPU accelerators, asynchronously with respect to the CPU host. It supports a simultaneous use of multiple NVidia GPUs. Each asynchronous API function returns a handle which can later be used for querying the completion of the corresponding tensor algebra operation on a specific GPU. The tensors participating in a particular tensor operation are assumed to be stored in local RAM of a node or GPU RAM. The main research area where this library can be utilized is the quantum many-body theory (e.g., in electronic structure theory).
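    The library's actual API (asynchronous C calls returning completion handles) is not reproduced here. Purely to illustrate the arithmetic a single tensor contraction performs, the kind of operation such a library offloads to the GPU, a minimal plain-Python sketch:

```python
def contract_3_2(a, b):
    """C[i][j][l] = sum_k A[i][j][k] * B[k][l]: contraction of a rank-3
    tensor with a rank-2 tensor over one shared index, the basic
    operation a GPU tensor-algebra library accelerates."""
    I, J, K = len(a), len(a[0]), len(a[0][0])
    L = len(b[0])
    return [[[sum(a[i][j][k] * b[k][l] for k in range(K))
              for l in range(L)] for j in range(J)] for i in range(I)]

A = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]  # shape (2, 2, 2)
B = [[1, 0], [0, 1]]                      # identity, shape (2, 2)
print(contract_3_2(A, B))  # contracting with the identity returns A
```

    In electronic-structure applications such contractions appear with much higher ranks and dimensions (e.g. two-electron integral tensors), which is why offloading them asynchronously to one or more GPUs pays off.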

  16. Vapor-liquid equilibrium and equation of state of two-dimensional fluids from a discrete perturbation theory

    NASA Astrophysics Data System (ADS)

    Trejos, Víctor M.; Santos, Andrés; Gámez, Francisco

    2018-05-01

    The interest in the description of the properties of fluids of restricted dimensionality is growing for theoretical and practical reasons. In this work, we first develop an analytical expression for the Helmholtz free energy of the two-dimensional square-well fluid in the Barker-Henderson framework. This equation of state is based on an approximate analytical radial distribution function for d-dimensional hard-sphere fluids (1 ≤ d ≤ 3) and is validated against existing and new simulation results. The resulting equation of state is then implemented in a discrete perturbation theory able to account for general potential shapes. The prototypical Lennard-Jones and Yukawa fluids are tested in their two-dimensional versions against available and new simulation data with semiquantitative agreement.

  17. Practical auxiliary basis implementation of Rung 3.5 functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janesko, Benjamin G., E-mail: b.janesko@tcu.edu; Scalmani, Giovanni; Frisch, Michael J.

    2014-07-21

    Approximate exchange-correlation functionals for Kohn-Sham density functional theory often benefit from incorporating exact exchange. Exact exchange is constructed from the noninteracting reference system's nonlocal one-particle density matrix γ(r, r′). Rung 3.5 functionals attempt to balance the strengths and limitations of exact exchange using a new ingredient, a projection of γ(r, r′) onto a semilocal model density matrix γ_SL(ρ(r), ∇ρ(r), r − r′). γ_SL depends on the electron density ρ(r) at reference point r, and is closely related to semilocal model exchange holes. We present a practical implementation of Rung 3.5 functionals, expanding the r − r′ dependence of γ_SL in an auxiliary basis set. Energies and energy derivatives are obtained from 3D numerical integration as in standard semilocal functionals. We also present numerical tests of a range of properties, including molecular thermochemistry and kinetics, geometries and vibrational frequencies, and bandgaps and excitation energies. Rung 3.5 functionals typically provide accuracy intermediate between semilocal and hybrid approximations. Nonlocal potential contributions from γ_SL yield interesting successes and failures for band structures and excitation energies. The results enable and motivate continued exploration of Rung 3.5 functional forms.

  18. A systematic review of the use of theory in randomized controlled trials of audit and feedback

    PubMed Central

    2013-01-01

    Background Audit and feedback is one of the most widely used and promising interventions in implementation research, yet also one of the most variably effective. Understanding this variability has been limited in part by lack of attention to the theoretical and conceptual basis underlying audit and feedback. Examining the extent of theory use in studies of audit and feedback will yield better understanding of the causal pathways of audit and feedback effectiveness and inform efforts to optimize this important intervention. Methods A total of 140 studies in the 2012 Cochrane update on audit and feedback interventions were independently reviewed by two investigators. Variables were extracted related to theory use in the study design, measurement, implementation or interpretation. Theory name, associated reference, and the location of theory use as reported in the study were extracted. Theories were organized by type (e.g., education, diffusion, organization, psychology), and theory utilization was classified into seven categories (justification, intervention design, pilot testing, evaluation, predictions, post hoc, other). Results A total of 20 studies (14%) reported use of theory in any aspect of the study design, measurement, implementation or interpretation. In only 13 studies (9%) was a theory reportedly used to inform development of the intervention. A total of 18 different theories across educational, psychological, organizational and diffusion of innovation perspectives were identified. Rogers’ Diffusion of Innovations and Bandura’s Social Cognitive Theory were the most widely used (3.6% and 3%, respectively). Conclusions The explicit use of theory in studies of audit and feedback was rare. A range of theories was found, but not consistency of theory use. Advancing our understanding of audit and feedback will require more attention to theoretically informed studies and intervention design. PMID:23759034

  19. A systematic review of the use of theory in randomized controlled trials of audit and feedback.

    PubMed

    Colquhoun, Heather L; Brehaut, Jamie C; Sales, Anne; Ivers, Noah; Grimshaw, Jeremy; Michie, Susan; Carroll, Kelly; Chalifoux, Mathieu; Eva, Kevin W

    2013-06-10

    Audit and feedback is one of the most widely used and promising interventions in implementation research, yet also one of the most variably effective. Understanding this variability has been limited in part by lack of attention to the theoretical and conceptual basis underlying audit and feedback. Examining the extent of theory use in studies of audit and feedback will yield better understanding of the causal pathways of audit and feedback effectiveness and inform efforts to optimize this important intervention. A total of 140 studies in the 2012 Cochrane update on audit and feedback interventions were independently reviewed by two investigators. Variables were extracted related to theory use in the study design, measurement, implementation or interpretation. Theory name, associated reference, and the location of theory use as reported in the study were extracted. Theories were organized by type (e.g., education, diffusion, organization, psychology), and theory utilization was classified into seven categories (justification, intervention design, pilot testing, evaluation, predictions, post hoc, other). A total of 20 studies (14%) reported use of theory in any aspect of the study design, measurement, implementation or interpretation. In only 13 studies (9%) was a theory reportedly used to inform development of the intervention. A total of 18 different theories across educational, psychological, organizational and diffusion of innovation perspectives were identified. Rogers' Diffusion of Innovations and Bandura's Social Cognitive Theory were the most widely used (3.6% and 3%, respectively). The explicit use of theory in studies of audit and feedback was rare. A range of theories was found, but not consistency of theory use. Advancing our understanding of audit and feedback will require more attention to theoretically informed studies and intervention design.

  20. Working memory costs of task switching.

    PubMed

    Liefooghe, Baptist; Barrouillet, Pierre; Vandierendonck, André; Camos, Valérie

    2008-05-01

    Although many accounts of task switching emphasize the importance of working memory as a substantial source of the switch cost, there is a lack of evidence demonstrating that task switching actually places additional demands on working memory. The present study addressed this issue by implementing task switching in continuous complex span tasks with strictly controlled time parameters. A series of 4 experiments demonstrates that recall performance decreased as a function of the number of task switches and that the concurrent load of item maintenance had no influence on task switching. These results indicate that task switching induces a cost on working memory functioning. Implications for theories of task switching, working memory, and resource sharing are addressed.

  1. Crystal-field splittings in rare-earth-based hard magnets: An ab initio approach

    NASA Astrophysics Data System (ADS)

    Delange, Pascal; Biermann, Silke; Miyake, Takashi; Pourovskii, Leonid

    2017-10-01

    We apply the first-principles density functional theory + dynamical mean-field theory framework to evaluate the crystal-field splitting on rare-earth sites in hard magnetic intermetallics. An atomic (Hubbard-I) approximation is employed for local correlations on the rare-earth 4f shell and self-consistency in the charge density is implemented. We reduce the density functional theory self-interaction contribution to the crystal-field splitting by properly averaging the 4f charge density before recalculating the one-electron Kohn-Sham potential. Our approach is shown to reproduce the experimental crystal-field splitting in the prototypical rare-earth hard magnet SmCo5. Applying it to RFe12 and RFe12X hard magnets (R = Nd, Sm and X = N, Li), we obtain in particular a large positive value of the crystal-field parameter A_2^0⟨r^2⟩ in NdFe12N, resulting in the strong out-of-plane anisotropy observed experimentally. The sign of A_2^0⟨r^2⟩ is predicted to be reversed by substituting N with Li, leading to a strong out-of-plane anisotropy in SmFe12Li. We discuss the origin of this strong impact of N and Li interstitials on the crystal-field splitting on rare-earth sites.
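
    As background (a textbook parameterization, not an equation quoted from the paper), the crystal-field Hamiltonian on a rare-earth site is conventionally written in Stevens-operator form, with A_2^0⟨r^2⟩ the leading coefficient discussed above:

```latex
H_{\mathrm{CF}} \;=\; \sum_{l=2,4,6}\ \sum_{m=-l}^{l} A_l^m \,\langle r^l \rangle \,\theta_l\, \hat{O}_l^m
```

    For the leading l = 2, m = 0 term, the product of A_2^0⟨r^2⟩ with the Stevens factor θ_2, which has opposite signs for Nd³⁺ and Sm³⁺, sets the easy-axis versus easy-plane character; this is why reversing the sign of A_2^0⟨r^2⟩ shifts the strong out-of-plane anisotropy from the Nd to the Sm compound.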

  2. Communication: Recovering the flat-plane condition in electronic structure theory at semi-local DFT cost

    NASA Astrophysics Data System (ADS)

    Bajaj, Akash; Janet, Jon Paul; Kulik, Heather J.

    2017-11-01

    The flat-plane condition is the union of two exact constraints in electronic structure theory: (i) energetic piecewise linearity with fractional electron removal or addition and (ii) invariant energetics with change in electron spin in a half-filled orbital. Semi-local density functional theory (DFT) fails to recover the flat plane, exhibiting convex fractional charge errors (FCE) and concave fractional spin errors (FSE) that are related to delocalization and static correlation errors. We previously showed that DFT+U eliminates FCE but now demonstrate that, like other widely employed corrections (i.e., Hartree-Fock exchange), it worsens FSE. To find an alternative strategy, we examine the shape of semi-local DFT deviations from the exact flat plane and we find this shape to be remarkably consistent across ions and molecules. We introduce the judiciously modified DFT (jmDFT) approach, wherein corrections are constructed from few-parameter, low-order functional forms that fit the shape of semi-local DFT errors. We select one such physically intuitive form and incorporate it self-consistently to correct semi-local DFT. We demonstrate on model systems that jmDFT represents the first easy-to-implement, no-overhead approach to recovering the flat plane from semi-local DFT.
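
    The exact conditions themselves are simple to state in code. The sketch below (illustrative only; it is not the jmDFT correction) evaluates the flat-plane reference energy for a fractional electron count and the signed deviation of an approximate functional from it, whose sign distinguishes convex FCE-like from concave FSE-like behavior:

```python
def exact_fractional_energy(e_n0, e_n1, omega):
    """Exact piecewise-linear energy at electron number N0 + omega (0 <= omega <= 1):
    a straight line between the integer-electron energies E(N0) and E(N0 + 1)."""
    return (1.0 - omega) * e_n0 + omega * e_n1

def flat_plane_deviation(e_approx, e_n0, e_n1, omega):
    """Signed deviation of an approximate functional's fractional-charge energy
    from the exact flat plane: > 0 indicates convex (FCE-like) behavior,
    < 0 indicates concave (FSE-like) behavior."""
    return e_approx - exact_fractional_energy(e_n0, e_n1, omega)
```

    For example, with hypothetical integer-point energies E(N0) = -14.0 and E(N0+1) = -13.0 hartree, the exact energy at omega = 0.25 is -13.75; an approximate energy of -13.5 there signals a convex, delocalization-type error.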

  3. Functional Programming in Computer Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Loren James; Davis, Marion Kei

    We explore functional programming through a 16-week internship at Los Alamos National Laboratory. Functional programming is a branch of computer science that has exploded in popularity over the past decade due to its high-level syntax, ease of parallelization, and abundant applications. First, we summarize functional programming by listing the advantages of functional programming languages over the usual imperative languages, and we introduce the concept of parsing. Second, we discuss the importance of lambda calculus in the theory of functional programming. Lambda calculus was invented by Alonzo Church in the 1930s to formalize the concept of effective computability, and every functional language is essentially some implementation of lambda calculus. Finally, we display the lasting products of the internship: additions to a compiler and runtime system for the pure functional language STG, including both a set of tests that indicate the validity of updates to the compiler and a compiler pass that checks for illegal instances of duplicate names.

  4. Practical application of game theory based production flow planning method in virtual manufacturing networks

    NASA Astrophysics Data System (ADS)

    Olender, M.; Krenczyk, D.

    2016-08-01

    Modern enterprises have to react quickly to dynamic changes in the market caused by changing customer requirements and expectations. One of the key areas of production management, which must continuously evolve through the search for new methods and tools for increasing the efficiency of manufacturing systems, is production flow planning and control. These aspects are closely connected with the ability to implement the concepts of the Virtual Enterprise (VE) and the Virtual Manufacturing Network (VMN), in which an integrated infrastructure of flexible resources is created. In the proposed approach, the role of players is performed by objects associated with objective functions, allowing multiobjective production flow planning problems to be solved using game theory, which builds on the theory of strategic situations. For defined production system and production order models, ways of solving the production route planning problem in a VMN are presented through computational examples for different variants of production flow. Possible decision strategies, together with an analysis of the calculation results, are shown.
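
    The abstract does not spell out the algorithm, but the game-theoretic core of such planning can be sketched as a brute-force search for pure-strategy Nash equilibria of a two-player bimatrix game, where each player might represent an objective function (e.g., cost and makespan) choosing among candidate production routes. All names and payoff values below are hypothetical.

```python
import numpy as np

def pure_nash_equilibria(payoff_a, payoff_b):
    """Return all pure-strategy Nash equilibria (i, j) of a bimatrix game:
    player A picks row i, player B picks column j, and neither can improve
    its own payoff by deviating unilaterally."""
    equilibria = []
    rows, cols = payoff_a.shape
    for i in range(rows):
        for j in range(cols):
            best_for_a = payoff_a[i, j] >= payoff_a[:, j].max()  # A cannot gain by changing row
            best_for_b = payoff_b[i, j] >= payoff_b[i, :].max()  # B cannot gain by changing column
            if best_for_a and best_for_b:
                equilibria.append((i, j))
    return equilibria

# Hypothetical 2x2 example: rows/columns are alternative production routes.
route_cost = np.array([[-1, -3], [0, -2]])   # player A's payoffs
route_time = np.array([[-1, 0], [-3, -2]])   # player B's payoffs
print(pure_nash_equilibria(route_cost, route_time))  # [(1, 1)]
```

    Brute force is adequate here because the strategy sets (candidate routes) are small; richer formulations would use mixed strategies or repeated games.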

  5. Dissipative time-dependent quantum transport theory.

    PubMed

    Zhang, Yu; Yam, Chi Yung; Chen, GuanHua

    2013-04-28

    A dissipative time-dependent quantum transport theory is developed to treat the transient current through molecular or nanoscopic devices in the presence of electron-phonon interaction. The dissipation via phonons is taken into account by introducing a self-energy for the electron-phonon coupling in addition to the self-energy caused by the electrodes. Based on this, a numerical method is proposed. For practical implementation, the lowest-order expansion is employed for the weak electron-phonon coupling case and the wide-band limit approximation is adopted for the device-electrode coupling. The corresponding hierarchical equation of motion is derived, which leads to an efficient and accurate time-dependent treatment of inelastic effects on transport for weak electron-phonon interaction. The resulting method is applied to a one-level model system and to a gold wire described by a tight-binding model to demonstrate its validity and the importance of electron-phonon interaction for quantum transport. As it is based on the effective single-electron model, the method can be readily extended to time-dependent density functional theory.

  6. Use of the Lorentz-operator in relativistic quantum mechanics to guarantee a single-energy root

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, A B

    1998-08-01

    The Lorentz-operator form of relativistic quantum mechanics, with relativistic wave equation $i\hbar\,\partial\psi/\partial t = (mc^2\gamma + e\Phi)\psi$, is implemented to guarantee a single-energy root. The Lorentz factor as modified by Pauli's ansatz is given by $\gamma = \sqrt{1 + [\vec{\sigma}\cdot(i\hbar\vec{\nabla} + (e/c)\vec{A})]^2/m^2c^2}$, such that the theory is appropriate for electrons. Magnetic fine structure in the Lorentz relativistic wave equation emerges on the use of an appropriate operator form of the Lienard-Wiechert four-potential $(\Phi, \vec{A})$ from electromagnetic theory. Although computationally more intensive, the theory has the advantages of eliminating the negative-energy root and of admitting an interpretation of the wave function based on a one-particle, positive-definite probability density like that of nonrelativistic quantum mechanics.

  7. Coupling density functional theory to polarizable force fields for efficient and accurate Hamiltonian molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Schwörer, Magnus; Breitenfeld, Benedikt; Tröster, Philipp; Bauer, Sebastian; Lorenzen, Konstantin; Tavan, Paul; Mathias, Gerald

    2013-06-01

    Hybrid molecular dynamics (MD) simulations, in which the forces acting on the atoms are calculated by grid-based density functional theory (DFT) for a solute molecule and by a polarizable molecular mechanics (PMM) force field for a large solvent environment composed of several 10^3-10^5 molecules, pose a challenge. A corresponding computational approach should guarantee energy conservation, exclude artificial distortions of the electron density at the interface between the DFT and PMM fragments, and should treat the long-range electrostatic interactions within the hybrid simulation system in a linearly scaling fashion. Here we describe a corresponding Hamiltonian DFT/(P)MM implementation, which accounts for inducible atomic dipoles of a PMM environment in a joint DFT/PMM self-consistency iteration. The long-range parts of the electrostatics are treated by hierarchically nested fast multipole expansions up to a maximum distance dictated by the minimum image convention of toroidal boundary conditions and, beyond that distance, by a reaction field approach such that the computation scales linearly with the number of PMM atoms. Short-range over-polarization artifacts are excluded by using Gaussian inducible dipoles throughout the system and Gaussian partial charges in the PMM region close to the DFT fragment. The Hamiltonian character, the stability, and efficiency of the implementation are investigated by hybrid DFT/PMM-MD simulations treating one molecule of the water dimer and of bulk water by DFT and the respective remainder by PMM.
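
    The PMM self-consistency step has a simple core that can be sketched independently of the paper's machinery. The loop below iterates classical induced point dipoles mu_i = alpha_i (E_ext,i + sum_{j != i} T_ij mu_j) to convergence; note this is a bare point-dipole sketch in arbitrary units, whereas the actual implementation uses Gaussian inducible dipoles and linearly scaling multipole summation.

```python
import numpy as np

def induced_dipoles(pos, alpha, e_ext, tol=1e-10, max_iter=500):
    """Self-consistent induced point dipoles in an external field (arbitrary units).
    pos: (n, 3) positions; alpha: (n,) isotropic polarizabilities; e_ext: (n, 3) fields."""
    n = len(pos)
    mu = alpha[:, None] * e_ext          # zeroth-order guess: no mutual polarization
    for _ in range(max_iter):
        field = e_ext.copy()
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                r = pos[i] - pos[j]
                d = np.linalg.norm(r)
                # field of point dipole mu_j at site i: (3 r (r.mu_j) - d^2 mu_j) / d^5
                field[i] += (3.0 * r * (r @ mu[j]) - d**2 * mu[j]) / d**5
        new_mu = alpha[:, None] * field
        if np.abs(new_mu - mu).max() < tol:
            return new_mu
        mu = new_mu
    return mu
```

    Two well-separated sites recover mu = alpha * E_ext; at short range, the mutual-polarization terms dominate, which is the over-polarization regime the Gaussian-dipole treatment described above is designed to tame.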

  8. Polarizable embedding with a multiconfiguration short-range density functional theory linear response method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hedegård, Erik Donovan; Olsen, Jógvan Magnus Haugaard

    2015-03-21

    We present here the coupling of a polarizable embedding (PE) model to the recently developed multiconfiguration short-range density functional theory method (MC-srDFT), which can treat multiconfigurational systems with a simultaneous account for dynamical and static correlation effects. PE-MC-srDFT is designed to combine efficient treatment of complicated electronic structures with inclusion of effects from the surrounding environment. The environmental effects encompass classical electrostatic interactions as well as polarization of both the quantum region and the environment. Using response theory, molecular properties such as excitation energies and oscillator strengths can be obtained. The PE-MC-srDFT method and the additional terms required for linear response have been implemented in a development version of DALTON. To benchmark the PE-MC-srDFT approach against the literature data, we have investigated the low-lying electronic excitations of acetone and uracil, both immersed in water solution. The PE-MC-srDFT results are consistent and accurate, both in terms of the calculated solvent shift and, unlike regular PE-MCSCF, also with respect to the individual absolute excitation energies. To demonstrate the capabilities of PE-MC-srDFT, we also investigated the retinylidene Schiff base chromophore embedded in the channelrhodopsin protein. While using a much more compact reference wave function in terms of active space, our PE-MC-srDFT approach yields excitation energies comparable in quality to CASSCF/CASPT2 benchmarks.

  9. Neurite, a Finite Difference Large Scale Parallel Program for the Simulation of Electrical Signal Propagation in Neurites under Mechanical Loading

    PubMed Central

    García-Grajales, Julián A.; Rucabado, Gabriel; García-Dopico, Antonio; Peña, José-María; Jérusalem, Antoine

    2015-01-01

    With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts, cell damage is generally only characterized by purely mechanistic criteria, functions of quantities such as stress, strain or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has been rarely explored. In particular, a quantitative mechanically based model of electrophysiological impairment in neuronal cells, Neurite, has only very recently been proposed. In this paper, we present the implementation details of this model: a finite difference parallel program for simulating electrical signal propagation along neurites under mechanical loading. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite is able to simulate the resulting neuronal electrical signal propagation, and thus the corresponding functional deficits. The simulation of the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations that increase in complexity as the network of the simulated cells grows. The solvers implemented in Neurite—explicit and implicit—were therefore parallelized using graphics processing units in order to reduce the burden of the simulation costs of large scale scenarios. Cable Theory and Hodgkin-Huxley models were implemented to account for the electrophysiological passive and active regions of a neurite, respectively, whereas a coupled mechanical model accounting for the neurite mechanical behavior within its surrounding medium was adopted as a link between electrophysiology and mechanics. 
This paper provides the details of the parallel implementation of Neurite, along with three different application examples: a long myelinated axon, a segmented dendritic tree, and a damaged axon. The capabilities of the program to deal with large scale scenarios, segmented neuronal structures, and functional deficits under mechanical loading are specifically highlighted. PMID:25680098
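
    The passive (cable-theory) part of such a solver can be sketched as an explicit finite-difference scheme for tau dV/dt = lambda^2 d2V/dx2 - V, with V the membrane potential deviation from rest. The parameters below are illustrative and this is not Neurite's code; active Hodgkin-Huxley segments would add voltage-gated ionic currents to the same update.

```python
import numpy as np

def simulate_passive_cable(n=200, dx=1e-3, dt=1e-5, t_end=0.02,
                           tau=0.01, lam=1e-2, v_inj=20.0):
    """Explicit finite-difference solution of the passive cable equation
    tau dV/dt = lam^2 d2V/dx2 - V  (V in mV, x in m, t in s; illustrative values).
    The left end is voltage-clamped at v_inj; the right end is sealed."""
    v = np.zeros(n)
    for _ in range(int(t_end / dt)):
        lap = np.zeros_like(v)
        lap[1:-1] = (v[:-2] - 2.0 * v[1:-1] + v[2:]) / dx**2
        lap[-1] = (v[-2] - v[-1]) / dx**2       # sealed-end (zero-flux) boundary
        v = v + (dt / tau) * (lam**2 * lap - v)
        v[0] = v_inj                            # clamped injection site
    return v
```

    Stability of the explicit scheme requires (dt/tau) * lam^2 / dx^2 <= 1/2 (0.1 with these values), and at steady state the profile decays as exp(-x/lam), i.e., over the electrotonic length.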

  10. Combining Vision with Voice: A Learning and Implementation Structure Promoting Teachers' Internalization of Practices Based on Self-Determination Theory

    ERIC Educational Resources Information Center

    Assor, Avi; Kaplan, Haya; Feinberg, Ofra; Tal, Karen

    2009-01-01

    We propose that self-determination theory's conceptualization of internalization may help school reformers overcome the recurrent problem of "the predictable failure of educational reform" (Sarason, 1993). Accordingly, we present a detailed learning and implementation structure to promote teachers' internalization and application of ideas and…

  11. Activity Theory as a Framework for Investigating District-Classroom System Interactions and Their Influences on Technology Integration

    ERIC Educational Resources Information Center

    Anthony, Anika Ball

    2012-01-01

    Technology implementation research indicates that teachers' beliefs and knowledge, as well as a host of institutional factors, can influence technology integration. Drawing on third-generation activity theory, this article conceptualizes technology implementation as a network of planning and integration activities carried out by technology…

  12. You've Shown the Program Model Is Effective. Now What?

    ERIC Educational Resources Information Center

    Ellickson, Phyllis L.

    2014-01-01

    Rigorous tests of theory-based programs require faithful implementation. Otherwise, lack of results might be attributable to faulty program delivery, faulty theory, or both. However, once the evidence indicates the model works and merits broader dissemination, implementation issues do not fade away. How can developers enhance the likelihood that…

  13. A Unifying Theory of Biological Function.

    PubMed

    van Hateren, J H

    2017-01-01

    A new theory that naturalizes biological function is explained and compared with earlier etiological and causal role theories. Etiological (or selected effects) theories explain functions from how they are caused over their evolutionary history. Causal role theories analyze how functional mechanisms serve the current capacities of their containing system. The new proposal unifies the key notions of both kinds of theories, but goes beyond them by explaining how functions in an organism can exist as factors with autonomous causal efficacy. The goal-directedness and normativity of functions exist in this strict sense as well. The theory depends on an internal physiological or neural process that mimics an organism's fitness, and modulates the organism's variability accordingly. The structure of the internal process can be subdivided into subprocesses that monitor specific functions in an organism. The theory matches each intuition on a previously published list of intuitions about biological functions, including those that have posed difficulties for other theories.

  14. From Classification to Causality: Advancing Understanding of Mechanisms of Change in Implementation Science.

    PubMed

    Lewis, Cara C; Klasnja, Predrag; Powell, Byron J; Lyon, Aaron R; Tuzzio, Leah; Jones, Salene; Walsh-Bailey, Callie; Weiner, Bryan

    2018-01-01

    The science of implementation has offered little toward understanding how different implementation strategies work. To improve outcomes of implementation efforts, the field needs precise, testable theories that describe the causal pathways through which implementation strategies function. In this perspective piece, we describe a four-step approach to developing causal pathway models for implementation strategies. First, it is important to ensure that implementation strategies are appropriately specified. Some strategies in published compilations are well defined but are not specified in terms of their core components, the parts that can have a reliable and measurable impact. Second, linkages between strategies and mechanisms need to be generated. Existing compilations do not offer mechanisms by which strategies act, or the processes or events through which an implementation strategy operates to affect desired implementation outcomes. Third, it is critical to identify the proximal and distal outcomes the strategy is theorized to impact, with the former being direct, measurable products of the strategy and the latter being one of eight implementation outcomes (1). Finally, articulating effect modifiers, like preconditions and moderators, allows for an understanding of where, when, and why strategies have an effect on outcomes of interest. We argue for greater precision in the use of terms for factors implicated in implementation processes; development of guidelines for selecting research designs and study plans that account for practical constructs and allow for the study of mechanisms; psychometrically strong and pragmatic measures of mechanisms; and more robust curation of evidence for knowledge transfer and use.

  15. Accurate calculation and modeling of the adiabatic connection in density functional theory

    NASA Astrophysics Data System (ADS)

    Teale, A. M.; Coriani, S.; Helgaker, T.

    2010-04-01

    Using a recently implemented technique for the calculation of the adiabatic connection (AC) of density functional theory (DFT) based on Lieb maximization with respect to the external potential, the AC is studied for atoms and molecules containing up to ten electrons: the helium isoelectronic series, the hydrogen molecule, the beryllium isoelectronic series, the neon atom, and the water molecule. The calculation of AC curves by Lieb maximization at various levels of electronic-structure theory is discussed. For each system, the AC curve is calculated using Hartree-Fock (HF) theory, second-order Møller-Plesset (MP2) theory, coupled-cluster singles-and-doubles (CCSD) theory, and coupled-cluster singles-doubles-perturbative-triples [CCSD(T)] theory, expanding the molecular orbitals and the effective external potential in large Gaussian basis sets. The HF AC curve includes a small correlation-energy contribution in the context of DFT, arising from orbital relaxation as the electron-electron interaction is switched on under the constraint that the wave function is always a single determinant. The MP2 and CCSD AC curves recover the bulk of the dynamical correlation energy and their shapes can be understood in terms of a simple energy model constructed from a consideration of the doubles-energy expression at different interaction strengths. Differentiation of this energy expression with respect to the interaction strength leads to a simple two-parameter doubles model (AC-D) for the AC integrand (and hence the correlation energy of DFT) as a function of the interaction strength. The structure of the triples-energy contribution is considered in a similar fashion, leading to a quadratic model for the triples correction to the AC curve (AC-T). 
From a consideration of the structure of a two-level configuration-interaction (CI) energy expression of the hydrogen molecule, a simple two-parameter CI model (AC-CI) is proposed to account for the effects of static correlation on the AC. When parametrized in terms of the same input data, the AC-CI model offers improved performance over the corresponding AC-D model, which is shown to be the lowest-order contribution to the AC-CI model. The utility of the accurately calculated AC curves for the analysis of standard density functionals is demonstrated for the BLYP exchange-correlation functional and the interaction-strength-interpolation (ISI) model AC integrand. From the results of this analysis, we investigate the performance of our proposed two-parameter AC-D and AC-CI models when a simple density functional for the AC at infinite interaction strength is employed in place of information at the fully interacting point. The resulting two-parameter correlation functionals offer a qualitatively correct behavior of the AC integrand with much improved accuracy over previous attempts. The AC integrands in the present work are recommended as a basis for further work, generating functionals that avoid spurious error cancellations between exchange and correlation energies and give good accuracy for the range of densities and types of correlation contained in the systems studied here.
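
    For reference, the adiabatic connection evaluated here takes the standard form (textbook DFT, not notation specific to this paper): the exchange-correlation energy is an integral over the interaction strength $\lambda$ at fixed physical density,

```latex
E_{xc}[\rho] \;=\; \int_0^1 \mathcal{W}_{\lambda}[\rho]\,\mathrm{d}\lambda ,
\qquad
\mathcal{W}_{\lambda}[\rho] \;=\; \langle \Psi_{\lambda}[\rho] \,|\, \hat{V}_{ee} \,|\, \Psi_{\lambda}[\rho] \rangle \;-\; J[\rho],
```

    where $\Psi_\lambda[\rho]$ minimizes $\langle \hat{T} + \lambda \hat{V}_{ee} \rangle$ under the constraint of yielding the density $\rho$ (the Lieb-maximization step determines the external potential enforcing this constraint) and $J[\rho]$ is the classical Coulomb energy.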

  16. Advancing theory development: exploring the leadership-climate relationship as a mechanism of the implementation of cultural competence.

    PubMed

    Guerrero, Erick G; Fenwick, Karissa; Kong, Yinfei

    2017-11-14

    Leadership style and specific organizational climates have emerged as critical mechanisms to implement targeted practices in organizations. Drawing from relevant theories, we propose that climate for implementation of cultural competence reflects how transformational leadership may enhance the organizational implementation of culturally responsive practices in health care organizations. Using multilevel data from 427 employees embedded in 112 addiction treatment programs collected in 2013, confirmatory factor analysis showed adequate fit statistics for our measure of climate for implementation of cultural competence (Cronbach's alpha = .88) and three outcomes: knowledge (Cronbach's alpha = .88), services (Cronbach's alpha = .86), and personnel (Cronbach's alpha = .86) practices. Results from multilevel path analyses indicate a positive relationship between employee perceptions of transformational leadership and climate for implementation of cultural competence (standardized indirect effect = .057, bootstrap p < .001). We also found a positive indirect effect between transformational leadership and each of the culturally competent practices: knowledge (standardized indirect effect = .006, bootstrap p = .004), services (standardized indirect effect = .019, bootstrap p < .001), and personnel (standardized indirect effect = .014, bootstrap p = .005). Findings contribute to implementation science. They build on leadership theory and offer evidence of the mediating role of climate in the implementation of cultural competence in addiction health service organizations.
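
    The internal-consistency statistic reported throughout, Cronbach's alpha, has a standard closed form, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch (not the authors' code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # per-item sample variances
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)
```

    Perfectly parallel items yield alpha = 1; values around .86-.88, as reported for the scales above, indicate strong internal consistency.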

  17. Assessment of school wellness policies implementation by benchmarking against diffusion of innovation framework.

    PubMed

    Harriger, Dinah; Lu, Wenhua; McKyer, E Lisako J; Pruitt, Buzz E; Goodson, Patricia

    2014-04-01

    The School Wellness Policy (SWP) mandate marks one of the first innovative and extensive efforts of the US government to address the child obesity epidemic and the influence of the school environment on child health. However, no systematic review has been conducted to examine the implementation of the mandate. The study examines the literature on SWP implementation by using the Diffusion of Innovations Theory as a framework. Empirically based literature on SWP was systematically searched and analyzed. A theory-driven approach was used to categorize the articles by 4 diffusion stages: restructuring/redefining, clarifying, routinizing, and multiple stages. Twenty-one studies were identified, and 3 key characteristics of the reviewed literature were captured: (1) uniformity in methodology, (2) role of context in analyzing policy implementation, and (3) lack of information related to policy clarification. Over half of the studies were published by a duplicate set of authors, and only 1 study employed a pure qualitative methodology. Only 2 articles included an explicit theoretical framework to study theory-driven constructs related to SWP implementation. Policy implementation research can inform the policy process. Therefore, it is essential that policy implementation is measured accurately. Failing to clearly define implementation constructs may result in misguided conclusions. © 2014, American School Health Association.

  18. Finding common ground in implementation: towards a theory of gradual commonality.

    PubMed

    Ter Haar, Marian; Aarts, Noelle; Verhoeven, Piet

    2016-03-01

    This article reports on an empirical study that aimed to design a practice-based theory about collaboration on the local implementation of a nationally developed health-promoting intervention. The study's objective is to better understand the dynamic process of complex collaboration. The research is based on a Delphi study among some 100 individuals in local and regional networks, in which various professionals work together to implement the BeweegKuur, which translates as 'course of exercise'. The BeweegKuur is a combined lifestyle intervention aimed at promoting sufficient physical exercise and a healthy diet among people in the Netherlands who are overweight and at risk of diabetes. The Delphi study in three rounds systematically and interactively constructs a common perspective on implementation, reflecting stakeholders' ideas about the collaboration and providing an insight into how these ideas are influenced by the context of the implementation. The statistical and qualitative analyses of the responses to the feedback in the Delphi study form the basis for this practice-based theory on complex collaboration, called the theory of gradual commonality. During interaction, consensus gradually emerges about co-creation as a collaboration strategy. Co-creation leaves room for various ways of achieving the ambitions of the BeweegKuur. This article discusses the importance of this practice-based theory and the value of the Delphi research strategy for promoting health. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Procedures to develop a computerized adaptive test to assess patient-reported physical functioning.

    PubMed

    McCabe, Erin; Gross, Douglas P; Bulut, Okan

    2018-06-07

    The purpose of this paper is to demonstrate the procedures to develop and implement a computerized adaptive patient-reported outcome (PRO) measure using secondary analysis of a dataset and items from fixed-format legacy measures. We conducted secondary analysis of a dataset of responses from 1429 persons with work-related lower extremity impairment. We calibrated three measures of physical functioning on the same metric, based on item response theory (IRT). We evaluated efficiency and measurement precision of various computerized adaptive test (CAT) designs using computer simulations. IRT and confirmatory factor analyses support combining the items from the three scales for a CAT item bank of 31 items. The item parameters for IRT were calculated using the generalized partial credit model. CAT simulations show that reducing the test length from the full 31 items to a maximum of 8 or 20 items is possible without a significant loss of information (95% and 99% correlations with legacy measure scores, respectively). We demonstrated feasibility and efficiency of using CAT for PRO measurement of physical functioning. The procedures we outlined are straightforward, and can be applied to other PRO measures. Additionally, we have included all the information necessary to implement the CAT of physical functioning in the electronic supplementary material of this paper.
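
    A maximum-information CAT loop of the kind simulated in the paper can be sketched as below. This is illustrative only: it uses two-parameter logistic (2PL) dichotomous items and a grid-search maximum-likelihood ability estimate, whereas the study calibrated polytomous items with the generalized partial credit model; all parameter values are hypothetical.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def run_cat(true_theta, a, b, max_items=8, seed=0):
    """Administer up to max_items items, each time picking the unused item
    with maximum information at the current ability estimate."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(-4.0, 4.0, 161)       # ability grid for the ML estimate
    loglik = np.zeros_like(grid)
    administered, theta_hat = [], 0.0
    for _ in range(max_items):
        info = np.array([item_information(theta_hat, a[i], b[i])
                         if i not in administered else -np.inf
                         for i in range(len(a))])
        item = int(np.argmax(info))
        administered.append(item)
        correct = rng.random() < p_correct(true_theta, a[item], b[item])
        p = p_correct(grid, a[item], b[item])
        loglik += np.log(p if correct else 1.0 - p)
        theta_hat = float(grid[np.argmax(loglik)])
    return theta_hat, administered
```

    With a 31-item bank this stops after 8 items, mirroring the reduced-length designs evaluated in the paper; in production use one would typically add a standard-error stopping rule as well.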

  20. Never the twain shall meet? - a comparison of implementation science and policy implementation research

    PubMed Central

    2013-01-01

    Background: Many of society’s health problems require research-based knowledge acted on by healthcare practitioners together with implementation of political measures from governmental agencies. However, there has been limited knowledge exchange between implementation science and policy implementation research, which has been conducted since the early 1970s. Based on a narrative review of selective literature on implementation science and policy implementation research, the aim of this paper is to describe the characteristics of policy implementation research, analyze key similarities and differences between this field and implementation science, and discuss how knowledge assembled in policy implementation research could inform implementation science. Discussion: Following a brief overview of policy implementation research, several aspects of the two fields were described and compared: the purpose and origins of the research; the characteristics of the research; the development and use of theory; determinants of change (independent variables); and the impact of implementation (dependent variables). The comparative analysis showed that there are many similarities between the two fields, yet there are also profound differences. Still, important learning may be derived from several aspects of policy implementation research, including issues related to the influence of the context of implementation and the values and norms of the implementers (the healthcare practitioners) on implementation processes. Relevant research on various associated policy topics, including The Advocacy Coalition Framework, Governance Theory, and Institutional Theory, may also contribute to improved understanding of the difficulties of implementing evidence in healthcare. Implementation science is at a relatively early stage of development, and advancement of the field would benefit from accounting for knowledge beyond the parameters of the immediate implementation science literature. 
Summary There are many common issues in policy implementation research and implementation science. Research in both fields deals with the challenges of translating intentions into desired changes. Important learning may be derived from several aspects of policy implementation research. PMID:23758952

  1. Never the twain shall meet?--a comparison of implementation science and policy implementation research.

    PubMed

    Nilsen, Per; Ståhl, Christian; Roback, Kerstin; Cairney, Paul

    2013-06-10

    Many of society's health problems require research-based knowledge acted on by healthcare practitioners together with implementation of political measures from governmental agencies. However, there has been limited knowledge exchange between implementation science and policy implementation research, which has been conducted since the early 1970s. Based on a narrative review of selective literature on implementation science and policy implementation research, the aim of this paper is to describe the characteristics of policy implementation research, analyze key similarities and differences between this field and implementation science, and discuss how knowledge assembled in policy implementation research could inform implementation science. Following a brief overview of policy implementation research, several aspects of the two fields were described and compared: the purpose and origins of the research; the characteristics of the research; the development and use of theory; determinants of change (independent variables); and the impact of implementation (dependent variables). The comparative analysis showed that there are many similarities between the two fields, yet there are also profound differences. Still, important learning may be derived from several aspects of policy implementation research, including issues related to the influence of the context of implementation and the values and norms of the implementers (the healthcare practitioners) on implementation processes. Relevant research on various associated policy topics, including The Advocacy Coalition Framework, Governance Theory, and Institutional Theory, may also contribute to improved understanding of the difficulties of implementing evidence in healthcare. Implementation science is at a relatively early stage of development, and advancement of the field would benefit from accounting for knowledge beyond the parameters of the immediate implementation science literature. 
There are many common issues in policy implementation research and implementation science. Research in both fields deals with the challenges of translating intentions into desired changes. Important learning may be derived from several aspects of policy implementation research.

  2. Velocity-gauge real-time TDDFT within a numerical atomic orbital basis set

    DOE PAGES

    Pemmaraju, C. D.; Vila, F. D.; Kas, J. J.; ...

    2018-02-07

    The interaction of laser fields with solid-state systems can be modeled efficiently within the velocity-gauge formalism of real-time time-dependent density functional theory (RT-TDDFT). In this article, we discuss the implementation of the velocity-gauge RT-TDDFT equations for electron dynamics within a linear combination of atomic orbitals (LCAO) basis set framework. Numerical results obtained from our LCAO implementation, for the electronic response of periodic systems to both weak and intense laser fields, are compared to those obtained from established real-space grid and Full-Potential Linearized Augmented Planewave approaches. As a result, potential applications of the LCAO based scheme in the context of extreme ultra-violet and soft X-ray spectroscopies involving core-electronic excitations are discussed.
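
    The velocity-gauge propagation described in this record can be illustrated with a minimal, hypothetical two-level model (not the paper's LCAO implementation; all names and numbers below are illustrative): the field enters through an A(t)·p coupling, and the wave function is advanced with a unitary Crank-Nicolson step.

    ```python
    import numpy as np

    def crank_nicolson_step(psi, H, dt):
        """One unitary step of i d(psi)/dt = H(t) psi:
        psi <- (1 + i*H*dt/2)^(-1) (1 - i*H*dt/2) psi."""
        I = np.eye(H.shape[0])
        return np.linalg.solve(I + 0.5j * dt * H, (I - 0.5j * dt * H) @ psi)

    # Toy velocity-gauge model: two levels with momentum matrix element p01,
    # coupled to a spatially uniform vector potential A(t) via the A(t)*p term.
    eps = np.array([0.0, 0.5])               # level energies (a.u.)
    p01 = 0.2                                # momentum matrix element
    A = lambda t: 0.05 * np.sin(0.5 * t)     # weak, resonant vector potential

    psi, dt = np.array([1.0 + 0j, 0.0]), 0.05
    for n in range(400):
        H = np.diag(eps) + A(n * dt) * np.array([[0.0, p01], [p01, 0.0]])
        psi = crank_nicolson_step(psi, H, dt)

    print(abs(np.vdot(psi, psi)))            # norm stays at 1: the step is unitary
    ```

    Crank-Nicolson is used here because it is exactly unitary for a Hermitian Hamiltonian, so the norm of the propagated state is conserved regardless of the time step.
    
    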

  3. Velocity-gauge real-time TDDFT within a numerical atomic orbital basis set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pemmaraju, C. D.; Vila, F. D.; Kas, J. J.

    The interaction of laser fields with solid-state systems can be modeled efficiently within the velocity-gauge formalism of real-time time-dependent density functional theory (RT-TDDFT). In this article, we discuss the implementation of the velocity-gauge RT-TDDFT equations for electron dynamics within a linear combination of atomic orbitals (LCAO) basis set framework. Numerical results obtained from our LCAO implementation, for the electronic response of periodic systems to both weak and intense laser fields, are compared to those obtained from established real-space grid and Full-Potential Linearized Augmented Planewave approaches. As a result, potential applications of the LCAO based scheme in the context of extreme ultra-violet and soft X-ray spectroscopies involving core-electronic excitations are discussed.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKemmish, Laura K., E-mail: laura.mckemmish@gmail.com; Research School of Chemistry, Australian National University, Canberra

    Algorithms for the efficient calculation of two-electron integrals in the newly developed mixed ramp-Gaussian basis sets are presented, alongside a Fortran90 implementation of these algorithms, RAMPITUP. These new basis sets have significant potential to (1) give some speed-up (estimated at up to 20% for large molecules in fully optimised code) to general-purpose Hartree-Fock (HF) and density functional theory quantum chemistry calculations, replacing all-Gaussian basis sets, and (2) give very large speed-ups for calculations of core-dependent properties, such as electron density at the nucleus, NMR parameters, relativistic corrections, and total energies, replacing the current use of Slater basis functions or very large specialised all-Gaussian basis sets for these purposes. This initial implementation already demonstrates roughly 10% speed-ups in HF/R-31G calculations compared to HF/6-31G calculations for large linear molecules, demonstrating the promise of this methodology, particularly for the second application. As well as the reduction in the total primitive number in R-31G compared to 6-31G, this timing advantage can be attributed to the significant reduction in the number of mathematically complex intermediate integrals after modelling each ramp-Gaussian basis-function-pair as a sum of ramps on a single atomic centre.

  5. Density Functional O(N) Calculations

    NASA Astrophysics Data System (ADS)

    Ordejón, Pablo

    1998-03-01

    We have developed a scheme for performing Density Functional Theory calculations with O(N) scaling [P. Ordejón, E. Artacho and J. M. Soler, Phys. Rev. B 53, 10441 (1996)]. The method uses arbitrarily flexible and complete Atomic Orbitals (AO) basis sets. This gives a wide range of choice, from extremely fast calculations with minimal basis sets to highly accurate calculations with complete sets. The size-efficiency of AO bases, together with the O(N) scaling of the algorithm, allows the application of the method to systems with many hundreds of atoms on single-processor workstations. I will present the SIESTA code [D. Sánchez-Portal, P. Ordejón, E. Artacho and J. M. Soler, Int. J. Quantum Chem. 65, 453 (1997)], in which the method is implemented, with several LDA, LSD and GGA functionals available, and using norm-conserving, non-local pseudopotentials (in the Kleinman-Bylander form) to eliminate the core electrons. The calculation of static properties such as energies, forces, pressure, stress and magnetic moments, as well as molecular dynamics (MD) simulation capabilities (including variable cell shape, constant temperature and constant pressure MD), are fully implemented. I will also show examples of the accuracy of the method, and applications to large-scale materials and biomolecular systems.

  6. A new time dependent density functional algorithm for large systems and plasmons in metal clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baseggio, Oscar; Fronzoni, Giovanna; Stener, Mauro, E-mail: stener@univ.trieste.it

    2015-07-14

    A new algorithm to solve the Time Dependent Density Functional Theory (TDDFT) equations in the space of the density fitting auxiliary basis set has been developed and implemented. The method extracts the spectrum from the imaginary part of the polarizability at any given photon energy, avoiding the bottleneck of Davidson diagonalization. The key idea that makes the present scheme very efficient is the simplification of the double sum over occupied-virtual pairs in the definition of the dielectric susceptibility, which allows this matrix to be calculated easily as a linear combination of constant matrices with photon-energy-dependent coefficients. The method has been applied to very different systems in nature and size (from H₂ to [Au₁₄₇]⁻). In all cases, the maximum deviations found for the excitation energies with respect to the Amsterdam density functional code are below 0.2 eV. The new algorithm has the merit not only of calculating the spectrum at any photon energy but also of allowing a deep analysis of the results, in terms of transition contribution maps, Jacob plasmon scaling factor, and induced density analysis, all of which have been implemented.
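
    The central step of this record — reading the photoabsorption spectrum off the imaginary part of the polarizability, evaluated independently at each photon energy — can be sketched with a hypothetical sum-over-states model (the paper instead works in the density-fitting auxiliary basis; the transitions and broadening below are illustrative):

    ```python
    import numpy as np

    def absorption_spectrum(excitations, strengths, omegas, eta=0.1):
        """Dipole strength function from the imaginary part of the dynamical
        polarizability, evaluated point by point on a photon-energy grid:

            alpha(w) = sum_i f_i / (w_i^2 - (w + i*eta)^2)
            S(w)     = (2/pi) * w * Im[alpha(w)]
        """
        w = omegas[:, None] + 1j * eta                       # broadened energies
        alpha = np.sum(strengths / (np.asarray(excitations) ** 2 - w ** 2), axis=1)
        return (2.0 / np.pi) * omegas * alpha.imag

    # toy spectrum: two transitions, at 2.0 eV (strong) and 3.5 eV (weak)
    grid = np.linspace(0.5, 5.0, 451)
    s = absorption_spectrum(np.array([2.0, 3.5]), np.array([0.8, 0.2]), grid)
    print(grid[np.argmax(s)])    # peak lies near the strong 2.0 eV transition
    ```

    Because no eigenvalue problem is solved, the cost of each grid point is independent of the number of transitions resolved, which is the same motivation the abstract gives for bypassing Davidson diagonalization.
    
    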

  7. An open-source framework for analyzing N-electron dynamics. II. Hybrid density functional theory/configuration interaction methodology.

    PubMed

    Hermann, Gunter; Pohl, Vincent; Tremblay, Jean Christophe

    2017-10-30

    In this contribution, we extend our framework for analyzing and visualizing correlated many-electron dynamics to non-variational, highly scalable electronic structure methods. Specifically, an explicitly time-dependent electronic wave packet is written as a linear combination of N-electron wave functions at the configuration interaction singles (CIS) level, which are obtained from a reference time-dependent density functional theory (TDDFT) calculation. The procedure is implemented in the open-source Python program detCI@ORBKIT, which extends the capabilities of our recently published post-processing toolbox (Hermann et al., J. Comput. Chem. 2016, 37, 1511). From the output of standard quantum chemistry packages using atom-centered Gaussian-type basis functions, the framework exploits the multideterminental structure of the hybrid TDDFT/CIS wave packet to compute fundamental one-electron quantities such as difference electronic densities, transient electronic flux densities, and transition dipole moments. The hybrid scheme is benchmarked against wave function data for the laser-driven state selective excitation in LiH. It is shown that all features of the electron dynamics are in good quantitative agreement with the higher-level method, provided a judicious choice of functional is made. Broadband excitation of a medium-sized organic chromophore further demonstrates the scalability of the method. In addition, the time-dependent flux densities unravel the mechanistic details of the simulated charge migration process at a glance. © 2017 Wiley Periodicals, Inc.

  8. Adapting Growth Pole Theory to Community College Development.

    ERIC Educational Resources Information Center

    Brumbach, Mary A.

    2002-01-01

    Explains growth pole theory, which is the theory that growth manifests itself at poles of growth, rather than everywhere at once. Applies this theory to community college development, and offers advice for implementing growth poles by taking an entrepreneurial approach to education. (NB)

  9. Hiding in plain sight: communication theory in implementation science.

    PubMed

    Manojlovich, Milisa; Squires, Janet E; Davies, Barbara; Graham, Ian D

    2015-04-23

    Poor communication among healthcare professionals is a pressing problem, contributing to widespread barriers to patient safety. The word "communication" means to share or make common. In the literature, two communication paradigms dominate: (1) communication as a transactional process responsible for information exchange, and (2) communication as a transformational process responsible for causing change. Implementation science has focused on information exchange attributes while largely ignoring transformational attributes of communication. In this paper, we debate the merits of encompassing both paradigms. We conducted a two-staged literature review searching for the concept of communication in implementation science to understand how communication is conceptualized. Twenty-seven theories, models, or frameworks were identified; only Rogers' Diffusion of Innovations theory provides a definition of communication and includes both communication paradigms. Most models (notable exceptions include Diffusion of Innovations, The Ottawa Model of Research Use, and Normalization Process Theory) describe communication as a transactional process. But thinking of communication solely as information transfer or exchange misrepresents reality. We recommend that implementation science theories (1) propose and test the concept of shared understanding when describing communication, (2) acknowledge that communication is multi-layered, identify at least a few layers, and posit how identified layers might affect the development of shared understanding, (3) acknowledge that communication occurs in a social context, providing a frame of reference for both individuals and groups, (4) acknowledge the unpredictability of communication (and healthcare processes in general), and (5) engage with and draw on work done by communication theorists. 
Implementation science literature has conceptualized communication as a transactional process (when communication has been mentioned at all), thereby ignoring a key contributor to implementation intervention success. When conceptualized as a transformational process, the focus of communication moves to shared understanding and is grounded in human interactions and the way we go about constructing knowledge. Instead of hiding in plain sight, we suggest explicitly acknowledging the role that communication plays in our implementation efforts. By using both paradigms, we can investigate when communication facilitates implementation, when it does not, and how to improve it so that our implementation and clinical interventions are embraced by clinicians and patients alike.

  10. Incorporating spatial context into statistical classification of multidimensional image data

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Tilton, J. C.; Swain, P. H.

    1981-01-01

    Compound decision theory is employed to develop a general statistical model for classifying image data using spatial context. The classification algorithm developed from this model exploits the tendency of certain ground-cover classes to occur more frequently in some spatial contexts than in others. A key input to this contextural classifier is a quantitative characterization of this tendency: the context function. Several methods for estimating the context function are explored, and two complementary methods are recommended. The contextural classifier is shown to produce substantial improvements in classification accuracy compared to the accuracy produced by a non-contextural uniform-priors maximum likelihood classifier when these methods of estimating the context function are used. An approximate algorithm, which cuts computational requirements by over one-half, is presented. The search for an optimal implementation is furthered by an exploration of the relative merits of using spectral classes or information classes for classification and/or context function estimation.
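
    The idea of this record — reweighting per-pixel class likelihoods by a context function estimated from neighboring labels — might be sketched as follows (an iterated-decision approximation, not the exact compound-decision rule of the paper; the likelihoods and context matrix `g` are hypothetical toy data):

    ```python
    import numpy as np

    def contextual_classify(likelihoods, g, n_iter=3):
        """likelihoods: (H, W, C) array of p(x_ij | class c) from a per-pixel
        spectral classifier; g[c, c2]: context function, i.e. the relative
        frequency with which class c occurs adjacent to class c2."""
        labels = likelihoods.argmax(axis=2)           # non-contextual ML start
        H, W, C = likelihoods.shape
        for _ in range(n_iter):
            new = labels.copy()
            for i in range(H):
                for j in range(W):
                    prior = np.ones(C)
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            prior *= g[:, labels[ni, nj]]   # context of neighbor
                    new[i, j] = np.argmax(likelihoods[i, j] * prior)
            labels = new
        return labels

    # A single spectrally ambiguous pixel inside a uniform field is corrected
    # once the context of its neighbors is taken into account.
    lik = np.tile([0.9, 0.1], (5, 5, 1)).astype(float)
    lik[2, 2] = [0.4, 0.6]                    # noisy pixel weakly favors class 1
    g = np.array([[0.9, 0.1], [0.1, 0.9]])    # ground-cover classes tend to clump
    print(contextual_classify(lik, g)[2, 2])  # -> 0
    ```

    The toy example mirrors the abstract's claim: the contextual posterior overrides a weak spectral vote when the surrounding labels make the alternative class far more probable.
    
    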

  11. Towards a Functionally-Formed Air Traffic System-of-Systems

    NASA Technical Reports Server (NTRS)

    Conway, Sheila R.; Consiglio, Maria C.

    2005-01-01

    Incremental improvements to the national aviation infrastructure have not resulted in sufficient increases in capacity and flexibility to meet emerging demand. Unfortunately, revolutionary changes capable of substantial and rapid increases in capacity have proven elusive. Moreover, significant changes have been difficult to implement, and the operational consequences of such change, difficult to predict due to the system s complexity. Some research suggests redistributing air traffic control functions through the system, but this work has largely been dismissed out of hand, accused of being impractical. However, the case for functionally-based reorganization of form can be made from a theoretical, systems perspective. This paper investigates Air Traffic Management functions and their intrinsic biases towards centralized/distributed operations, grounded in systems engineering and information technology theories. Application of these concepts to a small airport operations design is discussed. From this groundwork, a robust, scalable system transformation plan may be made in light of uncertain demand.

  12. A Pearson VII distribution function for fast calculation of dechanneling and angular dispersion of beams

    NASA Astrophysics Data System (ADS)

    Shao, Lin; Peng, Luohan

    2009-12-01

    Although multiple scattering theories have been well developed, numerical calculation is complicated and only tabulated values have been available, which has caused inconvenience in practical use. We have found that a Pearson VII distribution function can be used to fit Lugujjo and Mayer's probability curves in describing the dechanneling phenomenon in backscattering analysis, over a wide range of disorder levels. Differentiation of the obtained function gives another function to calculate angular dispersion of the beam in the framework of Sigmund and Winterbon. The present work provides an easy calculation of both dechanneling probability and angular dispersion for any arbitrary combination of beam and target having a reduced thickness ⩾0.6, which can be implemented in modeling of channeling spectra. Furthermore, we used a Monte Carlo simulation program to calculate the deflection probability and compared it with previously tabulated data. A good agreement was reached.
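
    For reference, one common parameterization of the Pearson VII profile (the exact parameterization fitted in the paper may differ) interpolates between a Lorentzian (m = 1) and a Gaussian (m → ∞) while keeping w fixed as the half width at half maximum:

    ```python
    def pearson7(x, x0, w, m, amplitude=1.0):
        """Pearson VII profile:
            amplitude * [1 + ((x - x0)/w)^2 * (2^(1/m) - 1)]^(-m)
        The (2^(1/m) - 1) factor fixes w as the half width at half maximum
        for every shape exponent m; m = 1 recovers a Lorentzian."""
        z = ((x - x0) / w) ** 2 * (2.0 ** (1.0 / m) - 1.0)
        return amplitude * (1.0 + z) ** (-m)

    # by construction, the value at x0 + w is half the peak for any m
    for m in (1.0, 2.5, 10.0):
        print(round(pearson7(1.0 + 0.3, x0=1.0, w=0.3, m=m), 6))   # 0.5 each time
    ```

    This single-formula evaluation is what makes the fitted form convenient as a replacement for tabulated multiple-scattering values.
    
    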

  13. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. 
The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous implementation.

  14. A Non-Local, Energy-Optimized Kernel: Recovering Second-Order Exchange and Beyond in Extended Systems

    NASA Astrophysics Data System (ADS)

    Bates, Jefferson; Laricchia, Savio; Ruzsinszky, Adrienn

    The Random Phase Approximation (RPA) is quickly becoming a standard method beyond semi-local Density Functional Theory that naturally incorporates weak interactions and eliminates self-interaction error. RPA is not perfect, however, and suffers from self-correlation error as well as an incorrect description of short-ranged correlation typically leading to underbinding. To improve upon RPA we introduce a short-ranged, exchange-like kernel that is one-electron self-correlation free for one and two electron systems in the high-density limit. By tuning the one free parameter in our model to recover an exact limit of the homogeneous electron gas correlation energy we obtain a non-local, energy-optimized kernel that reduces the errors of RPA for both homogeneous and inhomogeneous solids. To reduce the computational cost of the standard kernel-corrected RPA, we also implement RPA renormalized perturbation theory for extended systems, and demonstrate its capability to describe the dominant correlation effects with a low-order expansion in both metallic and non-metallic systems. Furthermore we stress that for norm-conserving implementations the accuracy of RPA and beyond RPA structural properties compared to experiment is inherently limited by the choice of pseudopotential. Current affiliation: King's College London.

  15. Local self-energies for V and Pd emergent from a nonlocal LDA+FLEX implementation

    NASA Astrophysics Data System (ADS)

    Savrasov, Sergey Y.; Resta, Giacomo; Wan, Xiangang

    2018-04-01

    In the spirit of recently developed LDA+U and LDA+DMFT methods, we implement a combination of density functional theory in its local density approximation (LDA) with a k - and ω -dependent self-energy found from diagrammatic fluctuational exchange (FLEX) approximation. The active Hilbert space here is described by the correlated subset of electrons which allows one to tremendously reduce the sizes of the matrices needed to represent charge and spin susceptibilities. The method is perturbative in nature but accounts for both bubble and ladder diagrams and accumulates the physics of momentum-resolved spin fluctuations missing in such popular approach as GW. As an application, we study correlation effects on band structures in V and Pd. The d -electron self-energies emergent from this calculation are found to be remarkably k independent. However, when we compare our calculated electronic mass enhancements against LDA+DMFT, we find that for the longstanding problem of spin fluctuations in Pd, LDA+FLEX delivers a better agreement with experiment, although this conclusion depends on a particular value of the Hubbard U used in the simulation. We also discuss outcomes of a recently proposed combination of k -dependent FLEX with dynamical mean-field theory (DMFT).

  16. Structure and osmotic pressure of ionic microgel dispersions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hedrick, Mary M.; Department of Chemistry and Biochemistry, North Dakota State University, Fargo, North Dakota 58108-6050; Chung, Jun Kyung

    We investigate structural and thermodynamic properties of aqueous dispersions of ionic microgels—soft colloidal gel particles that exhibit unusual phase behavior. Starting from a coarse-grained model of microgel macroions as charged spheres that are permeable to microions, we perform simulations and theoretical calculations using two complementary implementations of Poisson-Boltzmann (PB) theory. Within a one-component model, based on a linear-screening approximation for effective electrostatic pair interactions, we perform molecular dynamics simulations to compute macroion-macroion radial distribution functions, static structure factors, and macroion contributions to the osmotic pressure. For the same model, using a variational approximation for the free energy, we compute both macroion and microion contributions to the osmotic pressure. Within a spherical cell model, which neglects macroion correlations, we solve the nonlinear PB equation to compute microion distributions and osmotic pressures. By comparing the one-component and cell model implementations of PB theory, we demonstrate that the linear-screening approximation is valid for moderately charged microgels. By further comparing cell model predictions with simulation data for osmotic pressure, we chart the cell model’s limits in predicting osmotic pressures of salty dispersions.

  17. Evaluation of rockfish conservation area networks in the United States and Canada relative to the dispersal distance for black rockfish (Sebastes melanops)

    PubMed Central

    Lotterhos, Katie E; Dick, Stefan J; Haggarty, Dana R

    2014-01-01

    Marine reserve networks are implemented as a way to mitigate the impact of fishing on marine ecosystems. Theory suggests that a reserve network will function synergistically when connected by dispersal, but the scale of dispersal is often unknown. On the Pacific coast of the United States and Canada, both countries have recently implemented a number of rockfish conservation areas (RCAs) to protect exploited rockfish species, but no study has evaluated the connectivity within networks in each country or between the two countries. We used isolation-by-distance theory to estimate the scale of dispersal from microsatellite data in the black rockfish, Sebastes melanops, and compared this estimate with the distance between RCAs that would protect this species. Within each country, we found that the distance between RCAs was generally within the confidence intervals of mean dispersal per generation. The distance between these two RCA networks, however, was greater than the average dispersal per generation. The data were also consistent with a genetic break between southern Oregon and central Oregon. We discuss whether additional nearshore RCAs in southern Oregon and Washington would help promote connectivity between RCAs for shallow-water rockfishes. PMID:24567745
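
    The isolation-by-distance logic used in this record can be sketched with Rousset's classical two-dimensional result, in which the regression slope of genetic differentiation on log distance is approximately 1/(4πDσ²), with D the effective density and σ the parent-offspring dispersal scale; the paper's actual estimator and confidence intervals may differ, and the numbers below are purely illustrative.

    ```python
    import math

    def dispersal_from_ibd_slope(slope, density):
        """Invert Rousset's 2-D isolation-by-distance approximation,
        slope ~ 1 / (4 * pi * D * sigma^2), for the dispersal scale sigma."""
        return math.sqrt(1.0 / (4.0 * math.pi * density * slope))

    # round trip with illustrative numbers: D = 2 breeders per unit area,
    # sigma = 3 distance units per generation
    slope = 1.0 / (4.0 * math.pi * 2.0 * 3.0 ** 2)
    print(dispersal_from_ibd_slope(slope, density=2.0))   # recovers sigma ≈ 3.0
    ```

    In practice the slope comes from regressing pairwise genetic differentiation on log geographic distance, and the uncertainty in D usually dominates the confidence interval on σ.
    
    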

  18. Final Technical Report for Quantum Embedding for Correlated Electronic Structure in Large Systems and the Condensed Phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Garnet Kin-Lic

    2017-04-30

    This is the final technical report. We briefly describe some selected results below. Developments in density matrix embedding. DMET is a quantum embedding theory that we introduced at the beginning of the last funding period, around 2012-2013. Since the first DMET papers, which demonstrated proof-of-principle calculations on the Hubbard model and hydrogen rings, we have carried out a number of different developments, including: extending the DMET technology to compute broken-symmetry phases, including magnetic phases and superconductivity (Pub. 13); calibrating the accuracy of DMET and its cluster size convergence against other methods, and formulating a dynamical cluster analog (Pubs. 4, 10) (see Fig. 1); implementing DMET for ab-initio molecular calculations, and exploring different self-consistency criteria (Pubs. 9, 14); using embedding to define quantum-classical interfaces (Pub. 2); formulating DMET for spectral functions (Pub. 7) (see Fig. 1); extending DMET to coupled fermion-boson problems (Pub. 12). Together with these embedding developments, we have also implemented a wide variety of impurity solvers within our DMET framework, including DMRG (Pub. 3), AFQMC (Pub. 10), and coupled cluster theory (CC) (Pub. 9).

  19. Developing and exploring a theory for the lateral erosion of bedrock channels for use in landscape evolution models

    NASA Astrophysics Data System (ADS)

    Langston, Abigail L.; Tucker, Gregory E.

    2018-01-01

    Understanding how a bedrock river erodes its banks laterally is a frontier in geomorphology. Theories for the vertical incision of bedrock channels are widely implemented in the current generation of landscape evolution models. However, in general existing models do not seek to implement the lateral migration of bedrock channel walls. This is problematic, as modeling geomorphic processes such as terrace formation and hillslope-channel coupling depends on the accurate simulation of valley widening. We have developed and implemented a theory for the lateral migration of bedrock channel walls in a catchment-scale landscape evolution model. Two model formulations are presented, one representing the slow process of widening a bedrock canyon and the other representing undercutting, slumping, and rapid downstream sediment transport that occurs in softer bedrock. Model experiments were run with a range of values for bedrock erodibility and tendency towards transport- or detachment-limited behavior and varying magnitudes of sediment flux and water discharge in order to determine the role that each plays in the development of wide bedrock valleys. The results show that this simple, physics-based theory for the lateral erosion of bedrock channels produces bedrock valleys that are many times wider than the grid discretization scale. This theory for the lateral erosion of bedrock channel walls and the numerical implementation of the theory in a catchment-scale landscape evolution model is a significant first step towards understanding the factors that control the rates and spatial extent of wide bedrock valleys.

  20. A Grounded Theory of Behavior Management Strategy Selection, Implementation, and Perceived Effectiveness Reported by First-Year Elementary Teachers

    ERIC Educational Resources Information Center

    Smart, Julie B.; Igo, L. Brent

    2010-01-01

    In this grounded theory study, 19 teachers were interviewed and then, in constant comparative fashion, the interview data were analyzed. The theoretical model that emerged from the data describes novice teachers' tendencies to select and implement differing strategies related to the severity of student behavior. When confronting mild student…

  1. A Case Study on Educators' Perceptions of Leader Behaviors throughout Response-to-Intervention Implementation

    ERIC Educational Resources Information Center

    Strickland, Thomas Joel

    2017-01-01

    The purpose of this two-case study was to describe the perceptions of middle school administrators and teachers concerning leader behaviors throughout the implementation of Response-to-Intervention (RTI) programs. The theory which guided this study is transformational leadership theory (Bass, 1990) as it related directly to how administrators and…

  2. Implementing Multiple Intelligences: The New City School Experience. Fastback 407.

    ERIC Educational Resources Information Center

    Hoerr, Thomas R.

    This brief reviews the concept of multiple intelligences (MI) and discusses the implementation of the theory of MI in the New City School, an independent school in St. Louis (Missouri). The theory of MI, as developed by Howard Gardner, says that there are at least seven different intelligences: linguistic, logical, musical, bodily-kinesthetic,…

  3. Does One-to-One Technology Really Work: An Evaluation through the Lens of Activity Theory

    ERIC Educational Resources Information Center

    Holen, Jodi Bergland; Hung, Woei; Gourneau, Bonni

    2017-01-01

    This program evaluation study examines an implementation of a one-to-one laptop initiative in a rural high school. Specifically, the researchers adopted a holistic view in evaluating the process and outcomes of this implementation by examining the interrelationships among the key participants using activity theory as a conceptual framework.…

  4. Multiconfiguration Pair-Density Functional Theory Outperforms Kohn-Sham Density Functional Theory and Multireference Perturbation Theory for Ground-State and Excited-State Charge Transfer.

    PubMed

    Ghosh, Soumen; Sonnenberger, Andrew L; Hoyer, Chad E; Truhlar, Donald G; Gagliardi, Laura

    2015-08-11

    The correct description of charge transfer in ground and excited states is very important for molecular interactions, photochemistry, electrochemistry, and charge transport, but it is very challenging for Kohn-Sham (KS) density functional theory (DFT). KS-DFT exchange-correlation functionals without nonlocal exchange fail to describe both ground- and excited-state charge transfer properly. We have recently proposed a theory called multiconfiguration pair-density functional theory (MC-PDFT), which is based on a combination of multiconfiguration wave function theory with a new type of density functional called an on-top density functional. Here we have used MC-PDFT to study challenging ground- and excited-state charge-transfer processes by using on-top density functionals obtained by translating KS exchange-correlation functionals. For ground-state charge transfer, MC-PDFT performs better than either the PBE exchange-correlation functional or CASPT2 wave function theory. For excited-state charge transfer, MC-PDFT (unlike KS-DFT) shows qualitatively correct behavior at long-range with great improvement in predicted excitation energies.

  5. How Singapore Junior College Science Teachers Address Curriculum Reforms: A Theory

    ERIC Educational Resources Information Center

    Lim, Patrick; Pyvis, David

    2012-01-01

    Using grounded theory research methodology, a theory was developed to explain how Singapore junior college science teachers implement educational reforms underpinning the key initiatives of the "Thinking Schools, Learning Nation" policy. The theory suggests Singapore junior college science teachers "deal with" implementing…

  6. Measuring implementation intentions in the context of the theory of planned behavior.

    PubMed

    Rise, Jostein; Thompson, Marianne; Verplanken, Bas

    2003-04-01

    The usefulness of measuring implementation intentions in the context of the theory of planned behavior (TPB) was explored among 112 Norwegian college students. They responded to a questionnaire measuring past behavior, perceived behavioral control, behavioral intentions, implementation intentions, and actual performance of regular exercising and recycling of drinking cartons. Implementation intentions were measured using five items relating to recycling and four items relating to exercise, which showed satisfactory internal consistencies. Consistent with the main prediction, the presence of implementation intentions was related to performing the two behaviors, although behavioral intentions were the strongest determinant for both behaviors. The results suggest that the TPB may benefit from inclusion of the concept of implementation intentions to provide a more complete understanding of the psychological process in which motivation is translated into action.
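    The abstract reports "satisfactory internal consistencies" for the multi-item implementation-intention scales. A common statistic for this is Cronbach's alpha; the abstract does not name the authors' exact reliability measure, and the data below are made up for illustration.

```python
# Cronbach's alpha for a multi-item scale (illustrative; the abstract
# does not specify the authors' exact reliability statistic).
# Rows = respondents, columns = items; the data are hypothetical.

def cronbach_alpha(items):
    k = len(items[0])                       # number of items
    def var(xs):                            # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[j] for row in items]) for j in range(k)]
    totals = [sum(row) for row in items]
    return k / (k - 1) * (1 - sum(item_vars) / var(totals))

data = [[4, 5, 4, 5], [2, 2, 3, 2], [5, 4, 5, 5], [3, 3, 2, 3], [4, 4, 4, 5]]
alpha = cronbach_alpha(data)
```

    Values above roughly 0.7 are conventionally read as satisfactory internal consistency.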

  7. Hybrid modeling in biochemical systems theory by means of functional petri nets.

    PubMed

    Wu, Jialiang; Voit, Eberhard

    2009-02-01

    Many biological systems are genuinely hybrid, consisting of interacting discrete and continuous components and processes that often operate at different time scales. It is therefore desirable to create modeling frameworks capable of combining differently structured processes and permitting their analysis over multiple time horizons. During the past 40 years, Biochemical Systems Theory (BST) has been a very successful approach to elucidating metabolic, gene regulatory, and signaling systems. However, its foundation in ordinary differential equations has precluded BST from directly addressing problems containing switches, delays, and stochastic effects. In this study, we extend BST to hybrid modeling within the framework of Hybrid Functional Petri Nets (HFPN). First, we show how the canonical GMA and S-system models in BST can be directly implemented in a standard Petri Net framework. In a second step we demonstrate how to account for different types of time delays as well as for discrete, stochastic, and switching effects. Using representative test cases, we validate the hybrid modeling approach through comparative analyses and simulations with other approaches and highlight the feasibility, quality, and efficiency of the hybrid method.
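    The canonical S-system form mentioned above writes each rate as a difference of two power-law terms, dX_i/dt = α_i ∏_j X_j^{g_ij} − β_i ∏_j X_j^{h_ij}. A minimal sketch with a forward-Euler integrator follows; the two-variable cascade and all parameter values are invented for illustration, not taken from the paper.

```python
# Canonical S-system form from Biochemical Systems Theory:
#   dX_i/dt = alpha_i * prod_j X_j**g_ij - beta_i * prod_j X_j**h_ij
# Parameters below are invented for illustration; a real model would
# take them from kinetic data.

def s_system_step(X, alpha, beta, g, h, dt):
    n = len(X)
    dX = []
    for i in range(n):
        prod_in, prod_out = 1.0, 1.0
        for j in range(n):
            prod_in *= X[j] ** g[i][j]
            prod_out *= X[j] ** h[i][j]
        dX.append(alpha[i] * prod_in - beta[i] * prod_out)
    return [x + dt * d for x, d in zip(X, dX)]

# Two-variable cascade: X1 produced at a constant rate and degraded
# linearly; X2 produced from X1 and degraded linearly.
alpha = [2.0, 1.0]; beta = [1.0, 1.0]
g = [[0, 0], [1, 0]]; h = [[1, 0], [0, 1]]
X = [1.0, 1.0]
for _ in range(5000):
    X = s_system_step(X, alpha, beta, g, h, 0.01)
```

    A Petri-net implementation replaces this monolithic ODE step with transitions whose firing rates are the same power-law terms, which is what allows discrete and stochastic events to be mixed in.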

  8. Theoretical calculation of reorganization energy for electron self-exchange reaction by constrained density functional theory and constrained equilibrium thermodynamics.

    PubMed

    Ren, Hai-Sheng; Ming, Mei-Jun; Ma, Jian-Yi; Li, Xiang-Yuan

    2013-08-22

    Within the framework of constrained density functional theory (CDFT), the diabatic or charge localized states of electron transfer (ET) have been constructed. Based on the diabatic states, inner reorganization energy λin has been directly calculated. For solvent reorganization energy λs, a novel and reasonable nonequilibrium solvation model is established by introducing a constrained equilibrium manipulation, and a new expression of λs has been formulated. It is found that λs is actually the cost of maintaining the residual polarization, which equilibrates with the extra electric field. On the basis of diabatic states constructed by CDFT, a numerical algorithm using the new formulations with the dielectric polarizable continuum model (D-PCM) has been implemented. As typical test cases, self-exchange ET reactions between tetracyanoethylene (TCNE) and tetrathiafulvalene (TTF) and their corresponding ionic radicals in acetonitrile are investigated. The calculated reorganization energies λ are 7293 cm(-1) for TCNE/TCNE(-) and 5939 cm(-1) for TTF/TTF(+) reactions, agreeing well with available experimental results of 7250 cm(-1) and 5810 cm(-1), respectively.
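    For a self-exchange reaction the driving force vanishes, and classical Marcus theory (a standard relation, not spelled out in the abstract) gives the activation free energy as ΔG‡ = λ/4. A quick unit-conversion sketch using the reported λ for TCNE/TCNE(-):

```python
# For self-exchange ET (Delta G0 = 0), classical Marcus theory gives
# the activation free energy as lambda/4.  Using the reported
# lambda = 7293 cm^-1 for the TCNE/TCNE(-) reaction:

CM1_TO_EV = 1.2398e-4          # 1 cm^-1 in eV (CODATA conversion)
lam_cm = 7293.0                # total reorganization energy (abstract)
lam_eV = lam_cm * CM1_TO_EV    # ~0.90 eV
dG_act = lam_eV / 4.0          # ~0.23 eV activation free energy
```

    The same arithmetic applied to the TTF/TTF(+) value of 5939 cm⁻¹ gives a proportionally smaller barrier.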

  9. Center for Modeling of Turbulence and Transition (CMOTT): Research Briefs, 1992

    NASA Technical Reports Server (NTRS)

    Liou, William W. (Editor)

    1992-01-01

    The progress is reported of the Center for Modeling of Turbulence and Transition (CMOTT). The main objective of the CMOTT is to develop, validate and implement the turbulence and transition models for practical engineering flows. The flows of interest are three-dimensional, incompressible and compressible flows with chemical reaction. The research covers two-equation (e.g., k-e) and algebraic Reynolds-stress models, second moment closure models, probability density function (pdf) models, Renormalization Group Theory (RNG), Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS).

  10. The equations of motion of an artificial satellite in nonsingular variables

    NASA Technical Reports Server (NTRS)

    Giacaglia, G. E. O.

    1975-01-01

    The equations of motion of an artificial satellite are given in nonsingular variables. Any term in the geopotential is considered, as well as luni-solar perturbations up to an arbitrary power of r/r′, where r′ is the geocentric distance of the disturbing body. Resonances with tesseral harmonics and with the moon or sun are also considered. By neglecting the shadow effect, the disturbing function for solar radiation is also developed in nonsingular variables for the long-periodic perturbations. Formulas are developed for implementation of the theory in actual computations.

  11. Mott transition and suppression of orbital fluctuations in orthorhombic 3d1 perovskites.

    PubMed

    Pavarini, E; Biermann, S; Poteryaev, A; Lichtenstein, A I; Georges, A; Andersen, O K

    2004-04-30

    Using t(2g) Wannier functions, a low-energy Hamiltonian is derived for orthorhombic 3d(1) transition-metal oxides. Electronic correlations are treated with a new implementation of dynamical mean-field theory for noncubic systems. Good agreement with photoemission data is obtained. The interplay of correlation effects and cation covalency (GdFeO3-type distortions) is found to suppress orbital fluctuations in LaTiO3 and even more in YTiO3, and to favor the transition to the insulating state.

  12. The topological particle and Morse theory

    NASA Astrophysics Data System (ADS)

    Rogers, Alice

    2000-09-01

    Canonical BRST quantization of the topological particle defined by a Morse function h is described. Stochastic calculus, using Brownian paths which implement the WKB method in a new way providing rigorous tunnelling results even in curved space, is used to give an explicit and simple expression for the matrix elements of the evolution operator for the BRST Hamiltonian. These matrix elements lead to a representation of the manifold cohomology in terms of critical points of h along lines developed by Witten (Witten E 1982 J. Diff. Geom. 17 661-92).
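    The link between critical points and cohomology used above is Witten's Morse-theory construction. As a standard background sketch (not a claim about this paper's specific formulas), the deformation of the exterior derivative by the Morse function h can be written as:

```latex
% Witten's deformation of the exterior derivative by a Morse function h:
d_t = e^{-th}\, d\, e^{th}, \qquad
H_t = d_t d_t^{*} + d_t^{*} d_t .
% For large t the low-lying spectrum of H_t localizes at the critical
% points of h, yielding the Morse inequalities
\dim H^{k}(M;\mathbb{R}) \;\le\; \#\{\text{critical points of } h \text{ of index } k\}.
```

    The stochastic (Brownian-path) calculus in the paper gives a rigorous route to the tunnelling corrections that sharpen these inequalities.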

  13. Structural properties of lead-lithium alloys

    NASA Astrophysics Data System (ADS)

    Khambholja, S. G.; Satikunvar, D. D.; Abhishek, Agraj; Thakore, B. Y.

    2018-05-01

    Lead-lithium alloys have found a large number of applications as liquid-metal coolants in nuclear reactors. A large body of experimental work has been reported for this system, but a complete theoretical description is still rare. In the present work we therefore study the ground-state properties of the lead-lithium system using plane-wave pseudopotential density functional theory as implemented in the Quantum ESPRESSO package. The theoretical findings are in agreement with previously reported experimental data, and conclusions are drawn that will be helpful for a more comprehensive study.
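    A plane-wave pseudopotential ground-state calculation of this kind is driven in Quantum ESPRESSO by a pw.x SCF input. The fragment below is a minimal hypothetical sketch: the lattice, cutoff, k-mesh, positions, and pseudopotential filenames are placeholders, not the authors' actual settings.

```
&control
    calculation = 'scf'
    prefix      = 'pbli'
    pseudo_dir  = './pseudo'
/
&system
    ibrav = 1, celldm(1) = 12.0
    nat = 2, ntyp = 2
    ecutwfc = 60.0
/
&electrons
    conv_thr = 1.0d-8
/
ATOMIC_SPECIES
  Pb 207.2  Pb.pbe.UPF
  Li   6.94 Li.pbe.UPF
ATOMIC_POSITIONS crystal
  Pb 0.0 0.0 0.0
  Li 0.5 0.5 0.5
K_POINTS automatic
  8 8 8 0 0 0
```

    Convergence of the cutoff and k-point mesh would need to be tested for the real alloy compositions studied.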

  14. Medical student selection and society: Lessons we learned from sociological theories.

    PubMed

    Yaghmaei, Minoo; Yazdani, Shahram; Ahmady, Soleiman

    2016-01-01

    The aim of this study was to show the interaction among society, applicants, and medical schools in medical student selection. The study highlights trends in incorporating social factors into the selection process. These factors were explored through functionalist and conflict theories, each focusing on different categories of social factors. While functionalist theorists pay attention to diversity in the selection process, conflict theorists highlight the importance of socio-economic class. Although both theories believe in sorting, their different views are reflected in their sorting strategies. Both emphasize the importance of the person-society relationship in the motivation to enter university, and the impacts of social goals on selection policies are derived from both. Theories in the sociology of education offer an approach to student selection that acknowledges and supports complexity, plurality of approaches, and innovative means of selection. Medical student selection does not focus solely on individual assessment and qualification; it is a social and collective process that includes all the influences and interactions between medical schools and society. The sociological perspective on medical student selection proposes a model that envelops the individual and the society: selection methods should meet criteria of merit at the individual level, while selection policies should aim at societal goals at the institutional level.

  15. Lattice cluster theory for dense, thin polymer films.

    PubMed

    Freed, Karl F

    2015-04-07

    While the application of the lattice cluster theory (LCT) to study the miscibility of polymer blends has greatly expanded our understanding of the monomer scale molecular details influencing miscibility, the corresponding theory for inhomogeneous systems has not yet emerged because of considerable technical difficulties and much greater complexity. Here, we present a general formulation enabling the extension of the LCT to describe the thermodynamic properties of dense, thin polymer films using a high dimension, high temperature expansion. Whereas the leading order of the LCT for bulk polymer systems is essentially simple Flory-Huggins theory, the highly non-trivial leading order inhomogeneous LCT (ILCT) for a film with L layers already involves the numerical solution of 3(L - 1) coupled, highly nonlinear equations for the various density profiles in the film. The new theory incorporates the essential "transport" constraints of Helfand and focuses on the strict imposition of excluded volume constraints, appropriate to dense polymer systems, rather than the maintenance of chain connectivity as appropriate for lower densities and as implemented in self-consistent theories of polymer adsorption at interfaces. The ILCT is illustrated by presenting examples of the computed profiles of the density, the parallel and perpendicular bonds, and the chain ends for free standing and supported films as a function of average film density, chain length, temperature, interaction with support, and chain stiffness. The results generally agree with expected general trends.

  16. Elevated carbon dioxide is predicted to promote coexistence among competing species in a trait-based model

    DOE PAGES

    Ali, Ashehad A.; Medlyn, Belinda E.; Aubier, Thomas G.; ...

    2015-10-06

    Differential species responses to atmospheric CO2 concentration (Ca) could lead to quantitative changes in competition among species and community composition, with flow-on effects for ecosystem function. However, there has been little theoretical analysis of how elevated Ca (eCa) will affect plant competition, or how composition of plant communities might change. Such theoretical analysis is needed for developing testable hypotheses to frame experimental research. Here, we investigated theoretically how plant competition might change under eCa by implementing two alternative competition theories, resource use theory and resource capture theory, in a plant carbon and nitrogen cycling model. The model makes several novel predictions for the impact of eCa on plant community composition. Using resource use theory, the model predicts that eCa is unlikely to change species dominance in competition, but is likely to increase coexistence among species. Using resource capture theory, the model predicts that eCa may increase community evenness. Collectively, both theories suggest that eCa will favor coexistence and hence that species diversity should increase with eCa. Our theoretical analysis leads to a novel hypothesis for the impact of eCa on plant community composition, with the potential to help guide the design and interpretation of eCa experiments.
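    Resource use theory is commonly formalized as Tilman's R* rule: with Monod growth μ(R) = μmax·R/(K+R) and mortality m, a species' break-even resource level is R* = K·m/(μmax − m), and on a single limiting resource the species with the lowest R* excludes the others at equilibrium. The sketch below uses invented parameters, and the R* formulation is a standard rendering of resource use theory rather than this paper's specific model.

```python
# R* rule from resource use (resource-ratio) theory: with Monod growth
# mu(R) = mu_max * R / (K + R) and mortality m, the break-even resource
# level is R* = K * m / (mu_max - m).  The species with the lowest R*
# wins at equilibrium on a single limiting resource.
# Parameters below are illustrative, not from the paper.

def r_star(mu_max, K, m):
    return K * m / (mu_max - m)

species = {"A": (1.0, 2.0, 0.1),   # (mu_max, K, m)
           "B": (0.8, 1.0, 0.1)}
rstars = {name: r_star(*p) for name, p in species.items()}
winner = min(rstars, key=rstars.get)
```

    The paper's eCa analysis amounts to asking how elevated Ca shifts these parameters, and hence whether the gap between competing species' R* values narrows enough to permit coexistence.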

  17. Towards a framework for the elicitation of dilemmas.

    PubMed

    Burger, Marc J C

    2008-08-01

    This paper covers the main findings of doctoral research concerned with extending aspects of dilemma theory. In professional practice, the Trompenaars Hampden-Turner Dilemma Reconciliation Process(TM) is a vehicle delivering dilemma theory in application: it informs a manager or leader on how to explore the dilemmas they face, how to reconcile the resulting tensions, and how to structure the action steps for implementing the reconciled solutions. This vehicle forms the professional practice of the author, who seeks to bring more rigor to consulting practice and thereby contribute to theory development in the domain. The critical review of dilemma theory reveals that previous authors are inconsistent, and variously invalid, in their use of the terms 'dilemma theory,' 'dilemma methodology,' 'dilemma process,' 'dilemma reconciliation,' and so on. An attempt is therefore made to resolve these inconsistencies by considering whether 'dilemmaism' might be positioned at the meta-level as a new paradigm of inquiry for (management) research, one that embodies ontological, epistemological, and methodical premises framing an approach to the resolution of real-world business problems in (multi)disciplinary, (multi)functional, and (multi)cultural business environments. This research offers contributions to knowledge, professional practice, and theory development through the exploration of the SPID model as a way to make the elicitation of dilemmas more rigorous and structured, and, in the broader context, through exploring 'dilemmaism' as a new paradigm of inquiry.

  18. Necessary and sufficient condition for the realization of the complex wavelet

    NASA Astrophysics Data System (ADS)

    Keita, Alpha; Qing, Qianqin; Wang, Nengchao

    1997-04-01

    Wavelet theory is a relatively new signal-analysis theory whose appearance has attracted deep study from experts in many different fields. The wavelet transform is a time-frequency analysis method with localization realizable in either the time or the frequency domain. It has many desirable characteristics that other time-frequency methods, such as the Gabor or Wigner transforms, lack: orthogonality, direction selectivity, variable time-frequency resolution, adjustable local support, and parsimonious representation of data. All of these make the wavelet transform an important new tool and method in signal analysis. Because computation with complex wavelets is difficult, real wavelet functions are used in applications. In this paper, we present a necessary and sufficient condition under which the real wavelet function can be obtained from the complex wavelet function; this theorem has significant theoretical value. The paper prepares its technique from the Hartley transformation. Hartley was a signal engineering expert, and his transformation was overlooked for about forty years, as the conditions of production at the time could not demonstrate its superiority; only in the late 1970s and early 1980s, after the development of fast Fourier transform algorithms and supporting hardware, did such fully real-valued transform methods come to be taken seriously. The W transformation proposed by Zhongde Wang pushed forward the study of the Hartley transformation and its fast algorithms; the kernel function of the Hartley transformation underlies the technique presented here.
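    The Hartley kernel mentioned above is cas(t) = cos(t) + sin(t), and the discrete Hartley transform relates to the DFT by the standard identity DHT_k = Re(X_k) − Im(X_k). This is a textbook relation, not necessarily the paper's derivation; the sketch checks it numerically.

```python
# Discrete Hartley transform via its standard relation to the DFT:
#   DHT_k = Re(X_k) - Im(X_k),  where X_k is the DFT of x.
# The Hartley kernel is cas(t) = cos(t) + sin(t).  The check below
# compares the DFT-based route against the direct cas-sum definition.
import cmath, math

def dht_direct(x):
    N = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * k * n / N) +
                        math.sin(2 * math.pi * k * n / N))
                for n in range(N)) for k in range(N)]

def dht_via_dft(x):
    N = len(x)
    X = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
         for k in range(N)]
    return [Xk.real - Xk.imag for Xk in X]

x = [1.0, 2.0, 0.5, -1.0, 3.0]
a = dht_direct(x)
b = dht_via_dft(x)
```

    Because the Hartley transform is real-valued and self-inverse (up to 1/N), it sidesteps complex arithmetic entirely, which is the practical appeal noted in the abstract.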

  19. Light-cone quantization of two dimensional field theory in the path integral approach

    NASA Astrophysics Data System (ADS)

    Cortés, J. L.; Gamboa, J.

    1999-05-01

    A quantization condition due to the boundary conditions and the compatification of the light cone space-time coordinate x- is identified at the level of the classical equations for the right-handed fermionic field in two dimensions. A detailed analysis of the implications of the implementation of this quantization condition at the quantum level is presented. In the case of the Thirring model one has selection rules on the excitations as a function of the coupling and in the case of the Schwinger model a double integer structure of the vacuum is derived in the light-cone frame. Two different quantized chiral Schwinger models are found, one of them without a θ-vacuum structure. A generalization of the quantization condition to theories with several fermionic fields and to higher dimensions is presented.

  20. Real time polymer nanocomposites-based physical nanosensors: theory and modeling.

    PubMed

    Bellucci, Stefano; Shunin, Yuri; Gopeyenko, Victor; Lobanova-Shunina, Tamara; Burlutskaya, Nataly; Zhukovskii, Yuri

    2017-09-01

    Functionalized carbon nanotubes and graphene nanoribbons nanostructures, serving as the basis for the creation of physical pressure and temperature nanosensors, are considered as tools for ecological monitoring and medical applications. Fragments of nanocarbon inclusions with different morphologies, presenting a disordered system, are regarded as models for nanocomposite materials based on carbon nanocluster suspension in dielectric polymer environments (e.g., epoxy resins). We have formulated the approach of conductivity calculations for carbon-based polymer nanocomposites using the effective media cluster approach, disordered systems theory and conductivity mechanisms analysis, and obtained the calibration dependences. Providing a proper description of electric responses in nanosensoring systems, we demonstrate the implementation of advanced simulation models suitable for real time control nanosystems. We also consider the prospects and prototypes of the proposed physical nanosensor models providing the comparisons with experimental calibration dependences.
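    The "effective media cluster approach" named above suggests an effective-medium mixing rule for composite conductivity. A standard symmetric Bruggeman rule is sketched below as an assumption, not necessarily the authors' exact formulation; it is solved by bisection for the effective conductivity.

```python
# Symmetric Bruggeman effective-medium rule for a two-phase composite:
#   f*(s1 - se)/(s1 + 2*se) + (1 - f)*(s2 - se)/(s2 + 2*se) = 0
# solved by bisection for the effective conductivity se.  This is a
# standard mixing rule, offered as an assumption, not the paper's model.

def bruggeman(f, s1, s2, lo=1e-12, hi=None, iters=200):
    hi = hi if hi is not None else max(s1, s2)
    def F(se):
        return (f * (s1 - se) / (s1 + 2 * se)
                + (1 - f) * (s2 - se) / (s2 + 2 * se))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# conducting filler (s1) dispersed in a nearly insulating polymer (s2)
se = bruggeman(f=0.5, s1=1.0, s2=1e-6)
```

    In this symmetric rule the composite becomes conducting above a filler fraction of about 1/3, a simple stand-in for the percolation behavior that calibration curves of such nanocomposite sensors exploit.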

  2. The position of the targets of the public health policy of maternal and child in Bandung

    NASA Astrophysics Data System (ADS)

    Sugyati, C.; Mariana, D.; Sjoraida, D. F.

    2018-03-01

    This study makes a deep, systematic analysis of the role of implementing actors in the implementation of public health policies, particularly maternal and child health, in Bandung City, West Java. The study is important for evaluating whether government services for maternal and child health are sufficient. Using a descriptive-qualitative method, it discusses how implementers interact with the community targeted by public health programs in Bandung City. Drawing on theories of policy implementation and health campaigns, the data show that (a) coordination, uniformity of information services, and networks of cooperation among public health institutions in the Bandung City government have performed well; and (b) the targets are highly motivated to obtain public health services, and some volunteer to assist local health-policy implementers. However, the shortage of health-care workers directly addressing maternal and child health is keenly perceived by the public, so this study recommends recruiting additional formal health workers into the community.

  3. On the implementation of the spherical collapse model for dark energy models

    NASA Astrophysics Data System (ADS)

    Pace, Francesco; Meyer, Sven; Bartelmann, Matthias

    2017-10-01

    In this work we review the theory of the spherical collapse model and critically analyse the aspects of the numerical implementation of its fundamental equations. By extending a recent work by [1], we show how different aspects, such as the initial integration time, the definition of constant infinity and the criterion for the extrapolation method (how close the inverse of the overdensity has to be to zero at the collapse time) can lead to an erroneous estimation (a few per mill error which translates to a few percent in the mass function) of the key quantity in the spherical collapse model: the linear critical overdensity δc, which plays a crucial role for the mass function of halos. We provide a better recipe to adopt in designing a code suitable to a generic smooth dark energy model and we compare our numerical results with analytic predictions for the EdS and the ΛCDM models. We further discuss the evolution of δc for selected classes of dark energy models as a general test of the robustness of our implementation. We finally outline which modifications need to be taken into account to extend the code to more general classes of models, such as clustering dark energy models and non-minimally coupled models.
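    The analytic benchmark the numerical δc is checked against is the Einstein-de Sitter value, a textbook result rather than something derived in this paper: δc = (3/20)(12π)^(2/3) ≈ 1.686. A one-line check:

```python
# Analytic linear critical overdensity for an Einstein-de Sitter
# universe, the benchmark against which a numerical spherical-collapse
# code is commonly validated:  delta_c = (3/20) * (12*pi)**(2/3).
import math

delta_c = (3.0 / 20.0) * (12.0 * math.pi) ** (2.0 / 3.0)   # ~1.686
```

    The paper's point is that sloppy choices of initial time, "numerical infinity," and extrapolation tolerance shift the computed δc away from this value by a few per mil, which propagates to percent-level errors in the halo mass function.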

  4. The adaptive buffered force QM/MM method in the CP2K and AMBER software packages

    DOE PAGES

    Mones, Letif; Jones, Andrew; Götz, Andreas W.; ...

    2015-02-03

    We present the implementation and validation of the adaptive buffered force (AdBF) quantum-mechanics/molecular-mechanics (QM/MM) method in two popular packages, CP2K and AMBER. The implementations build on the existing QM/MM functionality in each code, extending it to allow for redefinition of the QM and MM regions during the simulation and reducing QM-MM interface errors by discarding forces near the boundary according to the buffered force-mixing approach. New adaptive thermostats, needed by force-mixing methods, are also implemented. Different variants of the method are benchmarked by simulating the structure of bulk water, water autoprotolysis in the presence of zinc and dimethyl-phosphate hydrolysis using various semiempirical Hamiltonians and density functional theory as the QM model. It is shown that with suitable parameters, based on force convergence tests, the AdBF QM/MM scheme can provide an accurate approximation of the structure in the dynamical QM region matching the corresponding fully QM simulations, as well as reproducing the correct energetics in all cases. Adaptive unbuffered force-mixing and adaptive conventional QM/MM methods also provide reasonable results for some systems, but are more likely to suffer from instabilities and inaccuracies.
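    The core idea of buffered force mixing can be sketched schematically: atoms inside the core region take forces from the QM calculation, while atoms in the surrounding buffer, whose QM forces are contaminated by the artificial QM/MM boundary, have those forces discarded in favor of MM forces. This is a conceptual toy, not the CP2K/AMBER implementation, and all numbers are hypothetical.

```python
# Schematic of buffered force mixing (NOT the actual CP2K/AMBER code):
# forces on atoms inside the core region come from the QM calculation;
# buffer atoms near the boundary have their (boundary-contaminated)
# QM forces discarded in favor of MM forces.

def mix_forces(dist_from_center, f_qm, f_mm, r_core):
    """Per-atom forces: QM inside the core, MM elsewhere.  Buffer atoms
    exist only to converge the QM forces on the interior atoms."""
    return [fq if d <= r_core else fm
            for d, fq, fm in zip(dist_from_center, f_qm, f_mm)]

dists = [0.5, 1.5, 2.5, 3.5]      # hypothetical distances (Angstrom)
f_qm = [1.0, 1.1, 0.9, 0.7]       # hypothetical QM forces
f_mm = [1.2, 1.2, 1.2, 1.2]       # hypothetical MM forces
mixed = mix_forces(dists, f_qm, f_mm, r_core=2.0)
```

    Because the mixed force field is not the gradient of a single energy function, the method needs the adaptive thermostats mentioned in the abstract to keep the dynamics stable.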

  5. Paraxial diffractive elements for space-variant linear transforms

    NASA Astrophysics Data System (ADS)

    Teiwes, Stephan; Schwarzer, Heiko; Gu, Ben-Yuan

    1998-06-01

    Optical linear transform architectures bear good potential for future developments of very powerful hybrid vision systems and neural network classifiers. The optical modules of such systems could be used as pre-processors to solve complex linear operations at very high speed in order to simplify electronic data post-processing. However, the applicability of linear optical architectures is strongly connected with the fundamental question of how to implement a specific linear transform by optical means within physical limitations. The large majority of publications on this topic focuses on the optical implementation of space-invariant transforms by the well-known 4f-setup. Only few papers deal with approaches to implement selected space-variant transforms. In this paper, we propose a simple algebraic method to design diffractive elements for an optical architecture in order to realize arbitrary space-variant transforms. The design procedure is based on a digital model of scalar, paraxial wave theory and leads to optimal element transmission functions within the model. Its computational and physical limitations are discussed in terms of complexity measures. Finally, the design procedure is demonstrated by some examples. Firstly, diffractive elements for the realization of different rotation operations are computed and, secondly, a Hough transform element is presented. The correct optical functions of the elements are proved in computer simulation experiments.
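    The algebraic distinction behind the abstract can be sketched: a space-invariant transform is a circular convolution, whose matrix is circulant and is diagonalized by the DFT (the 4f setup), whereas a space-variant transform is an arbitrary matrix y = Ax with no such structure. The check below verifies the circulant/DFT equivalence numerically; the kernel is illustrative.

```python
# A space-invariant linear transform is a circular convolution: its
# matrix is circulant and is diagonalized by the DFT (the 4f setup).
# A space-variant transform is an arbitrary matrix with no such
# structure.  Here we verify the convolution theorem numerically.
import cmath

def dft(x, inverse=False):
    N = len(x); s = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / N)
               for n in range(N)) for k in range(N)]
    return [v / N for v in out] if inverse else out

def circular_conv(x, h):
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
h = [0.5, 0.25, 0.0, 0.25]     # illustrative space-invariant kernel
direct = circular_conv(x, h)
via_fft = [v.real for v in
           dft([a * b for a, b in zip(dft(x), dft(h))], inverse=True)]
```

    An arbitrary (space-variant) A cannot be factored this way, which is why it requires the specially designed diffractive element transmission functions proposed in the paper.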

  7. The adaptive buffered force QM/MM method in the CP2K and AMBER software packages

    PubMed Central

    Mones, Letif; Jones, Andrew; Götz, Andreas W; Laino, Teodoro; Walker, Ross C; Leimkuhler, Ben; Csányi, Gábor; Bernstein, Noam

    2015-01-01

    The implementation and validation of the adaptive buffered force (AdBF) quantum-mechanics/molecular-mechanics (QM/MM) method in two popular packages, CP2K and AMBER are presented. The implementations build on the existing QM/MM functionality in each code, extending it to allow for redefinition of the QM and MM regions during the simulation and reducing QM-MM interface errors by discarding forces near the boundary according to the buffered force-mixing approach. New adaptive thermostats, needed by force-mixing methods, are also implemented. Different variants of the method are benchmarked by simulating the structure of bulk water, water autoprotolysis in the presence of zinc and dimethyl-phosphate hydrolysis using various semiempirical Hamiltonians and density functional theory as the QM model. It is shown that with suitable parameters, based on force convergence tests, the AdBF QM/MM scheme can provide an accurate approximation of the structure in the dynamical QM region matching the corresponding fully QM simulations, as well as reproducing the correct energetics in all cases. Adaptive unbuffered force-mixing and adaptive conventional QM/MM methods also provide reasonable results for some systems, but are more likely to suffer from instabilities and inaccuracies. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:25649827

  8. The adaptive buffered force QM/MM method in the CP2K and AMBER software packages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mones, Letif; Jones, Andrew; Götz, Andreas W.

    We present the implementation and validation of the adaptive buffered force (AdBF) quantum-mechanics/molecular-mechanics (QM/MM) method in two popular packages, CP2K and AMBER. The implementations build on the existing QM/MM functionality in each code, extending it to allow for redefinition of the QM and MM regions during the simulation and reducing QM-MM interface errors by discarding forces near the boundary according to the buffered force-mixing approach. New adaptive thermostats, needed by force-mixing methods, are also implemented. Different variants of the method are benchmarked by simulating the structure of bulk water, water autoprotolysis in the presence of zinc and dimethyl-phosphate hydrolysis using various semiempirical Hamiltonians and density functional theory as the QM model. It is shown that with suitable parameters, based on force convergence tests, the AdBF QM/MM scheme can provide an accurate approximation of the structure in the dynamical QM region matching the corresponding fully QM simulations, as well as reproducing the correct energetics in all cases. Adaptive unbuffered force-mixing and adaptive conventional QM/MM methods also provide reasonable results for some systems, but are more likely to suffer from instabilities and inaccuracies.

  9. Advanced RF and microwave functions based on an integrated optical frequency comb source.

    PubMed

    Xu, Xingyuan; Wu, Jiayang; Nguyen, Thach G; Shoeiby, Mehrdad; Chu, Sai T; Little, Brent E; Morandotti, Roberto; Mitchell, Arnan; Moss, David J

    2018-02-05

    We demonstrate advanced transversal radio frequency (RF) and microwave functions based on a Kerr optical comb source generated by an integrated micro-ring resonator. We achieve extremely high performance for an optical true time delay aimed at tunable phased array antenna applications, as well as reconfigurable microwave photonic filters. Our results agree well with theory. We show that our true time delay would yield a phased array antenna with features that include high angular resolution and a wide range of beam steering angles, while the microwave photonic filters feature high Q factors, wideband tunability, and highly reconfigurable filtering shapes. These results show that our approach is a competitive solution to implementing reconfigurable, high performance and potentially low cost RF and microwave signal processing functions for applications including radar and communication systems.
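A comb-based microwave photonic filter of the kind described is, at the signal-processing level, a transversal filter: each comb line supplies one tap with weight a_n and an incremental delay T, giving the response H(f) = Σ a_n exp(-j2πfnT). A minimal sketch (tap count, weights and spacing are illustrative, not the paper's values):

```python
import numpy as np

# Frequency response of an N-tap transversal filter, the signal-processing
# core of comb-based microwave photonic filters: each comb line provides one
# tap of weight taps[n] delayed by n*T.
def transversal_response(taps, T, freqs):
    n = np.arange(len(taps))
    return np.array([np.sum(taps * np.exp(-2j * np.pi * f * n * T))
                     for f in freqs])

taps = np.ones(8) / 8            # uniform weights -> sinc-like passband
T = 59e-12                       # 59 ps tap spacing (illustrative value)
freqs = np.linspace(0.0, 1.0 / T, 501)
H = np.abs(transversal_response(taps, T, freqs))
```

The response is periodic with free spectral range 1/T, and reconfigurability comes from reprogramming the tap weights (comb-line powers) rather than the hardware.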

  10. Vanadium impurity effects on optical properties of Ti3N2 mono-layer: An ab-initio study

    NASA Astrophysics Data System (ADS)

    Babaeipour, Manuchehr; Eslam, Farzaneh Ghafari; Boochani, Arash; Nezafat, Negin Beryani

    2018-06-01

    The present work investigates the effect of vanadium impurity on the electronic and optical properties of a Ti3N2 monolayer using density functional theory (DFT) as implemented in the Wien2k code. To study the optical properties for two photon polarization directions, namely E||x and E||z, the dielectric function, absorption coefficient, optical conductivity, refraction index, extinction index, reflectivity, and energy loss function of the Ti3N2 and Ti3N2-V monolayers have been evaluated within the GGA (PBE) approximation. Although the Ti3N2 monolayer is a good infrared reflector and can be used as an infrared mirror, introducing a V atom decreases the optical conductivity in the infrared region, because the optical conductivity of the pure material is higher than that of its doped form.
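All of the listed optical quantities follow from the complex dielectric function ε = ε1 + iε2; a sketch of the standard relations for one photon energy (the values used here are arbitrary, not the Ti3N2 results):

```python
import math

# Standard optical constants derived from eps = eps1 + i*eps2 at a single
# photon energy; a DFT optics post-processing step evaluates the same
# relations on an energy grid for each polarization direction.
def optical_constants(eps1, eps2):
    mod = math.hypot(eps1, eps2)                         # |eps|
    n = math.sqrt((mod + eps1) / 2.0)                    # refraction index
    k = math.sqrt((mod - eps1) / 2.0)                    # extinction index
    R = ((n - 1.0)**2 + k**2) / ((n + 1.0)**2 + k**2)    # reflectivity
    L = eps2 / (eps1**2 + eps2**2)                       # energy loss Im(-1/eps)
    return n, k, R, L

n, k, R, L = optical_constants(4.0, 3.0)   # illustrative eps1, eps2
```

A quick consistency check is that n² − k² = ε1 and 2nk = ε2 by construction.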

  11. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    PubMed

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al(2)O(3)) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.
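The benefit of a block-sparsity pattern is that whole zero blocks are never stored or multiplied; a toy dictionary-of-blocks product illustrates the idea (this is a conceptual sketch, not the ONETEP hierarchical data structure):

```python
import numpy as np

# Toy block-sparse matrix product: a matrix is a dict mapping
# (block_row, block_col) -> dense block. Only stored (nonzero) blocks
# participate, so cost scales with the number of nonzero blocks rather
# than with the full matrix dimension.
def block_sparse_matmul(A, B, nblocks, bs):
    C = {}
    for (i, k), Ablk in A.items():
        for j in range(nblocks):
            Bblk = B.get((k, j))
            if Bblk is not None:
                C[(i, j)] = C.get((i, j), np.zeros((bs, bs))) + Ablk @ Bblk
    return C

bs, nblocks = 2, 3
A = {(0, 0): np.eye(bs), (1, 2): 2.0 * np.eye(bs)}
B = {(0, 1): np.ones((bs, bs)), (2, 0): np.eye(bs)}
C = block_sparse_matmul(A, B, nblocks, bs)   # only two product blocks survive
```

In a linear-scaling DFT code the block pattern is fixed by basis-function locality, so the number of nonzero blocks per row is bounded and the total cost grows linearly with system size.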

  12. Benchmark coupled-cluster g-tensor calculations with full inclusion of the two-particle spin-orbit contributions.

    PubMed

    Perera, Ajith; Gauss, Jürgen; Verma, Prakash; Morales, Jorge A

    2017-04-28

    We present a parallel implementation to compute electron spin resonance g-tensors at the coupled-cluster singles and doubles (CCSD) level which employs the ACES III domain-specific software tools for scalable parallel programming, i.e., the super instruction architecture language and processor (SIAL and SIP), respectively. A unique feature of the present implementation is the exact (not approximated) inclusion of the five one- and two-particle contributions to the g-tensor [i.e., the mass correction, one- and two-particle paramagnetic spin-orbit, and one- and two-particle diamagnetic spin-orbit terms]. Like a previous implementation with effective one-electron operators [J. Gauss et al., J. Phys. Chem. A 113, 11541-11549 (2009)], our implementation utilizes analytic CC second derivatives and, therefore, classifies as a true CC linear-response treatment. As such, it can unambiguously appraise the accuracy of less costly effective one-particle schemes and provide a rationale for their widespread use. We have considered a large selection of radicals used previously for benchmarking purposes, including those studied in earlier work, and conclude that at the CCSD level the effective one-particle scheme satisfactorily captures the two-particle effects at a lower cost than the rigorous two-particle scheme. With respect to the performance of density functional theory (DFT), we note that results obtained with the B3LYP functional exhibit the best agreement with our CCSD results. However, in general, the CCSD results agree better with the experimental data than the best DFT/B3LYP results, although in most cases within the rather large experimental error bars.

  13. Implementation of the diagonalization-free algorithm in the self-consistent field procedure within the four-component relativistic scheme.

    PubMed

    Hrdá, Marcela; Kulich, Tomáš; Repiský, Michal; Noga, Jozef; Malkina, Olga L; Malkin, Vladimir G

    2014-09-05

    A recently developed Thouless-expansion-based diagonalization-free approach for improving the efficiency of self-consistent field (SCF) methods (Noga and Šimunek, J. Chem. Theory Comput. 2010, 6, 2706) has been adapted to the four-component relativistic scheme and implemented within the program package ReSpect. In addition to the implementation, the method has been thoroughly analyzed, particularly with respect to cases for which it is difficult or computationally expensive to find a good initial guess. Based on this analysis, several modifications of the original algorithm, refining its stability and efficiency, are proposed. To demonstrate the robustness and efficiency of the improved algorithm, we present the results of four-component diagonalization-free SCF calculations on several heavy-metal complexes, the largest of which contains more than 80 atoms (about 6000 4-spinor basis functions). The diagonalization-free procedure is about twice as fast as the corresponding diagonalization. Copyright © 2014 Wiley Periodicals, Inc.
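A much simpler, classic illustration of the diagonalization-free idea (distinct from the Thouless-expansion scheme of this record) is McWeeny purification, which drives a near-idempotent density matrix to idempotency using only matrix multiplications:

```python
import numpy as np

# McWeeny purification: P -> 3P^2 - 2P^3 pushes the eigenvalues of a
# near-idempotent density matrix towards 0 and 1 using only matrix products,
# i.e. without ever diagonalizing. This is a classic diagonalization-free
# tool, shown here purely to illustrate the concept.
def mcweeny_purify(P, iters=30):
    for _ in range(iters):
        P2 = P @ P
        P = 3.0 * P2 - 2.0 * P2 @ P
    return P

# Start from a slightly perturbed projector (eigenvalues near 0 and 1).
P0 = np.diag([0.95, 0.90, 0.10, 0.05])
P = mcweeny_purify(P0)   # converges to diag(1, 1, 0, 0)
```

The appeal in both cases is the same: matrix products parallelize and exploit sparsity far better than dense diagonalization, which is why such schemes pay off for large 4-spinor bases.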

  14. A mathematical description of the inclusive fitness theory.

    PubMed

    Wakano, Joe Yuichiro; Ohtsuki, Hisashi; Kobayashi, Yutaka

    2013-03-01

    Recent developments in the inclusive fitness theory have revealed that the direction of evolution can be analytically predicted in a wider class of models than previously thought, such as those models dealing with network structure. This paper aims to provide a mathematical description of the inclusive fitness theory. Specifically, we provide a general framework based on a Markov chain that can implement basic models of inclusive fitness. Our framework is based on the probability distribution of the "offspring-to-parent map", from which the key concepts of the theory, such as fitness function, relatedness and inclusive fitness, are derived in a straightforward manner. We prove theorems showing that inclusive fitness always provides a correct prediction on which of two competing genes more frequently appears in the long run in the Markov chain. As an application of the theorems, we prove a general formula for the optimal dispersal rate in Wright's island model with recurrent mutations. We also show the existence of a critical mutation rate, which does not depend on the number of islands and below which a positive dispersal rate evolves. Our framework can also be applied to lattice or network structured populations. Copyright © 2012 Elsevier Inc. All rights reserved.
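The simplest textbook Markov-chain model of two competing genes is the Moran process, whose fixation probability has a closed form; it is far less general than the offspring-to-parent-map framework above, but shows concretely what "which gene wins in the long run" means:

```python
# Fixation probability of a single mutant of relative fitness r in a Moran
# process on N individuals -- a minimal Markov-chain model of two competing
# genes (a textbook illustration, not the framework of the paper above).
def moran_fixation(r, N):
    if abs(r - 1.0) < 1e-12:                 # neutral mutant: 1/N
        return 1.0 / N
    return (1.0 - 1.0 / r) / (1.0 - 1.0 / r**N)

p_neutral = moran_fixation(1.0, 100)   # neutral gene: fixes with prob 1/N
p_adv = moran_fixation(1.1, 100)       # advantageous gene: fixes more often
```

In richer models such as the island model, no such closed form exists in general, which is where the paper's theorems about long-run gene frequencies do the work.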

  15. Enhancing the accuracy of the Fowler method for monitoring non-constant work functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedl, R., E-mail: roland.friedl@physik.uni-augsburg.de

    2016-04-15

    The Fowler method is a prominent non-invasive technique to determine the absolute work function of a surface based on the photoelectric effect. The evaluation procedure relies on the correlation of the photocurrent with the incident photon energy hν, which is mainly dependent on the surface work function χ. Applying Fowler’s theory of the photocurrent, the measurements can be fitted by the theoretical curve near the threshold hν⪆χ, yielding the work function χ and a parameter A. The straightforward experimental implementation of the Fowler method is to use several particular photon energies, e.g. via interference filters. However, with such a realization the restriction hν ≈ χ can easily be violated, especially when the work function of the material is decreasing during the measurements as, for instance, with coating or adsorption processes. This can lead to an overestimation of the evaluated work function value of typically some 0.1 eV, reaching up to more than 0.5 eV in unfavorable cases. A detailed analysis of the Fowler theory now reveals the background of that effect and shows that the fit parameter A can be used to assess the accuracy of the determined value of χ conveniently during the measurements. Moreover, a scheme is introduced to quantify a potential overestimation and to perform a correction to χ to a certain extent. These issues are demonstrated exemplarily by monitoring the work function reduction of a stainless steel sample surface due to caesiation.
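The curve fitted in the Fowler method is the universal Fowler function F(μ), μ = (hν − χ)/k_BT, with the photocurrent behaving as Y ∝ A·T²·F(μ) near threshold. A sketch of its standard two-branch series form (textbook expressions, not taken from this record):

```python
import math

# Fowler's universal function F(mu), mu = (h*nu - chi) / (k_B * T), in its
# standard two series branches; the branches join smoothly at mu = 0, where
# both give pi^2 / 12. The photocurrent near threshold is Y ~ A * T^2 * F(mu).
def fowler_F(mu, nmax=1000):
    if mu <= 0:
        return sum((-1)**(n + 1) * math.exp(n * mu) / n**2
                   for n in range(1, nmax))
    return (math.pi**2 / 6.0 + mu**2 / 2.0
            - sum((-1)**(n + 1) * math.exp(-n * mu) / n**2
                  for n in range(1, nmax)))
```

Fitting ln(Y/T²) against hν with this curve yields χ and A; the record's point is that the quality of fit parameter A flags when the hν ≈ χ restriction is being violated.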

  16. Prospect theory in the valuation of health.

    PubMed

    Moffett, Maurice L; Suarez-Almazor, Maria E

    2005-08-01

    Prospect theory is the prominent nonexpected utility theory in the estimation of health state preference scores for quality-adjusted life year calculation. Until recently, the theory was not considered to be developed to the point of implementation in economic analysis. This review focuses on the research and evidence that tests the implementation of prospect theory into health state valuation. The typical application of expected utility theory assumes that a decision maker has stable preferences under conditions of risk and uncertainty. Under prospect theory, preferences depend on whether the decision maker regards the outcome of a choice as a gain or a loss relative to a reference point. The conceptual preference for standard gamble utilities in the valuation of health states has led to the development of elicitation techniques. Empirical evidence using these techniques indicates that when individual preferences are elicited, a prospect theory consistent framework appears to be necessary for adequate representation of individual health utilities. The relevance of prospect theory to policy making and resource allocation remains to be established. Societal preferences need not reflect the same attitudes towards risk as individual preferences, and may remain largely risk neutral.
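The gain/loss asymmetry around a reference point can be made concrete with the Kahneman-Tversky value and probability-weighting functions; the parameter values below are their widely cited 1992 estimates, used here only for illustration:

```python
# Prospect-theory value function (gains valued as x^alpha, losses as
# -lambda * (-x)^beta relative to a reference point of 0) and the inverse-S
# probability-weighting function; parameters are the Tversky-Kahneman (1992)
# estimates, used illustratively.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    return x**alpha if x >= 0 else -lam * (-x)**beta

def weight(p, gamma=0.61):
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

gain = value(100.0)     # subjective value of gaining 100
loss = value(-100.0)    # an equal loss looms more than twice as large
```

Under expected utility the same outcome would be valued identically regardless of framing, which is exactly the assumption the elicitation evidence above calls into question.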

  17. Non-hard sphere thermodynamic perturbation theory.

    PubMed

    Zhou, Shiqi

    2011-08-21

    A non-hard sphere (HS) perturbation scheme, recently advanced by the present author, is elaborated for several technical matters, which are key mathematical details for implementation of the non-HS perturbation scheme in a coupling parameter expansion (CPE) thermodynamic perturbation framework. NVT-Monte Carlo simulation is carried out for a generalized Lennard-Jones (LJ) 2n-n potential to obtain routine thermodynamic quantities such as excess internal energy, pressure, excess chemical potential, excess Helmholtz free energy, and excess constant-volume heat capacity. Then, these new simulation data, together with available simulation data in the literature on a hard-core attractive Yukawa fluid and a Sutherland fluid, are used to test the non-HS CPE 3rd-order thermodynamic perturbation theory (TPT) and give a comparison between the non-HS CPE 3rd-order TPT and other theoretical approaches. It is indicated that the non-HS CPE 3rd-order TPT is superior to other traditional TPT such as van der Waals/HS (vdW/HS), perturbation theory 2 (PT2)/HS, and vdW/Yukawa (vdW/Y) theory, or analytical equations of state such as the mean spherical approximation (MSA) equation of state, and is at least comparable to several of the currently most accurate Ornstein-Zernike integral equation theories. It is discovered that three technical issues, i.e., devising a new bridge function approximation for the reference potential, choosing a proper reference potential, and/or using a proper thermodynamic route for the calculation of f(ex-ref), chiefly decide the quality of the non-HS CPE TPT. Considering that the non-HS perturbation scheme applies to a wide variety of model fluids, and that its implementation in the CPE thermodynamic perturbation framework is amenable to high-order truncation, the non-HS CPE 3rd-order or higher-order TPT will be even more promising once the above-mentioned three technical advances are established. © 2011 American Institute of Physics.
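The generalized LJ 2n-n potential mentioned above is commonly written so that its minimum sits at r_m with depth −ε; a sketch of that standard form (n = 6 recovers the familiar 12-6 potential), with parameter values chosen purely for illustration:

```python
# Generalized Lennard-Jones 2n-n potential in the common form
# u(r) = eps * [(r_m / r)^(2n) - 2 * (r_m / r)^n],
# which has its minimum -eps at r = r_m; n = 6 gives the usual LJ 12-6.
def lj_2n_n(r, n=6, eps=1.0, r_m=1.0):
    x = (r_m / r) ** n
    return eps * (x * x - 2.0 * x)

u_min = lj_2n_n(1.0)        # -eps at the minimum r = r_m
u_far = lj_2n_n(2.0)        # weakly attractive tail
```

In a coupling-parameter-expansion framework, the full potential is split into a soft (non-hard-sphere) reference part plus a perturbation, and thermodynamic quantities are expanded in the coupling parameter that switches the perturbation on.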

  18. Implementation of a Smeared Crack Band Model in a Micromechanics Framework

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Bednarcyk, Brett A.; Waas, Anthony M.; Arnold, Steven M.

    2012-01-01

    The smeared crack band theory is implemented within the generalized method of cells and high-fidelity generalized method of cells micromechanics models to capture progressive failure within the constituents of a composite material while retaining objectivity with respect to the size of the discretization elements used in the model. A repeating unit cell containing 13 randomly arranged fibers is modeled and subjected to a combination of transverse tension/compression and transverse shear loading. The implementation is verified against experimental data (where available) and against an equivalent finite element model utilizing the same implementation of the crack band theory. To evaluate the performance of the crack band theory within a repeating unit cell that is more amenable to a multiscale implementation, a single fiber is modeled with generalized method of cells and high-fidelity generalized method of cells using a relatively coarse subcell mesh which is subjected to the same loading scenarios as the multiple fiber repeating unit cell. The generalized method of cells and high-fidelity generalized method of cells models are validated against a very refined finite element model.
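The mesh objectivity that the crack band approach provides can be demonstrated in a few lines: scaling the softening strain by the element (subcell) size keeps the energy dissipated per unit crack area fixed at the fracture toughness G_f. The numbers below are illustrative, not from this report, and a simple triangular softening law is assumed:

```python
# Mesh objectivity in a smeared crack band: for a triangular softening law,
# choosing the failure strain eps_f = 2 * G_f / (sigma_c * l_e) makes the
# dissipated energy per unit crack area equal G_f for ANY element size l_e.
def dissipated_energy_per_area(sigma_c, G_f, l_e):
    eps_f = 2.0 * G_f / (sigma_c * l_e)   # size-adjusted softening strain
    g_f = 0.5 * sigma_c * eps_f           # energy per unit volume (triangle)
    return g_f * l_e                      # energy per unit crack area

# Illustrative strength 50 and toughness 0.2, three very different meshes:
energies = [dissipated_energy_per_area(50.0, 0.2, l) for l in (0.1, 0.5, 2.0)]
```

Without this scaling, refining the mesh would localize failure into ever-smaller volumes and spuriously reduce the dissipated energy, which is the objectivity problem the record refers to.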

  19. Micromechanics-Based Progressive Failure Analysis of Composite Laminates Using Different Constituent Failure Theories

    NASA Technical Reports Server (NTRS)

    Moncada, Albert M.; Chattopadhyay, Aditi; Bednarcyk, Brett A.; Arnold, Steven M.

    2008-01-01

    Predicting failure in a composite can be done with ply level mechanisms and/or micro level mechanisms. This paper uses the Generalized Method of Cells and High-Fidelity Generalized Method of Cells micromechanics theories, coupled with classical lamination theory, as implemented within NASA's Micromechanics Analysis Code with Generalized Method of Cells. The code is able to implement different failure theories on the level of both the fiber and the matrix constituents within a laminate. A comparison is made among maximum stress, maximum strain, Tsai-Hill, and Tsai-Wu failure theories. To verify the failure theories the Worldwide Failure Exercise (WWFE) experiments have been used. The WWFE is a comprehensive study that covers a wide range of polymer matrix composite laminates. The numerical results indicate good correlation with the experimental results for most of the composite layups, but also point to the need for more accurate resin damage progression models.
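Two of the compared criteria can be sketched for an in-plane stress state (σ1, σ2, τ12); the strength values below are illustrative placeholders, not WWFE material data, and failure is predicted when an index reaches 1:

```python
# Maximum-stress and Tsai-Hill failure indices for an in-plane ply stress
# state; strengths (Xt, Xc, Yt, Yc, S as positive magnitudes) are
# illustrative, not WWFE data. An index of 1 marks predicted failure.
def max_stress(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    i1 = s1 / Xt if s1 >= 0 else -s1 / Xc   # fiber-direction tension/compression
    i2 = s2 / Yt if s2 >= 0 else -s2 / Yc   # transverse tension/compression
    return max(i1, i2, abs(t12) / S)        # governed by the worst mode

def tsai_hill(s1, s2, t12, X, Y, S):
    # Interactive criterion: all stress components contribute to one index.
    return (s1 / X)**2 - s1 * s2 / X**2 + (s2 / Y)**2 + (t12 / S)**2

strengths = dict(Xt=1500.0, Xc=1200.0, Yt=50.0, Yc=200.0, S=70.0)
i_ms = max_stress(750.0, 25.0, 35.0, **strengths)
i_th = tsai_hill(750.0, 25.0, 35.0, X=1500.0, Y=50.0, S=70.0)
```

In a micromechanics code such criteria are evaluated constituent by constituent (fiber and matrix separately), which is what distinguishes the approach above from purely ply-level failure prediction.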

  20. The role of theory in research to develop and evaluate the implementation of patient safety practices.

    PubMed

    Foy, Robbie; Ovretveit, John; Shekelle, Paul G; Pronovost, Peter J; Taylor, Stephanie L; Dy, Sydney; Hempel, Susanne; McDonald, Kathryn M; Rubenstein, Lisa V; Wachter, Robert M

    2011-05-01

    Theories provide a way of understanding and predicting the effects of patient safety practices (PSPs), interventions intended to prevent or mitigate harm caused by healthcare or risks of such harm. Yet most published evaluations make little or no explicit reference to theory, thereby hindering efforts to generalise findings from one context to another. Theories from a wide range of disciplines are potentially relevant to research on PSPs. Theory can be used in research to explain clinical and organisational behaviour, to guide the development and selection of PSPs, and in evaluating their implementation and mechanisms of action. One key recommendation from an expert consensus process is that researchers should describe the theoretical basis for chosen intervention components or provide an explicit logic model for 'why this PSP should work.' Future theory-driven evaluations would enhance generalisability and help build a cumulative understanding of the nature of change.
