Sample records for feynman variational principle

  1. Feynman’s clock, a new variational principle, and parallel-in-time quantum dynamics

    PubMed Central

    McClean, Jarrod R.; Parkhill, John A.; Aspuru-Guzik, Alán

    2013-01-01

    We introduce a discrete-time variational principle inspired by the quantum clock originally proposed by Feynman and use it to write down quantum evolution as a ground-state eigenvalue problem. The construction allows one to apply ground-state quantum many-body theory to quantum dynamics, extending the reach of many highly developed tools from this fertile research area. Moreover, this formalism naturally leads to an algorithm to parallelize quantum simulation over time. We draw an explicit connection between previously known time-dependent variational principles and the time-embedded variational principle presented. Sample calculations are presented, applying the idea to a hydrogen molecule and the spin degrees of freedom of a model inorganic compound, demonstrating the parallel speedup of our method as well as its flexibility in applying ground-state methodologies. Finally, we take advantage of the unique perspective of this variational principle to examine the error of basis approximations in quantum dynamics. PMID:24062428
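    A minimal sketch of the clock construction described in this record, assuming a single-qubit evolution under a fixed rotation (the unitary, step count, and penalty term are illustrative choices, not the authors' setup): the clock operator is assembled as a matrix whose zero-eigenvalue ground state is the "history state" encoding all time steps at once.

```python
import numpy as np

# Single-qubit evolution: the same rotation U applied at each of T time steps
T, theta = 3, 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])      # real, so U^dagger = U.T
psi0 = np.array([1.0, 0.0])
d, nT = 2, T + 1
I2 = np.eye(d)

def clock(t, s):
    m = np.zeros((nT, nT))
    m[t, s] = 1.0                # |t><s| on the clock register
    return m

# Penalty fixing the system to psi0 at clock time 0
C = np.kron(I2 - np.outer(psi0, psi0), clock(0, 0))
# Propagation terms tying clock time t to t+1 via U
for t in range(T):
    C += 0.5 * (np.kron(I2, clock(t, t)) + np.kron(I2, clock(t + 1, t + 1))
                - np.kron(U, clock(t + 1, t)) - np.kron(U.T, clock(t, t + 1)))

evals, evecs = np.linalg.eigh(C)
history = evecs[:, 0]            # ground state = history state, eigenvalue ~ 0
# Slice out the system state at the final clock time and compare to direct evolution
phi = history.reshape(d, nT)[:, T]
phi /= np.linalg.norm(phi)
overlap = abs(phi @ (np.linalg.matrix_power(U, T) @ psi0))
print(evals[0], overlap)
```

    The ground-state eigenvalue is numerically zero and the overlap with the directly evolved state is 1, confirming that solving a ground-state problem reproduces the dynamics.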

  2. Huygens-Feynman-Fresnel principle as the basis of applied optics.

    PubMed

    Gitin, Andrey V

    2013-11-01

    The main relationships of wave optics are derived from a combination of the Huygens-Fresnel principle and the Feynman integral over all paths. The stationary-phase approximation of the wave relations gives the corresponding relations from the point of view of geometrical optics.

  3. Quantization of Non-Lagrangian Systems

    NASA Astrophysics Data System (ADS)

    Kochan, Denis

    A novel method for quantization of non-Lagrangian (open) systems is proposed. It is argued that the essential object, which provides both classical and quantum evolution, is a certain canonical two-form defined in extended velocity space. In this setting classical dynamics is recovered from a stringy-type variational principle, which employs umbilical surfaces instead of histories of the system. Quantization is then accomplished in accordance with the introduced variational principle. The path integral for the transition probability amplitude (propagator) is rearranged into a surface functional integral. In the standard case of closed (Lagrangian) systems the presented method reduces to Feynman's standard approach. The inverse problem of the calculus of variations, the problem of quantization ambiguity, and quantum mechanics in the presence of friction are analyzed in detail.

  4. Evaluating Feynman integrals by the hypergeometry

    NASA Astrophysics Data System (ADS)

    Feng, Tai-Fu; Chang, Chao-Hsi; Chen, Jian-Bin; Gu, Zhi-Hua; Zhang, Hai-Bin

    2018-02-01

    The hypergeometric function method naturally provides analytic expressions for the scalar integrals of the relevant Feynman diagrams in some connected regions of the independent kinematic variables, and also yields the systems of homogeneous linear partial differential equations satisfied by those scalar integrals. Taking as examples the one-loop B0 and massless C0 functions, as well as the scalar integrals of the two-loop vacuum and sunset diagrams, we verify that our expressions coincide with well-known results in the literature. Based on the multiple hypergeometric functions of the independent kinematic variables, the systems of homogeneous linear partial differential equations satisfied by the mentioned scalar integrals are established. Using the calculus of variations, one recognizes the system of linear partial differential equations as the stationary conditions of a functional under some given restrictions, which is the cornerstone for numerically continuing the scalar integrals to the whole kinematic domain with finite element methods. In principle this method can be used to evaluate the scalar integrals of any Feynman diagram.

  5. Feynman diagrams and rooted maps

    NASA Astrophysics Data System (ADS)

    Prunotto, Andrea; Alberico, Wanda Maria; Czerski, Piotr

    2018-04-01

    The rooted maps theory, a branch of the theory of homology, is shown to be a powerful tool for investigating the topological properties of Feynman diagrams, related to the single particle propagator in the quantum many-body systems. The numerical correspondence between the number of this class of Feynman diagrams as a function of perturbative order and the number of rooted maps as a function of the number of edges is studied. A graphical procedure to associate Feynman diagrams and rooted maps is then stated. Finally, starting from rooted maps principles, an original definition of the genus of a Feynman diagram, which totally differs from the usual one, is given.
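    For context on rooted-map enumeration: Tutte's classical formula counts rooted planar maps with n edges. The paper above works with rooted maps of arbitrary genus, so these planar counts are illustrative of rooted-map counting only, not the sequence matched there to propagator diagrams.

```python
from math import factorial

def rooted_planar_maps(n):
    # Tutte's formula: 2 * 3^n * (2n)! / (n! * (n+2)!)
    return 2 * 3**n * factorial(2 * n) // (factorial(n) * factorial(n + 2))

counts = [rooted_planar_maps(n) for n in range(6)]
print(counts)  # [1, 2, 9, 54, 378, 2916]
```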

  6. What Feynman Could Not yet Use: The Generalised Hong-Ou-Mandel Experiment to Improve the QED Explanation of the Pauli Exclusion Principle

    ERIC Educational Resources Information Center

    Malgieri, Massimiliano; Tenni, Antonio; Onorato, Pasquale; De Ambrosis, Anna

    2016-01-01

    In this paper we present a reasoning line for introducing the Pauli exclusion principle in the context of an introductory course on quantum theory based on the sum over paths approach. We start from the argument originally introduced by Feynman in "QED: The Strange Theory of Light and Matter" and improve it by discussing with students…

  7. Feynman propagators on static spacetimes

    NASA Astrophysics Data System (ADS)

    Dereziński, Jan; Siemssen, Daniel

    We consider the Klein-Gordon equation on a static spacetime and minimally coupled to a static electromagnetic potential. We show that it is essentially self-adjoint on C_c^∞. We discuss various distinguished inverses and bisolutions of the Klein-Gordon operator, focusing on the so-called Feynman propagator. We show that the Feynman propagator can be considered the boundary value of the resolvent of the Klein-Gordon operator, in the spirit of the limiting absorption principle known from the theory of Schrödinger operators. We also show that the Feynman propagator is the limit of the inverse of the Wick rotated Klein-Gordon operator.

  8. Matter-wave diffraction approaching limits predicted by Feynman path integrals for multipath interference

    NASA Astrophysics Data System (ADS)

    Barnea, A. Ronny; Cheshnovsky, Ori; Even, Uzi

    2018-02-01

    Interference experiments have been paramount in our understanding of quantum mechanics and are frequently the basis of testing the superposition principle in the framework of quantum theory. In recent years, several studies have challenged the nature of wave-function interference from the perspective of Born's rule—namely, the manifestation of so-called high-order interference terms in a superposition generated by diffraction of the wave functions. Here we present an experimental test of multipath interference in the diffraction of metastable helium atoms, with large-number counting statistics, comparable to photon-based experiments. We use a variation of the original triple-slit experiment and accurate single-event counting techniques to provide a new experimental bound of 2.9 × 10^-5 on the statistical deviation from the commonly approximated null third-order interference term in Born's rule for matter waves. Our value is on the order of the maximal contribution predicted for multipath trajectories by Feynman path integrals.
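    The "null third-order interference term" bounded in this record is Sorkin's parameter, which vanishes identically under Born's rule; a few lines of NumPy make the cancellation explicit (arbitrary complex amplitudes standing in for the three slits):

```python
import numpy as np

rng = np.random.default_rng(0)
# Arbitrary complex amplitudes reaching the detector from slits A, B, C
a, b, c = rng.normal(size=3) + 1j * rng.normal(size=3)

def P(*amps):
    return abs(sum(amps)) ** 2   # Born's rule: probability = |total amplitude|^2

# Sorkin's third-order term: nonzero only if Born's rule is violated
epsilon = P(a, b, c) - P(a, b) - P(a, c) - P(b, c) + P(a) + P(b) + P(c)
print(epsilon)
```

    Two-path interference terms cancel pairwise in this alternating sum, so any statistically significant nonzero value in an experiment would signal a deviation from Born's rule.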

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flego, S.P.; Plastino, A.; Universitat de les Illes Balears and IFISC-CSIC, 07122 Palma de Mallorca

    We explore intriguing links connecting Hellmann-Feynman's theorem to a thermodynamics information-optimizing principle based on Fisher's information measure.

    Highlights:
    - We link a purely quantum mechanical result, the Hellmann-Feynman theorem, with Jaynes' information-theoretical reciprocity relations.
    - These relations involve the coefficients of a series expansion of the potential function.
    - We suggest the existence of a Legendre transform structure behind Schroedinger's equation, akin to the one characterizing thermodynamics.

  10. Variational Principles, Occam Razor and Simplicity Paradox

    NASA Astrophysics Data System (ADS)

    Berezin, Alexander A.

    2004-05-01

    Variational minimum principles (VMP) refer to energy (statics, Thomson and Earnshaw theorems in electrostatics), action (Maupertuis, Euler, Lagrange, Hamilton), light (Fermat), quantum paths (Feynman), etc. Historically, VMP appeal to some economy in nature, similarly to the Occam Razor Parsimony (ORP) principle. Versions of ORP are the "best world" (Leibniz), Panglossianism (Voltaire), and the "most interesting world" (Dyson). Conceptually, VMP exemplify the curious fact that an infinite set is often simpler than its subsets (e.g., the set of all integers is simpler than the set of primes). The algorithmically very simple number 0.1234567... (Champernowne constant) contains the Library of Babel of "all books" (Borges) and codes (infinitely many times) everything countably possible. Likewise, the full Megaverse (Everett, Deutsch, Guth, Linde) is simpler than our specific ("Big Bang") universe. Dynamically, VMP imply memory effects akin to hysteresis. Similar ideas are "water memory" (Benveniste, Josephson) and isotopic biology (Berezin). Paradoxically, while ORP calls for economy (simplicity), the unfolding of ORP in VMP seemingly works in the opposite direction, allowing for complexity emergence (e.g., symmetry breaking in the Jahn-Teller effect). Metaphysical extrapolation of this complementarity may lead to an "it-from-bit" (Wheeler) reflection on why there is something rather than nothing.

  11. Entropy-variation with resistance in a quantized RLC circuit derived by the generalized Hellmann-Feynman theorem

    NASA Astrophysics Data System (ADS)

    Fan, Hong-Yi; Xu, Xue-Xiang; Hu, Li-Yun

    2010-06-01

    By virtue of the generalized Hellmann-Feynman theorem for ensemble averages, we obtain the internal energy and the average energy consumed by the resistance R in a quantized resistance-inductance-capacitance (RLC) electric circuit. We also derive the variation of entropy with R. The figures indeed show that the entropy increases as R increases.
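    The generalized Hellmann-Feynman theorem invoked here extends the basic identity dE/dλ = ⟨ψ|∂H/∂λ|ψ⟩ to ensemble averages; the basic identity itself can be checked numerically on a random Hermitian matrix family (an illustrative toy, not the RLC-circuit calculation of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.normal(size=(n, n)); H0 = (A + A.T) / 2   # fixed Hermitian part
B = rng.normal(size=(n, n)); V = (B + B.T) / 2    # perturbation, dH/dlam = V

def ground(lam):
    w, v = np.linalg.eigh(H0 + lam * V)
    return w[0], v[:, 0]                           # lowest eigenpair of H0 + lam*V

lam, h = 0.7, 1e-6
E, psi = ground(lam)
hf = psi @ V @ psi                                 # Hellmann-Feynman prediction
fd = (ground(lam + h)[0] - ground(lam - h)[0]) / (2 * h)  # central difference
print(abs(hf - fd))
```

    The expectation value of dH/dλ in the eigenstate matches the finite-difference derivative of the eigenvalue to numerical precision.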

  12. New Tools for Forecasting Old Physics at the LHC

    ScienceCinema

    Dixon, Lance

    2018-05-21

    For the LHC to uncover many types of new physics, the "old physics" produced by the Standard Model must be understood very well. For decades, the central theoretical tool for this job was the Feynman diagram expansion. However, Feynman diagrams are just too slow, even on fast computers, to allow adequate precision for complicated LHC events with many jets in the final state. Such events are already visible in the initial LHC data. Over the past few years, alternative methods to Feynman diagrams have come to fruition. These new "on-shell" methods are based on the old principles of unitarity and factorization. They can be much more efficient because they exploit the underlying simplicity of scattering amplitudes, and recycle lower-loop information. I will describe how and why these methods work, and present some of the recent state-of-the-art results that have been obtained with them.

  13. The quantum universe

    NASA Astrophysics Data System (ADS)

    Hey, Anthony J. G.; Walters, Patrick

    This book provides a descriptive, popular account of quantum physics. The basic topics addressed include: waves and particles, the Heisenberg uncertainty principle, the Schroedinger equation and matter waves, atoms and nuclei, quantum tunneling, the Pauli exclusion principle and the elements, quantum cooperation and superfluids, Feynman rules, weak photons, quarks, and gluons. The applications of quantum physics to astrophysics, nuclear technology, and modern electronics are addressed.

  14. Metaphysics of the principle of least action

    NASA Astrophysics Data System (ADS)

    Terekhovich, Vladislav

    2018-05-01

    Despite the importance of the variational principles of physics, there have been relatively few attempts to consider them for a realistic framework. In addition to the old teleological question, this paper continues the recent discussion regarding the modal involvement of the principle of least action and its relations with the Humean view of the laws of nature. The reality of possible paths in the principle of least action is examined from the perspectives of the contemporary metaphysics of modality and Leibniz's concept of essences or possibles striving for existence. I elaborate a modal interpretation of the principle of least action that replaces a classical representation of a system's motion along a single history in the actual modality by simultaneous motions along an infinite set of all possible histories in the possible modality. This model is based on an intuition that deep ontological connections exist between the possible paths in the principle of least action and possible quantum histories in the Feynman path integral. I interpret the action as a physical measure of the essence of every possible history. Therefore only one actual history has the highest degree of the essence and minimal action. To address the issue of necessity, I assume that the principle of least action has a general physical necessity and lies between the laws of motion with a limited physical necessity and certain laws with a metaphysical necessity.

  15. Bold Diagrammatic Monte Carlo for Fermionic and Fermionized Systems

    NASA Astrophysics Data System (ADS)

    Svistunov, Boris

    2013-03-01

    In three different fermionic cases--the repulsive Hubbard model, resonant fermions, and fermionized spins-1/2 (on a triangular lattice)--we observe the phenomenon of sign blessing: the Feynman diagrammatic series features a finite convergence radius despite the factorial growth of the number of diagrams with diagram order. The bold diagrammatic Monte Carlo technique allows us to sample millions of skeleton Feynman diagrams. With the universal fermionization trick we can fermionize essentially any (bosonic, spin, mixed, etc.) lattice system. The combination of fermionization and bold diagrammatic Monte Carlo yields a universal first-principles approach to strongly correlated lattice systems, provided that sign blessing is a generic fermionic phenomenon. Supported by NSF and DARPA.

  16. Capturing nonlocal interaction effects in the Hubbard model: Optimal mappings and limits of applicability

    NASA Astrophysics Data System (ADS)

    van Loon, E. G. C. P.; Schüler, M.; Katsnelson, M. I.; Wehling, T. O.

    2016-10-01

    We investigate the Peierls-Feynman-Bogoliubov variational principle to map Hubbard models with nonlocal interactions to effective models with only local interactions. We study the renormalization of the local interaction induced by nearest-neighbor interaction and assess the quality of the effective Hubbard models in reproducing observables of the corresponding extended Hubbard models. We compare the renormalization of the local interactions as obtained from numerically exact determinant quantum Monte Carlo to approximate but more generally applicable calculations using dual boson, dynamical mean field theory, and the random phase approximation. These more approximate approaches are crucial for any application with real materials in mind. Furthermore, we use the dual boson method to calculate observables of the extended Hubbard models directly and benchmark these against determinant quantum Monte Carlo simulations of the effective Hubbard model.
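    The Peierls-Feynman-Bogoliubov principle used in this record is the variational free-energy bound F ≤ F̃ + ⟨H − H̃⟩_H̃, minimized over a family of trial models. A toy NumPy check (a random Hermitian matrix standing in for the "extended" model, and a one-parameter diagonal trial family; not the Hubbard-model machinery of the paper) verifies that the bound always lies above the exact free energy:

```python
import numpy as np

beta = 2.0
rng = np.random.default_rng(2)
n = 8
A = rng.normal(size=(n, n)); H = (A + A.T) / 2     # the "extended" model

def free_energy(M):
    return -np.log(np.sum(np.exp(-beta * np.linalg.eigvalsh(M)))) / beta

def pfb_bound(u):
    Ht = u * np.diag(np.diag(H))                   # one-parameter "local" trial model
    w, v = np.linalg.eigh(Ht)
    p = np.exp(-beta * w); p /= p.sum()
    rho = (v * p) @ v.T                            # Gibbs state of the trial model
    return free_energy(Ht) + np.trace(rho @ (H - Ht))

F_exact = free_energy(H)
bounds = np.array([pfb_bound(u) for u in np.linspace(0.2, 2.0, 19)])
print(F_exact, bounds.min())
```

    Picking the trial parameter that minimizes the bound is exactly the optimal-mapping step the paper carries out for effective local Hubbard interactions.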

  17. Destructive interference results in boson anti-bunching: refining Feynman's argument

    NASA Astrophysics Data System (ADS)

    Marchewka, Avi; Granot, Er'el

    2014-09-01

    The effect of boson bunching is frequently mentioned and discussed in the literature. This effect is the manifestation of the tendency of bosons to "travel" in clusters. One of the core arguments for boson bunching was formulated by Feynman in his well-known lecture series and has been used frequently ever since. By comparing the scattering probabilities of two bosons and of two distinguishable particles, he concluded: "We have the result that it is twice as likely to find two identical Bose particles scattered into the same state as you would calculate assuming the particles were different" [R.P. Feynman, R.B. Leighton, M. Sands, The Feynman Lectures on Physics: Quantum Mechanics (Addison-Wesley, 1965)]. This argument took root in the scientific community (see, for example, [C. Cohen-Tannoudji, B. Diu, F. Laloë, Quantum Mechanics (John Wiley & Sons, Paris, 1977); W. Pauli, Exclusion Principle and Quantum Mechanics, Nobel Lecture (1946)]); however, while this statement is completely valid, as proved in [C. Cohen-Tannoudji, B. Diu, F. Laloë, Quantum Mechanics (John Wiley & Sons, Paris, 1977)], it is not synonymous with bunching. In fact, as shown in this paper, wherever one of the wavefunctions has a zero, bosons can anti-bunch and fermions can bunch. It should be stressed that zeros in wavefunctions are ubiquitous in quantum mechanics, so the effect should be common. Several scenarios are suggested to witness the effect.

  18. Importance sampling studies of helium using the Feynman-Kac path integral method

    NASA Astrophysics Data System (ADS)

    Datta, S.; Rejcek, J. M.

    2018-05-01

    In the Feynman-Kac path integral approach the eigenvalues of a quantum system can be computed using the Wiener measure, which is based on Brownian particle motion. In our previous work on such systems we observed that the Wiener process converges slowly numerically for dimensions greater than two, because almost all trajectories escape to infinity. One can speed up this process by using a generalized Feynman-Kac (GFK) method, in which the new measure associated with the trial function is stationary, so that the convergence rate becomes much faster. We thus achieve an example of "importance sampling" and, in the present work, apply it to the Feynman-Kac (FK) path integrals for the ground and first few excited-state energies of He to speed up the convergence rate. We calculate the path integrals using space averaging rather than the time averaging used in the past. The best previous calculations from variational computations report precisions of 10^-16 Hartrees, whereas in most cases our path integral results obtained for the ground and first excited states of He are lower than these results by about 10^-6 Hartrees or more.
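    The underlying Feynman-Kac recipe, without the importance sampling introduced in the paper, can be sketched for the 1D harmonic oscillator: the Wiener average E[exp(-∫V)] decays as e^{-E0·t}, so its decay rate estimates the ground-state energy E0 = 1/2 (illustrative parameters and potential; the paper treats helium).

```python
import numpy as np

rng = np.random.default_rng(3)
V = lambda x: 0.5 * x**2        # harmonic oscillator; exact ground-state energy E0 = 0.5

npaths, dt = 100_000, 0.01
t1, t2 = 4.0, 6.0
nsteps = int(t2 / dt)
x = np.zeros(npaths)            # all Brownian paths start at the origin
S = np.zeros(npaths)            # accumulated integral of V along each path
u = {}
for k in range(1, nsteps + 1):
    x += np.sqrt(dt) * rng.normal(size=npaths)    # Wiener increments
    S += V(x) * dt
    if abs(k * dt - t1) < 1e-9 or k == nsteps:
        u[round(k * dt, 2)] = np.exp(-S).mean()   # Feynman-Kac average E[e^{-int V}]

# Decay rate of the Feynman-Kac average between t1 and t2 estimates E0
E0 = -np.log(u[t2] / u[t1]) / (t2 - t1)
print(E0)
```

    With plain (unweighted) Brownian paths the estimator's variance grows with dimension, which is precisely the slow convergence the GFK importance-sampling construction is designed to cure.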

  19. Quantum Theory of Jaynes' Principle, Bayes' Theorem, and Information

    NASA Astrophysics Data System (ADS)

    Haken, Hermann

    2014-12-01

    After a reminder of Jaynes' maximum entropy principle and of my quantum theoretical extension, I consider two coupled quantum systems A,B and formulate a quantum version of Bayes' theorem. The application of Feynman's disentangling theorem allows me to calculate the conditional density matrix ρ (A|B) , if system A is an oscillator (or a set of them), linearly coupled to an arbitrary quantum system B. Expectation values can simply be calculated by means of the normalization factor of ρ (A|B) that is derived.

  20. In Appreciation Julian Schwinger: From Nuclear Physics and Quantum Electrodynamics to Source Theory and Beyond

    NASA Astrophysics Data System (ADS)

    Milton, Kimball A.

    2007-01-01

    Julian Schwinger’s influence on twentieth-century science is profound and pervasive. He is most famous for his renormalization theory of quantum electrodynamics, for which he shared the Nobel Prize in Physics for 1965 with Richard Feynman and Sin-itiro Tomonaga. This triumph undoubtedly was his most heroic work, but his legacy lives on chiefly through subtle and elegant work in classical electrodynamics, quantum variational principles, proper-time methods, quantum anomalies, dynamical mass generation, partial symmetry, and much more. Starting as just a boy, he rapidly became one of the preeminent nuclear physicists in the world in the late 1930s, led the theoretical development of radar technology at the Massachusetts Institute of Technology during World War II, and soon after the war conquered quantum electrodynamics, becoming the leading quantum-field theorist for two decades, before taking a more iconoclastic route during the last quarter century of his life.

  1. Quantum Feynman Ratchet

    NASA Astrophysics Data System (ADS)

    Goyal, Ketan; Kawai, Ryoichi

    As nanotechnology advances, understanding the thermodynamic properties of small systems becomes increasingly important. Such systems are found throughout physics, biology, and chemistry, manifesting striking properties that are a direct result of their small dimensions, where fluctuations become predominant. The standard theory of thermodynamics for macroscopic systems is powerless for such ever-fluctuating systems. Furthermore, as small systems are inherently quantum mechanical, the influence of quantum effects such as discreteness and quantum entanglement on their thermodynamic properties is of great interest. In particular, the quantum fluctuations required by the uncertainty principle may play a significant role. In this talk, we investigate the thermodynamic properties of an autonomous quantum heat engine, resembling a quantum version of the Feynman ratchet, under non-equilibrium conditions, based on the theory of open quantum systems. The heat engine consists of multiple subsystems individually contacted to different thermal environments.

  2. On the superposition principle in interference experiments.

    PubMed

    Sinha, Aninda; H Vijay, Aravind; Sinha, Urbasi

    2015-05-14

    The superposition principle is usually incorrectly applied in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter which can be used to determine the deviation from the application of the principle. We have found excellent agreement between the analytic distribution and those that have been earlier estimated by numerical integration as well as resource intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations as well as the non-relativistic Schrödinger equation.

  3. Feynman formulae and phase space Feynman path integrals for tau-quantization of some Lévy-Khintchine type Hamilton functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butko, Yana A., E-mail: yanabutko@yandex.ru, E-mail: kinderknecht@math.uni-sb.de; Grothaus, Martin, E-mail: grothaus@mathematik.uni-kl.de; Smolyanov, Oleg G., E-mail: Smolyanov@yandex.ru

    2016-02-15

    Evolution semigroups generated by pseudo-differential operators are considered. These operators are obtained by different (parameterized by a number τ) procedures of quantization from a certain class of functions (or symbols) defined on the phase space. This class contains Hamilton functions of particles with variable mass in magnetic and potential fields and more general symbols given by the Lévy-Khintchine formula. The considered semigroups are represented as limits of n-fold iterated integrals when n tends to infinity. Such representations are called Feynman formulae. Some of these representations are constructed with the help of another pseudo-differential operator, obtained by the same procedure of quantization; such representations are called Hamiltonian Feynman formulae. Some representations are based on integral operators with elementary kernels; these are called Lagrangian Feynman formulae. Lagrangian Feynman formulae provide approximations of evolution semigroups, suitable for direct computations and numerical modeling of the corresponding dynamics. Hamiltonian Feynman formulae make it possible to represent the considered semigroups by means of Feynman path integrals. In the article, a family of phase space Feynman pseudomeasures corresponding to different procedures of quantization is introduced. The considered evolution semigroups are represented as phase space Feynman path integrals with respect to these Feynman pseudomeasures, i.e., different quantizations correspond to Feynman path integrals with the same integrand but with respect to different pseudomeasures. This answers Berezin's problem of distinguishing a procedure of quantization in the language of Feynman path integrals. Moreover, the obtained Lagrangian Feynman formulae also make it possible to calculate these phase space Feynman path integrals and to connect them with functional integrals with respect to probability measures.
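    A finite-dimensional analogue of a Feynman formula, i.e. a semigroup recovered as a limit of n-fold products of simpler operators, is the Lie-Trotter formula for matrices. The sketch below (small symmetric matrices standing in for the pseudo-differential operators of the paper) shows the m-fold product converging to e^{t(A+B)}:

```python
import numpy as np

rng = np.random.default_rng(4)
n, t = 5, 1.0
A1 = rng.normal(size=(n, n)); A = 0.3 * (A1 + A1.T)   # stands in for the "kinetic" part
B1 = rng.normal(size=(n, n)); B = 0.3 * (B1 + B1.T)   # stands in for the "potential" part

def sym_expm(M):                 # matrix exponential via eigendecomposition
    w, v = np.linalg.eigh(M)
    return (v * np.exp(w)) @ v.T

exact = sym_expm(t * (A + B))    # the semigroup at time t
errs = []
for m in (1, 10, 100, 1000):     # m-fold product = discrete-time "Feynman formula"
    step = sym_expm(t * A / m) @ sym_expm(t * B / m)
    errs.append(np.linalg.norm(np.linalg.matrix_power(step, m) - exact))
print(errs)
```

    The error shrinks roughly like 1/m, mirroring how the n-fold iterated integrals of a Lagrangian Feynman formula approximate the evolution semigroup.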

  4. Interference with electrons: from thought to real experiments

    NASA Astrophysics Data System (ADS)

    Matteucci, Giorgio

    2013-11-01

    The two-slit interference experiment is usually adopted to discuss the superposition principle applied to radiation and to show the peculiar wave behaviour of material particles. Diffraction and interference of electrons have been demonstrated using, as interferometry devices, a hole, a slit, a double hole, two slits, an electrostatic biprism, etc. A number of books, short movies and lectures on the web try to popularize the mysterious behaviour of electrons on the basis of Feynman's thought experiment, which consists of a Young two-hole interferometer equipped with a detector to reveal single electrons. A short review is reported regarding (i) the pioneering attempts carried out to demonstrate that interference patterns could be obtained with single electrons through an interferometer and (ii) recent experiments, which can be considered the realization of the thought electron interference experiments adopted by Einstein and Bohr and subsequently by Feynman to discuss key features of quantum physics.

  5. Test on the Effectiveness of the Sum over Paths Approach in Favoring the Construction of an Integrated Knowledge of Quantum Physics in High School

    ERIC Educational Resources Information Center

    Malgieri, Massimiliano; Onorato, Pasquale; De Ambrosis, Anna

    2017-01-01

    In this paper we present the results of a research-based teaching-learning sequence on introductory quantum physics based on Feynman's sum over paths approach in the Italian high school. Our study focuses on students' understanding of two founding ideas of quantum physics, wave particle duality and the uncertainty principle. In view of recent…

  6. A Note on the Stochastic Nature of Feynman Quantum Paths

    NASA Astrophysics Data System (ADS)

    Botelho, Luiz C. L.

    2016-11-01

    We propose a Fresnel stochastic white-noise framework to analyze the stochastic nature of the Feynman paths entering the Feynman path integral expression for the Feynman propagator of a particle moving quantum mechanically under a time-independent potential.

  7. Chiral limit of N = 4 SYM and ABJM and integrable Feynman graphs

    NASA Astrophysics Data System (ADS)

    Caetano, João; Gürdoğan, Ömer; Kazakov, Vladimir

    2018-03-01

    We consider a special double scaling limit, recently introduced by two of the authors, combining weak coupling and large imaginary twist, for the γ-twisted N = 4 SYM theory. We also establish the analogous limit for ABJM theory. The resulting non-gauge chiral 4D and 3D theories of interacting scalars and fermions are integrable in the planar limit. In spite of the breakdown of conformality by double-trace interactions, most of the correlators for local operators of these theories are conformal, with non-trivial anomalous dimensions defined by specific, integrable Feynman diagrams. We discuss the details of this diagrammatics. We construct the doubly-scaled asymptotic Bethe ansatz (ABA) equations for multi-magnon states in these theories. Each entry of the mixing matrix of local conformal operators in the simplest of these theories — the bi-scalar model in 4D and tri-scalar model in 3D — is given by a single Feynman diagram at any given loop order. The related diagrams are in principle computable, up to a few scheme dependent constants, by integrability methods (quantum spectral curve or ABA). These constants should be fixed from direct computations of a few simplest graphs. This integrability-based method is advocated to be able to provide information about some high loop order graphs which are hardly computable by other known methods. We exemplify our approach with specific five-loop graphs.

  8. A Note on Feynman Path Integral for Electromagnetic External Fields

    NASA Astrophysics Data System (ADS)

    Botelho, Luiz C. L.

    2017-08-01

    We propose a Fresnel stochastic white-noise framework to analyze the nature of the Feynman paths entering the Feynman path integral expression for the Feynman propagator of a particle moving quantum mechanically under an external time-independent electromagnetic potential.

  9. First principles molecular dynamics of molten NaCl

    NASA Astrophysics Data System (ADS)

    Galamba, N.; Costa Cabral, B. J.

    2007-03-01

    First principles Hellmann-Feynman molecular dynamics (HFMD) results for molten NaCl at a single state point are reported. The effect of induction forces on the structure and dynamics of the system is studied by comparison of the partial radial distribution functions and the velocity and force autocorrelation functions with those calculated from classical MD based on rigid-ion and shell-model potentials. The first principles results reproduce the main structural features of the molten salt observed experimentally, whereas they are incorrectly described by both rigid-ion and shell-model potentials. Moreover, HFMD Green-Kubo self-diffusion coefficients are in closer agreement with experimental data than those predicted by classical MD. A comprehensive discussion of MD results for molten NaCl based on different ab initio parametrized polarizable interionic potentials is also given.

  10. A Celebration of Richard Feynman

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feynman, Richard

    In honor of the 2005 World Year of Physics, on the birthday of Nobel Prize-winning physicist Richard Feynman, BSA sponsored this celebration. Actor Norman Parker reads from Feynman's bestselling books, and Ralph Leighton and Tom Rutishauser, who played bongos with Feynman, reminisce on what it was like to drum with him.

  11. A Celebration of Richard Feynman

    ScienceCinema

    Feynman, Richard

    2018-01-05

    In honor of the 2005 World Year of Physics, on the birthday of Nobel Prize-winning physicist Richard Feynman, BSA sponsored this celebration. Actor Norman Parker reads from Feynman's bestselling books, and Ralph Leighton and Tom Rutishauser, who played bongos with Feynman, reminisce on what it was like to drum with him.

  12. Richard Feynman and computation

    NASA Astrophysics Data System (ADS)

    Hey, Tony

    1999-04-01

    The enormous contribution of Richard Feynman to modern physics is well known, both to teaching through his famous Feynman Lectures on Physics, and to research with his Feynman diagram approach to quantum field theory and his path integral formulation of quantum mechanics. Less well known perhaps is his long-standing interest in the physics of computation, which is the subject of this paper. Feynman lectured on computation at Caltech for most of the last decade of his life, first with John Hopfield and Carver Mead, and then with Gerry Sussman. The story of how these lectures came to be written up as the Feynman Lectures on Computation is briefly recounted. Feynman also discussed the fundamentals of computation with other legendary figures of the computer science and physics community such as Ed Fredkin, Rolf Landauer, Carver Mead, Marvin Minsky and John Wheeler. He was also instrumental in stimulating developments in both nanotechnology and quantum computing. During the 1980s Feynman revisited long-standing interests both in parallel computing with Geoffrey Fox and Danny Hillis, and in reversible computation and quantum computing with Charles Bennett, Norman Margolus, Tom Toffoli and Wojciech Zurek. This paper records Feynman's links with the computational community and includes some reminiscences about his involvement with the fundamentals of computing.

  13. Cyclic density functional theory: A route to the first principles simulation of bending in nanostructures

    NASA Astrophysics Data System (ADS)

    Banerjee, Amartya S.; Suryanarayana, Phanish

    2016-11-01

    We formulate and implement Cyclic Density Functional Theory (Cyclic DFT) - a self-consistent first principles simulation method for nanostructures with cyclic symmetries. Using arguments based on Group Representation Theory, we rigorously demonstrate that the Kohn-Sham eigenvalue problem for such systems can be reduced to a fundamental domain (or cyclic unit cell) augmented with cyclic-Bloch boundary conditions. Analogously, the equations of electrostatics appearing in Kohn-Sham theory can be reduced to the fundamental domain augmented with cyclic boundary conditions. By making use of this symmetry cell reduction, we show that the electronic ground-state energy and the Hellmann-Feynman forces on the atoms can be calculated using quantities defined over the fundamental domain. We develop a symmetry-adapted finite-difference discretization scheme to obtain a fully functional numerical realization of the proposed approach. We verify that our formulation and implementation of Cyclic DFT is both accurate and efficient through selected examples. The connection of cyclic symmetries with uniform bending deformations provides an elegant route to the ab-initio study of bending in nanostructures using Cyclic DFT. As a demonstration of this capability, we simulate the uniform bending of a silicene nanoribbon and obtain its energy-curvature relationship from first principles. A self-consistent ab-initio simulation of this nature is unprecedented and well outside the scope of any other systematic first principles method in existence. Our simulations reveal that the bending stiffness of the silicene nanoribbon is intermediate between that of graphene and molybdenum disulphide - a trend which can be ascribed to the variation in effective thickness of these materials. We describe several future avenues and applications of Cyclic DFT, including its extension to the study of non-uniform bending deformations and its possible use in the study of the nanoscale flexoelectric effect.
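    The cyclic-Bloch boundary condition at the heart of Cyclic DFT can be illustrated with a toy eigenproblem. The sketch below (not the authors' code; all names and the 1D setting are illustrative) diagonalizes a finite-difference Laplacian on a fundamental domain where the wavefunction picks up a phase exp(iθ) across the cyclic boundary:

```python
import numpy as np

def cyclic_bloch_eigs(n=64, L=1.0, theta=0.0):
    """Eigenvalues of -d^2/dx^2 on a fundamental domain of length L with the
    cyclic-Bloch boundary condition psi(x + L) = exp(i*theta) * psi(x)."""
    h = L / n
    off = -1.0 / h**2
    H = np.diag(np.full(n, 2.0 / h**2, dtype=complex))
    for i in range(n - 1):
        H[i, i + 1] = off
        H[i + 1, i] = off
    # The Bloch phase enters only through the wrap-around couplings,
    # chosen so that H stays Hermitian.
    H[0, n - 1] = off * np.exp(-1j * theta)
    H[n - 1, 0] = off * np.exp(1j * theta)
    return np.linalg.eigvalsh(H)

# theta = 0 reproduces ordinary periodic boundary conditions, whose lowest
# eigenvalue is 0 (the constant mode); any nonzero theta lifts it.
ev0 = cyclic_bloch_eigs(theta=0.0)
ev1 = cyclic_bloch_eigs(theta=np.pi / 3)
```

    A nonzero θ shifts the whole spectrum, exactly as the cyclic-Bloch phase shifts band energies in the full Kohn-Sham problem.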

  14. A Representation for Fermionic Correlation Functions

    NASA Astrophysics Data System (ADS)

    Feldman, Joel; Knörrer, Horst; Trubowitz, Eugene

    Let dμS(a) be a Gaussian measure on the finitely generated Grassmann algebra A. Given an even W(a)∈A, we construct an operator R on A that provides a representation of the Schwinger functional for all f(a)∈A. This representation of the Schwinger functional iteratively builds up Feynman graphs by successively appending lines farther and farther from f. It allows the Pauli exclusion principle to be implemented quantitatively by a simple application of Gram's inequality.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filinov, A.V.; Golubnychiy, V.O.; Bonitz, M.

    Extending our previous work [A.V. Filinov et al., J. Phys. A 36, 5957 (2003)], we present a detailed discussion of accuracy and practical applications of finite-temperature pseudopotentials for two-component Coulomb systems. Different pseudopotentials are discussed: (i) the diagonal Kelbg potential, (ii) the off-diagonal Kelbg potential, (iii) the improved diagonal Kelbg potential, (iv) an effective potential obtained with the Feynman-Kleinert variational principle, and (v) the 'exact' quantum pair potential derived from the two-particle density matrix. For the improved diagonal Kelbg potential, a simple temperature-dependent fit is derived which accurately reproduces the 'exact' pair potential in the whole temperature range. The derived pseudopotentials are then used in path integral Monte Carlo and molecular-dynamics (MD) simulations to obtain thermodynamical properties of strongly coupled hydrogen. It is demonstrated that classical MD simulations with spin-dependent interaction potentials for the electrons allow for an accurate description of the internal energy of hydrogen in the difficult regime of partial ionization down to the temperatures of about 60 000 K. Finally, we point out an interesting relationship between the quantum potentials and the effective potentials used in density-functional theory.

  16. Computation of the properties of liquid neon, methane, and gas helium at low temperature by the Feynman-Hibbs approach.

    PubMed

    Tchouar, N; Ould-Kaddour, F; Levesque, D

    2004-10-15

    The properties of liquid methane, liquid neon, and gas helium are calculated at low temperatures over a large range of pressure from the classical molecular-dynamics simulations. The molecular interactions are represented by the Lennard-Jones pair potentials supplemented by quantum corrections following the Feynman-Hibbs approach. The equations of state, diffusion, and shear viscosity coefficients are determined for neon at 45 K, helium at 80 K, and methane at 110 K. A comparison is made with the existing experimental data and for thermodynamical quantities, with results computed from quantum numerical simulations when they are available. The theoretical variation of the viscosity coefficient with pressure is in good agreement with the experimental data when the quantum corrections are taken into account, thus reducing considerably the 60% discrepancy between the simulations and experiments in the absence of these corrections.
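    The quadratic Feynman-Hibbs correction used above has a simple closed form for a pair potential U(r): U_FH(r) = U(r) + (ħ²/24μk_BT)[U''(r) + 2U'(r)/r], with μ the reduced mass. A minimal sketch for a Lennard-Jones pair (the neon-like parameters are illustrative assumptions, not the paper's exact values):

```python
import numpy as np

K_B = 1.380649e-23      # Boltzmann constant, J/K
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def lj(r, eps, sigma):
    sr6 = (sigma / r)**6
    return 4.0 * eps * (sr6**2 - sr6)

def lj_d1(r, eps, sigma):
    # analytic first derivative dU/dr
    sr6 = (sigma / r)**6
    return 4.0 * eps * (-12.0 * sr6**2 + 6.0 * sr6) / r

def lj_d2(r, eps, sigma):
    # analytic second derivative d^2U/dr^2
    sr6 = (sigma / r)**6
    return 4.0 * eps * (156.0 * sr6**2 - 42.0 * sr6) / r**2

def feynman_hibbs(r, eps, sigma, mu, T):
    """Quadratic Feynman-Hibbs effective pair potential:
    U_FH = U + (hbar^2 / (24 mu kB T)) * (U'' + 2 U'/r)."""
    pref = HBAR**2 / (24.0 * mu * K_B * T)
    return lj(r, eps, sigma) + pref * (lj_d2(r, eps, sigma)
                                       + 2.0 * lj_d1(r, eps, sigma) / r)

# Illustrative neon-like parameters (assumption for this sketch)
EPS = 35.6 * K_B                         # well depth, J
SIGMA = 2.75e-10                         # m
MU = 0.5 * 20.18 * 1.66053906660e-27     # reduced mass of a Ne pair, kg
R_MIN = 2**(1 / 6) * SIGMA               # classical LJ minimum

u_classical = lj(R_MIN, EPS, SIGMA)
u_quantum = feynman_hibbs(R_MIN, EPS, SIGMA, MU, 45.0)
```

    At the potential minimum U' = 0 and U'' > 0, so the correction is positive and shallows the well, and it shrinks as 1/T, consistent with quantum effects fading at high temperature.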

  17. Richard P. Feynman Center for Innovation

    Science.gov Websites

    Los Alamos National Laboratory website for the Richard P. Feynman Center for Innovation ("innovation protecting tomorrow"), which highlights the laboratory's technology-transfer work, such as a self-healing, self-forming mesh network of long-range radios.

  18. A Model for Bilingual Physics Teaching: "The Feynman Lectures "

    NASA Astrophysics Data System (ADS)

    Metzner, Heqing W.

    2006-12-01

    Feynman was not only a great physicist but also a remarkably effective educator. The Feynman Lectures on Physics, originally published in 1963, were designed to be guides for teachers and for gifted students. More than 40 years later, his distinctive teaching ideas have special application to bilingual physics teaching in China because: (1) each individual lecture provides a self-contained unit for bilingual teaching; (2) the lectures broaden the physics understanding of students; and (3) Feynman's original thought in English is experienced through the bilingual teaching of physics.

  19. Quantum theory of multiscale coarse-graining.

    PubMed

    Han, Yining; Jin, Jaehyeok; Wagner, Jacob W; Voth, Gregory A

    2018-03-14

    Coarse-grained (CG) models serve as a powerful tool to simulate molecular systems at much longer temporal and spatial scales. Previously, CG models and methods have been built upon classical statistical mechanics. The present paper develops a theory and numerical methodology for coarse-graining in quantum statistical mechanics, by generalizing the multiscale coarse-graining (MS-CG) method to quantum Boltzmann statistics. A rigorous derivation of the sufficient thermodynamic consistency condition is first presented via imaginary time Feynman path integrals. It identifies the optimal choice of CG action functional and effective quantum CG (qCG) force field to generate a quantum MS-CG (qMS-CG) description of the equilibrium system that is consistent with the quantum fine-grained model projected onto the CG variables. A variational principle then provides a class of algorithms for optimally approximating the qMS-CG force fields. Specifically, a variational method based on force matching, which was also adopted in the classical MS-CG theory, is generalized to quantum Boltzmann statistics. The qMS-CG numerical algorithms and practical issues in implementing this variational minimization procedure are also discussed. Then, two numerical examples are presented to demonstrate the method. Finally, as an alternative strategy, a quasi-classical approximation for the thermal density matrix expressed in the CG variables is derived. This approach provides an interesting physical picture for coarse-graining in quantum Boltzmann statistical mechanics in which the consistency with the quantum particle delocalization is obviously manifest, and it opens up an avenue for using path integral centroid-based effective classical force fields in a coarse-graining methodology.
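    The variational force-matching step that the qMS-CG method generalizes can be illustrated in its simplest classical form: fit the coefficients of a linear force basis by least squares against reference forces. The sketch below is a schematic toy (synthetic data, an arbitrary inverse-power basis), not the quantum machinery of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "fine-grained" reference forces along a grid of pair distances,
# generated here from a known pair force plus noise (purely illustrative).
r = rng.uniform(0.9, 2.5, size=400)
f_ref = 24.0 * (2.0 / r**13 - 1.0 / r**7)      # LJ pair force with eps = sigma = 1
f_ref += rng.normal(scale=0.05, size=r.size)   # thermal "noise"

# Linear basis for the CG pair force: inverse powers of r (arbitrary choice)
powers = np.arange(2, 14)
A = np.stack([r**(-p) for p in powers], axis=1)

# Variational step: minimize |A c - f_ref|^2 over the coefficients c
c, *_ = np.linalg.lstsq(A, f_ref, rcond=None)
f_cg = A @ c
residual = np.sqrt(np.mean((f_cg - f_ref)**2))
```

    Because the true force lies in the span of the basis, the residual drops to the noise floor; in the quantum generalization the same least-squares structure is retained, but the reference averages are taken with quantum Boltzmann statistics.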

  20. A Many-Body Formalism of ΔSCF Approach for Simulating X-Ray Spectra from First-Principles

    NASA Astrophysics Data System (ADS)

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter; Shirley, Eric; Prendegast, David

    Accurately reproducing X-ray spectral fingerprints for materials characterization relies heavily on how to correctly model the many-electron response to the generation of an X-ray core hole. In this talk, we present a novel first-principles theory for simulating X-ray spectra that is based on many-electron wavefunctions. The proposed theory goes beyond the electron-hole correlations within the Bethe-Salpeter equation and considers higher-order vertex corrections up to the level of Mahan-Nozières-De Dominicis (MND) theory. An efficient algorithm is invented to incorporate these many-electron processes by using linear algebra rather than iterating over all Feynman diagrams. Supported by the United States Department of Energy under Contract No. DE-AC02-05CH11231 and No. DE-SC0004993.

  1. Revisiting Feynman's ratchet with thermoelectric transport theory.

    PubMed

    Apertet, Y; Ouerdane, H; Goupil, C; Lecoeur, Ph

    2014-07-01

    We show how the formalism used for thermoelectric transport may be adapted to Smoluchowski's seminal thought experiment, also known as Feynman's ratchet and pawl system. Our analysis rests on the notion of useful flux, which for a thermoelectric system is the electrical current and for Feynman's ratchet is the effective jump frequency. Our approach yields original insight into the derivation and analysis of the system's properties. In particular we define an entropy per tooth in analogy with the entropy per carrier or Seebeck coefficient, and we derive the analog to Kelvin's second relation for Feynman's ratchet. Owing to the formal similarity between the heat fluxes balance equations for a thermoelectric generator (TEG) and those for Feynman's ratchet, we introduce a distribution parameter γ that quantifies the amount of heat that flows through the cold and hot sides of both heat engines. While it is well established that γ = 1/2 for a TEG, it is equal to 1 for Feynman's ratchet. This implies that no heat may be rejected in the cold reservoir for the latter case. Further, the analysis of the efficiency at maximum power shows that the so-called Feynman efficiency corresponds to that of an exoreversible engine, with γ = 1. Then, turning to the nonlinear regime, we generalize the approach based on the convection picture and introduce two different types of resistance to distinguish the dynamical behavior of the considered system from its ability to dissipate energy. We finally put forth the strong similarity between the original Feynman ratchet and a mesoscopic thermoelectric generator with a single conducting channel.

  2. Feynman's and Ohta's Models of a Josephson Junction

    ERIC Educational Resources Information Center

    De Luca, R.

    2012-01-01

    The Josephson equations are derived by means of the weakly coupled two-level quantum system model given by Feynman. Adopting a simplified version of Ohta's model, starting from Feynman's model, the strict voltage-frequency Josephson relation is derived. The contribution of Ohta's approach to the comprehension of the additional term given by the…

  3. Spin wave Feynman diagram vertex computation package

    NASA Astrophysics Data System (ADS)

    Price, Alexander; Javernick, Philip; Datta, Trinanjan

    Spin wave theory is a well-established theoretical technique that can correctly predict the physical behavior of ordered magnetic states. However, computing the effects of an interacting spin wave theory incorporating magnons involves a laborious by-hand derivation of Feynman diagram vertices. The process is tedious and time consuming. Hence, to improve productivity and have another means to check the analytical calculations, we have devised a Feynman Diagram Vertex Computation package. In this talk, we will describe our research group's effort to implement a Mathematica-based symbolic Feynman diagram vertex computation package that computes spin wave vertices. Utilizing the non-commutative algebra package NCAlgebra as an add-on to Mathematica, symbolic expressions for the Feynman diagram vertices of a Heisenberg quantum antiferromagnet are obtained. Our existing code reproduces the well-known expressions of a nearest-neighbor square lattice Heisenberg model. We also discuss the case of a triangular lattice Heisenberg model, where non-collinear terms contribute to the vertex interactions.

  4. New method of computing the contributions of graphs without lepton loops to the electron anomalous magnetic moment in QED

    NASA Astrophysics Data System (ADS)

    Volkov, Sergey

    2017-11-01

    This paper presents a new method of numerical computation of the mass-independent QED contributions to the electron anomalous magnetic moment which arise from Feynman graphs without closed electron loops. The method is based on a forestlike subtraction formula that removes all ultraviolet and infrared divergences in each Feynman graph before integration in Feynman-parametric space. The integration is performed by an importance sampling Monte-Carlo algorithm with the probability density function that is constructed for each Feynman graph individually. The method is fully automated at any order of the perturbation series. The results of applying the method to 2-loop, 3-loop, 4-loop Feynman graphs, and to some individual 5-loop graphs are presented, as well as the comparison of this method with other ones with respect to Monte Carlo convergence speed.
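    The benefit of tailoring the sampling density to the integrand, which is the core of the Monte Carlo strategy described above, can be shown on a one-dimensional toy integral with an integrable endpoint singularity. This is a generic illustration of importance sampling, not Volkov's per-graph construction:

```python
import math
import numpy as np

rng = np.random.default_rng(42)

def integrand(x):
    # Endpoint singularity 1/sqrt(x): integrable, but bad for uniform sampling
    return np.exp(-x) / np.sqrt(x)

def estimate_uniform(n):
    x = rng.uniform(size=n)
    samples = integrand(x)
    return samples.mean(), samples.std()

def estimate_importance(n):
    # Draw x ~ p(x) = 1/(2 sqrt(x)) on (0, 1) via the inverse CDF x = u^2;
    # the weight f(x)/p(x) = 2*exp(-x) is bounded, so the variance is small.
    u = rng.uniform(size=n)
    x = u**2
    samples = integrand(x) / (0.5 / np.sqrt(x))
    return samples.mean(), samples.std()

EXACT = math.sqrt(math.pi) * math.erf(1.0)   # the integral over (0, 1)
mean_u, std_u = estimate_uniform(100_000)
mean_i, std_i = estimate_importance(100_000)
```

    The importance-sampled estimator converges far faster because its per-sample variance stays finite; the same idea, with a density built per Feynman graph, drives the convergence speed discussed in the paper.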

  5. Wars of the holographic world

    NASA Astrophysics Data System (ADS)

    Preskill, John

    2008-12-01

    In the popular imagination, the iconic American theoretical physicist is Richard Feynman, in all his safe-cracking, bongo-thumping, woman-chasing glory. I suspect that many physicists, if asked to name a living colleague who best captures the spirit of Feynman, would give the same answer as me: Leonard Susskind. As far as I know, Susskind does not crack safes, thump bongos, or chase women, yet he shares Feynman's brash cockiness (which in Susskind's case is leavened by occasional redeeming flashes of self-deprecation) and Feynman's gift for spinning fascinating anecdotes. If you are having a group of physicists over for dinner and want to be sure to have a good time, invite Susskind.

  6. Electrostatic Hellmann-Feynman theorem applied to long-range interatomic forces - The hydrogen molecule.

    NASA Technical Reports Server (NTRS)

    Steiner, E.

    1973-01-01

    The use of the electrostatic Hellmann-Feynman theorem for the calculation of the leading term in the 1/R expansion of the force of interaction between two well-separated hydrogen atoms is discussed. Previous work has suggested that whereas this term is determined wholly by the first-order wavefunction when calculated by perturbation theory, the use of the Hellmann-Feynman theorem apparently requires the wavefunction through second order. It is shown how the two results may be reconciled and that the Hellmann-Feynman theorem may be reformulated in such a way that only the first-order wavefunction is required.
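    The Hellmann-Feynman theorem itself, dE/dλ = ⟨ψ|∂H/∂λ|ψ⟩, is easy to verify symbolically on a textbook case. The sketch below checks it for the harmonic-oscillator ground state with λ = ω (a standard exercise, not the hydrogen-molecule calculation of the paper):

```python
import sympy as sp

x, m, w, hbar = sp.symbols('x m omega hbar', positive=True)

# Normalized ground state of H = p^2/(2m) + m*omega^2*x^2/2
psi0 = (m * w / (sp.pi * hbar))**sp.Rational(1, 4) \
    * sp.exp(-m * w * x**2 / (2 * hbar))

# Left side: dE0/d(omega) with E0 = hbar*omega/2
dE_dw = sp.diff(hbar * w / 2, w)

# Right side: <psi0| dH/d(omega) |psi0> = <psi0| m*omega*x^2 |psi0>
dH_dw = m * w * x**2
expectation = sp.integrate(psi0 * dH_dw * psi0, (x, -sp.oo, sp.oo))
```

    Both sides reduce to ħ/2, confirming the theorem without ever differentiating the wavefunction, which is precisely the feature exploited in force calculations.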

  7. Counting the number of Feynman graphs in QCD

    NASA Astrophysics Data System (ADS)

    Kaneko, T.

    2018-05-01

    Information about the number of Feynman graphs for a given physical process in a given field theory is especially useful for confirming the result of a Feynman graph generator used in an automatic system of perturbative calculations. A method of counting the number of Feynman graphs weighted by their symmetry factors was established based on zero-dimensional field theory and was used in scalar theories and QED. In this article this method is generalized to more complicated models by direct calculation of generating functions on a computer algebra system. The method is applied to QCD with and without counter terms, where many higher-order terms are calculated automatically.
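    The zero-dimensional idea is concrete in the scalar φ⁴ warm-up case: replacing the path integral by an ordinary integral turns the perturbative series into a counting series whose gⁿ coefficient sums 1/(symmetry factor) over all n-vertex vacuum graphs, and taking the logarithm keeps only the connected ones. A minimal sympy sketch (not the QCD machinery of the paper):

```python
import sympy as sp

g = sp.symbols('g')
N = 5   # highest perturbative order kept

# Zero-dimensional phi^4 "partition function": the Gaussian moment
# <phi^(4n)> = (4n-1)!! makes the g^n coefficient of Z the number of
# n-vertex vacuum graphs weighted by 1/(symmetry factor).
Z = sum(g**n * sp.factorial2(4 * n - 1)
        / (sp.factorial(n) * sp.factorial(4)**n)
        for n in range(N + 1))

# log Z generates only the *connected* vacuum graphs
W = sp.series(sp.log(Z), g, 0, N + 1).removeO()

c1 = Z.coeff(g, 1)   # figure-eight graph: symmetry factor 8, so 1/8
c2 = W.coeff(g, 2)   # two connected 2-vertex graphs: 1/16 + 1/48 = 1/12
```

    The order-g coefficient 1/8 is the single figure-eight vacuum graph, and the connected order-g² coefficient 1/12 is the sum 1/16 + 1/48 over the two connected two-vertex topologies, matching the hand count.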

  8. Subtractive procedure for calculating the anomalous electron magnetic moment in QED and its application for numerical calculation at the three-loop level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volkov, S. A., E-mail: volkoff-sergey@mail.ru

    2016-06-15

    A new subtractive procedure for canceling ultraviolet and infrared divergences in Feynman integrals, described here, is developed for calculating QED corrections to the electron anomalous magnetic moment. The procedure, formulated as a forest expression with linear operators applied to the Feynman amplitudes of UV-divergent subgraphs, makes it possible to represent the contribution of each Feynman graph containing only electron and photon propagators in the form of a converging integral with respect to Feynman parameters. The application of the developed method for numerical calculation of two- and three-loop contributions is described.

  9. Thinking in Pictures: John Wheeler, Richard Feynman and the Diagrammatic Approach to Problem Solving

    NASA Astrophysics Data System (ADS)

    Halpern, Paul

    While classical mechanics readily lends itself to sketches, many fields of modern physics, particularly quantum mechanics, quantum field theory, and general relativity, are notoriously hard to envision. Nevertheless, John Wheeler and Richard Feynman, who obtained his PhD under Wheeler, each insisted that diagrams were the most effective way to tackle modern physics questions as well. Beginning with Wheeler and Feynman's work together at Princeton, I'll show how the two influenced each other and encouraged each other's diagrammatic methods. I'll explore the influence on Feynman of not just Wheeler, but also of his first wife Arline, an aspiring artist. I'll describe how Feynman diagrams, introduced in the late 1940s, while first seen as `heretical' in the face of Bohr's complementarity, became standard, essential methods. I'll detail Wheeler's encouragement of his colleague Martin Kruskal's use of special diagrams to elucidate the properties of black holes. Finally, I'll show how each physicist supported art later in life: Wheeler helping to arrange the Putnam Collection of 20th century sculpture at Princeton and Feynman, in a kind of `second career,' becoming an artist himself.

  10. The second-order interference of two independent single-mode He-Ne lasers

    NASA Astrophysics Data System (ADS)

    Liu, Jianbin; Le, Mingnan; Bai, Bin; Wang, Wentao; Chen, Hui; Zhou, Yu; Li, Fu-li; Xu, Zhuo

    2015-09-01

    The second-order spatial and temporal interference patterns of two independent single-mode continuous-wave He-Ne lasers are observed when the two lasers are incident on two adjacent input ports of a 1:1 non-polarizing beam splitter. Two-photon interference based on the superposition principle in Feynman's path integral theory is employed to interpret the experimental results. The conditions to observe the second-order interference pattern with two independent single-mode continuous-wave lasers are discussed. It is concluded that frequency stability is important for observing the second-order interference pattern with two independent light beams.

  11. A Cameron-Storvick Theorem for Analytic Feynman Integrals on Product Abstract Wiener Space and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Jae Gil, E-mail: jgchoi@dankook.ac.kr; Chang, Seung Jun, E-mail: sejchang@dankook.ac.kr

    In this paper we derive a Cameron-Storvick theorem for the analytic Feynman integral of functionals on product abstract Wiener space B{sup 2}. We then apply our result to obtain an evaluation formula for the analytic Feynman integral of unbounded functionals on B{sup 2}. We also present meaningful examples involving functionals which arise naturally in quantum mechanics.

  12. On the Path Integral in Non-Commutative (nc) Qft

    NASA Astrophysics Data System (ADS)

    Dehne, Christoph

    2008-09-01

    As is generally known, different quantization schemes applied to field theory on NC spacetime lead to Feynman rules with different physical properties if time does not commute with space. In particular, the Feynman rules that are derived from the path integral corresponding to the T*-product (the so-called naïve Feynman rules) violate the causal time-ordering property. Within the Hamiltonian approach to quantum field theory, we show that we can (formally) modify the time ordering encoded in the above path integral. The resulting Feynman rules are identical to those obtained in the canonical approach via the Gell-Mann-Low formula (with T-ordering). They thus preserve unitarity and causal time ordering.

  13. Fuchsia : A tool for reducing differential equations for Feynman master integrals to epsilon form

    NASA Astrophysics Data System (ADS)

    Gituliar, Oleksandr; Magerya, Vitaly

    2017-10-01

    We present Fuchsia - an implementation of the Lee algorithm, which for a given system of ordinary differential equations with rational coefficients ∂x J(x , ɛ) = A(x , ɛ) J(x , ɛ) finds a basis transformation T(x , ɛ) , i.e., J(x , ɛ) = T(x , ɛ) J‧(x , ɛ) , such that the system turns into the epsilon form : ∂xJ‧(x , ɛ) = ɛ S(x) J‧(x , ɛ) , where S(x) is a Fuchsian matrix. A system of this form can be trivially solved in terms of polylogarithms as a Laurent series in the dimensional regulator ɛ. That makes the construction of the transformation T(x , ɛ) crucial for obtaining solutions of the initial system. In principle, Fuchsia can deal with any regular systems, however its primary task is to reduce differential equations for Feynman master integrals. It ensures that solutions contain only regular singularities due to the properties of Feynman integrals. Program Files doi:http://dx.doi.org/10.17632/zj6zn9vfkh.1 Licensing provisions: MIT Programming language:Python 2.7 Nature of problem: Feynman master integrals may be calculated from solutions of a linear system of differential equations with rational coefficients. Such a system can be easily solved as an ɛ-series when its epsilon form is known. Hence, a tool which is able to find the epsilon form transformations can be used to evaluate Feynman master integrals. Solution method: The solution method is based on the Lee algorithm (Lee, 2015) which consists of three main steps: fuchsification, normalization, and factorization. During the fuchsification step a given system of differential equations is transformed into the Fuchsian form with the help of the Moser method (Moser, 1959). Next, during the normalization step the system is transformed to the form where eigenvalues of all residues are proportional to the dimensional regulator ɛ. Finally, the system is factorized to the epsilon form by finding an unknown transformation which satisfies a system of linear equations. 
Additional comments including Restrictions and Unusual features: Systems of single-variable differential equations are considered. A system needs to be reducible to Fuchsian form and eigenvalues of its residues must be of the form n + m ɛ, where n is integer. Performance depends upon the input matrix, its size, number of singular points and their degrees. It takes around an hour to reduce an example 74 × 74 matrix with 20 singular points on a PC with a 1.7 GHz Intel Core i5 CPU. An additional slowdown is to be expected for matrices with complex and/or irrational singular point locations, as these are particularly difficult for symbolic algebra software to handle.
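    The reason the epsilon form is so convenient is that the system ∂xJ′ = ɛ S(x) J′ can be solved order by order in ɛ: writing J′ = Σₖ ɛᵏ Jₖ gives Jₖ′ = S Jₖ₋₁, so each order is an iterated integral over the previous one. A minimal sympy sketch of this order-by-order integration (not part of Fuchsia; the function name is illustrative):

```python
import sympy as sp

x, eps = sp.symbols('x eps')

def solve_epsilon_form(S, order, x0=1):
    """Solve dJ/dx = eps*S(x)*J order by order in eps with J(x0) = (1,0,...)^T;
    the eps^k term is a k-fold iterated integral over the letters in S."""
    n = S.shape[0]
    terms = [sp.Matrix([1] + [0] * (n - 1))]   # eps^0: the constant boundary value
    for _ in range(order):
        # next order = definite integral of S times the previous order
        nxt = (S * terms[-1]).applyfunc(lambda e: sp.integrate(e, (x, x0, x)))
        terms.append(nxt)
    return sum((eps**k * t for k, t in enumerate(terms)), sp.zeros(n, 1))

# Simplest case: one master integral with the single "letter" 1/x,
# i.e. one Fuchsian singularity at x = 0.
S = sp.Matrix([[1 / x]])
J = solve_epsilon_form(S, 3)
```

    For this one-letter alphabet the iterated integrals are just powers of log x, so the series exponentiates to x^ɛ; richer alphabets produce the multiple polylogarithms mentioned above.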

  14. Mean energy of some interacting bosonic systems derived by virtue of the generalized Hellmann-Feynman theorem

    NASA Astrophysics Data System (ADS)

    Fan, Hong-yi; Xu, Xue-xiang

    2009-06-01

    By virtue of the generalized Hellmann-Feynman theorem [H. Y. Fan and B. Z. Chen, Phys. Lett. A 203, 95 (1995)], we derive the mean energy of some interacting bosonic systems for some Hamiltonian models without proceeding with diagonalizing the Hamiltonians. Our work extends the field of applications of the Hellmann-Feynman theorem and may enrich the theory of quantum statistics.

  15. The ε-form of the differential equations for Feynman integrals in the elliptic case

    NASA Astrophysics Data System (ADS)

    Adams, Luise; Weinzierl, Stefan

    2018-06-01

    Feynman integrals are easily solved if their system of differential equations is in ε-form. In this letter we show by the explicit example of the kite integral family that an ε-form can even be achieved, if the Feynman integrals do not evaluate to multiple polylogarithms. The ε-form is obtained by a (non-algebraic) change of basis for the master integrals.

  16. Simplifying Differential Equations for Multiscale Feynman Integrals beyond Multiple Polylogarithms.

    PubMed

    Adams, Luise; Chaubey, Ekta; Weinzierl, Stefan

    2017-04-07

    In this Letter we exploit factorization properties of Picard-Fuchs operators to decouple differential equations for multiscale Feynman integrals. The algorithm reduces the differential equations to blocks of the size of the order of the irreducible factors of the Picard-Fuchs operator. As a side product, our method can be used to easily convert the differential equations for Feynman integrals which evaluate to multiple polylogarithms to an ϵ form.

  17. Temperature dependence of the Urbach optical absorption edge: A theory of multiple phonon absorption and emission sidebands

    NASA Astrophysics Data System (ADS)

    Grein, C. H.; John, Sajeev

    1989-01-01

    The optical absorption coefficient for subgap electronic transitions in crystalline and disordered semiconductors is calculated by first-principles means with use of a variational principle based on the Feynman path-integral representation of the transition amplitude. This incorporates the synergetic interplay of static disorder and the nonadiabatic quantum dynamics of the coupled electron-phonon system. Over photon-energy ranges of experimental interest, this method predicts accurate linear exponential Urbach behavior of the absorption coefficient. At finite temperatures the nonlinear electron-phonon interaction gives rise to multiple phonon emission and absorption sidebands which accompany the optically induced electronic transition. These sidebands dominate the absorption in the Urbach regime and account for the temperature dependence of the Urbach slope and energy gap. The physical picture which emerges is that the phonons absorbed from the heat bath are then reemitted into a dynamical polaronlike potential well which localizes the electron. At zero temperature we recover the usual polaron theory. At high temperatures the calculated tail is qualitatively similar to that of a static Gaussian random potential. This leads to a linear relationship between the Urbach slope and the downshift of the extrapolated continuum band edge as well as a temperature-independent Urbach focus. At very low temperatures, deviations from these rules are predicted arising from the true quantum dynamics of the lattice. Excellent agreement is found with experimental data on c-Si, a-Si:H, a-As2Se3, and a-As2S3. Results are compared with a simple physical argument based on the most-probable-potential-well method.

  18. Covariant path integrals on hyperbolic surfaces

    NASA Astrophysics Data System (ADS)

    Schaefer, Joe

    1997-11-01

    DeWitt's covariant formulation of path integration [B. DeWitt, "Dynamical theory in curved spaces. I. A review of the classical and quantum action principles," Rev. Mod. Phys. 29, 377-397 (1957)] has two practical advantages over the traditional methods of "lattice approximations": there is no ordering problem, and classical symmetries are manifestly preserved at the quantum level. Applying the spectral theorem for unbounded self-adjoint operators, we provide a rigorous proof of the convergence of certain path integrals on Riemann surfaces of constant curvature -1. The Pauli-DeWitt curvature correction term arises, as in DeWitt's work. Introducing a Fuchsian group Γ of the first kind and a continuous, bounded, Γ-automorphic potential V, we obtain a Feynman-Kac formula for the automorphic Schrödinger equation on the Riemann surface Γ\H. We analyze the Wick rotation and prove the strong convergence of the so-called Feynman maps [K. D. Elworthy, Path Integration on Manifolds, Mathematical Aspects of Superspace, edited by Seifert, Clarke, and Rosenblum (Reidel, Boston, 1983), pp. 47-90] on a dense set of states. Finally, we give a new proof of some results in C. Grosche and F. Steiner, "The path integral on the Poincaré upper half plane and for Liouville quantum mechanics," Phys. Lett. A 123, 319-328 (1987).
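    The Feynman-Kac formula is easy to demonstrate numerically in its simplest setting, Brownian motion on the real line rather than a hyperbolic surface: u(t, x₀) = E[exp(−∫₀ᵗ V(B_s) ds) f(B_t)] solves ∂ₜu = ½u″ − Vu. A Monte Carlo sketch (flat-space toy, not the automorphic setting of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def feynman_kac(f, V, x0, t, n_paths=20000, n_steps=200):
    """Monte Carlo Feynman-Kac estimate of
    u(t, x0) = E[ exp(-int_0^t V(B_s) ds) * f(B_t) ],  B_0 = x0."""
    dt = t / n_steps
    x = np.full(n_paths, x0, dtype=float)
    weight = np.zeros(n_paths)          # accumulates -int V ds along each path
    for _ in range(n_steps):
        weight -= V(x) * dt
        x += rng.normal(scale=np.sqrt(dt), size=n_paths)
    return np.mean(np.exp(weight) * f(x))

# Free case V = 0 with f(x) = x^2: the exact heat-equation answer is x0^2 + t
u_free = feynman_kac(lambda x: x**2, lambda x: 0.0 * x, x0=1.0, t=0.5)

# Constant potential V = 2 with f = 1: the weight is deterministic, u = e^{-1}
u_const = feynman_kac(lambda x: 1.0 + 0 * x, lambda x: 2.0 + 0 * x, x0=0.0, t=0.5)
```

    The constant-potential case isolates the exponential weight exactly, while the free case checks the diffusive part against the known second moment of Brownian motion.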

  19. Analytic solution of the lifeguard problem

    NASA Astrophysics Data System (ADS)

    De Luca, Roberto; Di Mauro, Marco; Naddeo, Adele

    2018-03-01

    A simple version due to Feynman of Fermat’s principle is analyzed. It deals with the path a lifeguard on a beach must follow to reach a drowning swimmer. The solution for the exact point, P(x, 0) , at the beach-sea boundary, corresponding to the fastest path to the swimmer, is worked out in detail and the analogy with light traveling at the air-water boundary is described. The results agree with the known conclusion that the shortest path does not coincide with the fastest one. The relevance of the subject for a basic physics course, at an advanced high school level, is pointed out.
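    The fastest entry point described above is where dT/dx = 0, which is exactly Snell's law sin θ₁/v₁ = sin θ₂/v₂ with the running and swimming speeds playing the role of the two light speeds. A short numerical sketch (coordinates and parameter values are illustrative assumptions):

```python
import math

def travel_time(x, a, b, d, v1, v2):
    # Run on the beach from (0, a) to P(x, 0), then swim to the swimmer at (d, -b)
    return math.hypot(x, a) / v1 + math.hypot(d - x, b) / v2

def snell_residual(x, a, b, d, v1, v2):
    # dT/dx = sin(theta1)/v1 - sin(theta2)/v2; zero at the fastest entry point
    return x / (v1 * math.hypot(x, a)) - (d - x) / (v2 * math.hypot(d - x, b))

def fastest_entry_point(a, b, d, v1, v2, tol=1e-12):
    """Bisection on dT/dx = 0 over (0, d); T is convex, so the root is unique."""
    lo, hi = 0.0, d
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if snell_residual(mid, a, b, d, v1, v2) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Lifeguard at (0, 3), swimmer at (10, -2); running is faster than swimming
x_star = fastest_entry_point(a=3.0, b=2.0, d=10.0, v1=4.0, v2=1.5)
```

    Because v₁ > v₂, the optimum lies beyond the straight-line crossing (x = 6 for these numbers): the lifeguard runs farther along the beach before entering the water, exactly as light bends toward the normal in the slower medium.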

  20. Path integrals, the ABL rule and the three-box paradox

    NASA Astrophysics Data System (ADS)

    Sokolovski, D.; Puerto Giménez, I.; Sala Mayato, R.

    2008-10-01

    The three-box problem is analysed in terms of virtual pathways, interference between which is destroyed by a number of intermediate measurements. The Aharonov-Bergmann-Lebowitz (ABL) rule is shown to be a particular case of Feynman's recipe for assigning probabilities to exclusive alternatives. The ‘paradoxical’ features of the three box case arise in an attempt to attribute, in contradiction to the uncertainty principle, properties pertaining to different ensembles produced by different intermediate measurements to the same particle. The effect can be mimicked by a classical system, provided an observation is made to perturb the system in a non-local manner.
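    The ABL rule assigns, to an intermediate projective measurement {P_k} between pre-selection |ψ⟩ and post-selection ⟨φ|, the probabilities |⟨φ|P_k|ψ⟩|² / Σⱼ|⟨φ|P_j|ψ⟩|². A few lines of numpy reproduce the standard three-box statement that, for the usual choice of states, the particle is found with certainty in box 1 and, for a different intermediate measurement, with certainty in box 2:

```python
import numpy as np

def abl_probabilities(pre, post, projectors):
    """Aharonov-Bergmann-Lebowitz probabilities for an intermediate
    projective measurement between pre- and post-selection."""
    amps = np.array([np.abs(post.conj() @ P @ pre)**2 for P in projectors])
    return amps / amps.sum()

e1, e2, e3 = np.eye(3)
pre = (e1 + e2 + e3) / np.sqrt(3)    # pre-selected state
post = (e1 + e2 - e3) / np.sqrt(3)   # post-selected state

# "Was the particle in box 1?" -- outcomes {P1, 1 - P1}
P1 = np.outer(e1, e1)
p_box1 = abl_probabilities(pre, post, [P1, np.eye(3) - P1])

# "Was the particle in box 2?" -- a *different* intermediate measurement
P2 = np.outer(e2, e2)
p_box2 = abl_probabilities(pre, post, [P2, np.eye(3) - P2])
```

    Both "yes" probabilities equal 1 because the complementary amplitudes ⟨φ|(1 − P)|ψ⟩ vanish; the apparent contradiction dissolves once one notes, as the abstract stresses, that the two certainties refer to different intermediate measurements, hence different ensembles.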

  1. Equivalence between the Arquès-Walsh sequence formula and the number of connected Feynman diagrams for every perturbation order in the fermionic many-body problem

    NASA Astrophysics Data System (ADS)

    Castro, E.

    2018-02-01

    From the perturbative expansion of the exact Green function, an exact counting formula is derived to determine the number of different types of connected Feynman diagrams. This formula coincides with the Arquès-Walsh sequence formula in rooted map theory, supporting the topological connection between Feynman diagrams and rooted maps. A classificatory summing-terms approach is used, in connection with discrete mathematical theory.

  2. An automated integration-free path-integral method based on Kleinert's variational perturbation theory

    NASA Astrophysics Data System (ADS)

    Wong, Kin-Yiu; Gao, Jiali

    2007-12-01

    Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to any realistic systems beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good accordance with experiments. We hope that our method could be used by non-path-integral experts or experimentalists as a "black box" for any given system.
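    The basic building block of the Feynman-Kleinert first-order approach (the starting point of KP theory) is the Gaussian-smeared potential V_{a²}(x₀), the average of V over harmonic fluctuations of width a about the centroid x₀. The smearing can be checked symbolically; the sketch below is only this first ingredient, not the authors' integration-free KP machinery:

```python
import sympy as sp

x, x0, u = sp.symbols('x x0 u', real=True)
a = sp.symbols('a', positive=True)

def smeared(V):
    """Gaussian-smeared potential V_{a^2}(x0) = <V(x0 + u)> over a normal
    fluctuation u of variance a^2 -- the Feynman-Kleinert building block."""
    weight = sp.exp(-u**2 / (2 * a**2)) / sp.sqrt(2 * sp.pi * a**2)
    return sp.expand(sp.integrate(V.subs(x, x0 + u) * weight, (u, -sp.oo, sp.oo)))

# Quartic example: smearing x^4 gives x0^4 + 6*a^2*x0^2 + 3*a^4,
# i.e. the Gaussian moments <u^2> = a^2 and <u^4> = 3 a^4 appear directly.
V_sm = smeared(x**4)
```

    In the full method the width a² and the trial frequency are then fixed variationally for each centroid position; higher KP orders systematically correct this first-order result.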

  3. A Staged Reading of the Play: Moving Bodies

    NASA Astrophysics Data System (ADS)

    Schwartz, Brian

    Moving Bodies is about Nobel Prize-winning physicist Richard Feynman as he explores nature, science, sex, anti-Semitism, and the world around him. This epic, comic journey portrays Feynman as an iconoclastic young man, as a physicist with the Manhattan Project, and as the investigator confronting the mystery of the Challenger disaster. The Atomic Bomb is central to the play, but it is also very much about human loves and losses. We learn about Feynman's eccentricities: his bongo playing, his penchant for picking locks, and most notably his appreciation for women. Through playwright Arthur Giron's eyes, we see how Feynman became one of the most important scientists of our time. Giron is also co-playwright of the 2015 Broadway musical Amazing Grace. The staged reading is performed by the Southern Rep Theatre. http://www.southernrep.com/ The play director and actors, as well as a historian-scientist who knew Feynman, will be available for a talk-back discussion after the reading. Produced by Brian Schwartz, CUNY, and Gregory Mack, APS. Sponsored by: The Forum on the History of Physics, The Forum on Outreach and Engaging the Public, and The Forum on Physics and Society.

  4. Feynman rules for a whole Abelian model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chauca, J.; Doria, R.; Soares, W.

    2012-09-24

    Feynman rules for an abelian extension of gauge theories are discussed and explicitly derived. Vertices with three and four abelian gauge bosons are obtained. A discussion of a possible structure for the photon is presented.

  5. Particles, Feynman Diagrams and All That

    ERIC Educational Resources Information Center

    Daniel, Michael

    2006-01-01

    Quantum fields are introduced in order to give students an accurate qualitative understanding of the origin of Feynman diagrams as representations of particle interactions. Elementary diagrams are combined to produce diagrams representing the main features of the Standard Model.

  6. Richard P. Feynman and the Feynman Diagrams

    Science.gov Websites

    Web listing of Richard P. Feynman documents available in full text, including: A Theorem and Its Application to Finite Tampers (DOE); Fermi-Thomas Theory (DOE Technical Report, April 28, 1947); and Mathematical Formulation of the Quantum Theory.

  7. Probing finite coarse-grained virtual Feynman histories with sequential weak values

    NASA Astrophysics Data System (ADS)

    Georgiev, Danko; Cohen, Eliahu

    2018-05-01

    Feynman's sum-over-histories formulation of quantum mechanics has been considered a useful calculational tool in which virtual Feynman histories entering into a coherent quantum superposition cannot be individually measured. Here we show that sequential weak values, inferred by consecutive weak measurements of projectors, allow direct experimental probing of individual virtual Feynman histories, thereby revealing the exact nature of quantum interference of coherently superposed histories. Because the total sum of sequential weak values of multitime projection operators for a complete set of orthogonal quantum histories is unity, complete sets of weak values could be interpreted in agreement with the standard quantum mechanical picture. We also elucidate the relationship between sequential weak values of quantum histories with different coarse graining in time and establish the incompatibility of weak values for nonorthogonal quantum histories in history Hilbert space. Bridging theory and experiment, the presented results may enhance our understanding of both weak values and quantum histories.

  8. Feynman-Kac formula for stochastic hybrid systems.

    PubMed

    Bressloff, Paul C

    2017-01-01

    We derive a Feynman-Kac formula for functionals of a stochastic hybrid system evolving according to a piecewise deterministic Markov process. We first derive a stochastic Liouville equation for the moment generator of the stochastic functional, given a particular realization of the underlying discrete Markov process; the latter generates transitions between different dynamical equations for the continuous process. We then analyze the stochastic Liouville equation using methods recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment generating function, averaged with respect to realizations of the discrete Markov process. The resulting Feynman-Kac formula takes the form of a differential Chapman-Kolmogorov equation. We illustrate the theory by calculating the occupation time for a one-dimensional velocity jump process on the infinite or semi-infinite real line. Finally, we present an alternative derivation of the Feynman-Kac formula based on a recent path-integral formulation of stochastic hybrid systems.

  9. Clocks in Feynman's computer and Kitaev's local Hamiltonian: Bias, gaps, idling, and pulse tuning

    NASA Astrophysics Data System (ADS)

    Caha, Libor; Landau, Zeph; Nagaj, Daniel

    2018-06-01

    We present a collection of results about the clock in Feynman's computer construction and Kitaev's local Hamiltonian problem. First, by analyzing the spectra of quantum walks on a line with varying end-point terms, we find a better lower bound on the gap of the Feynman Hamiltonian, which translates into a less strict promise gap requirement for the quantum-Merlin-Arthur-complete local Hamiltonian problem. We also translate this result into the language of adiabatic quantum computation. Second, introducing an idling clock construction with a large state space but fast Cesaro mixing, we provide a way for achieving an arbitrarily high success probability of computation with Feynman's computer with only a logarithmic increase in the number of clock qubits. Finally, we tune and thus improve the costs (locality and gap scaling) of implementing a (pulse) clock with a single excitation.

  10. Orbit-averaged quantities, the classical Hellmann-Feynman theorem, and the magnetic flux enclosed by gyro-motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkins, R. J., E-mail: rperkins@pppl.gov; Bellan, P. M.

    Action integrals are often used to average a system over fast oscillations and obtain reduced dynamics. It is not surprising, then, that action integrals play a central role in the Hellmann-Feynman theorem of classical mechanics, which furnishes the values of certain quantities averaged over one period of rapid oscillation. This paper revisits the classical Hellmann-Feynman theorem, rederiving it in connection to an analogous theorem involving the time-averaged evolution of canonical coordinates. We then apply a modified version of the Hellmann-Feynman theorem to obtain a new result: the magnetic flux enclosed by one period of gyro-motion of a charged particle in a non-uniform magnetic field. These results further demonstrate the utility of the action integral with regard to obtaining orbit-averaged quantities and the usefulness of this formalism in characterizing charged particle motion.

  11. Measurement of hadron azimuthal distributions in deep inelastic muon proton scattering

    NASA Astrophysics Data System (ADS)

    Arneodo, M.; Arvidson, A.; Aubert, J. J.; Badelek, B.; Beaufays, J.; Bee, C. P.; Benchouk, C.; Berghoff, G.; Bird, I.; Blum, D.; Böhm, E.; de Bouard, X.; Brasse, F. W.; Braun, H.; Broll, C.; Brown, S.; Brück, H.; Calen, H.; Chima, J. S.; Ciborowski, J.; Clifft, R.; Coignet, G.; Combley, F.; Conrad, J.; Coughlan, J.; D'Agostini, G.; Dahlgren, S.; Dengler, F.; Derado, I.; Dreyer, T.; Drees, J.; Düren, M.; Eckardt, V.; Edwards, A.; Edwards, M.; Ernst, T.; Eszes, G.; Favier, J.; Ferrero, M. I.; Figiel, J.; Flauger, W.; Foster, J.; Gabathuler, E.; Gajewski, J.; Gamet, R.; Gayler, J.; Geddes, N.; Grafström, P.; Grard, F.; Haas, J.; Hagberg, E.; Hasert, F. J.; Hayman, P.; Heusse, P.; Jaffre, M.; Jacholkowska, A.; Janata, F.; Jancso, G.; Johnson, A. S.; Kabuss, E. M.; Kellner, G.; Korbel, V.; Krüger, J.; Kullander, S.; Landgraf, U.; Lanske, D.; Loken, J.; Long, K.; Maire, M.; Malecki, P.; Manz, A.; Maselli, S.; Mohr, W.; Montanet, F.; Montgomery, H. E.; Nagy, E.; Nassalski, J.; Norton, P. R.; Oakham, F. G.; Osborne, A. M.; Pascaud, C.; Pavel, N.; Pawlik, B.; Payre, P.; Peroni, C.; Peschel, H.; Pessard, H.; Pettingale, J.; Pietrzyk, B.; Pönsgen, B.; Pötsch, M.; Renton, P.; Ribarics, P.; Rith, K.; Rondio, E.; Scheer, M.; Sandacz, A.; Schlagböhmer, A.; Schiemann, H.; Schmitz, N.; Schneegans, M.; Scholz, M.; Schröder, T.; Schultze, K.; Sloan, T.; Stier, H. E.; Studt, M.; Taylor, G. N.; Thénard, J. M.; Thompson, J. C.; de La Torre, A.; Toth, J.; Urban, L.; Wallucks, W.; Whalley, M.; Wheeler, S.; Williams, W. S. C.; Wimpenny, S. J.; Windmolders, R.; Wolf, G.

    1987-09-01

    A study of the distribution of the azimuthal angle ϕ of charged hadrons in deep inelastic μ⁻p scattering is presented. The dependence of the moments of this distribution on the Feynman x variable and the momentum transverse to the virtual photon indicates that non-zero moments arise mainly from the effects of the intrinsic k_T of the struck quark, with ⟨k_T²⟩ ≳ (0.44 GeV)², and to a lesser extent from QCD processes. No significant variation with Q² or W² is observed.

  12. Bold Diagrammatic Monte Carlo Method Applied to Fermionized Frustrated Spins

    NASA Astrophysics Data System (ADS)

    Kulagin, S. A.; Prokof'ev, N.; Starykh, O. A.; Svistunov, B.; Varney, C. N.

    2013-02-01

    We demonstrate, by considering the triangular lattice spin-1/2 Heisenberg model, that Monte Carlo sampling of skeleton Feynman diagrams within the fermionization framework offers a universal first-principles tool for strongly correlated lattice quantum systems. We observe the fermionic sign blessing—cancellation of higher order diagrams leading to a finite convergence radius of the series. We calculate the magnetic susceptibility of the triangular-lattice quantum antiferromagnet in the correlated paramagnet regime and reveal a surprisingly accurate microscopic correspondence with its classical counterpart at all accessible temperatures. The extrapolation of the observed relation to zero temperature suggests the absence of the magnetic order in the ground state. We critically examine the implications of this unusual scenario.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkatesan, R.C., E-mail: ravi@systemsresearchcorp.com; Plastino, A., E-mail: plastino@fisica.unlp.edu.ar

    The (i) reciprocity relations for the relative Fisher information (RFI, hereafter) and (ii) a generalized RFI–Euler theorem are self-consistently derived from the Hellmann–Feynman theorem. These new reciprocity relations generalize the RFI–Euler theorem and constitute the basis for building up a mathematical Legendre transform structure (LTS, hereafter), akin to that of thermodynamics, that underlies the RFI scenario. This demonstrates the possibility of translating the entire mathematical structure of thermodynamics into an RFI-based theoretical framework. Virial theorems play a prominent role in this endeavor, as a Schrödinger-like equation can be associated with the RFI. Lagrange multipliers are determined invoking the RFI–LTS link and the quantum mechanical virial theorem. An appropriate ansatz allows for the inference of probability density functions (pdf's, hereafter) and energy-eigenvalues of the above mentioned Schrödinger-like equation. The energy-eigenvalues obtained here via inference are benchmarked against established theoretical and numerical results. A principled theoretical basis to reconstruct the RFI framework from the FIM framework is established. Numerical examples for exemplary cases are provided. - Highlights: • Legendre transform structure for the RFI is obtained with the Hellmann–Feynman theorem. • Inference of the energy-eigenvalues of the SWE-like equation for the RFI is accomplished. • Basis for reconstruction of the RFI framework from the FIM case is established. • Substantial qualitative and quantitative distinctions with prior studies are discussed.

  14. A global solution to the Schrödinger equation: From Henstock to Feynman

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nathanson, Ekaterina S., E-mail: enathanson@ggc.edu; Jørgensen, Palle E. T., E-mail: palle-jorgensen@uiowa.edu

    2015-09-15

    One of the key elements of Feynman’s formulation of non-relativistic quantum mechanics is the so-called Feynman path integral. It plays an important role in the theory, but it appears as a postulate based on intuition rather than a well-defined object. All previous attempts to supply Feynman’s theory with a rigorous mathematical underpinning, based on the physical requirements, have not been satisfactory. The difficulty comes from the need to define a measure on the infinite-dimensional space of paths and to create an integral that would possess all of the properties requested by Feynman. In the present paper, we consider a new approach to defining the Feynman path integral, based on the theory developed by Muldowney [A Modern Theory of Random Variable: With Applications in Stochastic Calculus, Financial Mathematics, and Feynman Integration (John Wiley & Sons, Inc., New Jersey, 2012)]. Muldowney uses the Henstock integration technique and deals with the non-absolute integrability of the Fresnel integrals in order to obtain a representation of the Feynman path integral as a functional. This approach offers a mathematically rigorous definition supporting Feynman’s intuitive derivations. But in his work, Muldowney gives only local in space-time solutions. A physical solution to the non-relativistic Schrödinger equation must be global, and it must be given in the form of a unitary one-parameter group in L²(ℝⁿ). The purpose of this paper is to show that a system of one-dimensional local Muldowney’s solutions may be extended to yield a global solution. Moreover, the global extension can be represented by a unitary one-parameter group acting in L²(ℝⁿ).

  15. Weak values, 'negative probability', and the uncertainty principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokolovski, D.

    2007-10-15

    A quantum transition can be seen as a result of interference between various pathways (e.g., Feynman paths), which can be labeled by a variable f. An attempt to determine the value of f without destroying the coherence between the pathways produces a weak value of f. We show f to be an average obtained with an amplitude distribution which can, in general, take negative values, which, in accordance with the uncertainty principle, need not contain information about the actual range of f which contributes to the transition. It is also demonstrated that the moments of such alternating distributions have a number of unusual properties which may lead to a misinterpretation of the weak-measurement results. We provide a detailed analysis of weak measurements with and without post-selection. Examples include the double-slit diffraction experiment, weak von Neumann and von Neumann-like measurements, traversal time for an elastic collision, phase time, and local angular momentum.

  16. Research Capabilities

    Science.gov Websites

    Web page of the Richard P. Feynman Center for Innovation at Los Alamos National Laboratory, describing the Laboratory's research capabilities, including energy and subsurface science.

  17. Regional Economic Development

    Science.gov Websites

    Web page of the Richard P. Feynman Center for Innovation at Los Alamos National Laboratory, describing key programs for achieving regional technology commercialization from Los Alamos.

  18. The signed permutation group on Feynman graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purkart, Julian, E-mail: purkart@physik.hu-berlin.de

    2016-08-15

    The Feynman rules assign to every graph an integral which can be written as a function of a scaling parameter L. Assuming L for the process under consideration is very small, so that contributions to the renormalization group are small, we can expand the integral and only consider the lowest orders in the scaling. The aim of this article is to determine specific combinations of graphs in a scalar quantum field theory that lead to a remarkable simplification of the first non-trivial term in the perturbation series. It will be seen that the result is independent of the renormalization scheme and the scattering angles. To achieve that goal we will utilize the parametric representation of scalar Feynman integrals as well as the Hopf algebraic structure of the Feynman graphs under consideration. Moreover, we will present a formula which reduces the effort of determining the first-order term in the perturbation series for the specific combination of graphs to a minimum.

  19. Navigating around the algebraic jungle of QCD: efficient evaluation of loop helicity amplitudes

    NASA Astrophysics Data System (ADS)

    Lam, C. S.

    1993-05-01

    A method is developed whereby spinor helicity techniques can be used to simplify the calculation of loop amplitudes. This is achieved by using the Feynman-parameter representation, where the offending off-shell loop momenta do not appear. Other shortcuts motivated by the Bern-Kosower one-loop string calculations can be incorporated into the formalism. This includes color reorganization into Chan-Paton factors and the use of background Feynman gauge. This method is applicable to any Feynman diagram with any number of loops as long as the external masses can be ignored. In order to minimize the very considerable algebra encountered in non-abelian gauge theories, graphical methods are developed for most of the calculations. This enables the large number of terms encountered to be organized implicitly in the Feynman diagram without the necessity of writing down any of them algebraically. A one-loop four-gluon amplitude in a particular helicity configuration is computed explicitly to illustrate the method.

  20. Quantum walks in brain microtubules--a biomolecular basis for quantum cognition?

    PubMed

    Hameroff, Stuart

    2014-01-01

    Cognitive decisions are best described by quantum mathematics. Do quantum information devices operate in the brain? What would they look like? Fuss and Navarro () describe quantum lattice registers in which quantum superpositioned pathways interact (compute/integrate) as 'quantum walks' akin to Feynman's path integral in a lattice (e.g. the 'Feynman quantum chessboard'). Simultaneous alternate pathways eventually reduce (collapse), selecting one particular pathway in a cognitive decision, or choice. This paper describes how quantum walks in a Feynman chessboard are conceptually identical to 'topological qubits' in brain neuronal microtubules, as described in the Penrose-Hameroff 'Orch OR' theory of consciousness. Copyright © 2013 Cognitive Science Society, Inc.

  1. Application of the Feynman-tree theorem together with BCFW recursion relations

    NASA Astrophysics Data System (ADS)

    Maniatis, M.

    2018-03-01

    Recently, it has been shown that on-shell scattering amplitudes can be constructed by the Feynman-tree theorem combined with the BCFW recursion relations. Since the BCFW relations are restricted to tree diagrams, the preceding application of the Feynman-tree theorem is essential. In this way, amplitudes can be constructed by on-shell and gauge-invariant tree amplitudes. Here, we want to apply this method to the electron-photon vertex correction. We present all the single, double, and triple phase-space tensor integrals explicitly and show that the sum of amplitudes coincides with the result of the conventional calculation of a virtual loop correction.

  2. Exact Maximum-Entropy Estimation with Feynman Diagrams

    NASA Astrophysics Data System (ADS)

    Netser Zernik, Amitai; Schlank, Tomer M.; Tessler, Ran J.

    2018-02-01

    A longstanding open problem in statistics is finding an explicit expression for the probability measure which maximizes entropy with respect to given constraints. In this paper a solution to this problem is found, using perturbative Feynman calculus. The explicit expression is given as a sum over weighted trees.
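For context on the constrained maximum-entropy problem the paper addresses, the simplest discrete case already has a well-known explicit structure: with a single mean constraint, the entropy maximizer is an exponential family p_i ∝ exp(λ x_i), and the Lagrange multiplier λ can be found by bisection because the mean is monotone in λ. A minimal stdlib sketch of that textbook special case (not the paper's Feynman-diagram tree expansion; all names are illustrative):

```python
import math

def maxent(xs, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    # Maximum-entropy distribution on the points xs subject to a mean
    # constraint: p_i proportional to exp(lam * x_i).  The induced mean
    # is strictly increasing in lam, so bisection finds the multiplier.
    def induced_mean(lam):
        w = [math.exp(lam * x) for x in xs]
        z = sum(w)
        return sum(x * wi for x, wi in zip(xs, w)) / z

    while hi - lo > tol:
        mid = (lo + hi) / 2
        if induced_mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent([0, 1, 2], 1.2)
```

When the target mean equals the unconstrained mean, the multiplier vanishes and the uniform distribution is recovered, as expected.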

  3. Teaching Basic Quantum Mechanics in Secondary School Using Concepts of Feynman Path Integrals Method

    ERIC Educational Resources Information Center

    Fanaro, Maria de los Angeles; Otero, Maria Rita; Arlego, Marcelo

    2012-01-01

    This paper discusses the teaching of basic quantum mechanics in high school. Rather than following the usual formalism, our approach is based on Feynman's path integral method. Our presentation makes use of simulation software and avoids sophisticated mathematical formalism. (Contains 3 figures.)
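The path-sum idea behind this teaching approach can be sketched in its simplest form: each alternative path from source to screen contributes a complex amplitude exp(ikL), and the detection probability is the squared modulus of the sum. A minimal two-path (double-slit) sketch, assuming point slits, a distant screen, and illustrative parameter values (this is a conceptual toy, not the simulation software the paper uses):

```python
import cmath
import math

def intensity(x_screen, slit_sep=1.0, L=10.0, k=20.0):
    # Sum over the two paths, one through each slit: amplitude is the
    # sum of exp(i*k*path_length); intensity is |amplitude|^2.
    r1 = math.hypot(L, x_screen - slit_sep / 2)
    r2 = math.hypot(L, x_screen + slit_sep / 2)
    amp = cmath.exp(1j * k * r1) + cmath.exp(1j * k * r2)
    return abs(amp) ** 2

# Interference pattern across the screen, x from -3 to 3
pattern = [intensity(x / 10) for x in range(-30, 31)]
```

At the screen center the two path lengths are equal, the amplitudes add in phase, and the intensity is four times that of a single path, which is the fringe-buildup point the classroom approach emphasizes.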

  4. Nanotechnology: From Feynman to Funding

    ERIC Educational Resources Information Center

    Drexler, K. Eric

    2004-01-01

    The revolutionary Feynman vision of a powerful and general nanotechnology, based on nanomachines that build with atom-by-atom control, promises great opportunities and, if abused, great dangers. This vision made nanotechnology a buzzword and launched the global nanotechnology race. Along the way, however, the meaning of the word has shifted. A…

  5. Feynman-Kac equations for reaction and diffusion processes

    NASA Astrophysics Data System (ADS)

    Hou, Ru; Deng, Weihua

    2018-04-01

    This paper provides a theoretical framework for deriving the forward and backward Feynman-Kac equations for the distribution of functionals of the path of a particle undergoing both diffusion and reaction processes. Once given the diffusion type and reaction rate, a specific forward or backward Feynman-Kac equation can be obtained. The results in this paper include those for normal/anomalous diffusions and reactions with linear/nonlinear rates. Using the derived equations, we apply our findings to compute some physical (experimentally measurable) statistics, including the occupation time in half-space, the first passage time, and the occupation time in half-interval with an absorbing or reflecting boundary, for the physical system with anomalous diffusion and spontaneous evanescence.

  6. From Loops to Trees By-passing Feynman's Theorem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Catani, Stefano; Gleisberg, Tanju; Krauss, Frank

    2008-04-22

    We derive a duality relation between one-loop integrals and phase-space integrals emerging from them through single cuts. The duality relation is realized by a modification of the customary + i0 prescription of the Feynman propagators. The new prescription regularizing the propagators, which we write in a Lorentz covariant form, compensates for the absence of multiple cut contributions that appear in the Feynman Tree Theorem. The duality relation can be applied to generic one-loop quantities in any relativistic, local and unitary field theories. It is suitable for applications to the analytical calculation of one-loop scattering amplitudes, and to the numerical evaluation of cross-sections at next-to-leading order.

  7. General consequences of the violated Feynman scaling

    NASA Technical Reports Server (NTRS)

    Kamberov, G.; Popova, L.

    1985-01-01

    The problem of scaling of the hadronic production cross sections represents an outstanding question in high energy physics, especially for the interpretation of cosmic ray data. A comprehensive analysis of the accelerator data leads to the conclusion that Feynman scaling is broken. It was proposed that the Lorentz invariant inclusive cross sections for secondaries of a given type approach a constant with respect to a broken scaling variable x_s. Thus, the differential cross sections measured at accelerator energies can be extrapolated to higher cosmic ray energies. This assumption leads to some important consequences. The distribution of secondary multiplicity that follows from the violated Feynman scaling, obtained using a method similar to that of Koba et al., is discussed.

  8. The Pleasure of Finding Things out

    ERIC Educational Resources Information Center

    Loxley, Peter

    2005-01-01

    "The pleasure of finding things out" is a collection of short works by the Nobel Prize winning scientist Richard Feynman. The book provides insights into his infectious enthusiasm for science and his love of sharing ideas about the subject with anyone who wanted to listen. Feynman has been widely acknowledged as one of the greatest physicists of…

  9. The static hard-loop gluon propagator to all orders in anisotropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nopoush, Mohammad; Guo, Yun; Strickland, Michael

    We calculate the (semi-)static hard-loop self-energy and propagator using the Keldysh formalism in a momentum-space anisotropic quark-gluon plasma. The static retarded, advanced, and Feynman (symmetric) self-energies and propagators are calculated to all orders in the momentum-space anisotropy parameter ξ. For the retarded and advanced self-energies/propagators, we present a concise derivation and comparison with previously obtained results and extend the calculation of the self-energies to next-to-leading order in the gluon energy, ω. For the Feynman self-energy/propagator, we present new results which are accurate to all orders in ξ. We compare our exact results with prior expressions for the Feynman self-energy/propagator which were obtained using Taylor-expansions around an isotropic state. Here, we show that, unlike the Taylor-expanded results, the all-orders expression for the Feynman propagator is free from infrared singularities. Finally, we discuss the application of our results to the calculation of the imaginary-part of the heavy-quark potential in an anisotropic quark-gluon plasma.

  10. Feynman-like rules for calculating n-point correlators of the primordial curvature perturbation

    NASA Astrophysics Data System (ADS)

    Valenzuela-Toledo, César A.; Rodríguez, Yeinzon; Beltrán Almeida, Juan P.

    2011-10-01

    A diagrammatic approach to calculate n-point correlators of the primordial curvature perturbation ζ was developed a few years ago following the spirit of the Feynman rules in Quantum Field Theory. The methodology is very useful and time-saving, as it is for the case of the Feynman rules in the particle physics context, but, unfortunately, is not very well known by the cosmology community. In the present work, we extend such an approach in order to include not only scalar field perturbations as the generators of ζ, but also vector field perturbations. The purpose is twofold: first, we would like the diagrammatic approach (which we would call the Feynman-like rules) to become widespread among the cosmology community; second, we intend to give an easy tool to formulate any correlator of ζ for those cases that involve vector field perturbations and that, therefore, may generate prolonged stages of anisotropic expansion and/or important levels of statistical anisotropy. Indeed, the usual way of formulating such correlators, using Wick's theorem, may become very cluttered and time-consuming.
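The combinatorial growth behind Wick's theorem, which is what makes the direct approach cluttered, is easy to quantify: a Gaussian 2n-point correlator expands into a sum over all complete pairings of the fields, and there are (2n-1)!! = 1·3·5···(2n-1) of them. A quick stdlib check of that count (a generic illustration of Wick combinatorics, unrelated to the paper's specific diagrammatic rules):

```python
from math import prod

def wick_pairings(n_fields):
    # Wick's theorem: a 2n-point Gaussian correlator is a sum over all
    # complete pairings of the fields, with (2n-1)!! terms in total.
    if n_fields % 2:
        return 0  # odd correlators vanish for zero-mean Gaussian fields
    return prod(range(1, n_fields, 2))

counts = [wick_pairings(n) for n in (2, 4, 6, 8, 10)]
```

Already at ten fields there are 945 terms, which is why a diagrammatic bookkeeping scheme quickly pays off.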

  11. Feynman formulas for semigroups generated by an iterated Laplace operator

    NASA Astrophysics Data System (ADS)

    Buzinov, M. S.

    2017-04-01

    In the present paper, we find representations of a one-parameter semigroup generated by a finite sum of iterated Laplace operators and an additive perturbation (the potential). Such semigroups and the evolution equations corresponding to them find applications in the fields of physics, chemistry, biology, and pattern recognition. The representations mentioned above are obtained in the form of Feynman formulas, i.e., in the form of a limit of multiple integrals as the multiplicity tends to infinity. The term "Feynman formula" was proposed by Smolyanov. Smolyanov's approach uses Chernoff's theorems. The simple form of the representations thus obtained enables one to use them for numerically modeling the dynamics of the evolution system, as a method for approximating solutions of the equations. The problems considered in this note can also be treated using the approach suggested by Remizov (see also the monograph of Smolyanov and Shavgulidze on path integrals). The representations (of semigroups) obtained in this way are more complicated than those given by the Feynman formulas; however, it is possible to bypass some analytical difficulties.

  12. The static hard-loop gluon propagator to all orders in anisotropy

    DOE PAGES

    Nopoush, Mohammad; Guo, Yun; Strickland, Michael

    2017-09-15

    We calculate the (semi-)static hard-loop self-energy and propagator using the Keldysh formalism in a momentum-space anisotropic quark-gluon plasma. The static retarded, advanced, and Feynman (symmetric) self-energies and propagators are calculated to all orders in the momentum-space anisotropy parameter ξ. For the retarded and advanced self-energies/propagators, we present a concise derivation and comparison with previously obtained results and extend the calculation of the self-energies to next-to-leading order in the gluon energy, ω. For the Feynman self-energy/propagator, we present new results which are accurate to all orders in ξ. We compare our exact results with prior expressions for the Feynman self-energy/propagator which were obtained using Taylor-expansions around an isotropic state. Here, we show that, unlike the Taylor-expanded results, the all-orders expression for the Feynman propagator is free from infrared singularities. Finally, we discuss the application of our results to the calculation of the imaginary-part of the heavy-quark potential in an anisotropic quark-gluon plasma.

  13. Toward Efficient Design of Reversible Logic Gates in Quantum-Dot Cellular Automata with Power Dissipation Analysis

    NASA Astrophysics Data System (ADS)

    Sasamal, Trailokya Nath; Singh, Ashutosh Kumar; Ghanekar, Umesh

    2018-04-01

    Nanotechnologies, notably Quantum-dot Cellular Automata (QCA), offer an attractive perspective for future computing technologies. In this paper, QCA is investigated as an implementation method for designing area- and power-efficient reversible logic gates. The proposed designs achieve superior performance by incorporating a compact 2-input XOR gate. The proposed designs for the Feynman, Toffoli, and Fredkin gates demonstrate 28.12%, 24.4%, and 7% reductions in cell count and utilize 46%, 24.4%, and 7.6% less area, respectively, compared with the previous best designs. The cell counts (area coverage) of the proposed Peres gate and Double Feynman gate are 44.32% (21.5%) and 12% (25%) less, respectively, than those of the most compact previous designs. Further, the delay of the Fredkin and Toffoli gates is 0.75 clock cycles, equal to the delay of the previous best designs, while the Feynman and Double Feynman gates achieve a delay of 0.5 clock cycles, matching the lowest-delay previous designs. Energy analysis confirms that the average energy dissipation of the developed Feynman, Toffoli, and Fredkin gates is 30.80%, 18.08%, and 4.3% less (for the 1.0 E_k energy level), respectively, compared with the best reported designs. This emphasizes the beneficial role of using the proposed reversible gates to design complex and power-efficient QCA circuits. The QCADesigner tool is used to validate the layout of the proposed designs, and the QCAPro tool is used to evaluate the energy dissipation.

  14. Is Electromagnetic Gravity Control Possible?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vargas, Jose G.; Torr, Douglas G.

    2004-02-04

    We study the interplay of Einstein's Gravitation (GR) and Maxwell's Electromagnetism, where the distribution of energy-momentum is not presently known (The Feynman Lectures, Vol 2, Chapter 27, section 4). As Feynman himself stated, one might in principle use Einstein's equations of GR to find such a distribution. GR (born in 1915) presently uses the Levi-Civita connection, LCC (the LCC was born two years after GR as a new concept, and not just as the pre-existing Christoffel symbols that represent it). Around 1927, Einstein proposed for physics an alternative to the LCC that constitutes a far more sensible and powerful affine enrichment of metric Riemannian geometry. It is called teleparallelism (TP). Its Finslerian version (i.e. in the space-time-velocity arena) permits an unequivocal identification of the EM field as a geometric quantity. This in turn permits one to identify a completely geometric set of Einstein equations from curvature equations. From their right hand side, one may obtain the actual distribution of EM energy-momentum. It is consistent with Maxwell's equations, since these also are implied by the equations of structure of TP. We find that the so-far-unknown terms in this distribution amount to a total differential and do not, therefore, alter the value of the total EM energy-momentum. And yet these extra terms are at macroscopic distances enormously larger than the standard quadratic terms. This allows for the generation of measurable gravitational fields by EM fields. We thus answer affirmatively the question of the title.

  15. On the Presentation of Wave Phenomena of Electrons with the Young-Feynman Experiment

    ERIC Educational Resources Information Center

    Matteucci, Giorgio

    2011-01-01

    The Young-Feynman two-hole interferometer is widely used to present electron wave-particle duality and, in particular, the buildup of interference fringes with single electrons. The teaching approach consists of two steps: (i) electrons come through only one hole but diffraction effects are disregarded and (ii) electrons come through both holes…

  16. Teaching Electron--Positron--Photon Interactions with Hands-on Feynman Diagrams

    ERIC Educational Resources Information Center

    Kontokostas, George; Kalkanis, George

    2013-01-01

    Feynman diagrams are introduced in many physics textbooks, such as those by Alonso and Finn and Serway, and their use in physics education has been discussed by various authors. They have an appealing simplicity and can give insight into events in the microworld. Yet students often do not understand their significance and often cannot combine the…

  17. Feynman Diagrams as Metaphors: Borrowing the Particle Physicist's Imagery for Science Communication Purposes

    ERIC Educational Resources Information Center

    Pascolini, A.; Pietroni, M.

    2002-01-01

    We report on an educational project in particle physics based on Feynman diagrams. By dropping the mathematical aspect of the method and keeping just the iconic one, it is possible to convey many different concepts from the world of elementary particles, such as antimatter, conservation laws, particle creation and destruction, real and virtual…

  18. Solving differential equations for Feynman integrals by expansions near singular points

    NASA Astrophysics Data System (ADS)

    Lee, Roman N.; Smirnov, Alexander V.; Smirnov, Vladimir A.

    2018-03-01

    We describe a strategy for solving differential equations for Feynman integrals by power series expansions near singular points and for obtaining high-precision results for the corresponding master integrals. We consider Feynman integrals with two scales, i.e. non-trivially depending on one variable. The algorithm is oriented toward situations where a canonical form of the differential equations cannot be achieved. We provide a computer code constructed with the help of our algorithm for a simple example of four-loop generalized sunset integrals with three equal non-zero masses and two zero masses. Our code gives values of the master integrals at any given point on the real axis with a required accuracy and a given order of expansion in the regularization parameter ɛ.
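The strategy of expanding near singular points and matching expansions at intermediate points can be illustrated on a toy ODE (our own sketch; the equation, expansion points, and truncation order are illustrative and unrelated to the actual master integrals): solve y'(x) = y(x)/(1 - x), which is singular at x = 1, by a series at x = 0, then re-expand at x = 0.5 using the value computed there:

```python
# Toy illustration of series solution + matching (not the authors' code):
# (1 - x) y' = y has exact solution y = 1 / (1 - x) for y(0) = 1.
def series_coeffs(x0, y0, n_terms):
    """Coefficients c_n of y = sum_n c_n (x - x0)^n for (1 - x) y' = y.
    Substituting t = x - x0 gives (1 - x0)(n+1) c_{n+1} = (n+1) c_n,
    i.e. c_{n+1} = c_n / (1 - x0)."""
    c = [y0]
    for _ in range(n_terms - 1):
        c.append(c[-1] / (1 - x0))
    return c

def eval_series(c, x0, x):
    return sum(cn * (x - x0) ** n for n, cn in enumerate(c))

c0 = series_coeffs(0.0, 1.0, 40)      # expansion at x = 0 with y(0) = 1
y_half = eval_series(c0, 0.0, 0.5)    # match point inside both convergence disks
c1 = series_coeffs(0.5, y_half, 40)   # re-expansion at x = 0.5
y_est = eval_series(c1, 0.5, 0.8)     # exact value is 1 / (1 - 0.8) = 5
```

The second expansion converges closer to the singular point than the first, which is the essence of continuing a solution by matched local expansions.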

  19. The neutron-gamma Feynman variance to mean approach: Gamma detection and total neutron-gamma detection (theory and practice)

    NASA Astrophysics Data System (ADS)

    Chernikova, Dina; Axell, Kåre; Avdic, Senada; Pázsit, Imre; Nordlund, Anders; Allard, Stefan

    2015-05-01

    Two versions of the neutron-gamma variance-to-mean (Feynman-alpha method or Feynman-Y function) formula, for either gamma detection only or total neutron-gamma detection, respectively, are derived and compared in this paper. The new formulas have particular importance for detectors of either gamma photons or detectors sensitive to both neutron and gamma radiation. If applied to a plastic or liquid scintillation detector, the total neutron-gamma detection Feynman-Y expression corresponds to a situation where no discrimination is made between neutrons and gamma particles. The gamma variance-to-mean formulas are useful when a detector of only gamma radiation is used or when working with a combined neutron-gamma detector at high count rates. The theoretical derivation is based on the Chapman-Kolmogorov equation with the inclusion of general reactions and corresponding intensities for neutrons and gammas, but with the inclusion of prompt reactions only. A one-energy-group approximation is considered. The comparison of the two different theories is made by using reaction intensities obtained in MCNPX simulations with a simplified geometry for two scintillation detectors and a 252Cf source. In addition, the variance-to-mean ratios (neutron, gamma and total neutron-gamma) are evaluated experimentally for a weak 252Cf neutron-gamma source, a 137Cs random gamma source and a 22Na correlated gamma source. Because the focus is on the possibility of using neutron-gamma variance-to-mean theories for both reactor and safeguards applications, we limit the present study to the general analytical expressions for the Feynman-alpha formulas.
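The variance-to-mean statistic underlying these formulas can be sketched numerically (a toy illustration with synthetic pulse trains, not the detection physics of the paper): Y = Var/Mean - 1 over non-overlapping gates is near zero for a Poisson train and positive for a train containing time-correlated pairs:

```python
# Toy numerical sketch of the Feynman-Y (excess variance-to-mean) statistic.
# Event times and correlation model are synthetic, for illustration only.
import random
random.seed(1)

def feynman_y(event_times, gate_width, t_max):
    n_gates = int(t_max / gate_width)
    counts = [0] * n_gates            # counts per non-overlapping gate
    for t in event_times:
        g = int(t / gate_width)
        if g < n_gates:
            counts[g] += 1
    mean = sum(counts) / n_gates
    var = sum((c - mean) ** 2 for c in counts) / n_gates
    return var / mean - 1.0           # Y = 0 for Poisson data

T = 10_000.0
poisson = [random.uniform(0, T) for _ in range(50_000)]
# correlated source: each trigger emits two pulses close together in time
triggers = [random.uniform(0, T) for _ in range(25_000)]
pairs = [t + dt for t in triggers for dt in (0.0, random.expovariate(10.0))]

y_poisson = feynman_y(poisson, 1.0, T)   # close to zero
y_pairs = feynman_y(pairs, 1.0, T)       # clearly positive
```

The gate width relative to the correlation time controls how much of the pair correlation is captured, which is the trade-off the gate utilization factor describes.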

  20. Low-Dimensional Nanostructures and a Semiclassical Approach for Teaching Feynman's Sum-over-Paths Quantum Theory

    ERIC Educational Resources Information Center

    Onorato, P.

    2011-01-01

    An introduction to quantum mechanics based on the sum-over-paths (SOP) method originated by Richard P. Feynman and developed by E. F. Taylor and coworkers is presented. The Einstein-Brillouin-Keller (EBK) semiclassical quantization rules are obtained following the SOP approach for bounded systems, and a general approach to the calculation of…

  1. Feynman amplitudes and limits of heights

    NASA Astrophysics Data System (ADS)

    Amini, O.; Bloch, S. J.; Burgos Gil, J. I.; Fresán, J.

    2016-10-01

    We investigate from a mathematical perspective how Feynman amplitudes appear in the low-energy limit of string amplitudes. In this paper, we prove the convergence of the integrands. We derive this from results describing the asymptotic behaviour of the height pairing between degree-zero divisors, as a family of curves degenerates. These are obtained by means of the nilpotent orbit theorem in Hodge theory.

  2. Derivation of the Schrodinger Equation from the Hamilton-Jacobi Equation in Feynman's Path Integral Formulation of Quantum Mechanics

    ERIC Educational Resources Information Center

    Field, J. H.

    2011-01-01

    It is shown how the time-dependent Schrodinger equation may be simply derived from the dynamical postulate of Feynman's path integral formulation of quantum mechanics and the Hamilton-Jacobi equation of classical mechanics. Schrodinger's own published derivations of quantum wave equations, the first of which was also based on the Hamilton-Jacobi…

  3. Coupled oscillators and Feynman's three papers

    NASA Astrophysics Data System (ADS)

    Kim, Y. S.

    2007-05-01

    According to Richard Feynman, the adventure of our science of physics is a perpetual attempt to recognize that the different aspects of nature are really different aspects of the same thing. It is therefore interesting to combine some, if not all, of Feynman's papers into one. The first of his three papers is on the "rest of the universe" contained in his 1972 book on statistical mechanics. The second idea is Feynman's parton picture which he presented in 1969 at the Stony Brook conference on high-energy physics. The third idea is contained in the 1971 paper he published with his students, where they show that the hadronic spectra on Regge trajectories are manifestations of harmonic-oscillator degeneracies. In this report, we formulate these three ideas using the mathematics of two coupled oscillators. It is shown that the idea of entanglement is contained in his rest of the universe, and can be extended to a space-time entanglement. It is shown also that his parton model and the static quark model can be combined into one Lorentz-covariant entity. Furthermore, Einstein's special relativity, based on the Lorentz group, can also be formulated within the mathematical framework of two coupled oscillators.

  4. The Generalized Hellmann-Feynman Theorem Approach to Quantum Effects of Mesoscopic Complicated Coupling Circuit at Finite Temperature

    NASA Astrophysics Data System (ADS)

    Wang, Xiu-Xia

    2016-02-01

    By employing the generalized Hellmann-Feynman theorem, the quantization of a mesoscopic complicated coupling circuit is proposed. The ensemble average energy, the energy fluctuation and the energy distribution are investigated at finite temperature. It is shown that the generalized Hellmann-Feynman theorem plays the key role in quantizing a mesoscopic complicated coupling circuit at finite temperature: when the temperature is lower than a specific temperature, the value of (ΔĤ)² is almost zero and the values of ⟨E⟩ and (ΔĤ)² are basically constant, but when the temperature rises above the specific temperature, both move upward rapidly. The energy fluctuation of the system becomes larger when the coupling inductance is larger or the coupling capacitance is smaller.
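For orientation, a single (uncoupled) LC loop quantizes as a harmonic oscillator with ω = 1/√(LC), for which the textbook canonical-ensemble results (not the paper's generalized Hellmann-Feynman derivation for the coupled circuit) already show the behavior described: fluctuations frozen out at low temperature and rising rapidly once k_B T approaches ħω:

```python
# Textbook harmonic-oscillator thermal averages for a single LC loop
# (our illustration; units with hbar = kB = 1 are an assumption):
#   <E>         = (hbar w / 2) coth(hbar w / 2 kB T)
#   (Delta E)^2 = (hbar w / 2)^2 / sinh^2(hbar w / 2 kB T)
import math

def lc_thermal(L, C, T, hbar=1.0, kB=1.0):
    w = 1.0 / math.sqrt(L * C)               # circuit frequency
    x = hbar * w / (2.0 * kB * T)
    mean_E = (hbar * w / 2.0) / math.tanh(x)
    var_E = (hbar * w / 2.0) ** 2 / math.sinh(x) ** 2
    return mean_E, var_E

low = lc_thermal(1.0, 1.0, 0.1)    # <E> ~ hbar*w/2 (ground state), tiny variance
high = lc_thermal(1.0, 1.0, 10.0)  # <E> ~ kB*T (classical), large variance
```

The crossover temperature scale is set by ħω/k_B, consistent with the "specific temperature" behavior reported above; note var_E = k_B T² ∂⟨E⟩/∂T, the standard fluctuation relation.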

  5. Generalizations of polylogarithms for Feynman integrals

    NASA Astrophysics Data System (ADS)

    Bogner, Christian

    2016-10-01

    In this talk, we discuss recent progress in the application of generalizations of polylogarithms in the symbolic computation of multi-loop integrals. We briefly review the Maple program MPL which supports a certain approach for the computation of Feynman integrals in terms of multiple polylogarithms. Furthermore we discuss elliptic generalizations of polylogarithms which have shown to be useful in the computation of the massive two-loop sunrise integral.

  6. A Didactic Proposed for Teaching the Concepts of Electrons and Light in Secondary School Using Feynman's Path Sum Method

    ERIC Educational Resources Information Center

    Fanaro, Maria de los Angeles; Arlego, Marcelo; Otero, Maria Rita

    2012-01-01

    This work comprises an investigation about basic Quantum Mechanics (QM) teaching in the high school. The organization of the concepts does not follow a historical line. The Path Integrals method of Feynman has been adopted as a Reference Conceptual Structure that is an alternative to the canonical formalism. We have designed a didactic sequence…

  7. Mathematical interpretation of Brownian motor model: Limit cycles and directed transport phenomena

    NASA Astrophysics Data System (ADS)

    Yang, Jianqiang; Ma, Hong; Zhong, Suchuang

    2018-03-01

    In this article, we first suggest that the attractor of a Brownian motor model is one of the reasons for the directed transport phenomenon of a Brownian particle. We take the classical Smoluchowski-Feynman (SF) ratchet model as an example to investigate the relationship between limit cycles and the directed transport phenomenon of the Brownian particle. We study the existence and variation of limit cycles of the SF ratchet model under changing parameters through mathematical methods. The influences of these parameters on the directed transport phenomenon of a Brownian particle are then analyzed through numerical simulations. Reasonable mathematical explanations for the directed transport phenomenon of a Brownian particle in the SF ratchet model are formulated on the basis of the existence and variation of the limit cycles and the numerical simulations. These mathematical explanations provide a theoretical basis for applying these theories in physics, biology, chemistry, and engineering.
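A minimal numerical sketch of an SF-type ratchet (our own toy Langevin model; the potential, drive, and parameters are illustrative, not those analyzed in the paper) shows how directed transport can be probed by simulation:

```python
# Schematic Euler-Maruyama simulation of an overdamped particle in an
# asymmetric periodic potential with an unbiased square-wave drive.
# All parameters are hypothetical; such rocking ratchets can exhibit
# a nonzero mean drift despite the zero-mean forcing.
import math, random
random.seed(0)

def force(x):
    # -dV/dx for the asymmetric potential V(x) = sin(2 pi x) + 0.25 sin(4 pi x)
    return -(2 * math.pi * math.cos(2 * math.pi * x)
             + math.pi * math.cos(4 * math.pi * x))

def simulate(n_steps=200_000, dt=1e-3, noise=1.0, drive=6.0, period=1.0):
    x, t = 0.0, 0.0
    for _ in range(n_steps):
        a = drive if (t % period) < period / 2 else -drive  # zero-mean drive
        x += (force(x) + a) * dt + math.sqrt(2 * noise * dt) * random.gauss(0, 1)
        t += dt
    return x

drift = simulate()   # net displacement over the run
```

Scanning the drive amplitude and noise strength in such a simulation is the numerical counterpart of the parameter study described in the abstract.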

  8. Feynman-Kac equation for anomalous processes with space- and time-dependent forces

    NASA Astrophysics Data System (ADS)

    Cairoli, Andrea; Baule, Adrian

    2017-04-01

    Functionals of a stochastic process Y(t) model many physical time-extensive observables, for instance particle positions, local and occupation times or accumulated mechanical work. When Y(t) is a normal diffusive process, their statistics are obtained as the solution of the celebrated Feynman-Kac equation. This equation provides the crucial link between the expected values of diffusion processes and the solutions of deterministic second-order partial differential equations. When Y(t) is non-Brownian, e.g. an anomalous diffusive process, generalizations of the Feynman-Kac equation that incorporate power-law or more general waiting time distributions of the underlying random walk have recently been derived. A general representation of such waiting times is provided in terms of a Lévy process whose Laplace exponent is directly related to the memory kernel appearing in the generalized Feynman-Kac equation. The corresponding anomalous processes have been shown to capture nonlinear mean square displacements exhibiting crossovers between different scaling regimes, which have been observed in numerous experiments on biological systems like migrating cells or diffusing macromolecules in intracellular environments. However, the case where both space- and time-dependent forces drive the dynamics of the generalized anomalous process has not been solved yet. Here, we present the missing derivation of the Feynman-Kac equation in such general case by using the subordination technique. Furthermore, we discuss its extension to functionals explicitly depending on time, which are of particular relevance for the stochastic thermodynamics of anomalous diffusive systems. Exact results on the work fluctuations of a simple non-equilibrium model are obtained. 
An additional aim of this paper is to provide a pedagogical introduction to Lévy processes, semimartingales and their associated stochastic calculus, which underlie the mathematical formulation of anomalous diffusion as a subordinated process.
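For the classical Brownian case mentioned above, the Feynman-Kac correspondence can be checked directly by Monte Carlo (a toy sketch of ours, not the anomalous-process machinery of the paper): the functional u(x,t) = E[exp(-∫₀ᵗ Y_s ds)] for Brownian motion Y started at x has the closed form exp(-xt + t³/6), since the time integral is Gaussian with mean xt and variance t³/3:

```python
# Toy Monte Carlo check of the classical (Brownian) Feynman-Kac formula.
# Path count, step count, and test point are arbitrary illustrative choices.
import math, random
random.seed(2)

def mc_feynman_kac(x, t, n_paths=10_000, n_steps=200):
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        y, integral = x, 0.0
        for _ in range(n_steps):
            integral += y * dt                      # Riemann sum of the functional
            y += math.sqrt(dt) * random.gauss(0, 1) # Brownian increment
        total += math.exp(-integral)
    return total / n_paths

x, t = 0.5, 1.0
estimate = mc_feynman_kac(x, t)
exact = math.exp(-x * t + t ** 3 / 6)   # closed form for this Gaussian functional
```

The generalized equations discussed in the paper replace the Brownian paths here by subordinated (anomalous) processes, for which no such elementary closed form exists.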

  9. Supermanifolds from Feynman graphs

    NASA Astrophysics Data System (ADS)

    Marcolli, Matilde; Rej, Abhijnan

    2008-08-01

    We generalize the computation of Feynman integrals of log divergent graphs in terms of the Kirchhoff polynomial to the case of graphs with both fermionic and bosonic edges, to which we assign a set of ordinary and Grassmann variables. This procedure gives a computation of the Feynman integrals in terms of a period on a supermanifold, for graphs admitting a basis of the first homology satisfying a condition generalizing the log divergence in this context. The analog in this setting of the graph hypersurfaces is a graph supermanifold given by the divisor of zeros and poles of the Berezinian of a matrix associated with the graph, inside a superprojective space. We introduce a Grothendieck group for supermanifolds and identify the subgroup generated by the graph supermanifolds. This can be seen as a general procedure for constructing interesting classes of supermanifolds with associated periods.

  10. New graph polynomials in parametric QED Feynman integrals

    NASA Astrophysics Data System (ADS)

    Golz, Marcel

    2017-10-01

    In recent years enormous progress has been made in perturbative quantum field theory by applying methods of algebraic geometry to parametric Feynman integrals for scalar theories. The transition to gauge theories is complicated not only by the fact that their parametric integrand is much larger and more involved. It is, moreover, only implicitly given as the result of certain differential operators applied to the scalar integrand exp(-ΦΓ /ΨΓ) , where ΨΓ and ΦΓ are the Kirchhoff and Symanzik polynomials of the Feynman graph Γ. In the case of quantum electrodynamics we find that the full parametric integrand inherits a rich combinatorial structure from ΨΓ and ΦΓ. In the end, it can be expressed explicitly as a sum over products of new types of graph polynomials which have a combinatoric interpretation via simple cycle subgraphs of Γ.
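The Kirchhoff polynomial Ψ_Γ that organizes these integrands is combinatorially simple: it is the sum over spanning trees T of Γ of the product of the Schwinger parameters of the edges not in T. A brute-force sketch (ours, suitable only for small graphs):

```python
# Hedged sketch of the Kirchhoff (first Symanzik) polynomial:
#   Psi_Gamma = sum over spanning trees T of prod_{e not in T} x_e.
# Brute force over edge subsets; fine for small graphs only.
from itertools import combinations

def spanning_trees(n_vertices, edges):
    for subset in combinations(range(len(edges)), n_vertices - 1):
        parent = list(range(n_vertices))      # union-find for cycle detection
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        ok = True
        for i in subset:
            a, b = find(edges[i][0]), find(edges[i][1])
            if a == b:                        # repeated root => cycle => not a tree
                ok = False
                break
            parent[a] = b
        if ok:
            yield set(subset)

def kirchhoff_polynomial(n_vertices, edges):
    """Monomials represented as frozensets of edge indices in the product."""
    all_edges = set(range(len(edges)))
    return {frozenset(all_edges - t) for t in spanning_trees(n_vertices, edges)}

# one-loop bubble: two vertices joined by two edges -> Psi = x0 + x1
bubble = kirchhoff_polynomial(2, [(0, 1), (0, 1)])
# triangle (one loop, three propagators) -> Psi = x0 + x1 + x2
triangle = kirchhoff_polynomial(3, [(0, 1), (1, 2), (2, 0)])
```

The new QED polynomials described in the abstract refine this construction via cycle subgraphs rather than spanning trees.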

  11. Application of Generalized Feynman-Hellmann Theorem in Quantization of LC Circuit in Thermo Bath

    NASA Astrophysics Data System (ADS)

    Fan, Hong-Yi; Tang, Xu-Bing

    For the quantized LC electric circuit, when taking the Joule thermal effect into account, we argue that physical observables should be evaluated as ensemble averages. We then use the generalized Feynman-Hellmann theorem for ensemble averages to calculate them, which proves convenient. The fluctuations of observables in various LC electric circuits in the presence of a thermal bath are shown to grow with temperature.

  12. Geometry, Heat Equation and Path Integrals on the Poincaré Upper Half-Plane

    NASA Astrophysics Data System (ADS)

    Kubo, R.

    1988-01-01

    Geometry, heat equation and Feynman's path integrals are studied on the Poincaré upper half-plane. The fundamental solution to the heat equation ∂f/∂t = Δ_H f is expressed in terms of a path integral defined on the upper half-plane. It is shown that Kac's statement that Feynman's path integral satisfies the Schrödinger equation is also valid for our case.

  13. A test of the Feynman scaling in the fragmentation region

    NASA Technical Reports Server (NTRS)

    Doke, T.; Innocente, V.; Kasahara, K.; Kikuchi, J.; Kashiwagi, T.; Lanzano, S.; Masuda, K.; Murakami, H.; Muraki, Y.; Nakada, T.

    1985-01-01

    The result of a direct measurement of the fragmentation region is presented. The result was obtained at the CERN proton-antiproton collider by exposing silicon calorimeters inside the beam pipe. This experiment clarifies a long-standing riddle of cosmic-ray physics: whether Feynman scaling is violated in the fragmentation region or the iron component increases at 10^15 eV.

  14. On the correspondence between quantum and classical variational principles

    DOE PAGES

    Ruiz, D. E.; Dodin, I. Y.

    2015-06-10

    Here, classical variational principles can be deduced from quantum variational principles via formal reparameterization of the latter. It is shown that such reparameterization is possible without invoking any assumptions other than classicality and without appealing to dynamical equations. As examples, first-principles variational formulations of classical point-particle and cold-fluid motion are derived from their quantum counterparts for Schrödinger, Pauli, and Klein-Gordon particles.

  15. Automated generation of lattice QCD Feynman rules

    NASA Astrophysics Data System (ADS)

    Hart, A.; von Hippel, G. M.; Horgan, R. R.; Müller, E. H.

    2009-12-01

    The derivation of the Feynman rules for lattice perturbation theory from actions and operators is complicated, especially for highly improved actions such as HISQ. This task is, however, both important and particularly suitable for automation. We describe a suite of software to generate and evaluate Feynman rules for a wide range of lattice field theories with gluons and (relativistic and/or heavy) quarks. Our programs are capable of dealing with actions as complicated as (m)NRQCD and HISQ. Automated differentiation methods are used to calculate also the derivatives of Feynman diagrams. Program summaryProgram title: HiPPY, HPsrc Catalogue identifier: AEDX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPLv2 (see Additional comments below) No. of lines in distributed program, including test data, etc.: 513 426 No. of bytes in distributed program, including test data, etc.: 4 893 707 Distribution format: tar.gz Programming language: Python, Fortran95 Computer: HiPPy: Single-processor workstations. HPsrc: Single-processor workstations and MPI-enabled multi-processor systems Operating system: HiPPy: Any for which Python v2.5.x is available. HPsrc: Any for which a standards-compliant Fortran95 compiler is available Has the code been vectorised or parallelised?: Yes RAM: Problem specific, typically less than 1 GB for either code Classification: 4.4, 11.5 Nature of problem: Derivation and use of perturbative Feynman rules for complicated lattice QCD actions. Solution method: An automated expansion method implemented in Python (HiPPy) and code to use expansions to generate Feynman rules in Fortran95 (HPsrc). Restrictions: No general restrictions. Specific restrictions are discussed in the text. Additional comments: The HiPPy and HPsrc codes are released under the second version of the GNU General Public Licence (GPL v2). 
Therefore anyone is free to use or modify the code for their own calculations. As part of the licensing, we ask that any publications including results from the use of this code or of modifications of it cite Refs. [1,2] as well as this paper. Finally, we also ask that details of these publications, as well as of any bugs or required or useful improvements of this core code, be communicated to us. Running time: Very problem specific, depending on the complexity of the Feynman rules and the number of integration points. Typically between a few minutes and several weeks. The installation tests provided with the program code take only a few seconds to run. References: A. Hart, G.M. von Hippel, R.R. Horgan, L.C. Storoni, Automatically generating Feynman rules for improved lattice field theories, J. Comput. Phys. 209 (2005) 340-353, doi:10.1016/j.jcp.2005.03.010, arXiv:hep-lat/0411026. M. Lüscher, P. Weisz, Efficient Numerical Techniques for Perturbative Lattice Gauge Theory Computations, Nucl. Phys. B 266 (1986) 309, doi:10.1016/0550-3213(86)90094-5.

  16. A massive Feynman integral and some reduction relations for Appell functions

    NASA Astrophysics Data System (ADS)

    Shpot, M. A.

    2007-12-01

    New explicit expressions are derived for the one-loop two-point Feynman integral with arbitrary external momentum and masses m₁² and m₂² in D dimensions. The results are given in terms of Appell functions, manifestly symmetric with respect to the masses mᵢ². Equating our expressions with previously known results in terms of Gauss hypergeometric functions yields reduction relations for the involved Appell functions that are apparently new mathematical results.
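As a numerical aside (our own cross-check, unrelated to the Appell-function results of the paper), the Feynman-parameter representation of this integral at zero external momentum reduces to an elementary integral with a closed form, which simple quadrature reproduces:

```python
# Numerical cross-check of the zero-momentum one-loop bubble:
#   I = int_0^1 dx ln( x m1^2 + (1-x) m2^2 )
#     = (m1^2 ln m1^2 - m2^2 ln m2^2) / (m1^2 - m2^2) - 1.
# The mass values below are arbitrary illustrative choices.
import math

def bubble_integral(m1sq, m2sq, n=100_000):
    h = 1.0 / n                                   # midpoint rule
    return h * sum(math.log((i + 0.5) * h * m1sq + (1 - (i + 0.5) * h) * m2sq)
                   for i in range(n))

def bubble_exact(m1sq, m2sq):
    return (m1sq * math.log(m1sq) - m2sq * math.log(m2sq)) / (m1sq - m2sq) - 1.0

num = bubble_integral(2.0, 0.5)
exact = bubble_exact(2.0, 0.5)
```

At nonzero external momentum and general D the same parametric integral produces the hypergeometric and Appell structures discussed in the abstract.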

  17. Theoretical principles for biology: Variation.

    PubMed

    Montévil, Maël; Mossio, Matteo; Pocheville, Arnaud; Longo, Giuseppe

    2016-10-01

    Darwin introduced the concept that random variation generates new living forms. In this paper, we elaborate on Darwin's notion of random variation to propose that biological variation should be given the status of a fundamental theoretical principle in biology. We state that biological objects such as organisms are specific objects. Specific objects are special in that they are qualitatively different from each other. They can undergo unpredictable qualitative changes, some of which are not defined before they happen. We express the principle of variation in terms of symmetry changes, where symmetries underlie the theoretical determination of the object. We contrast the biological situation with the physical situation, where objects are generic (that is, different objects can be assumed to be identical) and evolve in well-defined state spaces. We derive several implications of the principle of variation, in particular, biological objects show randomness, historicity and contextuality. We elaborate on the articulation between this principle and the two other principles proposed in this special issue: the principle of default state and the principle of organization. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Cook-Levin Theorem Algorithmic-Reducibility/Completeness = Wilson Renormalization-(Semi)-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') REPLACING CRUTCHES!!!: Models: Turing-machine, finite-state-models, finite-automata

    NASA Astrophysics Data System (ADS)

    Young, Frederic; Siegel, Edward

    Cook-Levin theorem theorem algorithmic computational-complexity(C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS =CATEGORYICS = ANALOGYICS =PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics predicts and implements ''noise''-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser[Intro.Thy. Computation(`97)] algorithmic C-C: ''NIT-picking''(!!!), to optimize optimization-problems optimally(OOPO). Versus iso-''noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ''NIT-picking'' is ''noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-''science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata,..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!), ONLY IMPEDE latter-days new-insights!!!

  19. The Feynman-Y Statistic in Relation to Shift-Register Neutron Coincidence Counting: Precision and Dead Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Stephen; Santi, Peter A.; Henzlova, Daniela

    The Feynman-Y statistic is a type of autocorrelation analysis. It is defined as the excess variance-to-mean ratio, Y = VMR - 1, of the number count distribution formed by sampling a pulse train using a series of non-overlapping gates. It is a measure of the degree of correlation present on the pulse train, with Y = 0 for Poisson data. In the context of neutron coincidence counting we show that the same information can be obtained from the accidentals histogram acquired using the multiplicity shift-register method, which is currently the common autocorrelation technique applied in nuclear safeguards. In the case of multiplicity shift-register analysis, however, overlapping gates, either triggered by the incoming pulse stream or by a periodic clock, are used. The overlap introduces additional covariance but does not alter the expectation values. In this paper we discuss, for a particular data set, the relative merits of the Feynman and shift-register methods in terms of both precision and dead time correction. Traditionally the Feynman approach is applied with a relatively long gate width compared to the dieaway time. The main reason for this is so that the gate utilization factor can be taken as unity rather than being treated as a system parameter to be determined at characterization/calibration. But because the random trigger interval gate utilization factor is slow to saturate, this procedure requires a gate width many times the effective 1/e dieaway time. In the traditional approach this limits the number of gates that can be fitted into a given assay duration. We empirically show that much shorter gates, similar in width to those used in traditional shift-register analysis, can be used. Because the correlated information present on the pulse train is extracted differently by the moments-based method of Feynman and the various shift-register approaches, the dead time losses are manifested differently for the two approaches.
The resulting estimates for the dead-time-corrected first and second order reduced factorial moments should, however, be independent of the method, and this allows the respective dead time formalisms to be checked. We discuss how to make dead time corrections in both the shift-register and the Feynman approaches.

  20. A topological extension of GR: Black holes induce dark energy

    NASA Astrophysics Data System (ADS)

    Spaans, M.

    2013-02-01

    A topological extension of general relativity is presented. The superposition principle of quantum mechanics, as formulated by the Feynman path integral, is taken as a starting point. It is argued that the trajectories that enter this path integral are distinct and thus that space-time topology is multiply connected. Specifically, space-time at the Planck scale consists of a lattice of three-tori that facilitates many distinct paths for particles to travel along. To add gravity, mini black holes are attached to this lattice. These mini black holes represent Wheeler's quantum foam and result from the fact that GR is not conformally invariant. The number of such mini black holes in any time-slice through four-space is found to be equal to the number of macroscopic (so long-lived) black holes in the entire universe. This connection, by which macroscopic black holes induce mini black holes, is a topological expression of Mach's principle. The proposed topological extension of GR can be tested because, if correct, the dark energy density of the universe should be proportional to the total number of macroscopic black holes in the universe at any time. This prediction, although strange, agrees with current astrophysical observations.

  1. Gravity and decoherence: the double slit experiment revisited

    NASA Astrophysics Data System (ADS)

    Samuel, Joseph

    2018-02-01

    The double slit experiment is iconic and widely used in classrooms to demonstrate the fundamental mystery of quantum physics. The puzzling feature is that the probability of an electron arriving at the detector when both slits are open is not the sum of the probabilities when the slits are open separately. The superposition principle of quantum mechanics tells us to add amplitudes rather than probabilities and this results in interference. This experiment defies our classical intuition that the probabilities of exclusive events add. In understanding the emergence of the classical world from the quantum one, there have been suggestions by Feynman, Diosi and Penrose that gravity is responsible for suppressing interference. This idea has been pursued in many different forms ever since, predominantly within Newtonian approaches to gravity. In this paper, we propose and theoretically analyse two ‘gedanken’ or thought experiments which lend strong support to the idea that gravity is responsible for decoherence. The first makes the point that thermal radiation can suppress interference. The second shows that in an accelerating frame, Unruh radiation does the same. Invoking the Einstein equivalence principle to relate acceleration to gravity, we support the view that gravity is responsible for decoherence.

  2. JaxoDraw: A graphical user interface for drawing Feynman diagrams

    NASA Astrophysics Data System (ADS)

    Binosi, D.; Theußl, L.

    2004-08-01

    JaxoDraw is a Feynman graph plotting tool written in Java. It has a complete graphical user interface that allows all actions to be carried out via mouse click-and-drag operations in a WYSIWYG fashion. Graphs may be exported to postscript/EPS format and can be saved in XML files to be used for later sessions. One of JaxoDraw's main features is the possibility to create LaTeX code that may be used to generate graphics output, thus combining the powers of LaTeX with those of a modern-day drawing program. With JaxoDraw it becomes possible to draw even complicated Feynman diagrams with just a few mouse clicks, without the knowledge of any programming language. Program summary: Title of program: JaxoDraw Catalogue identifier: ADUA Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUA Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar gzip file Operating system: Any Java-enabled platform, tested on Linux, Windows ME, XP, Mac OS X Programming language used: Java License: GPL Nature of problem: Existing methods for drawing Feynman diagrams usually require some 'hard-coding' in one or the other programming or scripting language. It is not very convenient, and often time consuming, to generate relatively simple diagrams. Method of solution: A program is provided that allows for the interactive drawing of Feynman diagrams with a graphical user interface. The program is easy to learn and use, produces high quality output in several formats and runs on any operating system where a Java Runtime Environment is available. Number of bytes in distributed program, including test data: 2 117 863 Number of lines in distributed program, including test data: 60 000 Restrictions: Certain operations (like internal LaTeX compilation, Postscript preview) require the execution of external commands that might not work on untested operating systems. 
Typical running time: As an interactive program, the running time depends on the complexity of the diagram to be drawn.

  3. Atomic Manipulation on Metal Surfaces

    NASA Astrophysics Data System (ADS)

    Ternes, Markus; Lutz, Christopher P.; Heinrich, Andreas J.

    Half a century ago, Nobel Laureate Richard Feynman asked in a now-famous lecture what would happen if we could precisely position individual atoms at will [R.P. Feynman, Eng. Sci. 23, 22 (1960)]. This dream became a reality some 30 years later when Eigler and Schweizer were the first to position individual Xe atoms at will with the probe tip of a low-temperature scanning tunneling microscope (STM) on a Ni surface [D.M. Eigler, E.K. Schweizer, Nature 344, 524 (1990)].

  4. Schrödinger problem, Lévy processes, and noise in relativistic quantum mechanics

    NASA Astrophysics Data System (ADS)

    Garbaczewski, Piotr; Klauder, John R.; Olkiewicz, Robert

    1995-05-01

    The main purpose of the paper is an essentially probabilistic analysis of relativistic quantum mechanics. It is based on the assumption that whenever probability distributions arise, there exists a stochastic process that is either responsible for the temporal evolution of a given measure or preserves the measure in the stationary case. Our departure point is the so-called Schrödinger problem of probabilistic evolution, which provides for a unique Markov stochastic interpolation between any given pair of boundary probability densities for a process covering a fixed, finite duration of time, provided we have decided a priori what kind of primordial dynamical semigroup transition mechanism is involved. In the nonrelativistic theory, including quantum mechanics, Feynman-Kac-like kernels are the building blocks for suitable transition probability densities of the process. In the standard ``free'' case (Feynman-Kac potential equal to zero) the familiar Wiener noise is recovered. In the framework of the Schrödinger problem, the ``free noise'' can also be extended to any infinitely divisible probability law, as covered by the Lévy-Khintchine formula. Since the relativistic Hamiltonians |∇| and √(−Δ + m²) − m are known to generate such laws, we focus on them for the analysis of probabilistic phenomena, which are shown to be associated with the relativistic wave (D'Alembert) and matter-wave (Klein-Gordon) equations, respectively. We show that such stochastic processes exist and are spatial jump processes. In general, in the presence of external potentials, they do not share the Markov property, except for stationary situations. A concrete example of the pseudodifferential Cauchy-Schrödinger evolution is analyzed in detail. The relativistic covariance of related wave equations is exploited to demonstrate how the associated stochastic jump processes comply with the principles of special relativity.

  5. A new class of ensemble conserving algorithms for approximate quantum dynamics: Theoretical formulation and model problems.

    PubMed

    Smith, Kyle K G; Poulsen, Jens Aage; Nyman, Gunnar; Rossky, Peter J

    2015-06-28

    We develop two classes of quasi-classical dynamics that are shown to conserve the initial quantum ensemble when used in combination with the Feynman-Kleinert approximation of the density operator. These dynamics are used to improve the Feynman-Kleinert implementation of the classical Wigner approximation for the evaluation of quantum time correlation functions, known as the Feynman-Kleinert linearized path-integral method. As shown, both classes of dynamics are able to recover the exact classical and high-temperature limits of the quantum time correlation function, while a subset is able to recover the exact harmonic limit. A comparison of the approximate quantum time correlation functions obtained from both classes of dynamics is made with the exact results for the challenging model problems of the quartic and double-well potentials. It is found that these dynamics provide a great improvement over the classical Wigner approximation, in which purely classical dynamics are used. In a special case, our first method becomes identical to centroid molecular dynamics.

  6. Calculation of the ideal strength of noble metals in the <100> direction from first principles

    NASA Astrophysics Data System (ADS)

    Bautista-Hernández, A.; López-Fuentes, M.; Pacheco-Espejel, V.; Rivas-Silva, J. F.

    2005-04-01

    We present calculations of the ideal strength in the <100> direction for the noble metals Cu, Ag and Au, by means of first-principles calculations. First, we obtain the structural parameters (cell parameters, bulk modulus) for each metal studied. We deform along the <100> direction, calculating the total energy and the stress tensor through the Hellmann-Feynman theorem, with relaxation of the unit cell in the directions perpendicular to the deformation. The calculated cell constants differ by 1.3% from experimental data. The maximum ideal strengths are 29.6, 17 and 19 GPa for Cu, Ag and Au, respectively, while the calculated elastic moduli of 106 (Cu), 71 (Ag) and 45 GPa (Au) are in agreement with the experimental values for polycrystalline samples. The values of maximum strength are explained by the optimum volume for each element, set by its atomic radius.

  7. CalcHEP 3.4 for collider physics within and beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Belyaev, Alexander; Christensen, Neil D.; Pukhov, Alexander

    2013-07-01

    We present version 3.4 of the CalcHEP software package which is designed for effective evaluation and simulation of high energy physics collider processes at parton level. The main features of CalcHEP are the computation of Feynman diagrams, integration over multi-particle phase space and event simulation at parton level. The principal attractive key points along these lines are that it has: (a) an easy startup and usage even for those who are not familiar with CalcHEP and programming; (b) a friendly and convenient graphical user interface (GUI); (c) the option for the user to easily modify a model or introduce a new model by either using the graphical interface or by using an external package with the possibility of cross checking the results in different gauges; (d) a batch interface which allows the user to perform very complicated and tedious calculations connecting production and decay modes for processes with many particles in the final state. With this feature set, CalcHEP can efficiently perform calculations with a high level of automation from a theory in the form of a Lagrangian down to phenomenology in the form of cross sections, parton level event simulation and various kinematical distributions. In this paper we report on the new features of CalcHEP 3.4 which improve the power of our package as an effective tool for the study of modern collider phenomenology. Program summary: Program title: CalcHEP Catalogue identifier: AEOV_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 78535 No. of bytes in distributed program, including test data, etc.: 818061 Distribution format: tar.gz Programming language: C. Computer: PC, MAC, Unix Workstations. Operating system: Unix. 
RAM: Depends on process under study Classification: 4.4, 5. External routines: X11 Nature of problem: Implement new models of particle interactions. Generate Feynman diagrams for a physical process in any implemented theoretical model. Integrate phase space for Feynman diagrams to obtain cross sections or particle widths taking into account kinematical cuts. Simulate collisions at modern colliders and generate respective unweighted events. Mix events for different subprocesses and connect them with the decays of unstable particles. Solution method: Symbolic calculations; squared Feynman diagram approach; Vegas Monte Carlo algorithm. Restrictions: Up to 2→4 production (1→5 decay) processes are realistic on typical computers. Higher multiplicities sometimes possible for specific 2→5 and 2→6 processes. Unusual features: Graphical user interface, symbolic algebra calculation of squared matrix element, parallelization on a PBS cluster. Running time: Depends strongly on the process. For a typical 2→2 process it takes seconds. For 2→3 processes the typical running time is of the order of minutes. For higher multiplicities it could take much longer.

  8. Renormalized asymptotic enumeration of Feynman diagrams

    NASA Astrophysics Data System (ADS)

    Borinsky, Michael

    2017-10-01

    A method to obtain all-order asymptotic results for the coefficients of perturbative expansions in zero-dimensional quantum field theory is described. The focus is on the enumeration of the number of skeleton or primitive diagrams of a certain QFT and its asymptotics. The procedure heavily applies techniques from singularity analysis. To utilize singularity analysis, a representation of the zero-dimensional path integral as a generalized hyperelliptic curve is deduced. As applications, the full asymptotic expansions of the number of disconnected, connected, 1PI and skeleton Feynman diagrams in various theories are given.

  9. Ground-state densities from the Rayleigh-Ritz variation principle and from density-functional theory.

    PubMed

    Kvaal, Simen; Helgaker, Trygve

    2015-11-14

    The relationship between the densities of ground-state wave functions (i.e., the minimizers of the Rayleigh-Ritz variation principle) and the ground-state densities in density-functional theory (i.e., the minimizers of the Hohenberg-Kohn variation principle) is studied within the framework of convex conjugation, in a generic setting covering molecular systems, solid-state systems, and more. Having introduced admissible density functionals as functionals that produce the exact ground-state energy for a given external potential by minimizing over densities in the Hohenberg-Kohn variation principle, necessary and sufficient conditions on such functionals are established to ensure that the Rayleigh-Ritz ground-state densities and the Hohenberg-Kohn ground-state densities are identical. We apply the results to molecular systems in the Born-Oppenheimer approximation. For any given potential v ∈ L^{3/2}(ℝ³) + L^∞(ℝ³), we establish a one-to-one correspondence between the mixed ground-state densities of the Rayleigh-Ritz variation principle and the mixed ground-state densities of the Hohenberg-Kohn variation principle when the Lieb density-matrix constrained-search universal density functional is taken as the admissible functional. A similar one-to-one correspondence is established between the pure ground-state densities of the Rayleigh-Ritz variation principle and the pure ground-state densities obtained using the Hohenberg-Kohn variation principle with the Levy-Lieb pure-state constrained-search functional. In other words, all physical ground-state densities (pure or mixed) are recovered with these functionals and no false densities (i.e., minimizing densities that are not physical) exist. The importance of topology (i.e., choice of Banach space of densities and potentials) is emphasized and illustrated. The relevance of these results for current-density-functional theory is examined.

  10. Noether's Theorem and its Inverse of Birkhoffian System in Event Space Based on Herglotz Variational Problem

    NASA Astrophysics Data System (ADS)

    Tian, X.; Zhang, Y.

    2018-03-01

    The Herglotz variational principle, in which the functional is defined by a differential equation, generalizes the classical one, which defines the functional by an integral. The principle gives a variational description of nonconservative systems even when the Lagrangian is independent of time. This paper focuses on studying Noether's theorem and its inverse for a Birkhoffian system in event space based on the Herglotz variational problem. Firstly, according to the Herglotz variational principle of a Birkhoffian system, the principle of a Birkhoffian system in event space is established. Secondly, its parametric equations and two basic formulae for the variation of the Pfaff-Herglotz action of a Birkhoffian system in event space are obtained. Furthermore, the definition and criteria of Noether symmetry of the Birkhoffian system in event space based on the Herglotz variational problem are given. Then, according to the relationship between Noether symmetry and conserved quantities, Noether's theorem is derived. Under classical conditions, Noether's theorem of a Birkhoffian system in event space based on the Herglotz variational problem reduces to its classical counterpart. In addition, Noether's inverse theorem of the Birkhoffian system in event space based on the Herglotz variational problem is also obtained. At the end of the paper, an example is given to illustrate the application of the results.

  11. Advances in the computation of the Sjöstrand, Rossi, and Feynman distributions

    DOE PAGES

    Talamo, A.; Gohar, Y.; Gabrielli, F.; ...

    2017-02-01

    This study illustrates recent computational advances in the application of the Sjöstrand (area), Rossi, and Feynman methods to estimate the effective multiplication factor of a subcritical system driven by an external neutron source. The methodologies introduced in this study have been validated against the experimental results from the KUCA facility in Japan by Monte Carlo (MCNP6 and MCNPX) and deterministic (ERANOS, VARIANT, and PARTISN) codes. When the assembly is driven by a pulsed neutron source generated by a particle accelerator and delayed neutrons are at equilibrium, the Sjöstrand method becomes extremely fast if the integral of the reaction rate from a single pulse is split into two parts. These two integrals distinguish between the neutron counts during and after the pulse period. To conclude, when the facility is driven by a spontaneous fission neutron source, the timestamps of the detector neutron counts can be obtained with nanosecond precision using MCNP6, which allows obtaining the Rossi and Feynman distributions.
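
    For orientation, the Feynman method named here is the variance-to-mean ("Feynman-Y") analysis of detector counts gated into time windows. A minimal Python sketch with synthetic count data (illustrative numbers, not data from the cited experiments):

```python
import numpy as np

rng = np.random.default_rng(2)

def feynman_y(counts):
    """Feynman-Y statistic: variance-to-mean ratio of gated counts, minus one.
    Y = 0 for an uncorrelated (Poisson) source; Y > 0 signals correlations."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean() - 1.0

# An uncorrelated (Poisson) source gives Y = 0 for every gate width.
poisson_counts = rng.poisson(lam=5.0, size=50_000)

# Adding doubled ("pair") events mimics fission-chain correlations: Y > 0.
correlated_counts = poisson_counts + 2 * rng.poisson(lam=1.0, size=50_000)

Y0 = feynman_y(poisson_counts)      # close to 0
Y1 = feynman_y(correlated_counts)   # positive (expected value 2/7 here)
```

    In practice the gate width is varied and Y(τ) is fitted to extract the prompt-neutron decay constant; the sketch only shows why correlated detections push Y above zero.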

  12. Infinities in Quantum Field Theory and in Classical Computing: Renormalization Program

    NASA Astrophysics Data System (ADS)

    Manin, Yuri I.

    Introduction. The main observable quantities in Quantum Field Theory, correlation functions, are expressed by the celebrated Feynman path integrals. A mathematical definition of them involving a measure and actual integration is still lacking. Instead, it is replaced by a series of ad hoc but highly efficient and suggestive heuristic formulas such as the perturbation formalism. The latter interprets such an integral as a formal series of finite-dimensional but divergent integrals, indexed by Feynman graphs, the list of which is determined by the Lagrangian of the theory. Renormalization is a prescription that allows one to systematically "subtract infinities" from these divergent terms, producing an asymptotic series for quantum correlation functions. On the other hand, graphs, treated as "flowcharts", also form a combinatorial skeleton of abstract computation theory. Partial recursive functions, which according to Church's thesis exhaust the universe of (semi)computable maps, are generally not everywhere defined due to potentially infinite searches and loops. In this paper I argue that such infinities can be addressed in the same way as Feynman divergences. More details can be found in [9,10].

  13. Global Estimates of Errors in Quantum Computation by the Feynman-Vernon Formalism

    NASA Astrophysics Data System (ADS)

    Aurell, Erik

    2018-06-01

    The operation of a quantum computer is considered as a general quantum operation on a mixed state on many qubits followed by a measurement. The general quantum operation is further represented as a Feynman-Vernon double path integral over the histories of the qubits and of an environment, and afterward tracing out the environment. The qubit histories are taken to be paths on the two-sphere S^2 as in Klauder's coherent-state path integral of spin, and the environment is assumed to consist of harmonic oscillators initially in thermal equilibrium, and linearly coupled to qubit operators \hat{S}_z. The environment can then be integrated out to give a Feynman-Vernon influence action coupling the forward and backward histories of the qubits. This representation allows one to derive in a simple way estimates showing that the total error of operation of a quantum computer without error correction scales linearly with the number of qubits and the time of operation. It also allows one to discuss Kitaev's toric code interacting with an environment in the same manner.

  15. Maximal cuts and differential equations for Feynman integrals. An application to the three-loop massive banana graph

    NASA Astrophysics Data System (ADS)

    Primo, Amedeo; Tancredi, Lorenzo

    2017-08-01

    We consider the calculation of the master integrals of the three-loop massive banana graph. In the case of equal internal masses, the graph is reduced to three master integrals which satisfy an irreducible system of three coupled linear differential equations. The solution of the system requires finding a 3 × 3 matrix of homogeneous solutions. We show how the maximal cut can be used to determine all entries of this matrix in terms of products of elliptic integrals of first and second kind of suitable arguments. All independent solutions are found by performing the integration which defines the maximal cut on different contours. Once the homogeneous solution is known, the inhomogeneous solution can be obtained by use of Euler's variation of constants.

  16. Squeezed states, time-energy uncertainty relation, and Feynman's rest of the universe

    NASA Technical Reports Server (NTRS)

    Han, D.; Kim, Y. S.; Noz, Marilyn E.

    1992-01-01

    Two illustrative examples are given for Feynman's rest of the universe. The first example is the two-mode squeezed state of light where no measurement is taken for one of the modes. The second example is the relativistic quark model where no measurement is possible for the time-like separation of quarks confined in a hadron. It is possible to illustrate these examples using the covariant oscillator formalism. It is shown that the lack of symmetry between the position-momentum and time-energy uncertainty relations leads to an increase in entropy when the system is viewed from different Lorentz frames.

  17. On critical exponents without Feynman diagrams

    NASA Astrophysics Data System (ADS)

    Sen, Kallol; Sinha, Aninda

    2016-11-01

    In order to achieve a better analytic handle on the modern conformal bootstrap program, we re-examine and extend Polyakov's pioneering 1974 work, which was based on consistency between the operator product expansion and unitarity. As in the bootstrap approach, this method does not depend on evaluating Feynman diagrams. We show how this approach can be used to compute the anomalous dimensions of certain operators in the O(n) model at the Wilson-Fisher fixed point in 4 − ε dimensions up to O(ε²). AS dedicates this work to the loving memory of his mother.

  18. Parallel Implementation of Numerical Solution of Few-Body Problem Using Feynman's Continual Integrals

    NASA Astrophysics Data System (ADS)

    Naumenko, Mikhail; Samarin, Viacheslav

    2018-02-01

    A modern parallel computing algorithm has been applied to the solution of the few-body problem. The approach is based on Feynman's continual integrals method implemented in the C++ programming language using NVIDIA CUDA technology. A wide range of 3-body and 4-body bound systems has been considered, including nuclei described as consisting of protons and neutrons (e.g., 3,4He) and nuclei described as consisting of clusters and nucleons (e.g., 6He). The correctness of the results was checked by comparison with the exactly solvable 4-body oscillatory system and with experimental data.
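
    For orientation, the essence of an imaginary-time path-integral calculation of this kind can be sketched in a few lines of single-threaded Python: a Metropolis walk over discretized paths for one particle in a harmonic well. All parameters are illustrative; the paper's CUDA implementation and few-body nuclear systems are far beyond this toy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized imaginary-time path integral for one particle in a 1D
# harmonic well (hbar = m = omega = 1); the exact ground-state energy is 0.5.
P, beta = 64, 8.0              # number of time slices, inverse temperature
dtau = beta / P
x = np.zeros(P)                # periodic path x_0 ... x_{P-1}

def local_action(xm, xi, xp):
    """Pieces of the Euclidean action that involve bead xi."""
    kinetic = ((xi - xm) ** 2 + (xp - xi) ** 2) / (2.0 * dtau)
    return kinetic + dtau * 0.5 * xi ** 2

samples = []
for sweep in range(6000):
    for i in range(P):
        xm, xp = x[i - 1], x[(i + 1) % P]        # periodic neighbours
        trial = x[i] + 0.5 * rng.uniform(-1.0, 1.0)
        dS = local_action(xm, trial, xp) - local_action(xm, x[i], xp)
        if dS < 0.0 or rng.random() < np.exp(-dS):   # Metropolis acceptance
            x[i] = trial
    if sweep >= 1000:                            # discard equilibration sweeps
        samples.append(np.mean(x ** 2))

# Virial estimator for V = x^2/2: E = <V + x V'/2> = <x^2>, close to 0.5
E = float(np.mean(samples))
```

    The GPU parallelization in the paper comes from evaluating many independent paths (or many beads) concurrently; the statistical logic is the same.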

  19. Onsager's variational principle in soft matter.

    PubMed

    Doi, Masao

    2011-07-20

    In the celebrated paper on the reciprocal relation for the kinetic coefficients in irreversible processes, Onsager (1931 Phys. Rev. 37 405) extended Rayleigh's principle of the least energy dissipation to general irreversible processes. In this paper, I shall show that this variational principle gives us a very convenient framework for deriving many established equations which describe the nonlinear and non-equilibrium phenomena in soft matter, such as phase separation kinetics in solutions, gel dynamics, molecular modeling for viscoelasticity, nemato-hydrodynamics, etc. Onsager's variational principle can therefore be regarded as a solid general basis for soft matter physics.
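
    As a minimal, textbook-style illustration of how the principle operates (not an example from the paper): for dilute particles with dissipation Φ = ½∫ζc v² dx and free energy A = kT∫c ln c dx, minimizing the Rayleighian R = Φ + dA/dt over the velocity field gives v = −(kT/ζ) d(ln c)/dx, i.e. Fick's law with D = kT/ζ. The resulting diffusion equation spreads the variance of the density linearly in time, which the sketch below checks numerically (parameters illustrative):

```python
import numpy as np

# Fick's law J = -D dc/dx, obtained from minimizing the Rayleighian as above.
D = 0.1                                  # D = kT/zeta
x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]
c = np.exp(-x**2 / (2 * 0.5**2))         # initial Gaussian, variance 0.25
c /= c.sum() * dx                        # normalize to unit mass

dt, steps = 0.005, 400                   # explicit scheme: D*dt/dx**2 = 0.2 < 0.5
for _ in range(steps):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])

# Diffusion spreads the variance as var(t) = var(0) + 2*D*t = 0.25 + 0.4
var = float((x**2 * c).sum() / c.sum())
```

    The same minimization structure, with richer free energies and dissipation functions, is what yields the gel-dynamics and nemato-hydrodynamic equations mentioned in the abstract.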

  20. Fermion systems in discrete space-time

    NASA Astrophysics Data System (ADS)

    Finster, Felix

    2007-05-01

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure.

  1. Perturbation theory in light-cone quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langnau, A.

    1992-01-01

    A thorough investigation of light-cone properties which are characteristic for higher dimensions is very important. The easiest way of addressing these issues is by analyzing the perturbative structure of light-cone field theories first. Perturbative studies cannot be substituted for an analysis of problems related to a nonperturbative approach. However, in order to lay down groundwork for upcoming nonperturbative studies, it is indispensable to validate the renormalization methods at the perturbative level, i.e., to gain control over the perturbative treatment first. A clear understanding of divergences in perturbation theory, as well as their numerical treatment, is a necessary first step towards formulating such a program. The first objective of this dissertation is to clarify this issue, at least in second and fourth order in perturbation theory. The work in this dissertation can provide guidance for the choice of counterterms in Discrete Light-Cone Quantization or the Tamm-Dancoff approach. A second objective of this work is the study of light-cone perturbation theory as a competitive tool for conducting perturbative Feynman diagram calculations. Feynman perturbation theory has become the most practical tool for computing cross sections in high energy physics and other physical properties of field theory. Although this standard covariant method has been applied to a great range of problems, computations beyond one-loop corrections are very difficult. Because of the algebraic complexity of the Feynman calculations in higher-order perturbation theory, it is desirable to automate Feynman diagram calculations so that algebraic manipulation programs can carry out almost the entire calculation. This thesis presents a step in this direction. The technique we are elaborating on here is known as light-cone perturbation theory.

  2. Accelerated Simulation of Kinetic Transport Using Variational Principles and Sparsity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caflisch, Russel

    This project is centered on the development and application of techniques of sparsity and compressed sensing for variational principles, PDEs and physics problems, in particular for kinetic transport. This included the derivation of sparse modes for elliptic and parabolic problems coming from variational principles. The research results of this project concern methods for sparsity in differential equations and their applications, and the application of sparsity ideas to the kinetic transport of plasmas.

  3. Poisson equation for the Mercedes diagram in string theory at genus one

    NASA Astrophysics Data System (ADS)

    Basu, Anirban

    2016-03-01

    The Mercedes diagram has four trivalent vertices which are connected by six links such that they form the edges of a tetrahedron. This three-loop Feynman diagram contributes to the D^{12}R^4 amplitude at genus one in type II string theory, where the vertices are the points of insertion of the graviton vertex operators, and the links are the scalar propagators on the toroidal worldsheet. We obtain a modular invariant Poisson equation satisfied by the Mercedes diagram, where the source terms involve one- and two-loop Feynman diagrams. We calculate its contribution to the D^{12}R^4 amplitude.

  4. Electron propagator calculations on the ionization energies of CrH -, MnH - and FeH -

    NASA Astrophysics Data System (ADS)

    Lin, Jyh-Shing; Ortiz, J. V.

    1990-08-01

    Electron propagator calculations with unrestricted Hartree-Fock reference states yield the ionization energies of the title anions. Spin contamination in the anionic reference state is small, enabling the use of second- and third-order self-energies in the Dyson equation. Feynman-Dyson amplitudes for these ionizations are essentially identical to canonical spin-orbitals. For most of the final states, these consist of an antibonding combination of an sp metal hybrid, polarized away from the hydrogen, and hydrogen s functions. In one case, the Feynman-Dyson amplitude consists of nonbonding d functions. Calculated ionization energies are within 0.5 eV of experiment.

  5. Gravity, Time, and Lagrangians

    NASA Astrophysics Data System (ADS)

    Huggins, Elisha

    2010-11-01

    Feynman mentioned to us that he understood a topic in physics if he could explain it to a college freshman, a high school student, or a dinner guest. Here we will discuss two topics that took us a while to get to that level. One is the relationship between gravity and time. The other is the minus sign that appears in the Lagrangian. (Why would one subtract potential energy from kinetic energy?) In this paper we discuss a thought experiment that relates gravity and time. Then we use a Feynman thought experiment to explain the minus sign in the Lagrangian. Our surprise was that these two topics are related.

  6. Extended Hellmann-Feynman theorem for degenerate eigenstates

    NASA Astrophysics Data System (ADS)

    Zhang, G. P.; George, Thomas F.

    2004-04-01

    In a previous paper, we reported a failure of the traditional Hellmann-Feynman theorem (HFT) for degenerate eigenstates. This has generated enormous interest among different groups. In four independent papers by Fernandez, by Balawender, Hola, and March, by Vatsya, and by Alon and Cederbaum, an elegant method to solve the problem was devised. The main idea is that one has to construct and diagonalize the force matrix for the degenerate case, and only the eigenforces are well defined. We believe this is an important extension to HFT. Using our previous example for an energy level of fivefold degeneracy, we find that those eigenforces correctly reflect the symmetry of the molecule.
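
    The construction described above can be made concrete: restrict −∂H/∂λ to the degenerate subspace, build the force matrix, and diagonalize it; only the eigenvalues (the eigenforces) are independent of how the degenerate states are chosen. A toy NumPy sketch (random Hermitian perturbation and a three-dimensional subspace, purely illustrative, not the fivefold-degenerate molecular example):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model of the degenerate Hellmann-Feynman construction: dH stands in
# for dH/d(lambda) evaluated at the degeneracy point.
n, g = 5, 3
dH = rng.normal(size=(n, n))
dH = 0.5 * (dH + dH.T)                  # make it Hermitian (real symmetric)

V = np.eye(n)[:, :g]                    # columns span the g-fold degenerate subspace

def eigenforces(basis):
    """Diagonalize the force matrix F_ij = -<psi_i|dH|psi_j> restricted to
    the degenerate subspace; only its eigenvalues are basis-independent."""
    F = -basis.T @ dH @ basis
    return np.sort(np.linalg.eigvalsh(F))

# Mixing the degenerate states by any orthogonal Q leaves the eigenforces fixed.
Q, _ = np.linalg.qr(rng.normal(size=(g, g)))
f1 = eigenforces(V)                     # eigenforces in the original basis
f2 = eigenforces(V @ Q)                 # same spectrum in the rotated basis
```

    Individual matrix elements of F do change under the rotation; it is precisely the spectrum that is well defined, which is the point of the extension discussed above.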

  7. Role of vertex corrections in the matrix formulation of the random phase approximation for the multiorbital Hubbard model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altmeyer, Michaela; Guterding, Daniel; Hirschfeld, P. J.

    2016-12-21

    In the framework of a multiorbital Hubbard model description of superconductivity, a widely used matrix formulation of the superconducting pairing interaction is designed to treat spin, charge, and orbital fluctuations within a random phase approximation (RPA). In terms of Feynman diagrams, this takes into account particle-hole ladder and bubble contributions, as expected. It turns out, however, that this matrix formulation also generates additional terms which have the diagrammatic structure of vertex corrections. Furthermore, we examine these terms and discuss the relationship between the matrix-RPA superconducting pairing interaction and the Feynman diagrams that it sums.

  8. Feynman rules for the Standard Model Effective Field Theory in R ξ -gauges

    NASA Astrophysics Data System (ADS)

    Dedes, A.; Materkowska, W.; Paraskevas, M.; Rosiek, J.; Suxho, K.

    2017-06-01

    We assume that New Physics effects are parametrized within the Standard Model Effective Field Theory (SMEFT) written in a complete basis of gauge invariant operators up to dimension 6, commonly referred to as the "Warsaw basis". We discuss all steps necessary to obtain a consistent transition to the spontaneously broken theory and several other important aspects, including the BRST-invariance of the SMEFT action for linear R ξ -gauges. The final theory is expressed in a basis characterized by SM-like propagators for all physical and unphysical fields. The effect of the non-renormalizable operators appears explicitly in triple or higher multiplicity vertices. In this mass basis we derive the complete set of Feynman rules, without resorting to any simplifying assumptions such as baryon-, lepton-number or CP conservation. As it turns out, for most SMEFT vertices the expressions are reasonably short, with the notable exception of those involving 4, 5 and 6 gluons. We have also supplemented our set of Feynman rules, given in an appendix here, with a publicly available Mathematica code working with the FeynRules package and producing output which can be integrated with other symbolic algebra or numerical codes for automatic SMEFT amplitude calculations.

  9. Algorithms Bridging Quantum Computation and Chemistry

    NASA Astrophysics Data System (ADS)

    McClean, Jarrod Ryan

    The design of new materials and chemicals derived entirely from computation has long been a goal of computational chemistry, and the governing equation whose solution would permit this dream is known. Unfortunately, the exact solution to this equation has been far too expensive and clever approximations fail in critical situations. Quantum computers offer a novel solution to this problem. In this work, we not only develop new algorithms that use quantum computers to study hard problems in chemistry, but also explore how such algorithms can help us to better understand and improve our traditional approaches. In particular, we first introduce a new method, the variational quantum eigensolver, which is designed to maximally utilize the quantum resources available in a device to solve chemical problems. We apply this method in a real quantum photonic device in the lab to study the dissociation of the helium hydride (HeH+) molecule. We also enhance this methodology with architecture specific optimizations on ion trap computers and show how linear-scaling techniques from traditional quantum chemistry can be used to improve the outlook of similar algorithms on quantum computers. We then show how studying quantum algorithms such as these can be used to understand and enhance the development of classical algorithms. In particular we use a tool from adiabatic quantum computation, Feynman's Clock, to develop a new discrete time variational principle and further establish a connection between real-time quantum dynamics and ground state eigenvalue problems. We use these tools to develop two novel parallel-in-time quantum algorithms that outperform competitive algorithms as well as offer new insights into the connection between the fermion sign problem of ground states and the dynamical sign problem of quantum dynamics. Finally we use insights gained in the study of quantum circuits to explore a general notion of sparsity in many-body quantum systems.
In particular we use developments from the field of compressed sensing to find compact representations of ground states. As an application we study electronic systems and find solutions dramatically more compact than traditional configuration interaction expansions, offering hope to extend this methodology to challenging systems in chemical and material design.
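
The variational quantum eigensolver mentioned above minimizes the energy expectation of a parametrized trial state over its parameters. A minimal classical stand-in can sketch the idea; the 2×2 "molecular" Hamiltonian and single-angle ansatz below are invented for illustration, not taken from the thesis:

```python
import math

def vqe_energy(theta, H):
    # Energy expectation <psi(theta)|H|psi(theta)> for the single-parameter
    # real ansatz |psi(theta)> = (cos theta, sin theta).
    c, s = math.cos(theta), math.sin(theta)
    Hpsi = (H[0][0] * c + H[0][1] * s, H[1][0] * c + H[1][1] * s)
    return c * Hpsi[0] + s * Hpsi[1]

H = [[1.0, 0.5], [0.5, -1.0]]  # toy Hermitian Hamiltonian (illustrative)

# Classical "outer loop": scan the parameter and keep the lowest energy.
best = min(vqe_energy(k * math.pi / 1000, H) for k in range(1000))
exact = -math.sqrt(1.0 + 0.25)  # lowest eigenvalue of H
assert abs(best - exact) < 1e-3
```

On hardware the energy evaluation is done by measurement on the device while a classical optimizer updates the parameters; here a brute-force scan stands in for that optimizer.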

  10. Enhanced Molecular Dynamics Methods Applied to Drug Design Projects.

    PubMed

    Ziada, Sonia; Braka, Abdennour; Diharce, Julien; Aci-Sèche, Samia; Bonnet, Pascal

    2018-01-01

    Nobel Laureate Richard P. Feynman stated: "[…] everything that living things do can be understood in terms of jiggling and wiggling of atoms […]." The importance of computer simulations of macromolecules, which use classical mechanics principles to describe atom behavior, is widely acknowledged and nowadays, they are applied in many fields such as material sciences and drug discovery. With the increase of computing power, molecular dynamics simulations can be applied to understand biological mechanisms at realistic timescales. In this chapter, we share our computational experience, providing a global view of two widely used enhanced molecular dynamics methods for studying protein structure and dynamics through a description of their characteristics and limits, and we provide some examples of their applications in drug design. We also discuss the appropriate choice of software and hardware. In a detailed practical procedure, we describe how to set up, run, and analyze two main molecular dynamics methods, the umbrella sampling (US) and the accelerated molecular dynamics (aMD) methods.

  11. Particle Physics, 2nd Edition

    NASA Astrophysics Data System (ADS)

    Martin, B. R.; Shaw, G.

    1998-01-01

    Particle Physics, Second Edition is a concise and lucid account of the fundamental constituents of matter. The standard model of particle physics is developed carefully and systematically, without heavy mathematical formalism, to make this stimulating subject accessible to undergraduate students. Throughout, the emphasis is on the interpretation of experimental data in terms of the basic properties of quarks and leptons, and extensive use is made of symmetry principles and Feynman diagrams, which are introduced early in the book. The Second Edition brings the book fully up to date, including the discovery of the top quark and the search for the Higgs boson. A final short chapter is devoted to the continuing search for new physics beyond the standard model. Particle Physics, Second Edition features: * A carefully structured and written text to help students understand this exciting and demanding subject. * Many worked examples and problems to aid student learning. Hints for solving the problems are given in an Appendix. * Optional "starred" sections and appendices, containing more specialised and advanced material for the more ambitious reader.

  12. Why Do the Relativistic Masses and Momenta of Faster-than-Light Particles Decrease as their Speeds Increase?

    NASA Astrophysics Data System (ADS)

    Madarász, Judit X.; Stannett, Mike; Székely, Gergely

    2014-01-01

    It has recently been shown within a formal axiomatic framework using a definition of four-momentum based on the Stückelberg-Feynman-Sudarshan-Recami "switching principle" that Einstein's relativistic dynamics is logically consistent with the existence of interacting faster-than-light inertial particles. Our results here show, using only basic natural assumptions on dynamics, that this definition is the only possible way to get a consistent theory of such particles moving within the geometry of Minkowskian spacetime. We present a strictly formal proof from a streamlined axiom system that given any slow or fast inertial particle, all inertial observers agree on the value of m·√|1 − v²|, where m is the particle's relativistic mass and v its speed. This confirms formally the widely held belief that the relativistic mass and momentum of a positive-mass faster-than-light particle must decrease as its speed increases.
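
With c = 1, the relativistic mass is m = E and the speed is v = p/E, so m·√|1 − v²| = √|E² − p²|, and the claimed observer-independence is just the Lorentz invariance of E² − p². A small numerical check (the energies, momenta, and boost speeds below are illustrative values only):

```python
import math

def boost(E, p, u):
    # Lorentz boost of (energy, momentum) to a frame moving at speed u (c = 1).
    g = 1.0 / math.sqrt(1 - u * u)
    return g * (E - u * p), g * (p - u * E)

def invariant(E, p):
    # m * sqrt(|1 - v^2|) with relativistic mass m = E and speed v = p/E.
    v = p / E
    return E * math.sqrt(abs(1 - v * v))

# One slow particle (v = 0.6) and one faster-than-light particle (v = 1.5).
for E, p in [(1.25, 0.75), (1.0, 1.5)]:
    vals = [invariant(*boost(E, p, u)) for u in (0.0, 0.2, -0.4)]
    assert all(abs(x - vals[0]) < 1e-9 for x in vals)
```

For the slow particle the invariant is the rest mass; for the fast one it plays the analogous role, which is why the relativistic mass must fall as v grows above 1.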

  13. The double slit experiment and the time reversed fire alarm

    NASA Astrophysics Data System (ADS)

    Halabi, Tarek

    2011-03-01

    When both slits of the double slit experiment are open, closing one paradoxically increases the detection rate at some points on the detection screen. Feynman famously warned that the temptation to "understand" such a puzzling feature only draws us into blind alleys. Nevertheless, we gain insight into this feature by drawing an analogy between the double slit experiment and a time reversed fire alarm. Much as closing a slit increases the probability of a future detection, ruling out fire drill scenarios after having heard the fire alarm increases the probability of a past fire (using Bayesian inference). Classically, Bayesian inference is associated with computing probabilities of past events. We therefore identify this feature of the double slit experiment with a time reversed thermodynamic arrow. We believe that much of the enigma of quantum mechanics is simply due to some variation of time's arrow.
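
The Bayesian step in the fire-alarm analogy can be made concrete with a toy update (all probabilities below are invented for illustration): conditioning on the alarm, removing the "drill" hypothesis raises the posterior probability of a past fire, just as closing a slit raises the detection rate at some points:

```python
def posterior_fire(p_fire, p_drill):
    # Bayes update: probability of a past fire given the alarm, with a fire
    # drill as the only competing explanation (toy model; the alarm is
    # assumed certain under either hypothesis).
    return p_fire / (p_fire + p_drill)

with_drills = posterior_fire(p_fire=0.01, p_drill=0.09)  # drills possible
no_drills = posterior_fire(p_fire=0.01, p_drill=0.0)     # drills ruled out
assert no_drills > with_drills  # ruling out drills raises P(past fire)
```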

  14. Mixed variational formulations of finite element analysis of elastoacoustic/slosh fluid-structure interaction

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.; Ohayon, Roger

    1991-01-01

    A general three-field variational principle is obtained for the motion of an acoustic fluid enclosed in a rigid or flexible container by the method of canonical decomposition applied to a modified form of the wave equation in the displacement potential. The general principle is specialized to a mixed two-field principle that contains the fluid displacement potential and pressure as independent fields. This principle contains a free parameter alpha. Semidiscrete finite-element equations of motion based on this principle are displayed and applied to the transient response and free-vibrations of the coupled fluid-structure problem. It is shown that a particular setting of alpha yields a rich set of formulations that can be customized to fit physical and computational requirements. The variational principle is then extended to handle slosh motions in a uniform gravity field, and used to derive semidiscrete equations of motion that account for such effects.

  15. Minimization principles for the coupled problem of Darcy-Biot-type fluid transport in porous media linked to phase field modeling of fracture

    NASA Astrophysics Data System (ADS)

    Miehe, Christian; Mauthe, Steffen; Teichtmeister, Stephan

    2015-09-01

    This work develops new minimization and saddle point principles for the coupled problem of Darcy-Biot-type fluid transport in porous media at fracture. It shows that the quasi-static problem of elastically deforming, fluid-saturated porous media is related to a minimization principle for the evolution problem. This two-field principle determines the rate of deformation and the fluid mass flux vector. It provides a canonically compact model structure, where the stress equilibrium and the inverse Darcy's law appear as the Euler equations of a variational statement. A Legendre transformation of the dissipation potential relates the minimization principle to a characteristic three field saddle point principle, whose Euler equations determine the evolutions of deformation and fluid content as well as Darcy's law. A further geometric assumption results in modified variational principles for a simplified theory, where the fluid content is linked to the volumetric deformation. The existence of these variational principles underlines inherent symmetries of Darcy-Biot theories of porous media. This can be exploited in the numerical implementation by the construction of time- and space-discrete variational principles, which fully determine the update problems of typical time stepping schemes. Here, the proposed minimization principle for the coupled problem is advantageous with regard to a new unconstrained stable finite element design, while space discretizations of the saddle point principles are constrained by the LBB condition. The variational principles developed provide the most fundamental approach to the discretization of nonlinear fluid-structure interactions, showing symmetric systems in algebraic update procedures. They also provide an excellent starting point for extensions towards more complex problems. This is demonstrated by developing a minimization principle for a phase field description of fracture in fluid-saturated porous media. 
It is designed to incorporate alternative crack driving forces, such as a convenient criterion in terms of the effective stress. The proposed setting provides a modeling framework for the analysis of complex problems such as hydraulic fracture. This is demonstrated by a spectrum of model simulations.

  16. Variational symmetries, conserved quantities and identities for several equations of mathematical physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donchev, Veliko, E-mail: velikod@ie.bas.bg

    2014-03-15

    We find variational symmetries, conserved quantities and identities for several equations: the envelope equation, the Böcher equation, the propagation of sound waves with losses, the flow of a gas with losses, and the nonlinear Schrödinger equation with losses or gains and an electromagnetic interaction. Most of these equations do not have a variational description with the classical variational principle, and we find such a description with the generalized variational principle of Herglotz.

  17. Neutrino oscillation processes in a quantum-field-theoretical approach

    NASA Astrophysics Data System (ADS)

    Egorov, Vadim O.; Volobuev, Igor P.

    2018-05-01

    It is shown that neutrino oscillation processes can be consistently described in the framework of quantum field theory using only the plane wave states of the particles. Namely, the oscillating electron survival probabilities in experiments with neutrino detection by charged-current and neutral-current interactions are calculated in the quantum field-theoretical approach to neutrino oscillations based on a modification of the Feynman propagator in the momentum representation. The approach is most similar to the standard Feynman diagram technique. It is found that the oscillating distance-dependent probabilities of detecting an electron in experiments with neutrino detection by charged-current and neutral-current interactions exactly coincide with the corresponding probabilities calculated in the standard approach.
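
The abstract states that the field-theoretic probabilities coincide with those of the standard approach. For orientation, the standard two-flavor survival probability (not the paper's propagator-based derivation) is P = 1 − sin²(2θ)·sin²(1.267 Δm²[eV²] L[km] / E[GeV]); a minimal sketch with illustrative parameter values:

```python
import math

def survival_probability(theta, dm2_eV2, L_km, E_GeV):
    # Standard two-flavor electron survival probability; 1.267 converts the
    # oscillation phase to the eV^2 / km / GeV unit convention.
    phase = 1.267 * dm2_eV2 * L_km / E_GeV
    return 1.0 - math.sin(2.0 * theta) ** 2 * math.sin(phase) ** 2

# Illustrative solar-sector-like numbers (not taken from the paper).
p = survival_probability(theta=0.58, dm2_eV2=7.5e-5, L_km=180.0, E_GeV=0.004)
assert 0.0 <= p <= 1.0
```

With no mixing (θ = 0) the survival probability is identically 1, as the formula shows.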

  18. Fourier transform of the multicenter product of 1s hydrogenic orbitals and Coulomb or Yukawa potentials and the analytically reduced form for subsequent integrals that include plane waves

    NASA Technical Reports Server (NTRS)

    Straton, Jack C.

    1989-01-01

    The Fourier transform of the multicenter product of N 1s hydrogenic orbitals and M Coulomb or Yukawa potentials is given as an (M+N-1)-dimensional Feynman integral with external momenta and shifted coordinates. This is accomplished through the introduction of an integral transformation, in addition to the standard Feynman transformation for the denominators of the momentum representation of the terms in the product, which moves the resulting denominator into an exponential. This allows the angular dependence of the denominator to be combined with the angular dependence in the plane waves.
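
The "standard Feynman transformation for the denominators" referred to here is the parameter identity 1/(AB) = ∫₀¹ dx [xA + (1 − x)B]⁻². A quick numerical sanity check with a midpoint rule (values of A and B chosen arbitrarily):

```python
from math import isclose

def feynman_combine(A, B, n=200000):
    # Numerically evaluate the Feynman-parameter integral
    #   \int_0^1 dx [x A + (1 - x) B]^(-2)
    # with a midpoint rule; the identity says this equals 1/(A B).
    h = 1.0 / n
    return sum(h / ((i + 0.5) * h * A + (1 - (i + 0.5) * h) * B) ** 2
               for i in range(n))

A, B = 2.0, 5.0
assert isclose(feynman_combine(A, B), 1.0 / (A * B), rel_tol=1e-6)
```

The same trick, iterated over M + N − 1 denominators, is what produces the multi-dimensional parameter integral described in the abstract.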

  19. Bosonic Loop Diagrams as Perturbative Solutions of the Classical Field Equations in ϕ4-Theory

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Tolksdorf, Jürgen

    2012-05-01

    Solutions of the classical ϕ4-theory in Minkowski space-time are analyzed in a perturbation expansion in the nonlinearity. Using the language of Feynman diagrams, the solution of the Cauchy problem is expressed in terms of tree diagrams which involve the retarded Green's function and have one outgoing leg. In order to obtain general tree diagrams, we set up a "classical measurement process" in which a virtual observer of a scattering experiment modifies the field and detects suitable energy differences. By adding a classical stochastic background field, we even obtain all loop diagrams. The expansions are compared with the standard Feynman diagrams of the corresponding quantum field theory.

  20. Non-planar one-loop Parke-Taylor factors in the CHY approach for quadratic propagators

    NASA Astrophysics Data System (ADS)

    Ahmadiniaz, Naser; Gomez, Humberto; Lopez-Arcos, Cristhiam

    2018-05-01

    In this work we have studied the Kleiss-Kuijf relations for the recently introduced Parke-Taylor factors at one-loop in the CHY approach, which reproduce quadratic Feynman propagators. By doing this, we were able to identify the non-planar one-loop Parke-Taylor factors. In order to check that these new factors can in fact describe non-planar amplitudes, we applied them to the bi-adjoint Φ³ theory. As a byproduct, we found a new type of graphs that we call non-planar CHY-graphs. These graphs encode all the information for the subleading order at one-loop, and they have no equivalent in the Feynman formalism.

  1. Toward a theory of organisms: Three founding principles in search of a useful integration

    PubMed Central

    SOTO, ANA M.; LONGO, GIUSEPPE; MIQUEL, PAUL-ANTOINE; MONTEVIL, MAËL; MOSSIO, MATTEO; PERRET, NICOLE; POCHEVILLE, ARNAUD; SONNENSCHEIN, CARLOS

    2016-01-01

    Organisms, be they uni- or multi-cellular, are agents capable of creating their own norms; they are continuously harmonizing their ability to create novelty and stability, that is, they combine plasticity with robustness. Here we articulate the three principles for a theory of organisms proposed in this issue, namely: the default state of proliferation with variation and motility, the principle of variation and the principle of organization. These principles profoundly change both biological observables and their determination with respect to the theoretical framework of physical theories. This radical change opens up the possibility of anchoring mathematical modeling in biologically proper principles. PMID:27498204

  2. Quasi-static responses and variational principles in gradient plasticity

    NASA Astrophysics Data System (ADS)

    Nguyen, Quoc-Son

    2016-12-01

    Gradient models have been much discussed in the literature for the study of time-dependent or time-independent processes such as visco-plasticity, plasticity and damage. This paper is devoted to the theory of Standard Gradient Plasticity at small strain. A general and consistent mathematical description available for common time-independent behaviours is presented. Our attention is focussed on the derivation of general results such as the description of the governing equations for the global response and the derivation of related variational principles in terms of the energy and the dissipation potentials. It is shown that the quasi-static response under a loading path is a solution of an evolution variational inequality as in classical plasticity. The rate problem and the rate minimum principle are revisited. A time-discretization by the implicit scheme of the evolution equation leads to the increment problem. An increment of the response associated with a load increment is a solution of a variational inequality and satisfies also a minimum principle if the energy potential is convex. The increment minimum principle deals with stable solutions of the variational inequality. Some numerical methods are discussed in view of the numerical simulation of the quasi-static response.

  3. Variational formulation of high performance finite elements: Parametrized variational principles

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.; Militello, Carmello

    1991-01-01

    High performance elements are simple finite elements constructed to deliver engineering accuracy with coarse arbitrary grids. This is part of a series on the variational basis of high-performance elements, with emphasis on those constructed with the free formulation (FF) and assumed natural strain (ANS) methods. Parametrized variational principles that provide a foundation for the FF and ANS methods, as well as for a combination of both are presented.

  4. Analysis of magnetic fields using variational principles and CELAS2 elements

    NASA Technical Reports Server (NTRS)

    Frye, J. W.; Kasper, R. G.

    1977-01-01

    Prospective techniques for analyzing magnetic fields using NASTRAN are reviewed. A variational principle utilizing a vector potential function is presented which has as its Euler equations the required field equations and boundary conditions for static magnetic fields including current sources. The need for an addition to this variational principle of a constraint condition is discussed. Some results using the Lagrange multiplier method to apply the constraint and CELAS2 elements to simulate the matrices are given. Practical considerations of using large numbers of CELAS2 elements are discussed.

  5. Canonical fluid thermodynamics. [variational principles of stability for compressible adiabatic flow

    NASA Technical Reports Server (NTRS)

    Schmid, L. A.

    1974-01-01

    The space-time integral of the thermodynamic pressure plays in a certain sense the role of the thermodynamic potential for compressible adiabatic flow. The stability criterion can be converted into a variational minimum principle by requiring the molar free-enthalpy and temperature to be generalized velocities. In the fluid context, the definition of proper-time differentiation involves the fluid velocity expressed in terms of three particle identity parameters. The pressure function is then converted into a functional which is the Lagrangian density of the variational principle. Being also a minimum principle, the variational principle provides a means for comparing the relative stability of different flows. For boundary conditions with a high degree of symmetry, as in the case of a uniformly expanding spherical gas box, the most stable flow is a rectilinear flow for which the world-trajectory of each particle is a straight line. Since the behavior of the interior of a freely expanding cosmic cloud may be expected to be similar to that of the fluid in the spherical box of gas, this suggests that the cosmic principle is a consequence of the laws of thermodynamics, rather than just an ad hoc postulate.

  6. Efficient numerical evaluation of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran

    2016-03-01

    Feynman loop integrals are a key ingredient for the calculation of higher order radiation effects, and are responsible for reliable and accurate theoretical prediction. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both Euclidean and physical kinematic regions in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated in less than half a minute with accuracy, which makes the direct numerical approach viable for precise investigation of higher order effects in multi-loop processes, e.g. the next-to-leading order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098)
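
A quasi-Monte Carlo rule like the one used here replaces pseudo-random points with a low-discrepancy sequence, which converges faster than plain Monte Carlo for smooth integrands. A minimal sketch with a Halton sequence on a toy two-dimensional Feynman-parameter-like integrand (the integrand is invented for illustration; its exact value is ln(4/3)):

```python
import math

def halton(i, base):
    # i-th element of the van der Corput sequence in the given base.
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_integrate(f, n):
    # 2-D quasi-Monte Carlo average over a Halton sequence (bases 2 and 3).
    return sum(f(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)) / n

# Toy smooth integrand on the unit square, resembling a sector-decomposed
# Feynman-parameter integrand; exact value is ln(4/3).
val = qmc_integrate(lambda x, y: 1.0 / (x + y + 1.0) ** 2, 1 << 14)
assert abs(val - math.log(4.0 / 3.0)) < 1e-3
```

Production codes use lattice rules and GPU parallelism rather than this serial Halton loop, but the structure of the estimator is the same.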

  7. A white noise approach to the Feynman integrand for electrons in random media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grothaus, M., E-mail: grothaus@mathematik.uni-kl.de; Riemann, F., E-mail: riemann@mathematik.uni-kl.de; Suryawan, H. P., E-mail: suryawan@mathematik.uni-kl.de

    2014-01-15

    Using the Feynman path integral representation of quantum mechanics it is possible to derive a model of an electron in a random system containing dense and weakly coupled scatterers [see F. Edwards and Y. B. Gulyaev, “The density of states of a highly impure semiconductor,” Proc. Phys. Soc. 83, 495–496 (1964)]. The main goal of this paper is to give a mathematically rigorous realization of the corresponding Feynman integrand in dimension one based on the theory of white noise analysis. We refine and apply a Wick formula for the product of a square-integrable function with Donsker's delta functions and use a method of complex scaling. As an essential part of the proof we also establish the existence of the exponential of the self-intersection local times of a one-dimensional Brownian bridge. As a result we obtain a neat formula for the propagator with identical start and end point. Thus, we obtain a well-defined mathematical object which is used to calculate the density of states [see, e.g., F. Edwards and Y. B. Gulyaev, “The density of states of a highly impure semiconductor,” Proc. Phys. Soc. 83, 495–496 (1964)].

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brizard, Alain J.; Tronci, Cesare

    The variational formulations of guiding-center Vlasov-Maxwell theory based on Lagrange, Euler, and Euler-Poincaré variational principles are presented. Each variational principle yields a different approach to incorporating guiding-center polarization and magnetization effects into the guiding-center Maxwell equations. The conservation laws of energy, momentum, and angular momentum are also derived by the Noether method, where the guiding-center stress tensor is now shown to be explicitly symmetric.

  9. Correlation energy for elementary bosons: Physics of the singularity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shiau, Shiue-Yuan, E-mail: syshiau@mail.ncku.edu.tw; Combescot, Monique; Chang, Yia-Chung, E-mail: yiachang@gate.sinica.edu.tw

    2016-04-15

    We propose a compact perturbative approach that reveals the physical origin of the singularity occurring in the density dependence of correlation energy: like fermions, elementary bosons have a singular correlation energy which comes from the accumulation, through Feynman “bubble” diagrams, of the same non-zero momentum transfer excitations from the free particle ground state, that is, the Fermi sea for fermions and the Bose–Einstein condensate for bosons. This understanding paves the way toward deriving the correlation energy of composite bosons like atomic dimers and semiconductor excitons, by suggesting Shiva diagrams that have similarity with Feynman “bubble” diagrams, the previous elementary boson approaches, which hide this physics, being inappropriate to do so.

  10. Hopf algebras of rooted forests, cocycles, and free Rota-Baxter algebras

    NASA Astrophysics Data System (ADS)

    Zhang, Tianjie; Gao, Xing; Guo, Li

    2016-10-01

    The Hopf algebra and the Rota-Baxter algebra are the two algebraic structures underlying the algebraic approach of Connes and Kreimer to renormalization of perturbative quantum field theory. In particular, the Hopf algebra of rooted trees serves as the "baby model" of Feynman graphs in their approach and can be characterized by certain universal properties involving a Hochschild 1-cocycle. Decorated rooted trees have also been applied to study Feynman graphs. We will continue the study of universal properties of various spaces of decorated rooted trees with such a 1-cocycle, leading to the concept of a cocycle Hopf algebra. We further apply the universal properties to equip a free Rota-Baxter algebra with the structure of a cocycle Hopf algebra.
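
The Hochschild 1-cocycle in question is the grafting operator B+, which builds a tree from a forest by attaching every tree in the forest to a fresh root; every rooted tree arises this way, which is the source of the universal property. A tiny sketch with trees encoded as nested tuples (the encoding is an illustrative choice, not from the paper):

```python
# Rooted trees as nested tuples: () is the single-node tree, and a tree's
# entries are the subtrees hanging off its root.
def b_plus(*forest):
    # Grafting operator B+: attach each tree of the forest to a new root.
    return tuple(forest)

def nodes(tree):
    # Total number of nodes of a rooted tree.
    return 1 + sum(nodes(t) for t in tree)

dot = b_plus()              # the one-node tree
ladder = b_plus(dot)        # two-node ladder
cherry = b_plus(dot, dot)   # root with two leaves
assert (nodes(dot), nodes(ladder), nodes(cherry)) == (1, 2, 3)
# Every rooted tree is B+ applied to the forest of its root's branches.
assert cherry == b_plus(*cherry)
```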

  11. Feynman perturbation expansion for the price of coupon bond options and swaptions in quantum finance. I. Theory

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.

    2007-01-01

    European options on coupon bonds are studied in a quantum field theory model of forward interest rates. Swaptions are briefly reviewed. An approximation scheme for the coupon bond option price is developed based on the fact that the volatility of the forward interest rates is a small quantity. The field theory for the forward interest rates is Gaussian, but when the payoff function for the coupon bond option is included it makes the field theory nonlocal and nonlinear. A perturbation expansion using Feynman diagrams gives a closed form approximation for the price of the coupon bond option. A special case of the approximate bond option is shown to yield the industry standard one-factor HJM formula with exponential volatility.

  12. Solution of a cauchy problem for a diffusion equation in a Hilbert space by a Feynman formula

    NASA Astrophysics Data System (ADS)

    Remizov, I. D.

    2012-07-01

    The Cauchy problem for a class of diffusion equations in a Hilbert space is studied. It is proved that the Cauchy problem is well posed in the class of uniform limits of infinitely smooth bounded cylindrical functions on the Hilbert space, and the solution is presented in the form of the so-called Feynman formula, i.e., a limit of multiple integrals against a Gaussian measure as the multiplicity tends to infinity. It is also proved that the solution of the Cauchy problem depends continuously on the diffusion coefficient. A process reducing an approximate solution of an infinite-dimensional diffusion equation to finding a multiple integral of a real function of finitely many real variables is indicated.
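
A Feynman formula of this kind approximates a diffusion semigroup by n-fold Gaussian integrals. In one dimension the n-fold integral can be sampled by summing n independent Gaussian increments; the sketch below (a Monte Carlo toy with invented parameters, not the paper's Hilbert-space construction) checks it against the exact heat-semigroup solution:

```python
import math, random

def heat_semigroup(f, x, t, n, samples=100000):
    # Feynman-formula-style approximation: an n-fold Gaussian integral,
    # sampled by summing n independent increments of variance t/n.
    random.seed(0)
    acc = 0.0
    for _ in range(samples):
        y = x
        for _ in range(n):
            y += random.gauss(0.0, math.sqrt(t / n))
        acc += f(y)
    return acc / samples

# For f(x) = exp(x), the exact solution of u_t = u_xx / 2 is exp(x + t/2).
approx = heat_semigroup(math.exp, 0.0, 1.0, n=4)
exact = math.exp(0.5)
assert abs(approx - exact) / exact < 0.02
```

The paper's point is that this finite-dimensional reduction survives, with a rigorous limit, even when the spatial variable lives in an infinite-dimensional Hilbert space.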

  13. Feynman propagator for spin foam quantum gravity.

    PubMed

    Oriti, Daniele

    2005-03-25

    We link the notion of causality with the orientation of the spin foam 2-complex. We show that all current spin foam models are orientation independent. Using the technology of evolution kernels for quantum fields on Lie groups, we construct a generalized version of spin foam models, introducing an extra proper time variable. We prove that different ranges of integration for this variable lead to different classes of spin foam models: the usual ones, interpreted as the quantum gravity analogue of the Hadamard function of quantum field theory (QFT) or as inner products between quantum gravity states; and a new class of causal models, the quantum gravity analogue of the Feynman propagator in QFT, which are nontrivial functions of the orientation data and imply a notion of "timeless ordering".

  14. Feynman perturbation expansion for the price of coupon bond options and swaptions in quantum finance. I. Theory.

    PubMed

    Baaquie, Belal E

    2007-01-01

    European options on coupon bonds are studied in a quantum field theory model of forward interest rates. Swaptions are briefly reviewed. An approximation scheme for the coupon bond option price is developed based on the fact that the volatility of the forward interest rates is a small quantity. The field theory for the forward interest rates is Gaussian, but when the payoff function for the coupon bond option is included it makes the field theory nonlocal and nonlinear. A perturbation expansion using Feynman diagrams gives a closed form approximation for the price of the coupon bond option. A special case of the approximate bond option is shown to yield the industry standard one-factor HJM formula with exponential volatility.

  15. Alternative to the Palatini method: A new variational principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goenner, Hubert

    2010-06-15

    A variational principle is suggested within Riemannian geometry, in which an auxiliary metric and the Levi-Civita connection are varied independently. The auxiliary metric plays the role of a Lagrange multiplier and introduces nonminimal coupling of matter to the curvature scalar. The field equations are second-order PDEs and are easier to handle than those following from the so-called Palatini method. Moreover, in contrast to the latter method, no gradients of the matter variables appear. In cosmological modeling, the physics resulting from the alternative variational principle will differ from the modeling using the standard Palatini method.

  16. A variational principle for compressible fluid mechanics: Discussion of the multi-dimensional theory

    NASA Technical Reports Server (NTRS)

    Prozan, R. J.

    1982-01-01

    The variational principle for compressible fluid mechanics previously introduced is extended to two dimensional flow. The analysis is stable, exactly conservative, adaptable to coarse or fine grids, and very fast. Solutions for two dimensional problems are included. The excellent behavior and results lend further credence to the variational concept and its applicability to the numerical analysis of complex flow fields.

  17. A coupled mode formulation by reciprocity and a variational principle

    NASA Technical Reports Server (NTRS)

    Chuang, Shun-Lien

    1987-01-01

    A coupled mode formulation for parallel dielectric waveguides is presented via two methods: a reciprocity theorem and a variational principle. In the first method, a generalized reciprocity relation for two sets of field solutions satisfying Maxwell's equations and the boundary conditions in two different media, respectively, is derived. Based on the generalized reciprocity theorem, the coupled mode equations can then be formulated. The second method using a variational principle is also presented for a general waveguide system which can be lossy. The results of the variational principle can also be shown to be identical to those from the reciprocity theorem. The exact relations governing the 'conventional' and the new coupling coefficients are derived. It is shown analytically that the present formulation satisfies the reciprocity theorem and power conservation exactly, while the conventional theory violates the power conservation and reciprocity theorem by as much as 55 percent and the Hardy-Streifer (1985, 1986) theory by 0.033 percent, for example.

  18. An approach toward the numerical evaluation of multi-loop Feynman diagrams

    NASA Astrophysics Data System (ADS)

    Passarino, Giampiero

    2001-12-01

A scheme for systematically achieving accurate numerical evaluation of multi-loop Feynman diagrams is developed. This shows the feasibility of a project aimed to produce a complete calculation for two-loop predictions in the Standard Model. As a first step an algorithm, proposed by F.V. Tkachov and based on the so-called generalized Bernstein functional relation, is applied to one-loop multi-leg diagrams with particular emphasis on the presence of infrared singularities, on the problem of tensorial reduction, and on the classification of all singularities of a given diagram. Subsequently, the extension of the algorithm to two-loop diagrams is examined. The proposed solution consists of applying the functional relation to the one-loop sub-diagram which has the largest number of internal lines. In this way the integrand can be made smooth, apart from a factor which is a polynomial in xS, the vector of Feynman parameters needed for the complementary sub-diagram with the smallest number of internal lines. Since the procedure does not introduce new singularities, one can distort the xS-integration hyper-contour into the complex hyper-plane, thus achieving numerical stability. The algorithm is then modified to deal with numerical evaluation around normal thresholds. Concise and practical formulas are assembled and presented; numerical results and comparisons with the available literature are shown and discussed for the so-called sunset topology.

  19. Reduze - Feynman integral reduction in C++

    NASA Astrophysics Data System (ADS)

    Studerus, C.

    2010-07-01

Reduze is a computer program for reducing Feynman integrals to master integrals employing a Laporta algorithm. The program is written in C++ and uses classes provided by the GiNaC library to perform the simplifications of the algebraic prefactors in the system of equations. Reduze offers the possibility to run reductions in parallel. Program summary: Program title: Reduze Catalogue identifier: AEGE_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: yes No. of lines in distributed program, including test data, etc.: 55 433 No. of bytes in distributed program, including test data, etc.: 554 866 Distribution format: tar.gz Programming language: C++ Computer: All Operating system: Unix/Linux Number of processors used: Problem dependent; more than one is possible, but not arbitrarily many. RAM: Depends on the complexity of the system. Classification: 4.4, 5 External routines: CLN (http://www.ginac.de/CLN/), GiNaC (http://www.ginac.de/) Nature of problem: Solving large systems of linear equations with Feynman integrals as unknowns and rational polynomials as prefactors. Solution method: Using a Gauss/Laporta algorithm to solve the system of equations. Restrictions: Limitations depend on the complexity of the system (number of equations, number of kinematic invariants). Running time: Depends on the complexity of the system.
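The solution method named here, Gauss/Laporta elimination of a linear system whose unknowns stand for Feynman integrals, can be illustrated with a toy system. The relations below are invented for illustration (they are not real IBP identities), and Python's exact rational arithmetic stands in for the CLN/GiNaC prefactor algebra:

```python
from fractions import Fraction

# Toy stand-in for a Laporta-style reduction: unknowns I0..I3 play the
# role of Feynman integrals, rational coefficients play the role of the
# algebraic prefactors.  Each row is one relation equal to zero.
rows = [
    [Fraction(1), Fraction(-2), Fraction(0), Fraction(1)],
    [Fraction(0), Fraction(1), Fraction(-3), Fraction(2)],
    [Fraction(2), Fraction(-3), Fraction(-3), Fraction(4)],  # dependent row
]

def row_reduce(rows, ncols):
    """Exact Gauss elimination over the rationals (no floating point)."""
    rows = [r[:] for r in rows]
    pivots = []
    r = 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    return rows, pivots

reduced, pivots = row_reduce(rows, 4)
masters = [c for c in range(4) if c not in pivots]
print("pivot columns (reducible integrals):", pivots)
print("master integrals:", masters)
```

The pivot columns are the integrals that get expressed in terms of the remaining free unknowns, which survive as the "master integrals" of the toy system.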

  20. Optical vortex knots – one photon at a time

    PubMed Central

    Tempone-Wiltshire, Sebastien J.; Johnstone, Shaun P.; Helmerson, Kristian

    2016-01-01

Feynman described the double-slit experiment as “a phenomenon which is impossible, absolutely impossible, to explain in any classical way and which has in it the heart of quantum mechanics”. The double-slit experiment, performed one photon at a time, dramatically demonstrates the particle-wave duality of quantum objects by generating a fringe pattern corresponding to the interference of light (a wave phenomenon) from two slits, even when there is only one photon (a particle) at a time passing through the apparatus. The particle-wave duality of light should also apply to complex three-dimensional optical fields formed by multi-path interference; however, this has not been demonstrated. Here we observe particle-wave duality of a three-dimensional field by generating a trefoil optical vortex knot – one photon at a time. This result demonstrates a fundamental physical principle, that particle-wave duality implies interference in both space (between spatially distinct modes) and time (through the complex evolution of the superposition of modes), and has implications for topologically entangled single photon states, orbital angular momentum multiplexing and topological quantum computing. PMID:27087642

  1. Theories of Matter, Space and Time, Volume 2; Quantum theories

    NASA Astrophysics Data System (ADS)

    Evans, N.; King, S. F.

    2018-06-01

This book and its prequel Theories of Matter Space and Time: Classical Theories grew out of courses that we have both taught as part of the undergraduate degree program in Physics at Southampton University, UK. Our goal was to guide the full MPhys undergraduate cohort through some of the trickier areas of theoretical physics that we expect our undergraduates to master. Here we teach the student to understand first quantized relativistic quantum theories. We first quickly review the basics of quantum mechanics, which should be familiar to the reader from a prior course. Then we link the Schrödinger equation to the principle of least action, introducing Feynman's path integral methods. Next, we present the relativistic wave equations of Klein, Gordon and Dirac. Finally, we convert Maxwell's equations of electromagnetism to a wave equation for photons and make contact with quantum electrodynamics (QED) at a first quantized level. Between the two volumes we hope to move a student's understanding from their prior courses to a place where they are ready to embark on graduate-level courses on quantum field theory and beyond.

  2. Light, Imaging, Vision: An interdisciplinary undergraduate course

    NASA Astrophysics Data System (ADS)

    Nelson, Philip

    2015-03-01

The vertebrate eye is a fantastically sensitive instrument, capable of registering the absorption of a single photon, and yet generating very low noise. Using eyes as a common thread helps motivate undergraduates to learn a lot of physics, both fundamental and applied to scientific imaging and neuroscience. I'll describe an undergraduate course, for students in several science and engineering majors, that takes students from the rudiments of probability theory to the quantum character of light, including modern experimental methods like fluorescence imaging and Förster resonance energy transfer. After a digression into color vision, we then see how the Feynman principle explains the apparently wavelike phenomena associated with light, including applications like diffraction, subdiffraction imaging, total internal reflection and TIRF microscopy. Then we see how scientists documented the single-quantum sensitivity of the eye seven decades earlier than it ``ought'' to have been possible, and finally close with the remarkable signaling cascade that delivers such outstanding performance. Parts of this story are now embodied in a new textbook (WH Freeman and Co, 1/2015); additional course materials are available upon request. Work supported by NSF Grants EF-0928048 and DMR-0832802.

  3. Single-slit electron diffraction with Aharonov-Bohm phase: Feynman's thought experiment with quantum point contacts.

    PubMed

    Khatua, Pradip; Bansal, Bhavtosh; Shahar, Dan

    2014-01-10

In a "thought experiment," now a classic in physics pedagogy, Feynman visualizes Young's double-slit interference experiment with electrons in a magnetic field. He shows that the addition of an Aharonov-Bohm phase is equivalent to shifting the zero-field wave interference pattern by an angle expected from the Lorentz force calculation for classical particles. We have performed this experiment with one slit, instead of two, where ballistic electrons within a two-dimensional electron gas diffract through a small orifice formed by a quantum point contact (QPC). As the QPC width is comparable to the electron wavelength, the observed intensity profile is further modulated by the transverse waveguide modes present at the injector QPC. Our experiments open the way to realizing diffraction-based ideas in mesoscopic physics.

  4. Using an atom interferometer to take the Gedanken out of Feynman's Gedankenexperiment

    NASA Astrophysics Data System (ADS)

    Pritchard, David E.; Hammond, Troy D.; Lenef, Alan; Rubenstein, Richard A.; Smith, Edward T.; Chapman, Michael S.; Schmiedmayer, Jörg

    1997-01-01

    We give a description of two experiments performed in an atom interferometer at MIT. By scattering a single photon off of the atom as it passes through the interferometer, we perform a version of a classic gedankenexperiment, a demonstration of a Feynman light microscope. As path information about the atom is gained, contrast in the atom fringes (coherence) is lost. The lost coherence is then recovered by observing only atoms which scatter photons into a particular final direction. This paper reflects the main emphasis of D. E. Pritchard's talk at the RIS meeting. Information about other topics covered in that talk, as well as a review of all of the published work performed with the MIT atom/molecule interferometer, is available on the world wide web at http://coffee.mit.edu/.

  5. Critical exponents for diluted resistor networks

    NASA Astrophysics Data System (ADS)

    Stenull, O.; Janssen, H. K.; Oerding, K.

    1999-05-01

An approach by Stephen [Phys. Rev. B 17, 4444 (1978)] is used to investigate the critical properties of randomly diluted resistor networks near the percolation threshold by means of renormalized field theory. We reformulate an existing field theory by Harris and Lubensky [Phys. Rev. B 35, 6964 (1987)]. By a decomposition of the principal Feynman diagrams, we obtain diagrams which again can be interpreted as resistor networks. This interpretation provides an alternative way of evaluating the Feynman diagrams for random resistor networks. We calculate the resistance crossover exponent φ up to second order in ɛ=6-d, where d is the spatial dimension. Our result φ=1+ɛ/42+4ɛ2/3087 verifies a previous calculation by Lubensky and Wang, which itself was based on the Potts-model formulation of the random resistor network.
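As a quick numerical check of the quoted two-loop series (my evaluation, not from the paper), φ = 1 + ε/42 + 4ε²/3087 can be tabulated exactly at ε = 6 − d:

```python
from fractions import Fraction

# Two-loop epsilon expansion of the resistance crossover exponent,
# evaluated in exact rational arithmetic; eps = 6 - d.
def phi(eps):
    return 1 + Fraction(eps, 42) + Fraction(4 * eps**2, 3087)

for d in (5, 4, 3):
    eps = 6 - d
    print(f"d = {d}: phi ≈ {float(phi(eps)):.4f}")
```

At the physical dimension d = 3 (ε = 3) the truncated series gives φ ≈ 1.083; how well the low-order truncation represents the exponent there is of course a separate question.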

  6. ALOHA: Automatic libraries of helicity amplitudes for Feynman diagram computations

    NASA Astrophysics Data System (ADS)

    de Aquino, Priscila; Link, William; Maltoni, Fabio; Mattelaer, Olivier; Stelzer, Tim

    2012-10-01

We present an application that automatically writes the HELAS (HELicity Amplitude Subroutines) library corresponding to the Feynman rules of any quantum field theory Lagrangian. The code is written in Python and takes the Universal FeynRules Output (UFO) as an input. From this input it produces the complete set of routines, wave-functions and amplitudes, that are needed for the computation of Feynman diagrams at leading as well as at higher orders. The representation is language independent and currently it can output routines in Fortran, C++, and Python. A few sample applications implemented in the MADGRAPH 5 framework are presented. Program summary Program title: ALOHA Catalogue identifier: AEMS_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: http://www.opensource.org/licenses/UoI-NCSA.php No. of lines in distributed program, including test data, etc.: 6094320 No. of bytes in distributed program, including test data, etc.: 7479819 Distribution format: tar.gz Programming language: Python2.6 Computer: 32/64 bit Operating system: Linux/Mac/Windows RAM: 512 Mbytes Classification: 4.4, 11.6 Nature of problem: An efficient numerical evaluation of a squared matrix element can be done with the help of the helicity routines implemented in the HELAS library [1]. This static library contains a limited number of helicity functions and is therefore not always able to provide the needed routine in the presence of an arbitrary interaction. This program provides a way to automatically create the corresponding routines for any given model. Solution method: ALOHA takes the Feynman rules associated to the vertex obtained from the model information (in the UFO format [2]), and multiplies it by the different wavefunctions or propagators. As a result the analytical expression of the helicity routines is obtained.
Subsequently, this expression is automatically written in the requested language (Python, Fortran or C++). Restrictions: The allowed fields are currently spin 0, 1/2, 1 and 2, and the propagators of these particles are canonical. Running time: A few seconds for the SM and the MSSM, and up to a few minutes for models with spin 2 particles. References: [1] Murayama, H. and Watanabe, I. and Hagiwara, K., HELAS: HELicity Amplitude Subroutines for Feynman diagram evaluations, KEK-91-11, (1992) http://www-lib.kek.jp/cgi-bin/img_index?199124011 [2] C. Degrande, C. Duhr, B. Fuks, D. Grellscheid, O. Mattelaer, et al., UFO - The Universal FeynRules Output, Comput. Phys. Commun. 183 (2012) 1201-1214. arXiv:1108.2040, doi:10.1016/j.cpc.2012.01.022.

  7. Dynamical basis sets for algebraic variational calculations in quantum-mechanical scattering theory

    NASA Technical Reports Server (NTRS)

    Sun, Yan; Kouri, Donald J.; Truhlar, Donald G.; Schwenke, David W.

    1990-01-01

New basis sets are proposed for linear algebraic variational calculations of transition amplitudes in quantum-mechanical scattering problems. These basis sets are hybrids of those that yield the Kohn variational principle (KVP) and those that yield the generalized Newton variational principle (GNVP) when substituted in Schlessinger's stationary expression for the T operator. Trial calculations show that efficiencies almost as great as that of the GNVP and much greater than that of the KVP can be obtained, even for basis sets with the majority of the members independent of energy.

  8. Fourth-order self-energy contribution to the two loop Lamb shift

    NASA Astrophysics Data System (ADS)

    Palur Mallampalli, Subrahmanyam

    1998-11-01

The calculation of the two loop Lamb shift in hydrogenic ions involves the numerical evaluation of ten Feynman diagrams. In this thesis, four fourth-order Feynman diagrams including the pure self-energy contributions are evaluated using exact Dirac-Coulomb propagators, so that higher order binding corrections can be extracted by comparing with the known terms in the Z/alpha expansion. The entire calculation is performed in Feynman gauge. One of the vacuum polarization diagrams is evaluated in the Uehling approximation. At low Z, it is seen to be perturbative in Z/alpha, while new predictions for high Z are made. The calculation of the three self-energy diagrams is reorganized into four terms, which we call the PO, M, F and P terms. The PO term is separately gauge invariant while the latter three form a gauge invariant set. The PO term is shown to exhibit the most non-perturbative behavior yet encountered in QED at low Z, so much so that even at Z = 1, the complete result is of the opposite sign to that of the leading term in its Z/alpha expansion. At high Z, we agree with an earlier calculation. The analysis of ultraviolet divergences in the two loop self-energy is complicated by the presence of subdivergences. All divergences except the self-mass are shown to cancel. The self-mass is then removed by a self-mass counterterm. Parts of the calculation are shown to contain reference state singularities that finally cancel. A numerical regulator to handle these singularities is described. The M term, an ultraviolet finite quantity, is defined through a subtraction scheme in coordinate space. Being computationally intensive, it is evaluated only at high Z, specifically Z = 83 and 92. The F term involves the evaluation of several Feynman diagrams with free electron propagators. These are computed for a range of values of Z.
The P term, also ultraviolet finite, involves Dirac-Coulomb propagators that are best defined in coordinate space, as well as functions associated with the one loop self-energy that are best defined in momentum space. Possible methods of evaluating the P term are discussed.

  9. Finite element analysis of time-independent superconductivity. Ph.D. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Schuler, James J.

    1993-01-01

    The development of electromagnetic (EM) finite elements based upon a generalized four-potential variational principle is presented. The use of the four-potential variational principle allows for downstream coupling of EM fields with the thermal, mechanical, and quantum effects exhibited by superconducting materials. The use of variational methods to model an EM system allows for a greater range of applications than just the superconducting problem. The four-potential variational principle can be used to solve a broader range of EM problems than any of the currently available formulations. It also reduces the number of independent variables from six to four while easily dealing with conductor/insulator interfaces. This methodology was applied to a range of EM field problems. Results from all these problems predict EM quantities exceptionally well and are consistent with the expected physical behavior.

  10. The eigenfrequency spectrum of linear magnetohydrodynamic perturbations in stationary equilibria: A variational principle

    NASA Astrophysics Data System (ADS)

    Andries, Jesse

    2010-11-01

The frequencies of the normal modes of oscillation of linear magnetohydrodynamic perturbations of a stationary equilibrium are related to the stationary points of a quadratic functional over the Hilbert space of Lagrangian displacement vectors, which is subject to a constraint. In the absence of a background flow (or for a uniform flow), the relation reduces to the well-known Rayleigh-Ritz variational principle. In contrast to the existing variational principles for perturbations of stationary equilibria, the present treatment neither imposes additional symmetry restrictions on the equilibrium nor involves the generalization to bilinear functionals instead of quadratic forms. This allows a more natural interpretation of the quadratic forms as energy functionals.

  11. Finite-temperature Gutzwiller approximation from the time-dependent variational principle

    NASA Astrophysics Data System (ADS)

    Lanatà, Nicola; Deng, Xiaoyu; Kotliar, Gabriel

    2015-08-01

We develop an extension of the Gutzwiller approximation to finite temperatures based on the Dirac-Frenkel variational principle. Our method does not rely on any entropy inequality, and is substantially more accurate than the approaches proposed in previous works. We apply our theory to the single-band Hubbard model at different fillings, and show that our results compare quantitatively well with dynamical mean field theory in the metallic phase. We discuss potential applications of our technique within the framework of first-principles calculations.

  12. Dynamics of non-holonomic systems with stochastic transport

    NASA Astrophysics Data System (ADS)

    Holm, D. D.; Putkaradze, V.

    2018-01-01

    This paper formulates a variational approach for treating observational uncertainty and/or computational model errors as stochastic transport in dynamical systems governed by action principles under non-holonomic constraints. For this purpose, we derive, analyse and numerically study the example of an unbalanced spherical ball rolling under gravity along a stochastic path. Our approach uses the Hamilton-Pontryagin variational principle, constrained by a stochastic rolling condition, which we show is equivalent to the corresponding stochastic Lagrange-d'Alembert principle. In the example of the rolling ball, the stochasticity represents uncertainty in the observation and/or error in the computational simulation of the angular velocity of rolling. The influence of the stochasticity on the deterministically conserved quantities is investigated both analytically and numerically. Our approach applies to a wide variety of stochastic, non-holonomically constrained systems, because it preserves the mathematical properties inherited from the variational principle.

  13. Variational energy principle for compressible, baroclinic flow. 2: Free-energy form of Hamilton's principle

    NASA Technical Reports Server (NTRS)

    Schmid, L. A.

    1977-01-01

The first and second variations are calculated for the irreducible form of Hamilton's Principle that involves the minimum number of dependent variables necessary to describe the kinematics and thermodynamics of inviscid, compressible, baroclinic flow in a specified gravitational field. The form of the second variation shows that, in the neighborhood of a stationary point that corresponds to physically stable flow, the action integral is a complex saddle surface in parameter space. There exists a form of Hamilton's Principle for which a direct solution of a flow problem is possible. This second form is related to the first by a Friedrichs transformation of the thermodynamic variables. This introduces an extra dependent variable, but the first and second variations are shown to have direct physical significance, namely they are equal to the free energy of fluctuations about the equilibrium flow that satisfies the equations of motion. If this equilibrium flow is physically stable, and if a very weak second order integral constraint on the correlation between the fluctuations of otherwise independent variables is satisfied, then the second variation of the action integral for this free energy form of Hamilton's Principle is positive-definite, so the action integral is a minimum, and can serve as the basis for a direct trial-and-error solution. The second order integral constraint states that the unavailable energy must be maximum at equilibrium, i.e., the fluctuations must be so correlated as to produce a second order decrease in the total unavailable energy.

  14. Molecular Dynamics Simulation of the Thermophysical Properties of Quantum Liquid Helium Using the Feynman-Hibbs Potential

    NASA Astrophysics Data System (ADS)

    Liu, J.; Lu, W. Q.

    2010-03-01

This paper presents a detailed MD simulation of properties including the thermal conductivities and viscosities of quantum fluid helium at different state points. The molecular interactions are represented by Lennard-Jones pair potentials supplemented by quantum corrections following the Feynman-Hibbs approach, and the properties are calculated using the Green-Kubo equations. A comparison among the numerical results using the LJ and QFH potentials and the existing database shows that the LJ model is not quantitatively correct for supercritical liquid helium, so the quantum effect must be taken into account when quantum fluid helium is studied. The thermal conductivity is also compared as a function of temperature and pressure, and the results show that the quantum-effect correction is an efficient way to obtain thermal conductivities.
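The quadratic Feynman-Hibbs correction adds a term proportional to the Laplacian of the pair potential; a minimal sketch follows. The helium parameters below (ε/k_B = 10.22 K, σ = 2.556 Å) are representative literature values for a He-He Lennard-Jones fit, not necessarily those used in the paper:

```python
import numpy as np

# Quadratic Feynman-Hibbs effective pair potential:
#   U_FH(r) = U_LJ(r) + hbar^2 / (24 mu kB T) * Laplacian[U_LJ](r),
# where mu is the reduced mass of the pair (m/2 for identical atoms).
kB   = 1.380649e-23      # J/K
hbar = 1.054571817e-34   # J s
eps   = 10.22 * kB       # LJ well depth (J), representative He value
sigma = 2.556e-10        # LJ diameter (m), representative He value
m4    = 6.6464731e-27    # mass of 4He (kg)
mu    = m4 / 2.0         # reduced mass of an identical pair

def u_lj(r):
    s6 = (sigma / r)**6
    return 4.0 * eps * (s6**2 - s6)

def lap_u_lj(r):
    """Laplacian of the LJ potential: u'' + 2 u'/r."""
    s6, s12 = (sigma / r)**6, (sigma / r)**12
    return 4.0 * eps * (132.0 * s12 - 30.0 * s6) / r**2

def u_fh(r, T):
    """Quadratic Feynman-Hibbs effective potential at temperature T."""
    return u_lj(r) + hbar**2 / (24.0 * mu * kB * T) * lap_u_lj(r)

r = np.linspace(0.9 * sigma, 3.0 * sigma, 500)
well_lj = u_lj(r).min() / kB          # classical well depth in K
well_fh = u_fh(r, 10.0).min() / kB    # FH well depth at 10 K
print(f"well depth: LJ {well_lj:.2f} K  vs  FH(10 K) {well_fh:.2f} K")
```

The correction is strongly repulsive near the core and makes the effective well shallower, which is the qualitative quantum effect the abstract refers to; in an MD code `u_fh` simply replaces `u_lj` in the force loop.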

  15. Nonperturbative dynamics of scalar field theories through the Feynman-Schwinger representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cetin Savkli; Franz Gross; John Tjon

    2004-04-01

In this paper we present a summary of results obtained for scalar field theories using the Feynman-Schwinger representation (FSR) approach. Specifically, scalar QED and χ²φ theories are considered. The motivation behind the applications discussed in this paper is to use the FSR method as a rigorous tool for testing the quality of commonly used approximations in field theory. Exact calculations in a quenched theory are presented for one-, two-, and three-body bound states. Results obtained indicate that some of the commonly used approximations, such as the Bethe-Salpeter ladder summation for bound states and the rainbow summation for one body problems, produce significantly different results from those obtained from the FSR approach. We find that more accurate results can be obtained using other, simpler, approximation schemes.

  16. Finally making sense of the double-slit experiment.

    PubMed

    Aharonov, Yakir; Cohen, Eliahu; Colombo, Fabrizio; Landsberger, Tomer; Sabadini, Irene; Struppa, Daniele C; Tollaksen, Jeff

    2017-06-20

Feynman stated that the double-slit experiment "…has in it the heart of quantum mechanics. In reality, it contains the only mystery" and that "nobody can give you a deeper explanation of this phenomenon than I have given; that is, a description of it" [Feynman R, Leighton R, Sands M (1965) The Feynman Lectures on Physics]. We rise to the challenge with an alternative to the wave function-centered interpretations: instead of a quantum wave passing through both slits, we have a localized particle with nonlocal interactions with the other slit. Key to this explanation is dynamical nonlocality, which naturally appears in the Heisenberg picture as nonlocal equations of motion. This insight led us to develop an approach to quantum mechanics which relies on pre- and postselection, weak measurements, and deterministic and modular variables. We consider those properties of a single particle that are deterministic to be primal. The Heisenberg picture allows us to specify the most complete enumeration of such deterministic properties, in contrast to the Schrödinger wave function, which remains an ensemble property. We exercise this approach by analyzing a version of the double-slit experiment augmented with postselection, showing that only it and not the wave function approach can be accommodated within a time-symmetric interpretation, where interference appears even when the particle is localized. Although the Heisenberg and Schrödinger pictures are equivalent formulations, the framework presented here has led to insights, intuitions, and experiments that were missed from the old perspective.

  17. A survey of parametrized variational principles and applications to computational mechanics

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1993-01-01

This survey paper describes recent developments in the area of parametrized variational principles (PVP's) and selected applications to finite-element computational mechanics. A PVP is a variational principle containing free parameters that have no effect on the Euler-Lagrange equations. The theory of single-field PVP's based on gauge functions (also known as null Lagrangians) is a subset of the inverse problem of variational calculus that has limited value. On the other hand, multifield PVP's are more interesting from theoretical and practical standpoints. Following a tutorial introduction, the paper describes the recent construction of multifield PVP's in several areas of elasticity and electromagnetics. It then discusses three applications to finite-element computational mechanics: the derivation of high-performance finite elements, the development of element-level error indicators, and the construction of finite element templates. The paper concludes with an overview of open research areas.

  18. Derivation of a variational principle for plane strain elastic-plastic silk biopolymers

    NASA Astrophysics Data System (ADS)

    He, J. H.; Liu, F. J.; Cao, J. H.; Zhang, L.

    2014-01-01

Silk biopolymers, such as spider silk and Bombyx mori silk, always behave elastic-plastically. An elastic-plastic model is adopted, and a variational principle for the small strain, rate plasticity problem is established by the semi-inverse method. A trial Lagrangian is constructed that includes an unknown function, which can be identified step by step.

  19. Statistical mechanical theory for steady state systems. VI. Variational principles

    NASA Astrophysics Data System (ADS)

    Attard, Phil

    2006-12-01

Several variational principles that have been proposed for nonequilibrium systems are analyzed. These include the principle of minimum rate of entropy production due to Prigogine [Introduction to Thermodynamics of Irreversible Processes (Interscience, New York, 1967)], the principle of maximum rate of entropy production, which is common on the internet and in the natural sciences, two principles of minimum dissipation due to Onsager [Phys. Rev. 37, 405 (1931)] and to Onsager and Machlup [Phys. Rev. 91, 1505 (1953)], and the principle of maximum second entropy due to Attard [J. Chem. Phys. 122, 154101 (2005); Phys. Chem. Chem. Phys. 8, 3585 (2006)]. The approaches of Onsager and Attard are argued to be the only viable theories. These two are related, although their physical interpretation and mathematical approximations differ. A numerical comparison with computer simulation results indicates that Attard's expression is the only accurate theory. The implications for the Langevin and other stochastic differential equations are discussed.

  20. How to hit HIV where it hurts

    NASA Astrophysics Data System (ADS)

    Chakraborty, Arup

No medical procedure has saved more lives than vaccination. But today some pathogens have evolved that defy successful vaccination by the empirical paradigms pioneered by Pasteur and Jenner. One characteristic of many pathogens for which successful vaccines do not exist is that they present themselves in various guises. HIV is an extreme example because of its high mutability: the virus can evade natural or vaccine-induced immune responses, often by mutating at multiple sites linked by compensatory interactions. I will describe first how, by bringing to bear ideas from statistical physics (e.g., maximum entropy models, Hopfield models, Feynman variational theory) together with in vitro experiments and clinical data, the fitness landscape of HIV is beginning to be defined with explicit account for collective mutational pathways. I will describe how this knowledge can be harnessed for vaccine design. Finally, I will describe how ideas at the intersection of evolutionary biology, immunology, and statistical physics can help guide the design of strategies that may be able to induce broadly neutralizing antibodies.

  1. Spin models inferred from patient-derived viral sequence data faithfully describe HIV fitness landscapes

    NASA Astrophysics Data System (ADS)

    Shekhar, Karthik; Ruberman, Claire F.; Ferguson, Andrew L.; Barton, John P.; Kardar, Mehran; Chakraborty, Arup K.

    2013-12-01

Mutational escape from vaccine-induced immune responses has thwarted the development of a successful vaccine against AIDS, whose causative agent is HIV, a highly mutable virus. Knowing the virus's fitness as a function of its proteomic sequence can enable rational design of potent vaccines, as this information can focus vaccine-induced immune responses to target mutational vulnerabilities of the virus. Spin models have been proposed as a means to infer intrinsic fitness landscapes of HIV proteins from patient-derived viral protein sequences. These sequences are the product of nonequilibrium viral evolution driven by patient-specific immune responses and are subject to phylogenetic constraints. How can such sequence data allow inference of intrinsic fitness landscapes? We combined computer simulations and variational theory à la Feynman to show that, in most circumstances, spin models inferred from patient-derived viral sequences reflect the correct rank order of the fitness of mutant viral strains. Our findings are relevant for diverse viruses.
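The spin models in question assign each sequence an Ising-type energy whose rank order tracks fitness. A minimal sketch of that mapping, with invented fields and couplings (not inferred from any real sequence data):

```python
import numpy as np

# Sequences are mapped to binary strings s (0 = wild type, 1 = mutant) and
# assigned an energy E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j; lower
# energy means higher inferred fitness.  h and J below are illustrative.
rng = np.random.default_rng(0)
L = 8                                  # toy protein length
h = rng.normal(-1.0, 0.3, size=L)      # single-site fields (mutations costly)
J = np.triu(rng.normal(0.0, 0.2, size=(L, L)), 1)  # i < j couplings only

def energy(s):
    s = np.asarray(s, dtype=float)
    return -(h @ s) - s @ J @ s

wild_type     = np.zeros(L, dtype=int)
single_mutant = np.eye(L, dtype=int)[3]
double_mutant = single_mutant + np.eye(L, dtype=int)[5]

# Rank strains by model energy (lowest energy = fittest strain).
strains = {"wt": wild_type, "m3": single_mutant, "m3+m5": double_mutant}
ranked = sorted(strains, key=lambda k: energy(strains[k]))
print({k: round(float(energy(v)), 3) for k, v in strains.items()})
print("rank order (fittest first):", ranked)
```

The coupling term is what encodes the compensatory interactions between sites: a positive `J[i, j]` lowers the energy cost of carrying both mutations together relative to carrying them separately.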

  2. The Ghost of Electricity: A History of Electron Theory from 1897 to 1987.

    ERIC Educational Resources Information Center

    Adams, S. F.

    1988-01-01

    Discusses the history of electron theory from 1897 to 1987. Includes the works of some physicists, such as Thomson, Lorentz, De Broglie, Bohr, Pauli, Dirac, Feynman, Wheeler, Weinberg, and Salam. (YP)

  3. Energies of Screened Coulomb Potentials.

    ERIC Educational Resources Information Center

    Lai, C. S.

    1979-01-01

    This article shows that, by applying the Hellmann-Feynman theorem alone to screened Coulomb potentials, the first four coefficients in the energy series in powers of the perturbation parameter can be obtained from the unperturbed Coulomb system. (Author/HM)

  4. FIRST Quantum-(1980)-Computing DISCOVERY in Siegel-Rosen-Feynman-...A.-I. Neural-Networks: Artificial(ANN)/Biological(BNN) and Siegel FIRST Semantic-Web and Siegel FIRST ``Page''-``Brin'' ``PageRank'' PRE-Google Search-Engines!!!

    NASA Astrophysics Data System (ADS)

    Rosen, Charles; Siegel, Edward Carl-Ludwig; Feynman, Richard; Wunderman, Irwin; Smith, Adolph; Marinov, Vesco; Goldman, Jacob; Brine, Sergey; Poge, Larry; Schmidt, Erich; Young, Frederic; Goates-Bulmer, William-Steven; Lewis-Tsurakov-Altshuler, Thomas-Valerie-Genot; Ibm/Exxon Collaboration; Google/Uw Collaboration; Microsoft/Amazon Collaboration; Oracle/Sun Collaboration; Ostp/Dod/Dia/Nsa/W.-F./Boa/Ubs/Ub Collaboration

    2013-03-01

    Belew[Finding Out About, Cambridge(2000)] and separately full-decade pre-Page/Brin/Google FIRST Siegel-Rosen(Machine-Intelligence/Atherton)-Feynman-Smith-Marinov(Guzik Enterprises/Exxon-Enterprises/A.-I./Santa Clara)-Wunderman(H.-P.) [IBM Conf. on Computers and Mathematics, Stanford(1986); APS Mtgs.(1980s): Palo Alto/Santa Clara/San Francisco/...(1980s) MRS Spring-Mtgs.(1980s): Palo Alto/San Jose/San Francisco/...(1980-1992) FIRST quantum-computing via Bose-Einstein quantum-statistics(BEQS) Bose-Einstein CONDENSATION (BEC) in artificial-intelligence(A-I) artificial neural-networks(A-N-N) and biological neural-networks(B-N-N) and Siegel[J. Noncrystalline-Solids 40, 453(1980); Symp. on Fractals..., MRS Fall-Mtg., Boston(1989)-5-papers; Symp. on Scaling..., (1990); Symp. on Transport in Geometric-Constraint (1990)

  5. From Feynman rules to conserved quantum numbers, I

    NASA Astrophysics Data System (ADS)

    Nogueira, P.

    2017-05-01

    In the context of Quantum Field Theory (QFT) there is often the need to find sets of graph-like diagrams (the so-called Feynman diagrams) for a given physical model. If negative, the answer to the related problem 'Are there any diagrams with this set of external fields?' may settle certain physical questions at once. Here the latter problem is formulated in terms of a system of linear diophantine equations derived from the Lagrangian density, from which necessary conditions for the existence of the required diagrams may be obtained. Those conditions are equalities that look like either linear diophantine equations or linear modular (i.e. congruence) equations, and may be found by means of fairly simple algorithms that involve integer computations. The diophantine equations so obtained represent (particle) number conservation rules, and are related to the conserved (additive) quantum numbers that may be assigned to the fields of the model.

  6. Feynman variance for neutrons emitted from photo-fission initiated fission chains - a systematic simulation for selected special nuclear materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soltz, R. A.; Danagoulian, A.; Sheets, S.

    Theoretical calculations indicate that the Feynman variance, Y2F, of the distribution of neutrons emitted from fissionable material exhibits a strong monotonic dependence on the multiplication, M, of a quantity of special nuclear material. In 2012 we performed a series of measurements at the Passport Inc. facility using a 9-MeV bremsstrahlung CW beam of photons incident on small quantities of uranium with liquid scintillator detectors. For the set of objects studied we observed deviations from the expected monotonic dependence, and these deviations were later confirmed by MCNP simulations. In this report, we modify the theory to account for the contribution from the initial photo-fission and benchmark the new theory with a series of MCNP simulations on DU, LEU, and HEU objects spanning a wide range of masses and multiplication values.
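
    In its simplest form, the Feynman variance referred to above is the excess variance-to-mean ratio of neutron counts collected in equal time gates: Y = Var(n)/Mean(n) - 1, which vanishes for a Poisson source and grows with correlated fission-chain multiplicity. A minimal sketch on synthetic data (not the report's measurements or its photofission-corrected theory):

```python
import numpy as np

def feynman_y(counts):
    """Feynman-Y (excess variance-to-mean) statistic for neutron counts
    in equal time gates: Y = Var(n)/Mean(n) - 1. Y = 0 for Poisson
    counting; correlated fission chains push Y above zero."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean() - 1.0

# Synthetic uncorrelated (Poisson) gate counts as a null case.
rng = np.random.default_rng(1)
poisson_gates = rng.poisson(lam=4.0, size=200_000)
print(round(feynman_y(poisson_gates), 3))  # close to 0 for a Poisson source
```

    In measurements, Y is estimated as a function of gate width, and its asymptotic value is the quantity whose dependence on multiplication the report studies.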

  7. SV-map between type I and heterotic sigma models

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Fotopoulos, A.; Stieberger, S.; Taylor, T. R.

    2018-05-01

    The scattering amplitudes of gauge bosons in heterotic and open superstring theories are related by the single-valued projection which yields heterotic amplitudes by selecting a subset of multiple zeta value coefficients in the α′ (string tension parameter) expansion of open string amplitudes. In the present work, we argue that this relation holds also at the level of low-energy expansions (or individual Feynman diagrams) of the respective effective actions, by investigating the beta functions of two-dimensional sigma models describing world-sheets of open and heterotic strings. We analyze the sigma model Feynman diagrams generating identical effective action terms in both theories and show that the heterotic coefficients are given by the single-valued projection of the open ones. The single-valued projection appears as a result of summing over all radial orderings of heterotic vertices on the complex plane representing the string world-sheet.

  8. Higher-order gravitational lensing reconstruction using Feynman diagrams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkins, Elizabeth E.; Manohar, Aneesh V.; Yadav, Amit P.S.

    2014-09-01

    We develop a method for calculating the correlation structure of the Cosmic Microwave Background (CMB) using Feynman diagrams, when the CMB has been modified by gravitational lensing, Faraday rotation, patchy reionization, or other distorting effects. This method is used to calculate the bias of the Hu-Okamoto quadratic estimator in reconstructing the lensing power spectrum up to O(φ⁴) in the lensing potential φ. We consider both the diagonal noise TT TT, EB EB, etc. and, for the first time, the off-diagonal noise TT TE, TB EB, etc. The previously noted large O(φ⁴) term in the second-order noise is identified to come from a particular class of diagrams. It can be significantly reduced by a reorganization of the φ expansion. These improved estimators have almost no bias for the off-diagonal case involving only one B component of the CMB, such as EE EB.

  9. Quantization of gauge fields, graph polynomials and graph homology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreimer, Dirk, E-mail: kreimer@physik.hu-berlin.de; Sars, Matthias; Suijlekom, Walter D. van

    2013-09-15

    We review quantization of gauge fields using algebraic properties of 3-regular graphs. We derive the Feynman integrand at n loops for a non-abelian gauge theory quantized in a covariant gauge from scalar integrands for connected 3-regular graphs, obtained from the two Symanzik polynomials. The transition to the full gauge theory amplitude is obtained by the use of a third, new, graph polynomial, the corolla polynomial. This implies effectively a covariant quantization without ghosts, where all the relevant signs of the ghost sector are incorporated in a double complex furnished by the corolla polynomial (we call it cycle homology) and by graph homology. Highlights: • We derive gauge theory Feynman rules from scalar field theory with 3-valent vertices. • We clarify the role of graph homology and cycle homology. • We use parametric renormalization and the new corolla polynomial.

  10. An accurate European option pricing model under Fractional Stable Process based on Feynman Path Integral

    NASA Astrophysics Data System (ADS)

    Ma, Chao; Ma, Qinghua; Yao, Haixiang; Hou, Tiancheng

    2018-03-01

    In this paper, we propose to use the Fractional Stable Process (FSP) for option pricing. The FSP is one of the few candidates to directly model a number of desired empirical properties of asset price risk neutral dynamics. However, pricing the vanilla European option under FSP is difficult and problematic. In the paper, built upon the developed Feynman Path Integral inspired techniques, we present a novel computational model for option pricing, i.e. the Fractional Stable Process Path Integral (FSPPI) model under a general fractional stable distribution that tackles this problem. Numerical and empirical experiments show that the proposed pricing model provides a correction of the Black-Scholes pricing error - overpricing long term options, underpricing short term options; overpricing out-of-the-money options, underpricing in-the-money options without any additional structures such as stochastic volatility and a jump process.
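
    As a point of reference for the pricing errors discussed above, here is a minimal Monte Carlo pricer for a vanilla European call in the Gaussian (Black-Scholes) limit; the FSPPI model of the paper replaces the normal log-price increments below with fractional stable ones. All parameter values are invented for illustration.

```python
import numpy as np

def euro_call_mc(s0, k, r, sigma, t, n_paths=400_000, seed=2):
    """European call price by Monte Carlo under geometric Brownian
    motion, i.e. the alpha = 2 (Gaussian) limit of stable-process
    models; heavier-tailed stable increments change the prices."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Risk-neutral terminal price and discounted expected payoff.
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    return float(np.exp(-r * t) * np.maximum(st - k, 0.0).mean())

price = euro_call_mc(s0=100.0, k=100.0, r=0.01, sigma=0.2, t=1.0)
print(round(price, 2))  # near the closed-form Black-Scholes value (~8.43)
```

    The systematic deviations the abstract lists (overpricing long-dated and out-of-the-money options, underpricing short-dated and in-the-money ones) are deviations of this Gaussian baseline from market prices.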

  11. Fitting of Hadron Mass Spectra and Contributions to Perturbation Theory of Conformal Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Luna Acosta, German Aurelio

    The masses of observed hadrons are fitted according to the kinematic predictions of Conformal Relativity. The hypothesis gives a remarkably good fit. The isospin SU(2) gauge invariant Lagrangian L_πNN(x,λ) is used in the calculation of dσ/dΩ to 2nd-order Feynman graphs for simplified models of πN → πN. The resulting infinite mass sums over the nucleon (Conformal) families are done via the Generalized-Sommerfeld-Watson Transform Theorem. Even though the models are too simple to be realistic, they indicate that if Δ-internal lines were to be included, 2nd-order Feynman graphs may reproduce the experimental data qualitatively. The energy-dependence of the propagator and couplings in Conformal QFT is different from that of ordinary QFT. Suggestions for further work are made in the areas of ultraviolet divergences and OPE calculations.

  12. The electromigration force in metallic bulk

    NASA Astrophysics Data System (ADS)

    Lodder, A.; Dekker, J. P.

    1998-01-01

    The voltage induced driving force on a migrating atom in a metallic system is discussed in the perspective of the Hellmann-Feynman force concept, local screening concepts and the linear-response approach. Since the force operator is well defined in quantum mechanics it appears to be only confusing to refer to the Hellmann-Feynman theorem in the context of electromigration. Local screening concepts are shown to be mainly of historical value. The physics involved is completely represented in ab initio local density treatments of dilute alloys and the implementation does not require additional precautions about screening, being typical for jellium treatments. The linear-response approach is shown to be a reliable guide in deciding about the two contributions to the driving force, the direct force and the wind force. Results are given for the wind valence for electromigration in a number of FCC and BCC metals, calculated using an ab initio KKR-Green's function description of a dilute alloy.

  13. Variational Principles for Buckling of Microtubules Modeled as Nonlocal Orthotropic Shells

    PubMed Central

    2014-01-01

    A variational principle for microtubules subject to a buckling load is derived by the semi-inverse method. The microtubule is modeled as an orthotropic shell with the constitutive equations based on nonlocal elastic theory and the effect of the filament network taken into account as an elastic surrounding. Microtubules can carry large compressive forces by virtue of the mechanical coupling between the microtubules and the surrounding elastic filament network. The equations governing the buckling of the microtubule are given by a system of three partial differential equations. The problem studied in the present work involves the derivation of the variational formulation for microtubule buckling. The Rayleigh quotient for the buckling load as well as the natural and geometric boundary conditions of the problem are obtained from this variational formulation. It is observed that the boundary conditions are coupled as a result of the nonlocal formulation. It is noted that the analytic solution of the buckling problem for microtubules is usually a difficult task. The variational formulation of the problem provides the basis for a number of approximate and numerical methods of solution, and furthermore variational principles can provide physical insight into the problem. PMID:25214886
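
    The Rayleigh-quotient idea mentioned above can be sketched on the much simpler classical Euler pinned-pinned column (not the orthotropic-shell model of the paper): the buckling load is estimated as P ≈ ∫EI(w'')² dx / ∫(w')² dx for an admissible trial deflection w(x). Column properties below are hypothetical.

```python
import numpy as np

# Hypothetical pinned-pinned steel column: modulus E, second moment I, length L.
E, I, L = 200e9, 1e-8, 2.0
x = np.linspace(0.0, L, 20_001)
dx = x[1] - x[0]

w = np.sin(np.pi * x / L)      # trial shape satisfying w(0) = w(L) = 0
wp = np.gradient(w, x)         # w'
wpp = np.gradient(wp, x)       # w''

# Rayleigh quotient: strain energy of bending over the work of the axial load.
P_rq = (np.sum(E * I * wpp**2) * dx) / (np.sum(wp**2) * dx)
P_exact = np.pi**2 * E * I / L**2   # Euler load; this trial shape is exact
print(P_rq / P_exact)               # ratio ≈ 1
```

    With an inexact trial shape the quotient overestimates the buckling load, which is what makes it useful as an upper-bound approximation when, as in the microtubule problem, exact solutions are hard to obtain.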

  14. First principles molecular dynamics of molten NaI: Structure, self-diffusion, polarization effects, and charge transfer

    NASA Astrophysics Data System (ADS)

    Galamba, N.; Costa Cabral, B. J.

    2007-09-01

    The structure and self-diffusion of NaI and NaCl at temperatures close to their melting points are studied by first principles Hellmann-Feynman molecular dynamics (HFMD). The results are compared with classical MD using rigid-ion (RI) and shell-model (ShM) interionic potentials. HFMD for NaCl was reported before at a higher temperature [N. Galamba and B. J. Costa Cabral, J. Chem. Phys. 126, 124502 (2007)]. The main differences between the structures predicted by HFMD and RI MD for NaI concern the cation-cation and the anion-cation pair correlation functions. A ShM which allows only for the polarization of I- reproduces the main features of the HFMD structure of NaI. The inclusion of polarization effects for both ionic species leads to a more structured ionic liquid, although a good agreement with HFMD is also observed. HFMD Green-Kubo self-diffusion coefficients are larger than those obtained from RI and ShM simulations. A qualitative study of charge transfer in molten NaI and NaCl was also carried out with the Hirshfeld charge partitioning method. Charge transfer in molten NaI is comparable to that in NaCl, and results for NaCl at two temperatures support the view that the magnitude of charge transfer is weakly state dependent for ionic systems. Finally, Hirshfeld charge distributions indicate that differences between RI and HFMD results are mainly related to polarization effects, while the influence of charge transfer fluctuations is minimal for these systems.

  15. Variational principles for relativistic smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Monaghan, J. J.; Price, D. J.

    2001-12-01

    In this paper we show how the equations of motion for the smoothed particle hydrodynamics (SPH) method may be derived from a variational principle for both non-relativistic and relativistic motion when there is no dissipation. Because the SPH density is a function of the coordinates the derivation of the equations of motion through variational principles is simpler than in the continuum case where the density is defined through the continuity equation. In particular, the derivation of the general relativistic equations is more direct and simpler than that of Fock. The symmetry properties of the Lagrangian lead immediately to the familiar additive conservation laws of linear and angular momentum and energy. In addition, we show that there is an approximately conserved quantity which, in the continuum limit, is the circulation.
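
    The key simplification noted above is that the SPH density is an explicit function of the particle coordinates, ρ_i = Σ_j m_j W(|r_i − r_j|, h), so it can be varied directly in a Lagrangian. A one-dimensional sketch (kernel choice and parameters are illustrative, not from the paper):

```python
import numpy as np

def gaussian_kernel(r, h):
    """1-D Gaussian smoothing kernel, normalized to integrate to 1."""
    return np.exp(-((r / h) ** 2)) / (h * np.sqrt(np.pi))

def sph_density(x, m, h):
    """Summation density rho_i = sum_j m_j W(|x_i - x_j|, h).
    Because rho depends directly on coordinates, varying the Lagrangian
    yields equations of motion without invoking a continuity equation."""
    r = np.abs(x[:, None] - x[None, :])   # pairwise separations
    return gaussian_kernel(r, h) @ m

# Uniformly spaced particles with mass = spacing, so interior density ~ 1.
x = np.linspace(0.0, 10.0, 101)
m = np.full_like(x, 0.1)
rho = sph_density(x, m, h=0.3)
print(round(float(rho[50]), 3))  # interior density ≈ 1.0
```

    Near the domain ends the kernel sum is truncated and the density drops, which is the usual SPH boundary deficiency rather than a bug in the sketch.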

  16. Variational principle for the Navier-Stokes equations.

    PubMed

    Kerswell, R R

    1999-05-01

    A variational principle is presented for the Navier-Stokes equations in the case of a contained boundary-driven, homogeneous, incompressible, viscous fluid. Based upon making the fluid's total viscous dissipation over a given time interval stationary subject to the constraint of the Navier-Stokes equations, the variational problem looks overconstrained and intractable. However, introducing a nonunique velocity decomposition, u(x,t) = φ(x,t) + ν(x,t), "opens up" the variational problem so that what is presumed a single allowable point over the velocity domain u corresponding to the unique solution of the Navier-Stokes equations becomes a surface with a saddle point over the extended domain (φ,ν). Complementary or dual variational problems can then be constructed to estimate this saddle point value strictly from above as part of a minimization process or below via a maximization procedure. One of these reduced variational principles is the natural and ultimate generalization of the upper bounding problem developed by Doering and Constantin. The other corresponds to the ultimate Busse problem which now acts to lower bound the true dissipation. Crucially, these reduced variational problems require only the solution of a series of linear problems to produce bounds even though their unique intersection is conjectured to correspond to a solution of the nonlinear Navier-Stokes equations.

  17. A probabilistic Hu-Washizu variational principle

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Besterfield, G. H.

    1987-01-01

    A Probabilistic Hu-Washizu Variational Principle (PHWVP) for the Probabilistic Finite Element Method (PFEM) is presented. This formulation is developed for both linear and nonlinear elasticity. The PHWVP allows incorporation of the probabilistic distributions for the constitutive law, compatibility condition, equilibrium, domain and boundary conditions into the PFEM. Thus, a complete probabilistic analysis can be performed where all aspects of the problem are treated as random variables and/or fields. The Hu-Washizu variational formulation is available in many conventional finite element codes thereby enabling the straightforward inclusion of the probabilistic features into present codes.

  18. Light, Imaging, Vision: An interdisciplinary undergraduate course

    NASA Astrophysics Data System (ADS)

    Nelson, Philip

    Students in physical and life science, and in engineering, need to know about the physics and biology of light. In the 21st century, it has become increasingly clear that the quantum nature of light is essential both for the latest imaging modalities and even to advance our knowledge of fundamental processes, such as photosynthesis and human vision. But many optics courses remain rooted in classical physics, with photons as an afterthought. I'll describe a new undergraduate course, for students in several science and engineering majors, that takes students from the rudiments of probability theory to modern methods like fluorescence imaging and Förster resonance energy transfer. After a digression into color vision, students then see how the Feynman principle explains the apparently wavelike phenomena associated with light, including applications like the diffraction limit, subdiffraction imaging, total internal reflection and TIRF microscopy. Then we see how scientists documented the single-quantum sensitivity of the eye seven decades earlier than 'ought' to have been possible, and finally close with the remarkable signaling cascade that delivers such outstanding performance. A new textbook embodying this course will be published by Princeton University Press in Spring 2017. Partially supported by the United States National Science Foundation under Grant PHY-1601894.

  19. Mrst '96: Current Ideas in Theoretical Physics - Proceedings of the Eighteenth Annual Montréal-Rochester-Syracuse-Toronto Meeting

    NASA Astrophysics Data System (ADS)

    O'Donnell, Patrick J.; Smith, Brian Hendee

    1996-11-01

    The Table of Contents for the full book PDF is as follows: * Preface * Roberto Mendel, An Appreciation * The Infamous Coulomb Gauge * Renormalized Path Integral in Quantum Mechanics * New Analysis of the Divergence of Perturbation Theory * The Last of the Soluble Two Dimensional Field Theories? * Rb and Heavy Quark Mixing * Rb Problem: Loop Contributions and Supersymmetry * QCD Radiative Effects in Inclusive Hadronic B Decays * CP-Violating Dipole Moments of Quarks in the Kobayashi-Maskawa Model * Hints of Dynamical Symmetry Breaking? * Pi Pi Scattering in an Effective Chiral Lagrangian * Pion-Resonance Parameters from QCD Sum Rules * Higgs Theorem, Effective Action, and its Gauge Invariance * SUSY and the Decay H_2^0 to gg * Effective Higgs-to-Light Quark Coupling Induced by Heavy Quark Loops * Heavy Charged Lepton Production in Superstring Inspired E6 Models * The Elastic Properties of a Flat Crystalline Membrane * Gauge Dependence of Topological Observables in Chern-Simons Theory * Entanglement Entropy From Edge States * A Simple General Treatment of Flavor Oscillations * From Schrödinger to Maupertuis: Least Action Principles from Quantum Mechanics * The Matrix Method for Multi-Loop Feynman Integrals * Simplification in QCD and Electroweak Calculations * Programme * List of Participants

  20. Spin foam models for quantum gravity

    NASA Astrophysics Data System (ADS)

    Perez, Alejandro

    The definition of a quantum theory of gravity is explored following Feynman's path-integral approach. The aim is to construct a well defined version of the Wheeler-Misner-Hawking ``sum over four geometries'' formulation of quantum general relativity (GR). This is done by means of exploiting the similarities between the formulation of GR in terms of tetrad-connection variables (Palatini formulation) and a simpler theory called BF theory. One can go from BF theory to GR by imposing certain constraints on the BF-theory configurations. BF theory contains only global degrees of freedom (topological theory) and it can be exactly quantized à la Feynman introducing a discretization of the manifold. Using the path integral for BF theory we define a path integration for GR imposing the BF-to-GR constraints on the BF measure. The infinite degrees of freedom of gravity are restored in the process, and the restriction to a single discretization introduces a cutoff in the summed-over configurations. In order to capture all the degrees of freedom a sum over discretizations is implemented. Both the implementation of the BF-to-GR constraints and the sum over discretizations are obtained by means of the introduction of an auxiliary field theory (AFT). 4-geometries in the path integral for GR are given by the Feynman diagrams of the AFT which is in this sense dual to GR. Feynman diagrams correspond to 2-complexes labeled by unitary irreducible representations of the internal gauge group (corresponding to tetrad rotation in the connection to GR). A model for 4-dimensional Euclidean quantum gravity (QG) is defined which corresponds to a different normalization of the Barrett-Crane model. The model is perturbatively finite; divergences appearing in the Barrett-Crane model are cured by the new normalization. We extend our techniques to the Lorentzian sector, where we define two models for four-dimensional QG. The first one contains only time-like representations and is shown to be perturbatively finite. The second model contains both time-like and space-like representations. The spectrum of geometrical operators coincides with the prediction of the canonical approach of loop QG. At the moment, the convergence properties of the model are less understood and remain for future investigation.

  1. Irreversibility and entropy production in transport phenomena, III—Principle of minimum integrated entropy production including nonlinear responses

    NASA Astrophysics Data System (ADS)

    Suzuki, Masuo

    2013-01-01

    A new variational principle of steady states is found by introducing an integrated type of energy dissipation (or entropy production) instead of instantaneous energy dissipation. This new principle is valid both in linear and nonlinear transport phenomena. Prigogine's dream has now been realized by this new general principle of minimum "integrated" entropy production (or energy dissipation). This new principle does not contradict the Onsager-Prigogine principle of minimum instantaneous entropy production in the linear regime, but it is conceptually different from the latter, which does not hold in the nonlinear regime. Applications of this theory to electric conduction, heat conduction, particle diffusion and chemical reactions are presented. The irreversibility (or positive entropy production) and long time tail problem in Kubo's formula are also discussed in the Introduction and last section. This constitutes the complementary explanation of our theory of entropy production given in the previous papers (M. Suzuki, Physica A 390 (2011) 1904 and M. Suzuki, Physica A 391 (2012) 1074) and has given the motivation for the present investigation of the variational principle.

  2. Coupled structural, thermal, phase-change and electromagnetic analysis for superconductors, volume 2

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.; Farhat, Charbel; Park, K. C.; Militello, Carmelo; Schuler, James J.

    1993-01-01

    Two families of parametrized mixed variational principles for linear electromagnetodynamics are constructed. The first family is applicable when the current density distribution is known a priori. Its six independent fields are magnetic intensity and flux density, magnetic potential, electric intensity and flux density and electric potential. Through appropriate specialization of parameters the first principle reduces to more conventional principles proposed in the literature. The second family is appropriate when the current density distribution and a conjugate Lagrange multiplier field are adjoined, giving a total of eight independently varied fields. In this case it is shown that a conventional variational principle exists only in the time-independent (static) case. Several static functionals with reduced number of varied fields are presented. The application of one of these principles to construct finite elements with current prediction capabilities is illustrated with a numerical example.

  3. A Variational Principle for Reconstruction of Elastic Deformations in Shear Deformable Plates and Shells

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Spangler, Jan L.

    2003-01-01

    A variational principle is formulated for the inverse problem of full-field reconstruction of three-dimensional plate/shell deformations from experimentally measured surface strains. The formulation is based upon the minimization of a least squares functional that uses the complete set of strain measures consistent with linear, first-order shear-deformation theory. The formulation, which accommodates transverse shear deformation, is applicable for the analysis of thin and moderately thick plate and shell structures. The main benefit of the variational principle is that it is well suited for C(sup 0)-continuous displacement finite element discretizations, thus enabling the development of robust algorithms for application to complex civil and aeronautical structures. The methodology is especially aimed at the next generation of aerospace vehicles for use in real-time structural health monitoring systems.

  4. Averaged variational principle for autoresonant Bernstein-Greene-Kruskal modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khain, P.; Friedland, L.

    2010-10-15

    Whitham's averaged variational principle is applied in studying the dynamics of formation of autoresonant (continuously phase-locked) Bernstein-Greene-Kruskal (BGK) modes in a plasma driven by a chirped-frequency ponderomotive wave. A flat-top electron velocity distribution is used as a model allowing a variational formulation within the water-bag theory. The corresponding Lagrangian, averaged over the fast phase variable, yields evolution equations for the slow field variables, allows a uniform description of all stages of excitation of driven-chirped BGK modes, and predicts modulational stability of these nonlinear phase-space structures. Numerical solutions of the system of slow variational equations are in good agreement with Vlasov-Poisson simulations.

  5. LACED

    Science.gov Websites

    The Los Alamos Collaboration for Explosives Detection (LACED), part of the Feynman Center for Innovation at Los Alamos National Laboratory, is built upon Los Alamos' unparalleled explosives detection capabilities.

  6. Variational principles for dissipative (sub)systems, with applications to the theory of linear dispersion and geometrical optics

    DOE PAGES

    Dodin, I. Y.; Zhmoginov, A. I.; Ruiz, D. E.

    2017-02-24

    Applications of variational methods are typically restricted to conservative systems. Some extensions to dissipative systems have been reported too but require ad hoc techniques such as the artificial doubling of the dynamical variables. We propose a different approach. Here, we show that for a broad class of dissipative systems of practical interest, variational principles can be formulated using constant Lagrange multipliers and Lagrangians nonlocal in time, which allow treating reversible and irreversible dynamics on the same footing. A general variational theory of linear dispersion is formulated as an example. In particular, we present a variational formulation for linear geometrical optics in a general dissipative medium, which is allowed to be nonstationary, inhomogeneous, anisotropic, and exhibit both temporal and spatial dispersion simultaneously.

  7. Spin and gravitation

    NASA Technical Reports Server (NTRS)

    Ray, J. R.

    1982-01-01

    The fundamental variational principle for a perfect fluid in general relativity is extended so that it applies to the metric-torsion Einstein-Cartan theory. Field equations for a perfect fluid in the Einstein-Cartan theory are deduced. In addition, the equations of motion for a fluid with intrinsic spin in general relativity are deduced from a special relativistic variational principle. The theory is a direct extension of the theory of nonspinning fluids in special relativity.

  8. Variation of fundamental constants on sub- and super-Hubble scales: From the equivalence principle to the multiverse

    NASA Astrophysics Data System (ADS)

    Uzan, Jean-Philippe

    2013-02-01

    Fundamental constants play a central role in many modern developments in gravitation and cosmology. Most extensions of general relativity lead to the conclusion that dimensionless constants are actually dynamical fields. Any detection of their variation on sub-Hubble scales would signal a violation of the Einstein equivalence principle and hence lead to gravity beyond general relativity. On super-Hubble scales, or maybe should we say on super-universe scales, such variations are invoked as a solution to the fine-tuning problem, in connection with an anthropic approach.

  9. The biological default state of cell proliferation with variation and motility, a fundamental principle for a theory of organisms.

    PubMed

    Soto, Ana M; Longo, Giuseppe; Montévil, Maël; Sonnenschein, Carlos

    2016-10-01

    The principle of inertia is central to the modern scientific revolution. By postulating this principle Galileo at once identified a pertinent physical observable (momentum) and a conservation law (momentum conservation). He then could scientifically analyze what modifies inertial movement: gravitation and friction. Inertia, the default state in mechanics, represented a major theoretical commitment: there is no need to explain uniform rectilinear motion, rather, there is a need to explain departures from it. By analogy, we propose a biological default state of proliferation with variation and motility. From this theoretical commitment, what requires explanation is proliferative quiescence, lack of variation, lack of movement. That proliferation is the default state is axiomatic for biologists studying unicellular organisms. Moreover, it is implied in Darwin's "descent with modification". Although a "default state" is a theoretical construct and a limit case that does not need to be instantiated, conditions that closely resemble unrestrained cell proliferation are readily obtained experimentally. We will illustrate theoretical and experimental consequences of applying and of ignoring this principle.

  10. The biological default state of cell proliferation with variation and motility, a fundamental principle for a theory of organisms

    PubMed Central

    SOTO, ANA M.; LONGO, GIUSEPPE; Montévil, Maël; SONNENSCHEIN, CARLOS

    2017-01-01

    The principle of inertia is central to the modern scientific revolution. By postulating this principle Galileo at once identified a pertinent physical observable (momentum) and a conservation law (momentum conservation). He then could scientifically analyze what modifies inertial movement: gravitation and friction. Inertia, the default state in mechanics, represented a major theoretical commitment: there is no need to explain uniform rectilinear motion, rather, there is a need to explain departures from it. By analogy, we propose a biological default state of proliferation with variation and motility. From this theoretical commitment, what requires explanation is proliferative quiescence, lack of variation, lack of movement. That proliferation is the default state is axiomatic for biologists studying unicellular organisms. Moreover, it is implied in Darwin’s “descent with modification”. Although a “default state” is a theoretical construct and a limit case that does not need to be instantiated, conditions that closely resemble unrestrained cell proliferation are readily obtained experimentally. We will illustrate theoretical and experimental consequences of applying and of ignoring this principle. PMID:27381480

  11. Genetics and variation

    Treesearch

    John R. Jones; Norbert V. DeByle

    1985-01-01

    The broad genotypic variability in quaking aspen (Populus tremuloides Michx.) that results in equally broad phenotypic variability among clones is important to the ecology and management of this species. This chapter considers principles of aspen genetics and variation, variation in aspen over its range, and local variation among clones. For a more...

  12. Modular operads and the quantum open-closed homotopy algebra

    NASA Astrophysics Data System (ADS)

    Doubek, Martin; Jurčo, Branislav; Münster, Korbinian

    2015-12-01

    We verify that certain algebras appearing in string field theory are algebras over the Feynman transform of modular operads, which we describe explicitly. An equivalent description in terms of solutions of generalized BV master equations is explained from the operadic point of view.

  13. The Principle of the Fermionic Projector: An Approach for Quantum Gravity?

    NASA Astrophysics Data System (ADS)

    Finster, Felix

    In this short article we introduce the mathematical framework of the principle of the fermionic projector and set up a variational principle in discrete space-time. The underlying physical principles are discussed. We outline the connection to the continuum theory and state recent results. In the last two sections, we speculate on how it might be possible to describe quantum gravity within this framework.

  14. Speed of pulled fronts with a cutoff

    NASA Astrophysics Data System (ADS)

    Benguria, R. D.; Depassier, M. C.

    2007-05-01

    We study the effect of a small cutoff γ on the velocity of a pulled front in one dimension by means of a variational principle. We obtain a lower bound on the speed dependent on the cutoff, for which the two leading order terms correspond to the Brunet-Derrida expression. To do so we cast a known variational principle for the speed of propagation of fronts in different variables which makes it more suitable for applications.
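    For reference, the Brunet-Derrida expression alluded to above is commonly written, for the FKPP equation in units where the cutoff-free pulled speed is 2, as v(γ) ≈ 2 − π²/ln²γ. A minimal numerical sketch under that assumption (the function name is illustrative, not from the paper):

```python
import math

def fkpp_front_speed(gamma):
    """Two leading orders of the Brunet-Derrida speed for a pulled FKPP
    front with a small cutoff gamma on the growth term, in units where
    the cutoff-free pulled speed is 2: v(gamma) ~ 2 - pi^2 / ln(gamma)^2."""
    return 2.0 - math.pi ** 2 / math.log(gamma) ** 2

# The shift decays only logarithmically as the cutoff is reduced:
speeds = [fkpp_front_speed(g) for g in (1e-4, 1e-8, 1e-12)]
```

The logarithmic dependence is the striking feature: shrinking the cutoff by eight orders of magnitude still leaves a correction of order one percent.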

  15. Unconventional Hamilton-type variational principle in phase space and symplectic algorithm

    NASA Astrophysics Data System (ADS)

    Luo, En; Huang, Weijiang; Zhang, Hexin

    2003-06-01

    Using a novel approach proposed by Luo, the unconventional Hamilton-type variational principle in phase space for the elastodynamics of a multi-degree-of-freedom system is established in this paper. It not only fully characterizes the initial-value problem of these dynamics, but also has a natural symplectic structure. Based on this variational principle, a symplectic algorithm called the symplectic time-subdomain method is proposed. A non-difference scheme is constructed by applying Lagrange interpolation polynomials to the time subdomain. Furthermore, it is also proved that the presented symplectic algorithm is unconditionally stable. From the results of two numerical examples of different types, it can be seen that the accuracy and computational efficiency of the new method clearly exceed those of the widely used Wilson-θ and Newmark-β methods. Therefore, the new algorithm is a highly efficient one with better computational performance.
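    The paper's symplectic time-subdomain method is not reproduced here; as a minimal illustration of why a symplectic structure pays off numerically, the sketch below contrasts symplectic Euler with explicit Euler on a harmonic oscillator (standard textbook schemes, not the authors' algorithm):

```python
import math

def explicit_euler(q, p, dt, steps, omega=1.0):
    # Non-symplectic: the energy error grows systematically with time.
    for _ in range(steps):
        q, p = q + dt * p, p - dt * omega**2 * q
    return q, p

def symplectic_euler(q, p, dt, steps, omega=1.0):
    # Symplectic: update p first, then q with the *new* p.
    for _ in range(steps):
        p = p - dt * omega**2 * q
        q = q + dt * p
    return q, p

def energy(q, p, omega=1.0):
    return 0.5 * p**2 + 0.5 * omega**2 * q**2

q0, p0 = 1.0, 0.0
e0 = energy(q0, p0)
e_exp = energy(*explicit_euler(q0, p0, 0.01, 10_000))
e_sym = energy(*symplectic_euler(q0, p0, 0.01, 10_000))
# Symplectic Euler keeps the energy error bounded; explicit Euler does not.
```

After 10,000 steps the explicit scheme has inflated the energy by tens of percent, while the symplectic scheme stays within a bounded band around the exact value; this bounded long-time behavior is the property the abstract's unconditional-stability claim trades on.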

  16. First principles view on chemical compound space: Gaining rigorous atomistic control of molecular properties

    DOE PAGES

    von Lilienfeld, O. Anatole

    2013-02-26

    A well-defined notion of chemical compound space (CCS) is essential for gaining rigorous control of properties through variation of elemental composition and atomic configurations. Here, we give an introduction to an atomistic first principles perspective on CCS. First, CCS is discussed in terms of variational nuclear charges in the context of conceptual density functional and molecular grand-canonical ensemble theory. Thereafter, we revisit the notion of compound pairs, related to each other via “alchemical” interpolations involving fractional nuclear charges in the electronic Hamiltonian. We address Taylor expansions in CCS, property nonlinearity, improved predictions using reference compound pairs, and the ounce-of-gold prize challenge to linearize CCS. Finally, we turn to machine learning of analytical structure-property relationships in CCS. Here, these relationships correspond to inferred, rather than derived through a variational principle, solutions of the electronic Schrödinger equation.

  17. Kinetic parameters of the GUINEVERE reference configuration in VENUS-F reactor obtained from a pile noise experiment using Rossi and Feynman methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geslot, Benoit; Pepino, Alexandra; Blaise, Patrick

    A pile noise measurement campaign was conducted by the CEA in the VENUS-F reactor (SCK-CEN, Mol, Belgium) in April 2011 in the reference critical configuration of the GUINEVERE experimental program. The experimental setup made it possible to estimate the core kinetic parameters: the prompt neutron decay constant, the delayed neutron fraction, and the generation time. A precise assessment of these constants is of prime importance. In particular, the effective delayed neutron fraction is used to normalize and compare calculated reactivities of different subcritical configurations, obtained by modifying either the core layout or the control rods position, with experimental ones deduced from the analysis of measurements. This paper presents results obtained with a CEA-developed time stamping acquisition system. Data were analyzed using the Rossi-α and Feynman-α methods. Results were normalized to reactor power using a calibrated fission chamber with a deposit of Np-237. Calculated factors were necessary for the analysis: the Diven factor was computed by the ENEA (Italy) and the power calibration factor by the CNRS/IN2P3/LPC Caen. Results deduced with both methods are consistent with respect to calculated quantities. Recommended values are given by the Rossi-α estimator, which was found to be the most robust. The neutron generation time was found equal to 0.438 ± 0.009 μs and the effective delayed neutron fraction is 765 ± 8 pcm. Discrepancies with the calculated value (722 pcm, calculation from ENEA) are satisfactory: -5.6% for the Rossi-α estimate and -2.7% for the Feynman-α estimate. (authors)
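    The Feynman-α method mentioned above fits the excess variance-to-mean ratio of neutron counts collected in gates of width T to the point-kinetics curve Y(T) = Y∞(1 − (1 − e^{−αT})/(αT)), with the fitted α giving the prompt neutron decay constant. A minimal sketch of both ingredients (synthetic, uncorrelated counts only; all parameter values are illustrative, not taken from the experiment):

```python
import math
import random

def feynman_y(counts):
    """Excess variance-to-mean ratio Y = Var/Mean - 1 of gate counts.
    For a purely Poissonian (uncorrelated) source Y -> 0; correlated
    fission chains push Y above 0."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean - 1.0

def feynman_y_model(T, y_inf, alpha):
    """Point-kinetics Feynman-Y curve Y(T) = Y_inf*(1 - (1 - e^{-aT})/(aT))."""
    return y_inf * (1.0 - (1.0 - math.exp(-alpha * T)) / (alpha * T))

# Sanity check on synthetic *uncorrelated* counts: Y should sit near 0.
rng = random.Random(0)
gates = [sum(rng.random() < 0.01 for _ in range(1000)) for _ in range(5000)]
y_uncorrelated = feynman_y(gates)
```

In a real analysis one would histogram time-stamped detector events into gates of increasing width and fit `feynman_y_model` to the measured Y(T) values.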

  18. Salecker-Wigner-Peres clock, Feynman paths, and a tunneling time that should not exist

    NASA Astrophysics Data System (ADS)

    Sokolovski, D.

    2017-08-01

    The Salecker-Wigner-Peres (SWP) clock is often used to determine the duration a quantum particle is supposed to spend in a specified region of space Ω . By construction, the result is a real positive number, and the method seems to avoid the difficulty of introducing complex time parameters, which arises in the Feynman paths approach. However, it tells little about the particle's motion. We investigate this matter further, and show that the SWP clock, like any other Larmor clock, correlates the rotation of its angular momentum with the durations τ , which the Feynman paths spend in Ω , thereby destroying interference between different durations. An inaccurate weakly coupled clock leaves the interference almost intact, and the need to resolve the resulting "which way?" problem is one of the main difficulties at the center of the "tunnelling time" controversy. In the absence of a probability distribution for the values of τ , the SWP results are expressed in terms of moduli of the "complex times," given by the weighted sums of the corresponding probability amplitudes. It is shown that overinterpretation of these results, by treating the SWP times as physical time intervals, leads to paradoxes and should be avoided. We also analyze various settings of the SWP clock, different calibration procedures, and the relation between the SWP results and the quantum dwell time. The cases of stationary tunneling and tunnel ionization are considered in some detail. Although our detailed analysis addresses only one particular definition of the duration of a tunneling process, it also points towards the impossibility of uniting various time parameters, which may occur in quantum theory, within the concept of a single tunnelling time.

  19. Fermilab Today

    Science.gov Websites


  20. The Coupling of Gravity to Spin and Electromagnetism

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Smoller, Joel; Yau, Shing-Tung

    The coupled Einstein-Dirac-Maxwell equations are considered for a static, spherically symmetric system of two fermions in a singlet spinor state. Stable soliton-like solutions are shown to exist, and we discuss the regularizing effect of gravity from a Feynman diagram point of view.

  1. Computing the qg → qg cross section using the BCFW recursion and introduction to jet tomography in heavy ion collisions via MHV techniques

    NASA Astrophysics Data System (ADS)

    Rabemananajara, Tanjona R.; Horowitz, W. A.

    2017-09-01

    To make predictions for particle physics processes, one has to compute the cross section of the specific process, as this is what one can measure in a modern collider experiment such as the Large Hadron Collider (LHC) at CERN. It has proven extremely difficult to compute scattering amplitudes using conventional Feynman-diagram methods. Calculations with Feynman diagrams are realizations of a perturbative expansion, and one has to set up all topologically distinct diagrams for a given process up to a given order in the coupling of the theory. This quickly makes the calculation of scattering amplitudes a hot mess. Fortunately, one can simplify the calculation by considering the helicity amplitudes for the Maximally Helicity Violating (MHV) configurations. This can be extended to the formalism of on-shell recursion, which derives, in a much simpler way, the expression for a higher-order scattering amplitude from lower orders.
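    The MHV simplification referred to above is usually summarized by the Parke-Taylor formula; as a reference point (standard spinor-helicity conventions assumed, not quoted from this abstract), the color-ordered n-gluon tree amplitude with negative-helicity gluons i and j reads:

```latex
A_n\!\left(1^+,\dots,i^-,\dots,j^-,\dots,n^+\right)
  = \frac{\langle i\,j\rangle^{4}}{\langle 1\,2\rangle\,\langle 2\,3\rangle\cdots\langle n\,1\rangle}
```

Tree amplitudes with fewer than two negative helicities vanish, which is what makes the MHV amplitudes the natural seeds for on-shell (BCFW) recursion.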

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jing-Yuan, E-mail: chjy@uchicago.edu; Stanford Institute for Theoretical Physics, Stanford University, CA 94305; Son, Dam Thanh, E-mail: dtson@uchicago.edu

    We develop an extension of the Landau Fermi liquid theory to systems of interacting fermions with non-trivial Berry curvature. We propose a kinetic equation and a constitutive relation for the electromagnetic current that together encode the linear response of such systems to external electromagnetic perturbations, to leading and next-to-leading orders in the expansion over the frequency and wave number of the perturbations. We analyze the Feynman diagrams in a large class of interacting quantum field theories and show that, after summing up all orders in perturbation theory, the current–current correlator exactly matches the result obtained from the kinetic theory. Highlights: • We extend Landau’s kinetic theory of the Fermi liquid to incorporate the Berry phase. • Berry-phase effects in a Fermi liquid take exactly the same form as in a Fermi gas. • There is a new “emergent electric dipole” contribution to the anomalous Hall effect. • Our kinetic theory is matched to field theory to all orders in Feynman diagrams.

  3. Interactions as intertwiners in 4D QFT

    NASA Astrophysics Data System (ADS)

    de Mello Koch, Robert; Ramgoolam, Sanjaye

    2016-03-01

    In a recent paper we showed that the correlators of free scalar field theory in four dimensions can be constructed from a two-dimensional topological field theory based on so(4,2) equivariant maps (intertwiners). The free field result, along with recent results of Frenkel and Libine on equivariance properties of Feynman integrals, are developed further in this paper. We show that the coefficient of the log term in the 1-loop 4-point conformal integral is a projector in the tensor product of so(4,2) representations. We also show that the 1-loop 4-point integral can be written as a sum of four terms, each associated with the quantum equation of motion for one of the four external legs. The quantum equation of motion is shown to be related to equivariant maps involving indecomposable representations of so(4,2), a phenomenon which illuminates multiplet recombination. The harmonic expansion method for Feynman integrals is a powerful tool for arriving at these results. The generalization to other interactions and higher loops is discussed.

  4. On a three-dimensional symmetric Ising tetrahedron and contributions to the theory of the dilogarithm and Clausen functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coffey, Mark W.

    2008-04-15

    Perturbative quantum field theory for the Ising model at the three-loop level yields a tetrahedral Feynman diagram C(a,b) with masses a and b and four other lines with unit mass. The completely symmetric tetrahedron C^Tet ≡ C(1,1) has been of interest from many points of view, with several representations and conjectures having been given in the literature. We prove a conjectured exponentially fast convergent sum for C(1,1), as well as a previously empirical relation for C(1,1) as a remarkable difference of Clausen function values. Our presentation includes propositions extending the theory of the dilogarithm Li_2 and Clausen Cl_2 functions, as well as their relation to other special functions of mathematical physics. The results strengthen connections between Feynman diagram integrals, volumes in hyperbolic space, number theory, and special functions and numbers, specifically including dilogarithms, Clausen function values, and harmonic numbers.

  5. Stochastic, real-space, imaginary-time evaluation of third-order Feynman-Goldstone diagrams

    NASA Astrophysics Data System (ADS)

    Willow, Soohaeng Yoo; Hirata, So

    2014-01-01

    A new, alternative set of interpretation rules of Feynman-Goldstone diagrams for many-body perturbation theory is proposed, which translates diagrams into algebraic expressions suitable for direct Monte Carlo integrations. A vertex of a diagram is associated with a Coulomb interaction (rather than a two-electron integral) and an edge with the trace of a Green's function in real space and imaginary time. With these, 12 diagrams of third-order many-body perturbation (MP3) theory are converted into 20-dimensional integrals, which are then evaluated by a Monte Carlo method. It uses redundant walkers for convergence acceleration and a weight function for importance sampling in conjunction with the Metropolis algorithm. The resulting Monte Carlo MP3 method has low-rank polynomial size dependence of the operation cost, a negligible memory cost, and a naturally parallel computational kernel, while reproducing the correct correlation energies of small molecules within a few mEh after 10^6 Monte Carlo steps.
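    The stochastic machinery described above (Metropolis sampling against a weight function) is generic; below is a minimal, self-contained sketch of that pattern on a toy one-dimensional expectation value, not the authors' 20-dimensional MP3 integrand (all names and parameters are illustrative):

```python
import math
import random

def metropolis_expectation(f, log_w, x0=0.0, steps=200_000,
                           step_size=1.0, burn=1_000, seed=1):
    """Estimate E_w[f] by Metropolis sampling of the (unnormalized) weight w.

    log_w is the log of the weight function; its normalization never
    enters, which is the point of the Metropolis algorithm.
    """
    rng = random.Random(seed)
    x, lw = x0, log_w(x0)
    total, count = 0.0, 0
    for i in range(steps):
        y = x + rng.uniform(-step_size, step_size)   # symmetric proposal
        lw_y = log_w(y)
        # Accept with probability min(1, w(y)/w(x)).
        if lw_y >= lw or rng.random() < math.exp(lw_y - lw):
            x, lw = y, lw_y
        if i >= burn:
            total += f(x)
            count += 1
    return total / count

# Toy check: with a standard-normal weight, E[x^2] should come out near 1.
est = metropolis_expectation(lambda x: x * x, lambda x: -0.5 * x * x)
```

In the MP3 context the weight function would be chosen to resemble the magnitude of the diagram integrand so that the ratio integrand/weight has low variance; the pattern above is otherwise unchanged.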

  6. Simple prescription for computing the interparticle potential energy for D-dimensional gravity systems

    NASA Astrophysics Data System (ADS)

    Accioly, Antonio; Helayël-Neto, José; Barone, F. E.; Herdy, Wallace

    2015-02-01

    A straightforward prescription for computing the D-dimensional potential energy of gravitational models, which is strongly based on the Feynman path integral, is built up. Using this method, the static potential energy for the interaction of two masses is found in the context of D-dimensional higher-derivative gravity models, and its behavior is analyzed afterwards in both ultraviolet and infrared regimes. As a consequence, two new gravity systems in which the potential energy is finite at the origin, respectively, in D = 5 and D = 6, are found. Since the aforementioned prescription is equivalent to that based on the marriage between quantum mechanics (to leading order, i.e., in the first Born approximation) and the nonrelativistic limit of quantum field theory, and bearing in mind that the latter relies basically on the calculation of the nonrelativistic Feynman amplitude (M_NR), a trivial expression for computing M_NR is obtained from our prescription as an added bonus.
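    The "marriage" referred to above is the standard Born-approximation link between amplitudes and potentials; schematically (up to normalization conventions, which vary by reference), the static potential energy in D spacetime dimensions is the Fourier transform of the nonrelativistic amplitude over the (D−1)-dimensional momentum transfer:

```latex
E(\mathbf{r}) \;\propto\; \int \frac{d^{\,D-1}q}{(2\pi)^{D-1}}\;
  e^{\,i\,\mathbf{q}\cdot\mathbf{r}}\; \mathcal{M}_{\mathrm{NR}}(\mathbf{q})
```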

  7. Test on the effectiveness of the sum over paths approach in favoring the construction of an integrated knowledge of quantum physics in high school

    NASA Astrophysics Data System (ADS)

    Malgieri, Massimiliano; Onorato, Pasquale; De Ambrosis, Anna

    2017-06-01

    In this paper we present the results of a research-based teaching-learning sequence on introductory quantum physics based on Feynman's sum over paths approach in the Italian high school. Our study focuses on students' understanding of two founding ideas of quantum physics, wave-particle duality and the uncertainty principle. In view of recent research reporting the fragmentation of students' mental models of quantum concepts after initial instruction, we collected and analyzed data using the assessment tools provided by knowledge integration theory. Our results on the group of n = 14 students who performed the final test indicate that the functional explanation of wave-particle duality provided by the sum over paths approach may be effective in leading students to build consistent mental models of quantum objects, and in providing them with a unified perspective on both the photon and the electron. Results on the uncertainty principle are less clear cut, as the improvements over traditional instruction appear less significant. Given the low number of students in the sample, this work should be interpreted as a case study, and we do not attempt to draw definitive conclusions. However, our study suggests that (i) the sum over paths approach may deserve more attention from researchers and educators as a possible route to introduce basic concepts of quantum physics in high school, and (ii) more research should be focused not only on the correctness of students' mental models on individual concepts, but also on the ability of students to connect different ideas and experiments related to quantum theory in an organized whole.

  8. Analysis of superconducting electromagnetic finite elements based on a magnetic vector potential variational principle

    NASA Technical Reports Server (NTRS)

    Schuler, James J.; Felippa, Carlos A.

    1991-01-01

    Electromagnetic finite elements are extended based on a variational principle that uses the electromagnetic four potential as primary variable. The variational principle is extended to include the ability to predict a nonlinear current distribution within a conductor. The extension of this theory is first done on a normal conductor and tested on two different problems. In both problems, the geometry remains the same, but the material properties are different. The geometry is that of a 1-D infinite wire. The first problem is merely a linear control case used to validate the new theory. The second problem is made up of linear conductors with varying conductivities. Both problems perform well and predict current densities that are accurate to within a few ten thousandths of a percent of the exact values. The fourth potential is then removed, leaving only the magnetic vector potential, and the variational principle is further extended to predict magnetic potentials, magnetic fields, the number of charge carriers, and the current densities within a superconductor. The new element produces good results for the mean magnetic field, the vector potential, and the number of superconducting charge carriers despite a relatively high system condition number. The element did not perform well in predicting the current density. Numerical problems inherent to this formulation are explored and possible remedies to produce better current predicting finite elements are presented.

  9. Thresholds of Principle and Preference: Exploring Procedural Variation in Postgraduate Surgical Education.

    PubMed

    Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei

    2015-11-01

    Expert physicians develop their own ways of doing things. The influence of such practice variation in clinical learning is insufficiently understood. Our grounded theory study explored how residents make sense of, and behave in relation to, the procedural variations of faculty surgeons. We sampled senior postgraduate surgical residents to construct a theoretical framework for how residents make sense of procedural variations. Using a constructivist grounded theory approach, we used marginal participant observation in the operating room across 56 surgical cases (146 hours), field interviews (38), and formal interviews (6) to develop a theoretical framework for residents' ways of dealing with procedural variations. Data analysis used constant comparison to iteratively refine the framework and data collection until theoretical saturation was reached. The core category of the constructed theory was called thresholds of principle and preference and it captured how faculty members position some procedural variations as negotiable and others not. The term thresholding was coined to describe residents' daily experiences of spotting, mapping, and negotiating their faculty members' thresholds and defending their own emerging thresholds. Thresholds of principle and preference play a key role in workplace-based medical education. Postgraduate medical learners are occupied on a day-to-day level with thresholding and attempting to make sense of the procedural variations of faculty. Workplace-based teaching and assessment should include an understanding of the integral role of thresholding in shaping learners' development. Future research should explore the nature and impact of thresholding in workplace-based learning beyond the surgical context.

  10. Unified reduction principle for the evolution of mutation, migration, and recombination

    PubMed Central

    Altenberg, Lee; Liberman, Uri; Feldman, Marcus W.

    2017-01-01

    Modifier-gene models for the evolution of genetic information transmission between generations of organisms exhibit the reduction principle: Selection favors reduction in the rate of variation production in populations near equilibrium under a balance of constant viability selection and variation production. Whereas this outcome has been proven for a variety of genetic models, it has not been proven in general for multiallelic genetic models of mutation, migration, and recombination modification with arbitrary linkage between the modifier and major genes under viability selection. We show that the reduction principle holds for all of these cases by developing a unifying mathematical framework that characterizes all of these evolutionary models. PMID:28265103

  11. Force-field functor theory: classical force-fields which reproduce equilibrium quantum distributions

    PubMed Central

    Babbush, Ryan; Parkhill, John; Aspuru-Guzik, Alán

    2013-01-01

    Feynman and Hibbs were the first to variationally determine an effective potential whose associated classical canonical ensemble approximates the exact quantum partition function. We examine the existence of a map between the local potential and an effective classical potential which matches the exact quantum equilibrium density and partition function. The usefulness of such a mapping rests in its ability to readily improve Born-Oppenheimer potentials for use with classical sampling. We show that such a map is unique and must exist. To explore the feasibility of using this result to improve classical molecular mechanics, we numerically produce a map from a library of randomly generated one-dimensional potential/effective potential pairs then evaluate its performance on independent test problems. We also apply the map to simulate liquid para-hydrogen, finding that the resulting radial pair distribution functions agree well with path integral Monte Carlo simulations. The surprising accessibility and transferability of the technique suggest a quantitative route to adapting Born-Oppenheimer potentials, with a motivation similar in spirit to the powerful ideas and approximations of density functional theory. PMID:24790954
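    The Feynman-Hibbs construction mentioned above is most often quoted at lowest order as the quadratic effective potential V_eff(x) = V(x) + βħ²V″(x)/(24m). A toy sketch checking that form on a harmonic oscillator (assuming that standard expression and units m = ħ = ω = 1; all helper names are illustrative):

```python
import math

def v_feynman_hibbs(v, d2v, x, beta, m=1.0, hbar=1.0):
    """Lowest-order (quadratic) Feynman-Hibbs effective potential:
    V_eff(x) = V(x) + beta * hbar^2 / (24 m) * V''(x)."""
    return v(x) + beta * hbar**2 / (24.0 * m) * d2v(x)

def z_classical(beta, v_of_x, dx=1e-3, lim=10.0):
    """1-D classical partition function: configurational sum times the
    Gaussian momentum integral (m = hbar = 1)."""
    conf = sum(math.exp(-beta * v_of_x(i * dx))
               for i in range(-int(lim / dx), int(lim / dx) + 1)) * dx
    return conf * math.sqrt(1.0 / (2.0 * math.pi * beta))

# Harmonic oscillator V = x^2/2 (omega = 1): the FH-corrected classical
# partition function should track the exact quantum one at small beta.
beta = 0.2
z_fh = z_classical(beta, lambda x: v_feynman_hibbs(
    lambda y: 0.5 * y * y, lambda y: 1.0, x, beta))
z_exact = 1.0 / (2.0 * math.sinh(0.5 * beta))  # exact quantum HO result
```

At this temperature the FH-corrected value agrees with the exact quantum partition function far better than the uncorrected classical value 1/β, which is the sense in which a classical ensemble on an effective potential "approximates the exact quantum partition function."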

  12. Merger control in the electricity sector (El control de las concentraciones empresariales en el sector eléctrico)

    NASA Astrophysics Data System (ADS)

    Montoya Pardo, Milton Fernando


  13. Active tectonics and geodynamics in northern Central America (Tectónica activa y geodinámica en el norte de Centroamérica)

    NASA Astrophysics Data System (ADS)

    Alvarez Gomez, Jose Antonio


  14. Stability of certain solitary waves under stochastic perturbations (Estabilidad de ciertas ondas solitarias sometidas a perturbaciones estocásticas)

    NASA Astrophysics Data System (ADS)

    Rodriguez Plaza, Maria Jesus


  15. The pursuit of locality in quantum mechanics

    NASA Astrophysics Data System (ADS)

    Hodkin, Malcolm

    The rampant success of quantum theory is the result of applications of the 'new' quantum mechanics of Schrodinger and Heisenberg (1926-7), the Feynman-Schwinger-Tomonaga Quantum Electrodynamics (1946-51), the electroweak theory of Salam, Weinberg, and Glashow (1967-9), and Quantum Chromodynamics (1973-); in fact, this success of 'the' quantum theory has depended on a continuous stream of brilliant and quite disparate mathematical formulations. In this carefully concealed ferment there lie plenty of unresolved difficulties, simply because in churning out fabulously accurate calculational tools there has been no sensible explanation of all that is going on. It is even argued that such an understanding has nothing to do with physics. A long-standing and famous illustration of this is the paradoxical thought-experiment of Einstein, Podolsky and Rosen (1935). Fundamental to all quantum theories, and also their paradoxes, is the location of sub-microscopic objects; or, rather, that the specification of such a location is fraught with mathematical inconsistency. This project encompasses a detailed, critical survey of the tangled history of Position within quantum theories. The first step is to show that, contrary to appearances, canonical quantum mechanics has only a vague notion of locality. After analysing a number of previous attempts at a 'relativistic quantum mechanics', two lines of thought are considered in detail. The first is the work of Wan and students, which is shown to be no real improvement on the usual 'nonrelativistic' theory. The second is based on an idea of Dirac's - using backwards-in-time light-cones as the hypersurface in space-time. There remain considerable difficulties in the way of producing a consistent scheme here. To keep things nicely stirred up, the author then proposes his own approach - an adaptation of Feynman's QED propagators. 
This new approach is distinguished from Feynman's since the propagator or Green's function is not obtained by Feynman's rule. The type of equation solved is also different: instead of an initial-value problem, a solution that obeys a time-symmetric causality criterion is found for an inhomogeneous partial differential equation with homogeneous boundary conditions. To make the consideration of locality more precise, some results of Fourier transform theory are presented in a form that is directly applicable. Somewhat away from the main thrust of the thesis, there is also an attempt to explain the manner in which quantum effects disappear as the number of particles increases in such things as experimental realisations of the EPR and de Broglie thought experiments.

  16. Second-order Chovitz theory applied to the search for minimum-deformation cartographic projections

    NASA Astrophysics Data System (ADS)

    Malpica Velasco, Jose Antonio


  17. Spectroscopic analysis of Delta Scuti variable stars

    NASA Astrophysics Data System (ADS)

    Solano Marquez, Enrique


  18. 3D gravimetric inversion by evolutionary techniques: Application to the island of Fuerteventura

    NASA Astrophysics Data System (ADS)

    Gonzalez Montesinos, Fuensanta


  19. Tectonothermal evolution of the Hercynian Rehamna massif (central Meseta zone, Morocco)

    NASA Astrophysics Data System (ADS)

    Aghzer, Abdel Mouhsine


  20. Mechanical behavior of the subduction interface during the seismic cycle: A study using space geodesy in northern Chile

    NASA Astrophysics Data System (ADS)

    Bejar Pizarro, Marta


  1. Synthesis and microstructural characterization of aluminas obtained from a non-conventional precursor

    NASA Astrophysics Data System (ADS)

    Fillali, Laila


  2. Variational Algorithms for Test Particle Trajectories

    NASA Astrophysics Data System (ADS)

    Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.

    2015-11-01

    The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test-particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long-time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding-center test-particle dynamics, discretization of the phase-space action principle yields, in general, multistep variational algorithms. Obtaining the desired long-term numerical fidelity requires either mitigating the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy and thus yields a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.
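The abstract does not reproduce the authors' gauge-invariant integrators. As a minimal illustration of why structure preservation matters for magnetized test-particle pushing, the classic Boris rotation (a stand-in sketch, not the scheme of this work) updates the velocity by an exact rotation, so the particle's speed is conserved to machine precision in a pure magnetic field:

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One Boris step for dv/dt = (q/m)(E + v x B).
    The magnetic half of the update is an exact rotation of v,
    so |v| is preserved exactly when E = 0."""
    v_minus = v + 0.5 * dt * q_m * E          # first half electric kick
    t = 0.5 * dt * q_m * B                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)  # magnetic rotation, step 1
    v_plus = v_minus + np.cross(v_prime, s)   # magnetic rotation, step 2
    v_new = v_plus + 0.5 * dt * q_m * E       # second half electric kick
    return x + dt * v_new, v_new

# Gyration in a uniform field: no secular drift in the speed.
x = np.zeros(3)
v = np.array([1.0, 0.0, 0.2])
E = np.zeros(3)
B = np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q_m=1.0, dt=0.1)
speed_drift = abs(np.dot(v, v) - 1.04)        # initial |v|^2 = 1.04
```

A non-conservative scheme (e.g. forward Euler) would instead show exponential growth of the gyroradius over the same interval.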

  3. On hydrostatic flows in isentropic coordinates

    NASA Astrophysics Data System (ADS)

    Bokhove, Onno

    2000-01-01

    The hydrostatic primitive equations of motion which have been used in large-scale weather prediction and climate modelling over the last few decades are analysed with variational methods in an isentropic Eulerian framework. The use of material isentropic coordinates for the Eulerian hydrostatic equations is known to have distinct conceptual advantages since fluid motion is, under inviscid and statically stable circumstances, confined to take place on quasi-horizontal isentropic surfaces. First, an Eulerian isentropic Hamilton's principle, expressed in terms of fluid parcel variables, is therefore derived by transformation of a Lagrangian Hamilton's principle to an Eulerian one. This Eulerian principle explicitly describes the boundary dynamics of the time-dependent domain in terms of advection of boundary isentropes s_B; these are the values the isentropes have at their intersection with the (lower) boundary. A partial Legendre transform for only the interior variables yields an Eulerian ‘action’ principle. Secondly, Noether's theorem is used to derive energy and potential vorticity conservation from the Eulerian Hamilton's principle. Thirdly, these conservation laws are used to derive a wave-activity invariant which is second-order in terms of small-amplitude disturbances relative to a resting or moving basic state. Linear stability criteria are derived but only for resting basic states. In mid-latitudes a time-scale separation between gravity and vortical modes occurs. Finally, this time-scale separation suggests that conservative geostrophic and ageostrophic approximations can be made to the Eulerian action principle for hydrostatic flows. Approximations to Eulerian variational principles may be more advantageous than approximations to Lagrangian ones because non-dimensionalization and scaling tend to be based on Eulerian estimates of the characteristic scales involved. 
These approximations to the stratified hydrostatic formulation extend previous approximations to the shallow-water equations. An explicit variational derivation is given of an isentropic version of Hoskins & Bretherton's model for atmospheric fronts.
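The isentropic-coordinate principle itself is not quoted in the abstract. For orientation, the standard parcel-label (Lagrangian-coordinate) Hamilton's principle for an ideal fluid, the kind of starting point from which such Eulerian principles are obtained by transformation, can be written as:

```latex
\delta \int \mathrm{d}t \int \mathrm{d}^{3}a \;\rho_{0}(\mathbf{a})
\left[ \tfrac{1}{2}\left|\frac{\partial \mathbf{x}}{\partial t}\right|^{2}
 - e(\rho, s) - \Phi(\mathbf{x}) \right] = 0,
\qquad
\rho(\mathbf{x},t) = \frac{\rho_{0}(\mathbf{a})}{\det\left(\partial x^{i}/\partial a^{j}\right)},
```

where $\mathbf{a}$ labels fluid parcels, $e(\rho,s)$ is the specific internal energy and $\Phi$ the geopotential; variations $\delta\mathbf{x}(\mathbf{a},t)$ at fixed label recover the Euler equations. The paper's specific contribution is transforming such a principle to Eulerian isentropic coordinates, including the boundary terms in $s_B$.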

  4. Patterned variation in prehistoric chiefdoms

    PubMed Central

    Drennan, Robert D.; Peterson, Christian E.

    2006-01-01

    Comparative study of early complex societies (chiefdoms) conjures visions of a cultural evolutionary emphasis on similarities and societal typology. Variation within the group has not been as systematically examined but offers an even more productive avenue of approach to fundamental principles of organization and change. Three widely separated trajectories of early chiefdom development are compared here: the Valley of Oaxaca (Mexico), the Alto Magdalena (Colombia), and Northeast China. Archaeological data from all three regions are analyzed with the same tools to reveal variation in human activities, relationships, and interactions as these change in the emergence of chiefly communities. Patterning in this variation suggests the operation of underlying general principles, which are offered as hypotheses that merit further investigation and evaluation in comparative study of a much larger number of cases. PMID:16473941

  5. Development of New Methods for Predicting the Bistatic Electromagnetic Scattering from Absorbing Shapes

    DTIC Science & Technology

    1990-01-01

    least-squares sense by adding a penalty term proportional to the square of the divergence to the variational principle. At the start of this project... principle required for stable solutions of the electromagnetic field: it must be possible to express the basis functions used in the finite element method as... principle to derive several different methods for computing stable solutions to electromagnetic field problems. To understand the above principle, notice that

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Li; Jacobsen, Stein B., E-mail: astrozeng@gmail.com, E-mail: jacobsen@neodymium.harvard.edu

    In the past few years, the number of confirmed planets has grown above 2000. It is clear that they represent a diversity of structures not seen in our own solar system. In addition to very detailed interior modeling, it is valuable to have a simple analytical framework for describing planetary structures. The variational principle is a fundamental principle in physics, entailing that a physical system follows the trajectory which minimizes its action. It is an alternative to the differential-equation formulation of a physical system. Applying the variational principle to the planetary interior can beautifully summarize the set of differential equations into one, which provides some insight into the problem. From this principle, a universal mass–radius relation, an estimate of the error propagation from the equation of state to the mass–radius relation, and a form of the virial theorem applicable to planetary interiors are derived.
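The abstract does not quote the functional used. One common textbook way to set up such a principle (a sketch under assumptions, not necessarily the authors' action) is to require that the total energy of the planet, internal plus gravitational, be stationary under mass-conserving rearrangements at fixed entropy, which reproduces the hydrostatic structure equation:

```latex
\delta\left[\int_{0}^{M} u(\rho)\,\mathrm{d}m
 \;-\; \int_{0}^{M} \frac{G\,m}{r(m)}\,\mathrm{d}m\right] = 0
\quad\Longrightarrow\quad
\frac{\mathrm{d}P}{\mathrm{d}r} = -\,\frac{G\,m(r)\,\rho(r)}{r^{2}},
```

where $u(\rho)$ is the internal energy per unit mass, $m(r)$ the mass enclosed within radius $r$, and $P = \rho^{2}\,\partial u/\partial \rho$ the pressure supplied by the equation of state. Integrating this single equation for a given $u(\rho)$ is what yields a mass–radius relation.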

  7. Development of a direct experimental test for any violation of the equivalence principle by the weak interaction

    NASA Technical Reports Server (NTRS)

    Parker, P. D. M.

    1981-01-01

    Violation of the equivalence principle by the weak interaction is tested. Any variation of the weak-interaction coupling constant with gravitational potential, i.e., a spatial variation of the fundamental constants, is investigated. The level of sensitivity required for such a measurement is estimated on the basis of the size of the change in gravitational potential which is accessible. The alpha-particle spectrum is analyzed, and the counting rate is improved by a factor of approximately 100.

  8. Computational fluid mechanics utilizing the variational principle of modeling damping seals

    NASA Technical Reports Server (NTRS)

    Abernathy, J. M.

    1986-01-01

    A computational fluid dynamics code for application to traditional incompressible flow problems has been developed. The method is actually a slight compressibility approach which takes advantage of the bulk modulus and finite sound speed of all real fluids. The finite element numerical analog uses a dynamic differencing scheme based, in part, on a variational principle for computational fluid dynamics. The code was developed in order to study the feasibility of damping seals for high speed turbomachinery. Preliminary seal analyses have been performed.
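The paper's finite-element scheme is not spelled out in the abstract. A rough sketch of the "slight compressibility" idea (a simple staggered-grid explicit update with assumed variable names, not the code described above) evolves the pressure through a finite bulk modulus K, so that dp/dt = -K du/dx alongside du/dt = -(1/rho) dp/dx, with the real-fluid sound speed c = sqrt(K/rho) setting the stable time step:

```python
import numpy as np

def slight_compressibility_step(u, p, rho, K, dx, dt):
    """Explicit staggered-grid step for the 1-D linearized system
       du/dt = -(1/rho) dp/dx,   dp/dt = -K du/dx.
    u lives on cell faces (len n+1), p on cell centers (len n);
    stability requires dt < dx / c with c = sqrt(K/rho)."""
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] - dt / (rho * dx) * (p[1:] - p[:-1])
    p_new = p - K * dt / dx * (u_new[1:] - u_new[:-1])   # uses updated u
    return u_new, p_new

# A pressure pulse between rigid walls radiates acoustic waves at c.
n, rho, K = 200, 1.0, 100.0
dx = 1.0 / n
c = np.sqrt(K / rho)
dt = 0.5 * dx / c                        # CFL-limited time step
xc = (np.arange(n) + 0.5) * dx
p = np.exp(-((xc - 0.5) / 0.02) ** 2)    # initial pressure pulse
u = np.zeros(n + 1)                      # u = 0 at both walls
for _ in range(100):
    u, p = slight_compressibility_step(u, p, rho, K, dx, dt)
```

Updating p with the already-updated u makes this a symplectic-Euler-type scheme for the discrete wave equation, so the acoustic energy stays bounded rather than growing.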

  9. Variational principle model for the nuclear caloric curve

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das Gupta, S.

    2005-12-15

    Following the lead of a recent work, I perform a variational principle model calculation for the nuclear caloric curve. A Skyrme-type interaction with and without momentum dependence is used. The calculation is done for a large nucleus, i.e., in the nuclear matter limit. Thus I address the issue of volume fragmentation only. Nonetheless, the results are similar to the previous, largely phenomenological calculation for a finite nucleus. I find that the onset of fragmentation can be sudden as a function of temperature or excitation energy.

  10. Irreversibility and entropy production in transport phenomena, IV: Symmetry, integrated intermediate processes and separated variational principles for multi-currents

    NASA Astrophysics Data System (ADS)

    Suzuki, Masuo

    2013-10-01

    The mechanism of entropy production in transport phenomena is discussed again by emphasizing the role of symmetry of non-equilibrium states and also by reformulating Einstein’s theory of Brownian motion to derive entropy production from it. This yields conceptual reviews of the previous papers [M. Suzuki, Physica A 390 (2011) 1904; 391 (2012) 1074; 392 (2013) 314]. Separated variational principles of steady states for multiple external fields {X_i} and induced currents {J_i} are proposed by extending the principle of minimum integrated entropy production found by the present author for a single external field. The basic strategy of this theory of steady states is to take in all the intermediate processes from the equilibrium state to the final possible steady states, in order to study the irreversible physics even in the steady states. As an application of this principle, the Glansdorff-Prigogine evolution criterion inequality (or stability condition) d_X P ≡ ∫dr Σ_i J_i dX_i ≤ 0 is derived in the stronger form dQ_i ≡ ∫dr J_i dX_i ≤ 0 for each individual force X_i and current J_i, even for nonlinear responses that depend on all the external forces {X_k} nonlinearly. This is called the “separated evolution criterion”. Some explicit demonstrations of the present general theory on simple electric circuits with multiple external fields are given, in order to clarify the physical essence of the new theory and to establish the condition of its validity, namely the existence of solutions of the simultaneous equations obtained from the separated variational principles. It is also instructive to compare the two results obtained by the new variational theory and by the old scheme based on the instantaneous entropy production. This perspective may even prove suggestive for practical energy problems.
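In its simplest linear-response corner, this family of ideas can be illustrated by the classical minimum-dissipation property of a DC circuit: for two resistors in parallel carrying a fixed total current, the physical Kirchhoff division is exactly the one that minimizes the entropy production, which is proportional to the Joule dissipation I1²R1 + I2²R2. A small numerical check (illustrative only; this is not Suzuki's integrated-entropy-production functional):

```python
import numpy as np

def dissipation(i1, I, R1, R2):
    """Joule dissipation of two parallel resistors with i1 + i2 = I fixed."""
    return i1 ** 2 * R1 + (I - i1) ** 2 * R2

R1, R2, I = 2.0, 3.0, 1.0

# Physical current division from Kirchhoff's laws (equal voltage drops).
i1_kirchhoff = I * R2 / (R1 + R2)

# Brute-force minimization of the dissipation along the constraint line.
grid = np.linspace(0.0, I, 100001)
i1_min = grid[np.argmin(dissipation(grid, I, R1, R2))]
```

Both routes give i1 = 0.6 A here: the constrained minimum of the dissipation coincides with the physical steady state, the single-field ancestor of the separated principles proposed in the paper.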

  11. Nanotechnology: The Incredible Invisible World

    ERIC Educational Resources Information Center

    Roberts, Amanda S.

    2011-01-01

    The concept of nanotechnology was first introduced in 1959 by Richard Feynman at a meeting of the American Physical Society. Nanotechnology opens the door to an exciting new science/technology/engineering field. The possibilities for the uses of this technology should inspire the imagination to think big. Many are already pursuing such feats…

  12. Laboratory for Computer Science Progress Report 18, July 1980-June 1981,

    DTIC Science & Technology

    1983-04-01

    group in collaboration with Rolf Landauer of IBM Research. Some of the most conspicuous participants: Dyson, Feynman, Wheeler, Landauer, Keyes, Bennett... Sheldon A. Data Model Equivalence, December 1978, AD A062-753 TM-119 Shamir, Adi and Richard E. Zippel, On the Security of the Merkle-Hellman

  13. Perturbative test of exact vacuum expectation values of local fields in affine Toda theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Changrim; Baseilhac, P.; Kim, Chanju

    Vacuum expectation values of local fields for all recently proposed dual pairs of nonsimply laced affine Toda field theories are checked against perturbative analysis. The computations, based on the Feynman diagram expansion, are performed up to the two-loop level. We obtain good agreement.

  14. The ethics of characterizing difference: guiding principles on using racial categories in human genetics

    PubMed Central

    Lee, Sandra Soo-Jin; Mountain, Joanna; Koenig, Barbara; Altman, Russ; Brown, Melissa; Camarillo, Albert; Cavalli-Sforza, Luca; Cho, Mildred; Eberhardt, Jennifer; Feldman, Marcus; Ford, Richard; Greely, Henry; King, Roy; Markus, Hazel; Satz, Debra; Snipp, Matthew; Steele, Claude; Underhill, Peter

    2008-01-01

    We are a multidisciplinary group of Stanford faculty who propose ten principles to guide the use of racial and ethnic categories when characterizing group differences in research into human genetic variation. PMID:18638359

  15. A variational principle for compressible fluid mechanics. Discussion of the one-dimensional theory

    NASA Technical Reports Server (NTRS)

    Prozan, R. J.

    1982-01-01

    The second law of thermodynamics is used as a variational statement to derive a numerical procedure to satisfy the governing equations of motion. The procedure, based on numerical experimentation, appears to be stable provided the CFL condition is satisfied. This stability is manifested no matter how severe the gradients (compression or expansion) are in the flow field. For reasons of simplicity only one dimensional inviscid compressible unsteady flow is discussed here; however, the concepts and techniques are not restricted to one dimension nor are they restricted to inviscid non-reacting flow. The solution here is explicit in time. Further study is required to determine the impact of the variational principle on implicit algorithms.

  16. An Iodine Fluorescence Quenching Clock Reaction

    NASA Astrophysics Data System (ADS)

    Weinberg, Richard B.

    2007-05-01

    A fluorescent clock reaction is described that is based on the principles of the Landolt iodine reaction but uses the potent fluorescence quenching properties of triiodide to abruptly extinguish the ultraviolet fluorescence of optical brighteners present in liquid laundry detergents. The reaction uses easily obtained household products. One variation illustrates the sequential steps and mechanisms of the reaction; other variations maximize the dramatic impact of the demonstration; and a variation that uses liquid detergent in the Briggs-Rauscher reaction yields a striking oscillating luminescence. The iodine fluorescence quenching clock reaction can be used in the classroom to explore not only the principles of redox chemistry and reaction kinetics, but also the photophysics of fluorescent pH probes and optical quenching.

  17. Life-space foam: A medium for motivational and cognitive dynamics

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir; Aidman, Eugene

    2007-08-01

    General stochastic dynamics, developed in a framework of Feynman path integrals, have been applied to Lewinian field-theoretic psychodynamics [K. Lewin, Field Theory in Social Science, University of Chicago Press, Chicago, 1951; K. Lewin, Resolving Social Conflicts, and, Field Theory in Social Science, American Psychological Association, Washington, 1997; M. Gold, A Kurt Lewin Reader, the Complete Social Scientist, American Psychological Association, Washington, 1999], resulting in the development of a new concept of life-space foam (LSF) as a natural medium for motivational and cognitive psychodynamics. According to LSF formalisms, the classic Lewinian life space can be macroscopically represented as a smooth manifold with steady force fields and behavioral paths, while at the microscopic level it is more realistically represented as a collection of wildly fluctuating force fields, (loco)motion paths and local geometries (and topologies with holes). A set of least-action principles is used to model the smoothness of global, macro-level LSF paths, fields and geometry. To model the corresponding local, micro-level LSF structures, an adaptive path integral is used, defining a multi-phase and multi-path (multi-field and multi-geometry) transition process from intention to goal-driven action. Application examples of this new approach include (but are not limited to) information processing, motivational fatigue, learning, memory and decision making.

  18. Quantum mechanics from Newton's second law and the canonical commutation relation [X, P] = i

    NASA Astrophysics Data System (ADS)

    Palenik, Mark C.

    2014-07-01

    Despite the fact that it has been known since the time of Heisenberg that quantum operators obey a quantum version of Newton's laws, students are often told that derivations of quantum mechanics must necessarily follow from the Hamiltonian or Lagrangian formulations of mechanics. Here, we first derive the existing Heisenberg equations of motion from Newton's laws and the uncertainty principle using only the equations F=\frac{dP}{dt}, P=m\frac{dX}{dt}, and [X, P] = i. Then, a new expression for the propagator is derived that makes a connection between time evolution in quantum mechanics and the motion of a classical particle under Newton's laws. The propagator is solved for three cases where an exact solution is possible: (1) the free particle; (2) the harmonic oscillator; and (3) a constant force, or linear potential in the standard interpretation. We then show that for a general force F(X), by Taylor expanding X(t) in time, we can use this methodology to reproduce the Feynman path integral formula for the propagator. Such a picture may be useful for students as they make the transition from classical to quantum mechanics, and may help solidify the equivalence of the Hamiltonian, Lagrangian, and Newtonian pictures of physics in their minds.
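
    For reference, the exact propagator in the first of the three solved cases, the free particle, has the standard closed form (in units where ℏ = 1, consistent with [X, P] = i):

```latex
K(x,t;x',0) \;=\; \langle x\,|\,e^{-iHt}\,|\,x'\rangle
\;=\; \sqrt{\frac{m}{2\pi i t}}\,
\exp\!\left[\frac{i\,m\,(x-x')^{2}}{2t}\right],
\qquad H = \frac{P^{2}}{2m}.
```

    The exponent is i times the classical action of the straight-line path from x' to x in time t, which is the connection to Newtonian motion that the abstract describes.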

  19. Woodward Effect Experimental Verifications

    NASA Astrophysics Data System (ADS)

    March, Paul

    2004-02-01

    The work of J. F. Woodward (1990; 1996a; 1996b; 1998; 2002a; 2002b; 2004) on the existence of ``mass fluctuations'' and their use in exotic propulsion schemes was examined for possible application in improving space flight propulsion and power generation. Woodward examined Einstein's General Relativity Theory (GRT) and assumed that if the strong Machian interpretation of GRT, as well as gravitational/inertial Wheeler-Feynman radiation reaction forces, holds, then when an elementary particle is accelerated through a potential gradient, its rest mass should fluctuate around its mean value during its acceleration. Woodward also used GRT to clarify the precise experimental conditions necessary for observing and exploiting these mass fluctuations, or the ``Woodward effect'' (W-E). Later, in collaboration with his former graduate student T. Mahood, Woodward also pushed the experimental verification boundaries of these proposals. If these purported mass fluctuations occur as Woodward claims, and his assumption that gravity and inertia are both byproducts of the same GRT-based phenomenon per Mach's Principle is correct, then many innovative applications such as propellantless propulsion and gravitational exotic-matter generators may be feasible. This paper examines the reality of mass fluctuations and the feasibility of using the W-E to design propellantless propulsion devices in the near- to mid-term future. The latest experimental results, utilizing MHD-like force rectification systems, will also be presented.

  20. Feynman path integral application on deriving black-scholes diffusion equation for european option pricing

    NASA Astrophysics Data System (ADS)

    Utama, Briandhika; Purqon, Acep

    2016-08-01

    Path Integral is a method to transform a function from its initial condition to its final condition by multiplying the initial condition with a transition probability function, known as the propagator. In its early development, this method was applied mainly to problems in quantum mechanics. Nevertheless, Path Integral can also be applied to other subjects with some modifications of the propagator function. In this study, we investigate the application of the Path Integral method to financial derivatives, namely stock options. The Black-Scholes model (Nobel Prize, 1997) was a foundational anchor in the study of option pricing. Although the model does not predict option prices perfectly, especially because of its sensitivity to major market changes, the Black-Scholes model remains a legitimate equation for pricing an option. The derivation of the Black-Scholes equation is difficult because it is a stochastic partial differential equation. The Black-Scholes equation shares a principle with the Path Integral: in Black-Scholes, the share's initial price is transformed into its final price. The Black-Scholes propagator is then derived by introducing a modified Lagrangian based on the Black-Scholes equation. Furthermore, we study the correlation between the analytical path-integral solution and a numerical Monte Carlo solution to assess the similarity between the two methods.
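
    The comparison between the analytic solution and a Monte Carlo estimate can be sketched as follows for a European call under geometric Brownian motion: the terminal stock price is sampled from the same lognormal transition density that the Black-Scholes propagator encodes, and the discounted mean payoff is compared with the closed-form price. All parameter values below are hypothetical:

```python
import numpy as np
from math import log, sqrt, exp
from scipy.stats import norm

# Hypothetical contract parameters: spot, strike, rate, volatility, maturity
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

# Closed-form Black-Scholes price of a European call
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Monte Carlo: sample terminal prices from the GBM transition density
# (the discretized analogue of propagating with the Black-Scholes kernel)
rng = np.random.default_rng(0)
Z = rng.standard_normal(200_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * Z)
mc_price = exp(-r * T) * np.maximum(ST - K, 0.0).mean()
```

    With 200,000 samples the Monte Carlo estimate agrees with the closed form to within a few cents, which is the kind of correlation between the two approaches the study examines.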

  1. Statistical Systems with Z

    NASA Astrophysics Data System (ADS)

    William, Peter

    In this dissertation several two-dimensional statistical systems exhibiting discrete Z(n) symmetries are studied. For this purpose, a newly developed algorithm to compute the partition function of these models exactly is utilized. The zeros of the partition function are examined in order to obtain information about the observable quantities at the critical point; this takes the form of critical exponents of the order parameters, which characterize phenomena at the critical point. The correlation-length exponent is found to agree very well with those computed from strong-coupling expansions for the mass gap and with Monte Carlo results. In Feynman's path-integral formalism, the partition function of a statistical system can be related to the vacuum expectation value of the time-ordered product of the observable quantities of the corresponding field-theoretic model. Hence a generalization of ordinary scale invariance, in the form of conformal invariance, is focused upon. This principle is particularly applicable to two-dimensional statistical models undergoing second-order phase transitions at criticality. The conformal anomaly specifies the universality class to which these models belong. From an evaluation of the partition function, the free energy at criticality is computed to determine the conformal anomaly of these models. The conformal anomalies of all the models considered here are in good agreement with the predicted values.
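
    The partition-function-zero analysis can be illustrated on the simplest Z(2) member of this family of models, the ferromagnetic Ising chain, whose Lee-Yang zeros in the complex fugacity z = e^{2βh} are known to lie on the unit circle. A brute-force sketch (the chain length and couplings below are hypothetical, chosen only to keep exact enumeration cheap):

```python
import itertools
import numpy as np

N, beta, J = 8, 0.5, 1.0   # chain length and couplings (hypothetical)

# Z(h) = sum_k a_k z^k with z = exp(2*beta*h); collect the coefficients a_k
# by brute-force enumeration of all 2^N spin configurations.
coeffs = np.zeros(N + 1)
for spins in itertools.product([-1, 1], repeat=N):
    E_bond = -J * sum(spins[i] * spins[i + 1] for i in range(N - 1))
    k = (sum(spins) + N) // 2          # power of z carried by this config
    coeffs[k] += np.exp(-beta * E_bond)

zeros = np.roots(coeffs[::-1])          # np.roots expects highest power first
radii = np.abs(zeros)                   # Lee-Yang: all radii equal 1
```

    For larger systems the accumulation of these zeros toward the positive real axis encodes the critical behavior, which is how zero distributions yield critical exponents.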

  2. Thresholds of Principle and Preference: Exploring Procedural Variation in Postgraduate Surgical Education

    PubMed Central

    Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei

    2017-01-01

    Background Expert physicians develop their own ways of doing things. The influence of such practice variation in clinical learning is insufficiently understood. Our grounded theory study explored how residents make sense of, and behave in relation to, the procedural variations of faculty surgeons. Method We sampled senior postgraduate surgical residents to construct a theoretical framework for how residents make sense of procedural variations. Using a constructivist grounded theory approach, we used marginal participant observation in the operating room across 56 surgical cases (146 hours), field interviews (38), and formal interviews (6) to develop a theoretical framework for residents’ ways of dealing with procedural variations. Data analysis used constant comparison to iteratively refine the framework and data collection until theoretical saturation was reached. Results The core category of the constructed theory was called thresholds of principle and preference and it captured how faculty members position some procedural variations as negotiable and others not. The term thresholding was coined to describe residents’ daily experiences of spotting, mapping, and negotiating their faculty members’ thresholds and defending their own emerging thresholds. Conclusions Thresholds of principle and preference play a key role in workplace-based medical education. Postgraduate medical learners are occupied on a day-to-day level with thresholding and attempting to make sense of the procedural variations of faculty. Workplace-based teaching and assessment should include an understanding of the integral role of thresholding in shaping learners’ development. Future research should explore the nature and impact of thresholding in workplace-based learning beyond the surgical context. PMID:26505105

  3. Coupled fluid-structure interaction. Part 1: Theory. Part 2: Application

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.; Ohayon, Roger

    1991-01-01

    A general three dimensional variational principle is obtained for the motion of an acoustic field enclosed in a rigid or flexible container by the method of canonical decomposition applied to a modified form of the wave equation in the displacement potential. The general principle is specialized to a mixed two-field principle that contains the fluid displacement potential and pressure as independent fields. Semidiscrete finite element equations of motion based on this principle are derived and sample cases are given.

  4. Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics

    ERIC Educational Resources Information Center

    Schlitt, D. W.

    1977-01-01

    Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)
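
    A minimal sketch of the Ritz idea applied to Hamilton's principle, for a hypothetical textbook case (a particle in uniform gravity with fixed endpoints): a one-parameter trial path satisfying the boundary conditions is substituted into the action, which is then made stationary with respect to the parameter directly.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize_scalar

m, g, T = 1.0, 9.8, 1.0            # mass, gravity, flight time (hypothetical)
t = np.linspace(0.0, T, 2001)

def action(c):
    """S[x] = ∫ (½ m v² − m g x) dt for the trial path x(t) = c·t·(T − t)."""
    x = c * t * (T - t)
    v = c * (T - 2.0 * t)
    return trapezoid(0.5 * m * v**2 - m * g * x, t)

c_opt = minimize_scalar(action, bounds=(0.0, 20.0), method="bounded").x
# The Euler-Lagrange equation gives x(t) = (g/2)·t·(T − t), i.e. c = g/2
```

    Because the exact solution happens to lie inside the trial family, the Ritz answer here is exact; with a richer trial family the same procedure gives systematically improvable approximations.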

  5. Extension to linear dynamics for hybrid stress finite element formulation based on additional displacements

    NASA Astrophysics Data System (ADS)

    Sumihara, K.

    Based upon legitimate variational principles, a microscopic-macroscopic finite element formulation for linear dynamics is presented using the Hybrid Stress Finite Element Method. The microscopic application of the Geometric Perturbation introduced by Pian and the introduction of an infinitesimal-limit core element (Baby Element) are consistently combined according to the flexible and inherent interpretation of the legitimate variational principles originally due to Pian and Tong. The conceptual development based upon the Hybrid Finite Element Method is extended to linear dynamics with the introduction of physically meaningful higher modes.

  6. Total Quality Management in Higher Education: Applying Deming's Fourteen Points.

    ERIC Educational Resources Information Center

    Masters, Robert J.; Leiker, Linda

    1992-01-01

    This article presents guidelines to aid administrators of institutions of higher education in applying the 14 principles of Total Quality Management. The principles stress understanding process improvements, handling variation, fostering prediction, and using psychology to capitalize on human resources. (DB)

  7. Curricular Guidelines for Dental Auxiliary Radiology.

    ERIC Educational Resources Information Center

    Journal of Dental Education, 1981

    1981-01-01

    AADS curricular guidelines suggest objectives for these areas of dental auxiliary radiology: physical principles of X-radiation in dentistry, related radiobiological concepts, principles of radiologic health, radiographic technique, x-ray films and intensifying screens, factors contributing to film quality, darkroom, and normal variations in…

  8. Proof of a new colour decomposition for QCD amplitudes

    DOE PAGES

    Melia, Tom

    2015-12-16

    Recently, Johansson and Ochirov conjectured the form of a new colour decomposition for QCD tree-level amplitudes. This note provides a proof of that conjecture. The proof is based on ‘Mario World’ Feynman diagrams, which exhibit the hierarchical Dyck structure previously found to be very useful when dealing with multi-quark amplitudes.

  9. QCD for Postgraduates (1/5)

    ScienceCinema

    Zanderighi, Giulia

    2018-04-26

    Modern QCD - Lecture 1 Starting from the QCD Lagrangian we will revisit some basic QCD concepts and derive fundamental properties like gauge invariance and isospin symmetry and will discuss the Feynman rules of the theory. We will then focus on the gauge group of QCD and derive the Casimirs CF and CA and some useful color identities.

  10. Differential equations for loop integrals in Baikov representation

    NASA Astrophysics Data System (ADS)

    Bosma, Jorrit; Larsen, Kasper J.; Zhang, Yang

    2018-05-01

    We present a proof that differential equations for Feynman loop integrals can always be derived in Baikov representation without involving dimension-shift identities. We moreover show that in a large class of two- and three-loop diagrams it is possible to avoid squared propagators in the intermediate steps of setting up the differential equations.

  11. Exotic Gauge Bosons in the 331 Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, D.; Ravinez, O.; Diaz, H.

    We analyze the bosonic sector of the 331 model, which contains exotic leptons, quarks, and bosons (E, J, U, V), in order to satisfy weak gauge SU(3){sub L} invariance. We develop the Feynman rules of the entire kinetic bosonic sector, which will allow us to compute some of the Z(0)' decay modes.

  12. Using and Applying Mathematics

    ERIC Educational Resources Information Center

    Knight, Rupert

    2011-01-01

    The Nobel prize winning physicist Richard Feynman (2007) famously enthused about "the pleasure of finding things out". In day-to-day classroom life, however, it is easy to lose and undervalue this pleasure in the process, as opposed to products, of mathematics. Finding things out involves a journey and is often where the learning takes place.…

  13. Loopedia, a database for loop integrals

    NASA Astrophysics Data System (ADS)

    Bogner, C.; Borowka, S.; Hahn, T.; Heinrich, G.; Jones, S. P.; Kerner, M.; von Manteuffel, A.; Michel, M.; Panzer, E.; Papara, V.

    2018-04-01

    Loopedia is a new database at loopedia.org for information on Feynman integrals, intended to provide both bibliographic information and results made available by the community. Its bibliometry is complementary to that of INSPIRE or arXiv in the sense that it admits searching for integrals by graph-theoretical objects, e.g., their topology.

  14. Methods and Strategies: Much Ado about Nothing

    ERIC Educational Resources Information Center

    Smith, P. Sean; Plumley, Courtney L.; Hayes, Meredith L.

    2017-01-01

    This column provides ideas and techniques to enhance your science teaching. This month's issue discusses how children think about the small-particle model of matter. What Richard Feynman referred to as the "atomic hypothesis" is perhaps more familiar to us as the small-particle model of matter. In its most basic form, the model states…

  15. Energy and Change

    ERIC Educational Resources Information Center

    Hecht, Eugene

    2007-01-01

    When Feynman wrote, "It is important to realize that in physics today, we have no knowledge of what energy is," he was recognizing that although we have expressions for various forms of energy from kinetic to elastic, we seem to have no idea of what the all-encompassing notion of "energy" "is": This paper addresses that issue offering a definition…

  16. Beyond maths to meaning

    NASA Astrophysics Data System (ADS)

    Clegg, Brian

    2018-04-01

    Everybody knows that quantum physics is weird, right? Indeed, quantum physicist Richard Feynman once said in a lecture: "The theory of quantum electrodynamics describes Nature as absurd from the point of view of common sense." Beyond Weird: Why Everything You Thought You Knew About Quantum Physics is Different by Philip Ball presents a refreshing challenge to this viewpoint.

  17. Path Integration on the Upper Half-Plane

    NASA Astrophysics Data System (ADS)

    Kubo, R.

    1987-10-01

    Feynman's path integral is considered on the Poincaré upper half-plane. It is shown that the fundamental solution to the heat equation ∂f/∂t = Δ_H f can be expressed in terms of a path integral. A simple relation between the path integral and the Selberg trace formula is discussed briefly.

  18. Proof of a new colour decomposition for QCD amplitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melia, Tom

    Recently, Johansson and Ochirov conjectured the form of a new colour decomposition for QCD tree-level amplitudes. This note provides a proof of that conjecture. The proof is based on ‘Mario World’ Feynman diagrams, which exhibit the hierarchical Dyck structure previously found to be very useful when dealing with multi-quark amplitudes.

  19. Quantum space foam and string theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nekrasov, Nikita

    2006-11-03

    String theory is originally defined as a modification of the Feynman rules in perturbation theory. It contains gravity in its perturbative spectrum. We review some recent developments which demonstrate that nonperturbative effects of quantum gravity, such as spacetime foam, arise in string theory as well. Prepared for the proceedings of the 'Albert Einstein Century Conference', Paris, July 2005.

  20. Molecular simulation of the thermodynamic, structural, and vapor-liquid equilibrium properties of neon

    NASA Astrophysics Data System (ADS)

    Vlasiuk, Maryna; Frascoli, Federico; Sadus, Richard J.

    2016-09-01

    The thermodynamic, structural, and vapor-liquid equilibrium properties of neon are comprehensively studied using ab initio, empirical, and semi-classical intermolecular potentials and classical Monte Carlo simulations. Path integral Monte Carlo simulations for isochoric heat capacity and structural properties are also reported for two empirical potentials and one ab initio potential. The isobaric and isochoric heat capacities, thermal expansion coefficient, thermal pressure coefficient, isothermal and adiabatic compressibilities, Joule-Thomson coefficient, and the speed of sound are reported and compared with experimental data for the entire range of liquid densities from the triple point to the critical point. Lustig's thermodynamic approach is formally extended for temperature-dependent intermolecular potentials. Quantum effects are incorporated using the Feynman-Hibbs quantum correction, which results in significant improvement in the accuracy of predicted thermodynamic properties. The new Feynman-Hibbs version of the Hellmann-Bich-Vogel potential predicts the isochoric heat capacity to an accuracy of 1.4% over the entire range of liquid densities. It also predicts other thermodynamic properties more accurately than alternative intermolecular potentials.
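
    The quadratic Feynman-Hibbs correction referred to here replaces a classical pair potential U(r) by an effective potential that folds in a Gaussian smearing of thermal de Broglie width; for a radial potential it takes the standard form U_FH(r) = U(r) + (βℏ²/24μ)[U''(r) + 2U'(r)/r], with μ the reduced mass of the pair. A sketch for a Lennard-Jones model of neon (the ε, σ values below are common literature choices, used here purely for illustration and not taken from this paper):

```python
import numpy as np

kB   = 1.380649e-23       # Boltzmann constant, J/K
hbar = 1.054571817e-34    # reduced Planck constant, J*s
amu  = 1.66053906660e-27  # atomic mass unit, kg

# Lennard-Jones parameters for neon (illustrative literature values)
eps   = 36.68 * kB        # well depth, J
sigma = 2.79e-10          # length scale, m
mu    = (20.18 * amu) / 2.0   # reduced mass of a Ne-Ne pair
T     = 30.0              # temperature, K
beta  = 1.0 / (kB * T)

def lj(r):
    s6 = (sigma / r)**6
    return 4.0 * eps * (s6**2 - s6)

def lj_d1(r):   # dU/dr
    s6 = (sigma / r)**6
    return 4.0 * eps * (-12.0 * s6**2 + 6.0 * s6) / r

def lj_d2(r):   # d2U/dr2
    s6 = (sigma / r)**6
    return 4.0 * eps * (156.0 * s6**2 - 42.0 * s6) / r**2

def lj_fh(r):
    """Quadratic Feynman-Hibbs effective pair potential."""
    return lj(r) + (hbar**2 * beta / (24.0 * mu)) * (lj_d2(r) + 2.0 * lj_d1(r) / r)

r = np.linspace(0.9 * sigma, 3.0 * sigma, 1000)
# For neon the correction makes the well slightly shallower (mild quantum effects)
```

    At the classical minimum U' = 0 and U'' > 0, so the correction is positive there, raising the effective well depth by a few percent for neon at this temperature.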

  1. Application of a New Ensemble Conserving Quantum Dynamics Simulation Algorithm to Liquid para-Hydrogen and ortho-Deuterium

    DOE PAGES

    Smith, Kyle K.G.; Poulsen, Jens Aage; Nyman, Gunnar; ...

    2015-06-30

    Here, we apply the Feynman-Kleinert Quasi-Classical Wigner (FK-QCW) method developed in our previous work [Smith et al., J. Chem. Phys. 142, 244112 (2015)] for the determination of the dynamic structure factor of liquid para-hydrogen and ortho-deuterium at state points of (T = 20.0 K, n = 21.24 nm -3) and (T = 23.0 K, n = 24.61 nm -3), respectively. When applied to this challenging system, it is shown that this new FK-QCW method consistently reproduces the experimental dynamic structure factor reported by Smith et al. [J. Chem. Phys. 140, 034501 (2014)] for all momentum transfers considered. Moreover, this shows that FK-QCW provides a substantial improvement over the Feynman-Kleinert linearized path-integral method, in which purely classical dynamics are used. Furthermore, for small momentum transfers, it is shown that FK-QCW provides nearly the same results as ring-polymer molecular dynamics (RPMD), thus suggesting that FK-QCW provides a potentially more appealing algorithm than RPMD since it is not formally limited to correlation functions involving linear operators.

  2. Application of a New Ensemble Conserving Quantum Dynamics Simulation Algorithm to Liquid para-Hydrogen and ortho-Deuterium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kyle K.G.; Poulsen, Jens Aage; Nyman, Gunnar

    Here, we apply the Feynman-Kleinert Quasi-Classical Wigner (FK-QCW) method developed in our previous work [Smith et al., J. Chem. Phys. 142, 244112 (2015)] for the determination of the dynamic structure factor of liquid para-hydrogen and ortho-deuterium at state points of (T = 20.0 K, n = 21.24 nm -3) and (T = 23.0 K, n = 24.61 nm -3), respectively. When applied to this challenging system, it is shown that this new FK-QCW method consistently reproduces the experimental dynamic structure factor reported by Smith et al. [J. Chem. Phys. 140, 034501 (2014)] for all momentum transfers considered. Moreover, this shows that FK-QCW provides a substantial improvement over the Feynman-Kleinert linearized path-integral method, in which purely classical dynamics are used. Furthermore, for small momentum transfers, it is shown that FK-QCW provides nearly the same results as ring-polymer molecular dynamics (RPMD), thus suggesting that FK-QCW provides a potentially more appealing algorithm than RPMD since it is not formally limited to correlation functions involving linear operators.

  3. Application of a new ensemble conserving quantum dynamics simulation algorithm to liquid para-hydrogen and ortho-deuterium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kyle K. G., E-mail: kylesmith@utexas.edu; Poulsen, Jens Aage, E-mail: jens72@chem.gu.se; Nyman, Gunnar, E-mail: nyman@chem.gu.se

    We apply the Feynman-Kleinert Quasi-Classical Wigner (FK-QCW) method developed in our previous work [Smith et al., J. Chem. Phys. 142, 244112 (2015)] for the determination of the dynamic structure factor of liquid para-hydrogen and ortho-deuterium at state points of (T = 20.0 K, n = 21.24 nm{sup −3}) and (T = 23.0 K, n = 24.61 nm{sup −3}), respectively. When applied to this challenging system, it is shown that this new FK-QCW method consistently reproduces the experimental dynamic structure factor reported by Smith et al. [J. Chem. Phys. 140, 034501 (2014)] for all momentum transfers considered. This shows that FK-QCW provides a substantial improvement over the Feynman-Kleinert linearized path-integral method, in which purely classical dynamics are used. Furthermore, for small momentum transfers, it is shown that FK-QCW provides nearly the same results as ring-polymer molecular dynamics (RPMD), thus suggesting that FK-QCW provides a potentially more appealing algorithm than RPMD since it is not formally limited to correlation functions involving linear operators.

  4. Possible Quantum Absorber Effects in Cortical Synchronization

    NASA Astrophysics Data System (ADS)

    Kämpf, Uwe

    The Wheeler-Feynman transactional "absorber" approach was proposed originally to account for anomalous resonance coupling between spatio-temporally distant measurement partners in entangled quantum states of so-called Einstein-Podolsky-Rosen paradoxes, e.g. of spatio-temporal non-locality, quantum teleportation, etc. Applied to quantum brain dynamics, however, this view provides an anticipative resonance coupling model for aspects of cortical synchronization and recurrent visual action control. It is proposed to consider the registered activation patterns of neuronal loops in so-called synfire chains not as a result of retarded brain communication processes, but rather as surface effects of a system of standing waves generated in the depth of visual processing. According to this view, they arise from a counterbalance between the actual input's delayed bottom-up data streams and top-down recurrent information-processing of advanced anticipative signals in a Wheeler-Feynman-type absorber mode. In the framework of a "time-loop" model, findings about mirror neurons in the brain cortex are suggested to be at least partially associated with temporal rather than spatial mirror functions of visual processing, similar to phase conjugate adaptive resonance-coupling in nonlinear optics.

  5. Application of a new ensemble conserving quantum dynamics simulation algorithm to liquid para-hydrogen and ortho-deuterium.

    PubMed

    Smith, Kyle K G; Poulsen, Jens Aage; Nyman, Gunnar; Cunsolo, Alessandro; Rossky, Peter J

    2015-06-28

    We apply the Feynman-Kleinert Quasi-Classical Wigner (FK-QCW) method developed in our previous work [Smith et al., J. Chem. Phys. 142, 244112 (2015)] for the determination of the dynamic structure factor of liquid para-hydrogen and ortho-deuterium at state points of (T = 20.0 K, n = 21.24 nm(-3)) and (T = 23.0 K, n = 24.61 nm(-3)), respectively. When applied to this challenging system, it is shown that this new FK-QCW method consistently reproduces the experimental dynamic structure factor reported by Smith et al. [J. Chem. Phys. 140, 034501 (2014)] for all momentum transfers considered. This shows that FK-QCW provides a substantial improvement over the Feynman-Kleinert linearized path-integral method, in which purely classical dynamics are used. Furthermore, for small momentum transfers, it is shown that FK-QCW provides nearly the same results as ring-polymer molecular dynamics (RPMD), thus suggesting that FK-QCW provides a potentially more appealing algorithm than RPMD since it is not formally limited to correlation functions involving linear operators.

  6. Neocortical malformation as consequence of nonadaptive regulation of neuronogenetic sequence

    NASA Technical Reports Server (NTRS)

    Caviness, V. S. Jr; Takahashi, T.; Nowakowski, R. S.

    2000-01-01

    Variations in the structure of the neocortex induced by single gene mutations may be extreme or subtle. They differ from variations in neocortical structure encountered across and within species in that these "normal" structural variations are adaptive (both structurally and behaviorally), whereas those associated with disorders of development are not. Here we propose that they also differ in principle in that they represent disruptions of molecular mechanisms that are not normally regulatory to variations in the histogenetic sequence. We propose an algorithm for the operation of the neuronogenetic sequence in relation to the overall neocortical histogenetic sequence and highlight the restriction point of the G1 phase of the cell cycle as the master regulatory control point for normal coordinate structural variation across species and importantly within species. From considerations based on the anatomic evidence from neocortical malformation in humans, we illustrate in principle how this overall sequence appears to be disrupted by molecular biological linkages operating principally outside the control mechanisms responsible for the normal structural variation of the neocortex. MRDD Research Reviews 6:22-33, 2000. Copyright 2000 Wiley-Liss, Inc.

  7. Variational Approach to Enhanced Sampling and Free Energy Calculations

    NASA Astrophysics Data System (ADS)

    Valsson, Omar; Parrinello, Michele

    2014-08-01

    The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, beside being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
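
    A toy numerical illustration of the variational idea (a sketch under stated assumptions, not the authors' implementation): on a one-dimensional double-well free energy F(s), the functional Ω[V] = (1/β) log ∫ e^{-β(F+V)} ds + ∫ p(s) V(s) ds, with p(s) a uniform target distribution, is minimized over a small polynomial bias basis; at the minimum, V(s) ≈ −F(s) up to a constant. The F(s), basis, and parameter values below are all hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

beta = 1.0
s = np.linspace(-2.0, 2.0, 801)
ds = s[1] - s[0]
F = (s**2 - 1.0)**2                              # toy double-well free energy
p_target = np.full_like(s, 1.0 / (s[-1] - s[0]))  # uniform target distribution

basis = np.stack([s**2, s**4])                   # hypothetical two-function bias basis

def omega(a):
    """Discretized variational functional Omega[V] for V(s) = a1*s^2 + a2*s^4."""
    V = a @ basis
    log_part = logsumexp(-beta * (F + V)) + np.log(ds)  # log of the biased partition sum
    return log_part / beta + np.sum(p_target * V) * ds

res = minimize(omega, x0=np.array([1.0, -0.5]), method="Powell")
a_opt = res.x
# At the minimum V(s) = -F(s) + const, i.e. a_opt ≈ (2, -1) for this F
```

    Because Ω is convex in V, the minimization is well behaved, which is part of what makes the variational formulation practical as a sampling method.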

  8. Perspective: Maximum caliber is a general variational principle for dynamical systems

    NASA Astrophysics Data System (ADS)

    Dixit, Purushottam D.; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A.

    2018-01-01

    We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics—such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production—are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.
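
    The Max Cal recipe (maximize a path entropy subject to dynamical constraints) can be made concrete with a toy example: all two-state trajectories of fixed length, with a constrained mean number of transitions. The maximizing distribution is an exponential tilt of the uniform path measure, w ∝ exp(-λ·flips), with the Lagrange multiplier λ fixed by the constraint. The trajectory length and the target value below are arbitrary choices of this sketch.

```python
import itertools
import numpy as np

# All two-state trajectories of length N (the path ensemble).
N = 8
paths = np.array(list(itertools.product([0, 1], repeat=N)))
flips = (paths[:, 1:] != paths[:, :-1]).sum(axis=1)   # dynamical observable per path

target = 2.0   # constrained mean number of transitions (a choice of this sketch)

def mean_flips(lam):
    """Mean of the observable under the tilted (caliber-maximizing) path weights."""
    w = np.exp(-lam*flips)
    w /= w.sum()
    return (w*flips).sum()

# Solve for the Lagrange multiplier by bisection
# (mean_flips is monotonically decreasing in lam).
lo, hi = -20.0, 20.0
for _ in range(200):
    mid = 0.5*(lo + hi)
    if mean_flips(mid) > target:
        lo = mid
    else:
        hi = mid
lam = 0.5*(lo + hi)
```

Because exp(-λ·flips) factorizes over successive time steps, the resulting path measure has the structure of a two-state Markov chain, illustrating how Markov-like dynamics can emerge from the caliber principle rather than being assumed.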

  9. Perspective: Maximum caliber is a general variational principle for dynamical systems.

    PubMed

    Dixit, Purushottam D; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A

    2018-01-07

    We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics, such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production, are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.

  10. "They Have to Adapt to Learn": Surgeons' Perspectives on the Role of Procedural Variation in Surgical Education.

    PubMed

    Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei

    2016-01-01

    Clinical research increasingly acknowledges the existence of significant procedural variation in surgical practice. This study explored surgeons' perspectives regarding the influence of intersurgeon procedural variation on the teaching and learning of surgical residents. This qualitative study used a grounded theory-based analysis of observational and interview data. Observational data were collected in 3 tertiary care teaching hospitals in Ontario, Canada. Semistructured interviews explored potential procedural variations arising during the observations and prompts from an iteratively refined guide. Ongoing data analysis refined the theoretical framework and informed data collection strategies, as prescribed by the iterative nature of grounded theory research. Our sample included 99 hours of observation across 45 cases with 14 surgeons. Semistructured, audio-recorded interviews (n = 14) occurred immediately following observational periods. Surgeons endorsed the use of intersurgeon procedural variations to teach residents about adapting to the complexity of surgical practice and the norms of surgical culture. Surgeons suggested that residents' efforts to identify thresholds of principle and preference are crucial to professional development. Principles that emerged from the study included the following: (1) knowing what comes next, (2) choosing the right plane, (3) handling tissue appropriately, (4) recognizing the abnormal, and (5) making safe progress. Surgeons suggested that learning to follow these principles while maintaining key aspects of surgical culture, like autonomy and individuality, is an important social process in surgical education. Acknowledging intersurgeon variation has important implications for curriculum development and workplace-based assessment in surgical education. Adapting to intersurgeon procedural variations may foster versatility in surgical residents. 
However, the existence of procedural variations and their active use in surgeons' teaching raises questions about the lack of attention to this form of complexity in current workplace-based assessment strategies. Failure to recognize the role of such variations may threaten the implementation of competency-based medical education in surgery. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  11. “They Have to Adapt to Learn”: Surgeons’ Perspectives on the Role of Procedural Variation in Surgical Education

    PubMed Central

    Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei

    2017-01-01

    OBJECTIVE Clinical research increasingly acknowledges the existence of significant procedural variation in surgical practice. This study explored surgeons’ perspectives regarding the influence of intersurgeon procedural variation on the teaching and learning of surgical residents. DESIGN AND SETTING This qualitative study used a grounded theory-based analysis of observational and interview data. Observational data were collected in 3 tertiary care teaching hospitals in Ontario, Canada. Semistructured interviews explored potential procedural variations arising during the observations and prompts from an iteratively refined guide. Ongoing data analysis refined the theoretical framework and informed data collection strategies, as prescribed by the iterative nature of grounded theory research. PARTICIPANTS Our sample included 99 hours of observation across 45 cases with 14 surgeons. Semistructured, audio-recorded interviews (n = 14) occurred immediately following observational periods. RESULTS Surgeons endorsed the use of intersurgeon procedural variations to teach residents about adapting to the complexity of surgical practice and the norms of surgical culture. Surgeons suggested that residents’ efforts to identify thresholds of principle and preference are crucial to professional development. Principles that emerged from the study included the following: (1) knowing what comes next, (2) choosing the right plane, (3) handling tissue appropriately, (4) recognizing the abnormal, and (5) making safe progress. Surgeons suggested that learning to follow these principles while maintaining key aspects of surgical culture, like autonomy and individuality, is an important social process in surgical education. CONCLUSIONS Acknowledging intersurgeon variation has important implications for curriculum development and workplace-based assessment in surgical education. Adapting to intersurgeon procedural variations may foster versatility in surgical residents. 
However, the existence of procedural variations and their active use in surgeons’ teaching raises questions about the lack of attention to this form of complexity in current workplace-based assessment strategies. Failure to recognize the role of such variations may threaten the implementation of competency-based medical education in surgery. PMID:26705062

  12. Using Uncertainty Principle to Find the Ground-State Energy of the Helium and a Helium-like Hookean Atom

    ERIC Educational Resources Information Center

    Harbola, Varun

    2011-01-01

    In this paper, we accurately estimate the ground-state energy and the atomic radius of the helium atom and a helium-like Hookean atom by employing the uncertainty principle in conjunction with the variational approach. We show that with the use of the uncertainty principle, electrons are found to be spread over a radial region, giving an electron…

  13. Reflectance study aimed at the lithological mapping of igneous rocks, the effects of different types of metamorphism, and structural analysis of Precambrian materials, based on laboratory spectral data and Thematic Mapper images (Central Hesperian Massif, Cáceres and Badajoz provinces)

    NASA Astrophysics Data System (ADS)

    Plaza Garcia, Maria Asuncion

    The rampant success of quantum theory is the result of applications of the 'new' quantum mechanics of Schrodinger and Heisenberg (1926-7), the Feynman-Schwinger-Tomonaga Quantum Electrodynamics (1946-51), the electro-weak theory of Salam, Weinberg, and Glashow (1967-9), and Quantum Chromodynamics (1973-); in fact, this success of 'the' quantum theory has depended on a continuous stream of brilliant and quite disparate mathematical formulations. In this carefully concealed ferment there lie plenty of unresolved difficulties, simply because in churning out fabulously accurate calculational tools there has been no sensible explanation of all that is going on. It is even argued that such an understanding has nothing to do with physics. A long-standing and famous illustration of this is the paradoxical thought-experiment of Einstein, Podolsky and Rosen (1935). Fundamental to all quantum theories, and also their paradoxes, is the location of sub-microscopic objects; or, rather, the fact that the specification of such a location is fraught with mathematical inconsistency. This project encompasses a detailed, critical survey of the tangled history of position within quantum theories. The first step is to show that, contrary to appearances, canonical quantum mechanics has only a vague notion of locality. After analysing a number of previous attempts at a 'relativistic quantum mechanics', two lines of thought are considered in detail. The first is the work of Wan and students, which is shown to be no real improvement on the usual 'nonrelativistic' theory. The second is based on an idea of Dirac's - using backwards-in-time light-cones as the hypersurface in space-time. There remain considerable difficulties in the way of producing a consistent scheme here. To keep things nicely stirred up, the author then proposes his own approach - an adaptation of Feynman's QED propagators. 
This new approach is distinguished from Feynman's since the propagator or Green's function is not obtained by Feynman's rule. The type of equation solved is also different: instead of an initial-value problem, a solution that obeys a time-symmetric causality criterion is found for an inhomogeneous partial differential equation with homogeneous boundary conditions. To make the consideration of locality more precise, some results of Fourier transform theory are presented in a form that is directly applicable. Somewhat away from the main thrust of the thesis, there is also an attempt to explain the manner in which quantum effects disappear as the number of particles increases in such things as experimental realisations of the EPR and de Broglie thought experiments.

  14. BOOK REVIEW: Path Integrals in Field Theory: An Introduction

    NASA Astrophysics Data System (ADS)

    Ryder, Lewis

    2004-06-01

    In the 1960s Feynman was known to particle physicists as one of the people who solved the major problems of quantum electrodynamics, his contribution famously introducing what are now called Feynman diagrams. To other physicists he gained a reputation as the author of the Feynman Lectures on Physics; in addition some people were aware of his work on the path integral formulation of quantum theory, and a very few knew about his work on gravitation and Yang-Mills theories, which made use of path integral methods. Forty years later the scene is rather different. Many of the problems of high energy physics are solved; and the standard model incorporates Feynman's path integral method as a way of proving the renormalisability of the gauge (Yang-Mills) theories involved. Gravitation is proving a much harder nut to crack, but here also questions of renormalisability are couched in path-integral language. What is more, theoretical studies of condensed matter physics now also appeal to this technique for quantisation, so the path integral method is becoming part of the standard apparatus of theoretical physics. Chapters on it appear in a number of recent books, and a few books have appeared devoted to this topic alone; the book under review is a very recent one. Path integral techniques have the advantage of enormous conceptual appeal and the great disadvantage of mathematical complexity, this being partly the result of messy integrals but more fundamentally due to the notions of functional differentiation and integration which are involved in the method. All in all this subject is not such an easy ride. Mosel's book, described as an introduction, is aimed at graduate students and research workers in particle physics. It assumes a background knowledge of quantum mechanics, both non-relativistic and relativistic. 
After three chapters on the path integral formulation of non-relativistic quantum mechanics there are eight chapters on scalar and spinor field theory, followed by three on gauge field theories: quantum electrodynamics and Yang-Mills theories, Faddeev-Popov ghosts and so on. There is no treatment of the quantisation of gravity. Thus in about 200 pages the reader has the chance to learn in some detail about a most important area of modern physics. The subject is tough but the style is clear and pedagogic, results for the most part being derived explicitly. The choice of topics included is mainstream and sensible and one has a clear sense that the author knows where he is going and is a reliable guide. Path Integrals in Field Theory is clearly the work of a man with considerable teaching experience and is recommended as a readable and helpful account of a rather non-trivial subject.

  15. Dual and mixed nonsymmetric stress-based variational formulations for coupled thermoelastodynamics with second sound effect

    NASA Astrophysics Data System (ADS)

    Tóth, Balázs

    2018-03-01

    Some new dual and mixed variational formulations based on a priori nonsymmetric stresses are developed for linearly coupled irreversible thermoelastodynamic problems with second sound effect according to the Lord-Shulman theory. Having introduced the entropy flux vector instead of the entropy field, and defining the dissipation and relaxation potentials as functions of the entropy flux, a seven-field dual and mixed variational formulation is derived from the complementary Biot-Hamilton-type variational principle using the Lagrange multiplier method. The momentum, displacement, and infinitesimal rotation vectors, the a priori nonsymmetric stress tensor, the temperature change, and the entropy field and its flux vector are the independent field variables of this formulation. In order to handle appropriately the six different groups of temporal prescriptions in relaxed and/or strong form, two variational integrals are incorporated into the seven-field functional. Eliminating the entropy from this formulation through the strong fulfillment of the constitutive relation for the temperature change, with the use of the Legendre transformation between the enthalpy and the Gibbs potential, yields a six-field dual and mixed action functional. As a further development, eliminating the momentum and velocity vectors from the six-field principle through the a priori satisfaction of the kinematic equation and the constitutive relation for the momentum vector leads to a five-field variational formulation. These principles are suitable for transient analyses of structures exposed to short-duration thermal shocks or large heat fluxes.

  16. Applications of He's semi-inverse method, ITEM and GGM to the Davey-Stewartson equation

    NASA Astrophysics Data System (ADS)

    Zinati, Reza Farshbaf; Manafian, Jalil

    2017-04-01

    We investigate the Davey-Stewartson (DS) equation and find travelling wave solutions. In this paper, we demonstrate the effectiveness of three analytical methods, namely He's semi-inverse variational principle method (SIVPM), the improved tan(φ/2)-expansion method (ITEM), and the generalized G'/G-expansion method (GGM), for seeking exact solutions of the DS equation. These methods are direct, concise, and simple to implement compared to other existing methods. Exact solutions of four types have been obtained. The results demonstrate that the aforementioned methods are more efficient than the Ansatz method applied by Mirzazadeh (2015). Abundant exact travelling wave solutions, including soliton, kink, periodic, and rational solutions, have been found by the improved tan(φ/2)-expansion and generalized G'/G-expansion methods. By He's semi-inverse variational principle we have obtained dark and bright soliton wave solutions, and the variational formulation itself offers physical insight into these solutions. Such solutions may play an important role in engineering and physics. Moreover, using Matlab, some graphical simulations were done to illustrate the behavior of these solutions.

  17. Evidence of seasonal variation in longitudinal growth of height in a sample of boys from Stuttgart Carlsschule, 1771-1793, using combined principal component analysis and maximum likelihood principle.

    PubMed

    Lehmann, A; Scheffler, Ch; Hermanussen, M

    2010-02-01

    Recent progress in modelling individual growth has been achieved by combining principal component analysis with the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large: the shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with a mean difference of 4 mm (SD 7 mm). Seasonal height variation was found, with low growth rates in spring and high growth rates in summer and autumn. The present study demonstrates that combining principal component analysis and the maximum likelihood principle enables growth modelling also in historic height data. Copyright (c) 2009 Elsevier GmbH. All rights reserved.

  18. Brans-Dicke Galileon and the variational principle

    NASA Astrophysics Data System (ADS)

    Quiros, Israel; García-Salcedo, Ricardo; Gonzalez, Tame; Horta-Rangel, F. Antonio; Saavedra, Joel

    2016-09-01

    This paper is aimed at a (mostly) pedagogical exposition of the derivation of the motion equations of certain modifications of general relativity. Here we derive in all detail the motion equations in the Brans-Dicke theory with cubic self-interaction. This is a modification of the Brans-Dicke theory by the addition of a term in the Lagrangian which is non-linear in the derivatives of the scalar field: it contains second-order derivatives. This is the basis of the so-called Brans-Dicke Galileon. We pay special attention to the variational principle and to the algebraic details of the derivation. It is shown how higher order derivatives of the fields appearing in the intermediate computations cancel out leading to second order motion equations. The reader will find useful tips for the derivation of the field equations of modifications of general relativity such as the scalar-tensor theories and f(R) theories, by means of the (stationary action) variational principle. The content of this paper is particularly recommended to those graduate and postgraduate students who are interested in the study of the mentioned modifications of general relativity.

  19. Construction of Consumption Possibility Frontiers in Principles Textbooks.

    ERIC Educational Resources Information Center

    Olson, Terry L.

    1997-01-01

    Explores the consequences of textbook authors' failure to recognize that producers can acquire the good in which they lack a comparative advantage through either trade or internal production. Examines variations in the construction and graphical depiction of consumption possibility frontiers in principles of economics textbooks. (MJP)

  20. Variation Theory of Learning and Developmental Pedagogy: Two Context-Related Models of Learning Grounded in Phenomenography

    ERIC Educational Resources Information Center

    Pramling Samuelsson, Ingrid; Pramling, Niklas

    2016-01-01

    Honoring the variation theory principle that meaning springs from differences, in this article we will show how two different strands of theorizing emerging from the mutual base of phenomenography have developed into developmental pedagogy and variation theory, respectively. Through looking at texts from these two strands, we will illustrate how…

  1. Five for Sydney--A Journey through Science

    ERIC Educational Resources Information Center

    Lam, Stephen

    2014-01-01

    What is science? Depending on who is asked, it may mean the pursuit of knowledge, explanations of the everyday world, a difficult subject at school, or a field populated by larger than life characters such as Einstein, Feynman, or Hawking. For the author, science has been and remains an unexpected journey, an adventure and an ever-changing career.…

  2. Planck's Constant as a Natural Unit of Measurement

    ERIC Educational Resources Information Center

    Quincey, Paul

    2013-01-01

    The proposed revision of SI units would embed Planck's constant into the definition of the kilogram, as a fixed constant of nature. Traditionally, Planck's constant is not readily interpreted as the size of something physical, and it is generally only encountered by students in the mathematics of quantum physics. Richard Feynman's…

  3. Feynman Path Integral Approach to Electron Diffraction for One and Two Slits: Analytical Results

    ERIC Educational Resources Information Center

    Beau, Mathieu

    2012-01-01

    In this paper we present an analytic solution of the famous problem of diffraction and interference of electrons through one and two slits (for simplicity, only the one-dimensional case is considered). In addition to exact formulae, various approximations of the electron distribution are shown which facilitate the interpretation of the results.…
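
    The flavor of such path-integral results can be reproduced numerically. In the idealized point-slit limit (an assumption of this sketch, not the paper's finite-slit treatment; m = ħ = 1, and the slit separation d and flight time t are arbitrary choices), summing the free-particle propagator K ∝ exp(im(x−y)²/2ħt) over the two slits gives the familiar fringe pattern with spacing 2πħt/(md):

```python
import numpy as np

m = hbar = 1.0
t = 1.0          # flight time from slits to screen (assumed)
d = 5.0          # slit separation (assumed)
x = np.linspace(-5.0, 5.0, 2001)   # screen coordinate

def K(x, y):
    """Free-particle propagator phase from a point slit at y to screen point x."""
    return np.exp(1j*m*(x - y)**2/(2*hbar*t))

# Sum over the two paths (one through each slit at y = +/- d/2).
psi = K(x, d/2) + K(x, -d/2)
I = np.abs(psi)**2             # intensity: 2 + 2*cos(m*d*x/(hbar*t))

fringe = 2*np.pi*hbar*t/(m*d)  # predicted fringe spacing
```

The quadratic propagator phases differ by exactly m·d·x/(ħt), so the two-path sum reduces analytically to the cosine interference pattern, which is the one-dimensional essence of the exact results discussed in the record.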

  4. Perturbative Yang-Mills theory without Faddeev-Popov ghost fields

    NASA Astrophysics Data System (ADS)

    Huffel, Helmuth; Markovic, Danijel

    2018-05-01

    A modified Faddeev-Popov path integral density for the quantization of Yang-Mills theory in the Feynman gauge is discussed, where contributions of the Faddeev-Popov ghost fields are replaced by multi-point gauge field interactions. An explicit calculation to O(g^2) shows the equivalence of the usual Faddeev-Popov scheme and its modified version.

  5. Exploring the Standard Model of Particles

    ERIC Educational Resources Information Center

    Johansson, K. E.; Watkins, P. M.

    2013-01-01

    With the recent discovery of a new particle at the CERN Large Hadron Collider (LHC), the Higgs boson could be about to be discovered. This paper provides a brief summary of the standard model of particle physics and the importance of the Higgs boson and field in that model for non-specialists. The role of Feynman diagrams in making predictions for…

  6. Developing a Framework for Analyzing Definitions: A Study of "The Feynman Lectures"

    ERIC Educational Resources Information Center

    Wong, Chee Leong; Chu, Hye-Eun; Yap, Kueh Chin

    2014-01-01

    One important purpose of a definition is to explain the meaning of a word. Any problems associated with a definition may impede students' learning. However, research studies on the definitional problems from the perspective of physics education are limited. Physics educators may not be aware of the nature and extent of definitional problems.…

  7. Critique and Fiction: Doing Science Right in Rural Education Research

    ERIC Educational Resources Information Center

    Howley, Craig B.

    2006-01-01

    This essay explains the relevance of critique in rural education to novels about rural places. The most important quoted passage in the essay is from the noted physicist Richard Feynman: "Science is the belief in the ignorance of experts." Novelist-physicist C. P. Snow, historian Henry Adams, and poet and student-of-mathematics Kelly Cherry also…

  8. On the Support of Minimizers of Causal Variational Principles

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Schiefeneder, Daniela

    2013-11-01

    A class of causal variational principles on a compact manifold is introduced and analyzed both numerically and analytically. It is proved under general assumptions that the support of a minimizing measure is either completely timelike, or it is singular in the sense that its interior is empty. In the examples of the circle, the sphere and certain flag manifolds, the general results are supplemented by a more detailed and explicit analysis of the minimizers. On the sphere, we get a connection to packing problems and the Tammes distribution. Moreover, the minimal action is estimated from above and below.

  9. Understanding molecular motor walking along a microtubule: a thermosensitive asymmetric Brownian motor driven by bubble formation.

    PubMed

    Arai, Noriyoshi; Yasuoka, Kenji; Koishi, Takahiro; Ebisuzaki, Toshikazu; Zeng, Xiao Cheng

    2013-06-12

    The "asymmetric Brownian ratchet model", a variation of Feynman's ratchet and pawl system, is invoked to understand the walking behavior of kinesin along a microtubule. The model system, consisting of a motor and a rail, can exhibit two distinct binding states, namely, the random Brownian state and the asymmetric potential state. When the system is switched back and forth between the two states, the motor can be driven to "walk" in one direction. Previously, we suggested a fundamental mechanism, that is, bubble formation in a nanosized channel surrounded by hydrophobic atoms, to explain the transition between the two states. In this study, we propose a more realistic and viable switching method in our computer simulation of molecular motor walking. Specifically, we propose a thermosensitive polymer model with which the transition between the two states can be controlled by temperature pulses. Based on this new motor system, the stepping size and stepping time of the motor can be recorded. Remarkably, the "walking" behavior observed in the newly proposed model resembles that of a real motor protein. The bubble-formation-based motor not only can be highly efficient but also offers new insights into the physical mechanism of real biomolecular motors.
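
    The essence of the two-state switching can be seen in a minimal flashing-ratchet simulation; this is a generic textbook-style illustration, not the authors' coarse-grained kinesin model, and every parameter value below is an arbitrary choice of the sketch. An overdamped particle alternates between an asymmetric sawtooth potential (the "asymmetric potential state") and free diffusion (the "random Brownian state"); neither state alone produces transport, but the switching drives a net drift whose direction is set by the sawtooth asymmetry.

```python
import numpy as np

rng = np.random.default_rng(0)

L, b, V0 = 1.0, 0.1, 5.0        # period, peak position, barrier height (kT units)
D, dt = 1.0, 2e-4               # diffusion constant, time step
t_on, t_off = 0.5, 0.05         # durations of the two states per switching cycle
n_cycles, n_part = 30, 1000

def force(x):
    """Sawtooth potential with minima at n*L: steep rise on [0, b), gentle fall on [b, L)."""
    u = np.mod(x, L)
    return np.where(u < b, -V0/b, V0/(L - b))

x = np.zeros(n_part)
for _ in range(n_cycles):
    # asymmetric-potential state: particles settle into the sawtooth minima
    for _ in range(int(t_on/dt)):
        x += force(x)*dt + np.sqrt(2*D*dt)*rng.standard_normal(n_part)
    # random Brownian state: free diffusion spreads particles past the nearby peak
    for _ in range(int(t_off/dt)):
        x += np.sqrt(2*D*dt)*rng.standard_normal(n_part)

drift = x.mean()   # net displacement per particle; positive for this asymmetry
```

During the free-diffusion phase the spread easily carries particles past the nearby steep-side peak but rarely past the distant one, so re-trapping is biased toward the next minimum on one side, rectifying the Brownian motion into directed "walking".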

  10. Greek classicism in living structure? Some deductive pathways in animal morphology.

    PubMed

    Zweers, G A

    1985-01-01

    Classical temples in ancient Greece show two deterministic illusionistic principles of architecture, which govern their functional design: geometric proportionalism and a set of illusion-strengthening rules within the proportionalism's "stochastic margin". Animal morphology, in its mechanistic-deductive revival, applies just one architectural principle, which is not always satisfactory. Whether a "Greek Classical" situation occurs in the architecture of living structure is to be investigated by extreme testing with deductive methods. Three deductive methods for the explanation of living structure in animal morphology are proposed: parts deduction, compromise deduction, and transformation deduction. The methods are based upon the systems concept for an organism, the flow chart for a functionalistic picture, and the network chart for a structuralistic picture, while the "optimal design" serves as the architectural principle for living structure. These methods show clearly the high explanatory power of deductive methods in morphology, but they also make one open end most explicit: neutral issues do exist. Full explanation of living structure requires three entries: functional design within architectural and transformational constraints. The transformational constraint necessarily brings in a stochastic component: an at-random variation, a sort of "free management space". This variation must be a departure from the deterministic principle of the optimal design, since any transformation requires space for plasticity in structure and action, and flexibility in role fulfilling. Nevertheless, the question finally arises whether a situation similar to that of Greek Classical temples exists for animal structure. This would mean that the at-random variation found when the optimal design is used to explain structure comprises, apart from a stochastic part, real deviations constituting yet another deterministic part. 
This deterministic part could be a set of rules that governs actualization in the "free management space".

  11. First-principles investigations into the thermodynamics of cation disorder and its impact on electronic structure and magnetic properties of spinel Co(Cr1-x Mn x )2O4.

    PubMed

    Das, Debashish; Ghosh, Subhradip

    2017-02-08

    Cation disorder over different crystallographic sites in spinel oxides is known to affect their properties. Recent experiments on Mn-doped multiferroic [Formula: see text] indicate that a possible distribution of Mn atoms among tetrahedrally and octahedrally coordinated sites in the spinel lattice gives rise to different variations in the structural parameters and saturation magnetisations in different concentration regimes of Mn substituting for Cr. A composition-dependent magnetic compensation behaviour points to role conversions among the magnetic constituents. In this work, we have investigated the thermodynamics of cation disorder in the [Formula: see text] system and its consequences for the structural, electronic and magnetic properties, using results from first-principles electronic structure calculations. We have computed the variation of cation disorder as a function of Mn concentration and temperature, and found that at the annealing temperature of the experiment many of the systems exhibit cation disorder. Our results support the interpretations of the experimental results regarding the qualitative variations in the sub-lattice occupancies and the associated magnetisation behaviour with composition. We have analysed the variations in the structural, magnetic and electronic properties of this system with composition and degree of cation disorder, based on their electronic structures and ideas from crystal field theory. Our study provides a complete microscopic picture of the effects responsible for the composition-dependent behavioural differences in the properties of this system. This work lays down a general framework, based upon results from first-principles calculations, for understanding and analysing substitutional magnetic spinel oxides [Formula: see text] in the presence of cation disorder.

  12. A Systematic Approach for Computing Zero-Point Energy, Quantum Partition Function, and Tunneling Effect Based on Kleinert's Variational Perturbation Theory.

    PubMed

    Wong, Kin-Yiu; Gao, Jiali

    2008-09-09

    In this paper, we describe an automated integration-free path-integral (AIF-PI) method, based on Kleinert's variational perturbation (KP) theory, to treat internuclear quantum-statistical effects in molecular systems. We have developed an analytical method to obtain the centroid potential as a function of the variational parameter in the KP theory, which avoids numerical difficulties in path-integral Monte Carlo or molecular dynamics simulations, especially in the zero-temperature limit. Consequently, the variational calculations using the KP theory can be efficiently carried out beyond the first order, i.e., the Giachetti-Tognetti-Feynman-Kleinert variational approach, for realistic chemical applications. By making use of the approximation of independent instantaneous normal modes (INM), the AIF-PI method can readily be applied to many-body systems. Previously, we have shown that in the INM approximation, the AIF-PI method is accurate for computing the quantum partition function of a water molecule (3 degrees of freedom) and the quantum correction factor for the collinear H(3) reaction rate (2 degrees of freedom). In this work, the accuracy and properties of the KP theory are further investigated using the first three orders of perturbation on an asymmetric double-well potential, the bond vibrations of H(2), HF, and HCl represented by the Morse potential, and a proton-transfer barrier modeled by the Eckart potential. The zero-point energy, quantum partition function, and tunneling factor for these systems have been determined and are found to be in excellent agreement with the exact quantum results. Using our new analytical results at the zero-temperature limit, we show that the minimum value of the computed centroid potential in the KP theory is in excellent agreement with the ground-state energy (zero-point energy) and that the position of the centroid potential minimum is the expectation value of the particle position in wave mechanics. The fast-convergence property of the KP theory is further examined in comparison with results from the traditional Rayleigh-Ritz variational approach and Rayleigh-Schrödinger perturbation theory in wave mechanics. The present method can be used for thermodynamic and quantum dynamic calculations, including the systematic determination of zero-point energies and the study of kinetic isotope effects for chemical reactions in solution and in enzymes.
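
    "Exact quantum results" of the kind used for comparison above can be generated for a Morse oscillator by direct grid diagonalization. The sketch below (hbar = m = 1, parameters chosen arbitrarily; this is a benchmark calculation, not the AIF-PI method itself) checks the numerical zero-point energy against the analytic Morse formula:

```python
import numpy as np

# Zero-point energy of a Morse oscillator from grid diagonalization
# (hbar = m = 1).  Parameters are arbitrary illustrative values.
D, a = 10.0, 1.0                      # well depth and range parameter
r = np.linspace(-1.5, 6.0, 800)      # grid around the minimum r_e = 0
dr = r[1] - r[0]
V = D * (1.0 - np.exp(-a * r))**2

# second-order finite-difference Hamiltonian, -1/2 d^2/dr^2 + V(r)
H = np.diag(V + 1.0 / dr**2)
off = np.full(len(r) - 1, -0.5 / dr**2)
H += np.diag(off, 1) + np.diag(off, -1)

E0 = np.linalg.eigvalsh(H)[0]

# analytic Morse zero-point energy: w/2 - w^2/(16 D), with w = a*sqrt(2 D)
w = a * np.sqrt(2.0 * D)
E0_exact = 0.5 * w - w**2 / (16.0 * D)
print(E0, E0_exact)
```

    The two values agree to roughly the finite-difference discretization error, which is the sense in which such grid results serve as the exact reference for the path-integral methods.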

  13. Spiers Memorial Lecture. Quantum chemistry: the first seventy years.

    PubMed

    McWeeny, Roy

    2007-01-01

    Present-day theoretical chemistry is rooted in Quantum Mechanics. The aim of the opening lecture is to trace the evolution of Quantum Chemistry from the Heitler-London paper of 1927 up to the end of the last century, emphasizing concepts rather than calculations. The importance of symmetry concepts became evident in the early years: one thinks of the necessary anti-symmetry of the wave function under electron permutations, the Pauli principle, the aufbau scheme, and the classification of spectroscopic states. But for chemists perhaps the key concept is embodied in the Hellmann-Feynman theorem, which provides a pictorial interpretation of chemical bonding in terms of classical electrostatic forces exerted on the nuclei by the electron distribution. Much of the lecture is concerned with various electron distribution functions--the electron density, the current density, the spin density, and other 'property densities'--and with their use in interpreting both molecular structure and molecular properties. Other topics touched upon include Response theory and propagators; Chemical groups in molecules and the group function approach; Atoms in molecules and Bader's theory; Electron correlation and the 'pair function'. Finally, some long-standing controversies, in particular the EPR paradox, are re-examined in the context of molecular dissociation. By admitting the concept of symmetry breaking, along with the use of the von Neumann-Dirac statistical ensemble, orthodox quantum mechanics can lead to a convincing picture of the dissociation mechanism.

  14. Anisotropy and temperature dependence of structural, thermodynamic, and elastic properties of crystalline cellulose Iβ: a first-principles investigation

    Treesearch

    ShunLi Shang; Louis G. Hector Jr.; Paul Saxe; Zi-Kui Liu; Robert J. Moon; Pablo D. Zavattieri

    2014-01-01

    Anisotropy and temperature dependence of structural, thermodynamic and elastic properties of crystalline cellulose Iβ were computed with first-principles density functional theory (DFT) and a semi-empirical correction for van der Waals interactions. Specifically, we report the computed temperature variation (up to 500...

  15. Beyond Universal Design for Learning: Guiding Principles to Reduce Barriers to Digital & Media Literacy Competence

    ERIC Educational Resources Information Center

    Dalton, Elizabeth M.

    2017-01-01

    Universal Design for Learning (UDL), a framework for designing instruction to address the wide range of learner variation in today's inclusive classrooms, can be applied effectively to broaden access, understanding, and engagement in digital and media literacy learning for ALL. UDL supports constructivist learning principles. UDL strategies and…

  16. Variation and Linguistic Theory.

    ERIC Educational Resources Information Center

    Bailey, Charles-James N.

    This volume presents principles and models for describing language variation, and introduces a time-based, dynamic framework for linguistic description. The book first summarizes some of the problems of grammatical description encountered from Saussure through the present and then outlines possibilities for new descriptions of language which take…

  17. Genomic Copy Number Variation in Disorders of Cognitive Development

    ERIC Educational Resources Information Center

    Morrow, Eric M.

    2010-01-01

    Objective: To highlight recent discoveries in the area of genomic copy number variation in neuropsychiatric disorders including intellectual disability, autism, and schizophrenia. To emphasize new principles emerging from this area, involving the genetic architecture of disease, pathophysiology, and diagnosis. Method: Review of studies published…

  18. A Blocked Linear Method for Optimizing Large Parameter Sets in Variational Monte Carlo

    DOE PAGES

    Zhao, Luning; Neuscamman, Eric

    2017-05-17

    We present a modification to variational Monte Carlo’s linear method optimization scheme that addresses a critical memory bottleneck while maintaining compatibility with both the traditional ground state variational principle and our recently-introduced variational principle for excited states. For wave function ansatzes with tens of thousands of variables, our modification reduces the required memory per parallel process from tens of gigabytes to hundreds of megabytes, making the methodology a much better fit for modern supercomputer architectures in which data communication and per-process memory consumption are primary concerns. We verify the efficacy of the new optimization scheme in small molecule tests involving both the Hilbert space Jastrow antisymmetric geminal power ansatz and real space multi-Slater Jastrow expansions. Satisfied with its performance, we have added the optimizer to the QMCPACK software package, with which we demonstrate on a hydrogen ring a prototype approach for making systematically convergent, non-perturbative predictions of Mott-insulators’ optical band gaps.
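
    The optimizer described above acts on the standard VMC energy estimate. As a minimal illustration of that underlying machinery (a one-parameter Gaussian trial function for the 1D harmonic oscillator; this is not the blocked linear method itself):

```python
import numpy as np

# Minimal variational Monte Carlo for the 1D harmonic oscillator
# (hbar = m = omega = 1) with trial function psi_a(x) = exp(-a x^2).
# This is only the plain energy estimate that optimizers such as the
# (blocked) linear method act upon, not the method of the paper.

def vmc_energy(alpha, n_steps=100_000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance for |psi|^2 = exp(-2 a x^2)
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        # local energy: E_L = alpha + x^2 * (1/2 - 2 alpha^2)
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return e_sum / n_steps

print(vmc_energy(0.5))  # exact ground state: 0.5, with zero variance
print(vmc_energy(0.4))  # any other alpha gives a higher (variational) energy
```

    At the exact parameter the local energy is constant, so the estimate has zero variance; the linear method generalizes the search over alpha to tens of thousands of parameters at once.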

  19. On electromagnetic forming processes in finitely strained solids: Theory and examples

    NASA Astrophysics Data System (ADS)

    Thomas, J. D.; Triantafyllidis, N.

    2009-08-01

    The process of electromagnetic forming (EMF) is a high-velocity manufacturing technique that uses electromagnetic (Lorentz) body forces to shape sheet metal parts. EMF holds several advantages over conventional forming techniques: speed, repeatability, one-sided tooling, and, most importantly, a considerable increase in ductility for several metals. Current modeling techniques for EMF processes are not based on coupled variational principles to simultaneously account for electromagnetic and mechanical effects. Typically, separate solutions to the electromagnetic (Maxwell) and motion (Newton) equations are combined in staggered or lock-step methods, sequentially solving the mechanical and electromagnetic problems. The present work addresses these issues by introducing a fully coupled Lagrangian (reference configuration) least-action variational principle, involving magnetic flux and electric potentials and the displacement field as independent variables. The corresponding Euler-Lagrange equations are Maxwell's and Newton's equations in the reference configuration, which are shown to coincide with their current configuration counterparts obtained independently by a direct approach. The general theory is subsequently simplified for EMF processes by considering the eddy current approximation. Next, an application is presented for axisymmetric EMF problems. It is shown that the proposed variational principle forms the basis of a variational integration numerical scheme that provides an efficient staggered solution algorithm. As an illustration, a number of such processes are simulated, inspired by recent experiments on freely expanding uncoated and polyurea-coated aluminum tubes.

  20. Babinet's principle for optical frequency metamaterials and nanoantennas

    NASA Astrophysics Data System (ADS)

    Zentgraf, T.; Meyrath, T. P.; Seidel, A.; Kaiser, S.; Giessen, H.; Rockstuhl, C.; Lederer, F.

    2007-07-01

    We consider Babinet’s principle for metamaterials at optical frequencies and include realistic conditions which deviate from the theoretical assumptions of the classic principle such as an infinitely thin and perfectly conducting metal layer. It is shown that Babinet’s principle associates not only transmission and reflection between a structure and its complement but also the field modal profiles of the electromagnetic resonances as well as effective material parameters—a critical concept for metamaterials. Also playing an important role in antenna design, Babinet’s principle is particularly interesting to consider in this case where the metasurfaces and their complements can be regarded as variations on a folded dipole antenna array and patch antenna array, respectively.

  1. Gravity, Time, and Lagrangians

    ERIC Educational Resources Information Center

    Huggins, Elisha

    2010-01-01

    Feynman mentioned to us that he understood a topic in physics if he could explain it to a college freshman, a high school student, or a dinner guest. Here we will discuss two topics that took us a while to get to that level. One is the relationship between gravity and time. The other is the minus sign that appears in the Lagrangian. (Why would one…

  2. Animating Energy: Stop-Motion Animation and Energy Tracking Representations

    ERIC Educational Resources Information Center

    Atkins, Leslie J.; Erstad, Craig; Gudeman, Paul; McGowan, Jacob; Mulhern, Kristin; Prader, Kaitlyn; Rodriguez, Gregoria; Showaker, Amy; Timmons, Adam

    2014-01-01

    Energy is a topic that is often treated as an accounting process-a number that students are asked to calculate, but that is not particularly meaningful in itself. When we try to ascribe meaning to this number ("an ability to do work," for example), we are met with caveats and hedges. As Feynman notes when lecturing on the conservation of…

  3. Group field theory with noncommutative metric variables.

    PubMed

    Baratin, Aristide; Oriti, Daniele

    2010-11-26

    We introduce a dual formulation of group field theories as a type of noncommutative field theories, making their simplicial geometry manifest. For Ooguri-type models, the Feynman amplitudes are simplicial path integrals for BF theories. We give a new definition of the Barrett-Crane model for gravity by imposing the simplicity constraints directly at the level of the group field theory action.

  4. Energy Blocks — A Physical Model for Teaching Energy Concepts

    NASA Astrophysics Data System (ADS)

    Hertting, Scott

    2016-01-01

    Most physics educators would agree that energy is a very useful, albeit abstract, topic. It is therefore important to use various methods to help the student internalize the concept of energy itself and its related ideas. These methods include using representations such as energy bar graphs, energy pie charts, or energy tracking diagrams. Activities and analogies like Energy Theater and Richard Feynman's blocks, as well as the popular money (or wealth) analogy, can also be very effective. The goal of this paper is to describe a physical model of Feynman's blocks that can be employed by instructors to help students learn the following energy-related concepts: 1. The factors affecting each individual mechanical energy storage mode (this refers to what has been traditionally called a form of energy, and while the Modeling Method of instruction is not the focus of this paper, much of the energy-related language used is specific to the Modeling Method). For example, how mass or height affects gravitational energy; 2. Energy conservation; and 3. The graphical relationships between the energy storage mode and a factor affecting it. For example, the graphical relationship between elastic energy and the change in length of a spring.
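
    The bookkeeping the blocks analogy teaches can be made concrete in a few lines. In this sketch (all numerical values invented for illustration), the kinetic, gravitational, and elastic storage modes of a mass on a vertical spring trade off while their sum stays constant:

```python
# Energy-mode bookkeeping in the spirit of Feynman's blocks: a mass on a
# vertical spring trades energy among kinetic, gravitational and elastic
# storage modes, while the total stays (numerically almost) constant under
# a velocity-Verlet step.  All numbers are made up for illustration.
m, g, k = 1.0, 9.8, 50.0      # mass (kg), gravity (m/s^2), spring constant (N/m)
y, v, dt = 0.5, 0.0, 1e-4     # displacement from natural length (m), speed, step

def accel(y):
    return -(k / m) * y - g    # spring force plus gravity, per unit mass

def energies(y, v):
    ke = 0.5 * m * v * v       # kinetic storage mode
    pe_g = m * g * y           # gravitational storage mode
    pe_s = 0.5 * k * y * y     # elastic storage mode
    return ke, pe_g, pe_s

e0 = sum(energies(y, v))
for _ in range(20_000):        # two seconds of motion
    a = accel(y)
    y += v * dt + 0.5 * a * dt * dt
    v += 0.5 * (a + accel(y)) * dt
ke, pe_g, pe_s = energies(y, v)
print(ke, pe_g, pe_s, "total:", ke + pe_g + pe_s, "initial:", e0)
```

    The individual modes change sign and size continuously, but the total is conserved to within the integrator's small bounded error, which is exactly the "same number of blocks" lesson.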

  5. Implications of improved Higgs mass calculations for supersymmetric models.

    PubMed

    Buchmueller, O; Dolan, M J; Ellis, J; Hahn, T; Heinemeyer, S; Hollik, W; Marrouche, J; Olive, K A; Rzehak, H; de Vries, K J; Weiglein, G

    We discuss the allowed parameter spaces of supersymmetric scenarios in light of improved Higgs mass predictions provided by FeynHiggs 2.10.0. The Higgs mass predictions combine Feynman-diagrammatic results with a resummation of leading and subleading logarithmic corrections from the stop/top sector, which yield a significant improvement in the region of large stop masses. Scans in the pMSSM parameter space show that, for given values of the soft supersymmetry-breaking parameters, the new logarithmic contributions beyond the two-loop order implemented in FeynHiggs tend to give larger values of the light CP-even Higgs mass, [Formula: see text], in the region of large stop masses than previous predictions that were based on a fixed-order Feynman-diagrammatic result, though the differences are generally consistent with the previous estimates of theoretical uncertainties. We re-analyse the parameter spaces of the CMSSM, NUHM1 and NUHM2, taking into account also the constraints from CMS and LHCb measurements of [Formula: see text]and ATLAS searches for [Formula: see text] events using 20/fb of LHC data at 8 TeV. Within the CMSSM, the Higgs mass constraint disfavours [Formula: see text], though not in the NUHM1 or NUHM2.

  6. A proposal of a renormalizable Nambu-Jona-Lasinio model

    NASA Astrophysics Data System (ADS)

    Cabo Montes de Oca, Alejandro

    2018-03-01

    A local and gauge invariant gauge field model including Nambu-Jona-Lasinio (NJL) and QCD Lagrangian terms in its action is introduced. Surprisingly, it becomes power-counting renormalizable. This occurs thanks to the presence of action terms which modify the quark propagators so that they fall off faster than the Dirac propagator at large momenta, in a Lee-Wick form, implying power-counting renormalizability. The appearance of finite quark masses already at tree level in this scheme follows from the fact that the new action terms explicitly break chiral invariance. In this first work we present the renormalized Feynman diagram expansion of the model and derive the formula for the degree of divergence of the diagrams. An explanation for the usual exclusion of the added Lagrangian terms is presented. In addition, the primitive divergent graphs are identified. We start their evaluation by calculating the simpler contribution to the gluon polarization operator. The divergent and finite parts both turn out to be transverse, as required by gauge invariance. The full evaluation of the various primitive divergences, which is required for completely defining the counterterm Feynman expansion, will be considered in future work, allowing a discussion of flavour symmetry breaking and unitarity.

  7. Quantum Metropolis sampling.

    PubMed

    Temme, K; Osborne, T J; Vollbrecht, K G; Poulin, D; Verstraete, F

    2011-03-03

    The original motivation to build a quantum computer came from Feynman, who imagined a machine capable of simulating generic quantum mechanical systems--a task that is believed to be intractable for classical computers. Such a machine could have far-reaching applications in the simulation of many-body quantum physics in condensed-matter, chemical and high-energy systems. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here we demonstrate how to implement a quantum version of the Metropolis algorithm. This algorithm permits sampling directly from the eigenstates of the Hamiltonian, and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm should be achievable with today's technology.
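
    For contrast with the quantum version described above, the classical Metropolis rule can be stated in a few lines. This sketch samples the Gibbs distribution of a two-level system and checks the occupation against the exact Boltzmann weight:

```python
import math, random

# The classical Metropolis rule that the quantum algorithm generalizes:
# sample the Gibbs distribution of a two-level system (energies 0 and 1,
# beta = 1) by proposing the other level and accepting with min(1, e^{-b dE}).
random.seed(1)
beta, energies = 1.0, (0.0, 1.0)
state, hits = 0, [0, 0]
for _ in range(100_000):
    proposal = 1 - state
    dE = energies[proposal] - energies[state]
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        state = proposal
    hits[state] += 1

p0 = hits[0] / sum(hits)
print(p0, 1.0 / (1.0 + math.exp(-beta)))  # empirical vs exact Gibbs weight
```

    The acceptance test needs only energy differences, never the partition function; the quantum algorithm's achievement is reproducing this behaviour for eigenstates of a Hamiltonian that cannot be enumerated classically.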

  8. Theories of Variable Mass Particles and Low Energy Nuclear Phenomena

    NASA Astrophysics Data System (ADS)

    Davidson, Mark

    2014-02-01

    Variable particle masses have sometimes been invoked to explain observed anomalies in low energy nuclear reactions (LENR). Such behavior has never been observed directly, and is not considered possible in theoretical nuclear physics. Nevertheless, there are covariant off-mass-shell theories of relativistic particle dynamics, based on works by Fock, Stueckelberg, Feynman, Greenberger, Horwitz, and others. We review some of these and we also consider virtual particles that arise in conventional Feynman diagrams in relativistic field theories. Effective Lagrangian models incorporating variable mass particle theories might be useful in describing anomalous nuclear reactions by combining mass shifts together with resonant tunneling and other effects. A detailed model for resonant fusion in a deuterium molecule with off-shell deuterons and electrons is presented as an example. Experimental means of observing such off-shell behavior directly, if it exists, is proposed and described. Brief explanations for elemental transmutation and formation of micro-craters are also given, and an alternative mechanism for the mass shift in the Widom-Larsen theory is presented. If variable mass theories were to find experimental support from LENR, then they would undoubtedly have important implications for the foundations of quantum mechanics, and practical applications may arise.

  9. Integrand Reduction Reloaded: Algebraic Geometry and Finite Fields

    NASA Astrophysics Data System (ADS)

    Sameshima, Ray D.; Ferroglia, Andrea; Ossola, Giovanni

    2017-01-01

    The evaluation of scattering amplitudes in quantum field theory allows us to compare the phenomenological prediction of particle theory with the measurement at collider experiments. The study of scattering amplitudes, in terms of their symmetries and analytic properties, provides a theoretical framework to develop techniques and efficient algorithms for the evaluation of physical cross sections and differential distributions. Tree-level calculations have been known for a long time. Loop amplitudes, which are needed to reduce the theoretical uncertainty, are more challenging since they involve a large number of Feynman diagrams, expressed as integrals of rational functions. At one-loop, the problem has been solved thanks to the combined effect of integrand reduction, such as the OPP method, and unitarity. However, plenty of work is still needed at higher orders, starting with the two-loop case. Recently, integrand reduction has been revisited using algebraic geometry. In this presentation, we review the salient features of integrand reduction for dimensionally regulated Feynman integrals, and describe an interesting technique for their reduction based on multivariate polynomial division. We also show a novel approach to improve its efficiency by introducing finite fields. Supported in part by the National Science Foundation under Grant PHY-1417354.
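
    The multivariate machinery above is involved, but the finite-field idea itself is simple to show. The univariate sketch below (an illustration only, not the OPP or integrand-reduction algorithm) performs polynomial long division with all arithmetic in GF(p), so no rational-coefficient growth can occur:

```python
# Minimal illustration of the finite-field idea (univariate only; the actual
# integrand-reduction machinery uses multivariate polynomial division):
# long division with coefficients in GF(p) keeps every intermediate small.

def polydiv_mod(num, den, p):
    """Divide num by den over GF(p), p prime; coefficients from highest degree."""
    num = [c % p for c in num]
    inv_lead = pow(den[0], p - 2, p)          # Fermat inverse of leading coeff
    quot = []
    for i in range(len(num) - len(den) + 1):
        q = (num[i] * inv_lead) % p
        quot.append(q)
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - q * d) % p
    rem = num[len(quot):]
    return quot, rem

# (x^3 + 2x + 1) / (x + 1) over GF(5)
q, r = polydiv_mod([1, 0, 2, 1], [1, 1], 5)
print(q, r)  # quotient x^2 + 4x + 3, remainder 3
```

    Running the reduction for several primes and reconstructing the rational answer afterwards is the standard trick the finite-field approach exploits.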

  10. Building logical qubits in a superconducting quantum computing system

    NASA Astrophysics Data System (ADS)

    Gambetta, Jay M.; Chow, Jerry M.; Steffen, Matthias

    2017-01-01

    The technological world is in the midst of a quantum computing and quantum information revolution. Since Richard Feynman's famous `plenty of room at the bottom' lecture (Feynman, Engineering and Science23, 22 (1960)), hinting at the notion of novel devices employing quantum mechanics, the quantum information community has taken gigantic strides in understanding the potential applications of a quantum computer and laid the foundational requirements for building one. We believe that the next significant step will be to demonstrate a quantum memory, in which a system of interacting qubits stores an encoded logical qubit state longer than the incorporated parts. Here, we describe the important route towards a logical memory with superconducting qubits, employing a rotated version of the surface code. The current status of technology with regards to interconnected superconducting-qubit networks will be described and near-term areas of focus to improve devices will be identified. Overall, the progress in this exciting field has been astounding, but we are at an important turning point, where it will be critical to incorporate engineering solutions with quantum architectural considerations, laying the foundation towards scalable fault-tolerant quantum computers in the near future.

  11. A new look at the Feynman ‘hodograph’ approach to the Kepler first law

    NASA Astrophysics Data System (ADS)

    Cariñena, José F.; Rañada, Manuel F.; Santander, Mariano

    2016-03-01

    Hodographs for the Kepler problem are circles. This fact, known for almost two centuries, still provides the simplest path to derive the Kepler first law. Through Feynman’s ‘lost lecture’, this derivation has now reached a wider audience. Here we look again at Feynman’s approach to this problem, as well as the recently suggested modification by van Haandel and Heckman (vHH), with two aims in mind, both of which extend the scope of the approach. First we review the geometric constructions of the Feynman and vHH approaches (which prove the existence of elliptic orbits without making use of integral calculus or differential equations) and then extend the geometric approach to also cover the hyperbolic orbits (corresponding to E > 0). In the second part we analyse the properties of the director circles of the conics, which are used to simplify the approach, and we relate the properties of the hodographs to the Laplace-Runge-Lenz vector, the constant of motion specific to the Kepler problem. Finally, we briefly discuss the generalisation of the geometric method to the Kepler problem in configuration spaces of constant curvature, i.e. in the sphere and the hyperbolic plane.
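
    A numerical companion to the geometric argument: integrating one Kepler ellipse (GM = 1, initial conditions chosen arbitrarily) and fitting a circle to the sampled velocities confirms that the hodograph is a circle of radius GM/L:

```python
import numpy as np

# Integrate a Kepler ellipse with RK4 and check that the velocity vectors
# lie on a circle of radius GM/L, as the geometric derivation predicts.
GM = 1.0

def deriv(s):
    x, y, vx, vy = s
    r3 = (x * x + y * y) ** 1.5
    return np.array([vx, vy, -GM * x / r3, -GM * y / r3])

s = np.array([1.0, 0.0, 0.0, 1.2])   # perihelion start; L = |r x v| = 1.2
dt, vels = 1e-3, []
for i in range(16_000):              # slightly more than one orbital period
    k1 = deriv(s)
    k2 = deriv(s + 0.5 * dt * k1)
    k3 = deriv(s + 0.5 * dt * k2)
    k4 = deriv(s + dt * k3)
    s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    if i % 20 == 0:
        vels.append(s[2:])

v = np.array(vels)
# algebraic circle fit: solve 2*a*vx + 2*b*vy + c = vx^2 + vy^2 by least squares
A = np.column_stack([2 * v[:, 0], 2 * v[:, 1], np.ones(len(v))])
rhs = (v ** 2).sum(axis=1)
(a, bc, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
R = np.sqrt(a * a + bc * bc + c)
print(R, GM / 1.2)   # fitted hodograph radius vs the predicted GM/L
```

    The offset of the fitted centre from the origin encodes the eccentricity, which is how the hodograph circle is tied to the Laplace-Runge-Lenz vector in the analysis above.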

  12. The decay width of the Z_c(3900) as an axialvector tetraquark state in solid quark-hadron duality

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-Gang; Zhang, Jun-Xia

    2018-01-01

    In this article, we tentatively assign the Z_c^± (3900) to be the diquark-antidiquark type axialvector tetraquark state and study the hadronic coupling constants G_{Z_cJ/ψ π }, G_{Z_cη _cρ }, G_{Z_cD \\bar{D}^{*}} with the QCD sum rules in detail. We take into account both the connected and disconnected Feynman diagrams in carrying out the operator product expansion, as the connected Feynman diagrams alone are not sufficient. Special attention is paid to matching the hadron side of the correlation functions with the QCD side to obtain a solid duality; the routine can be applied directly to study other hadronic couplings. We study the two-body strong decays Z_c^+(3900)→ J/ψ π ^+, η _cρ ^+, D^+ \\bar{D}^{*0}, \\bar{D}^0 D^{*+} and obtain the total width of the Z_c^± (3900). The numerical results support assigning the Z_c^± (3900) to be the diquark-antidiquark type axialvector tetraquark state, and assigning the Z_c^± (3885) to be the meson-meson type axialvector molecular state.

  13. A proposed physical analog for a quantum probability amplitude

    NASA Astrophysics Data System (ADS)

    Boyd, Jeffrey

    What is the physical analog of a probability amplitude? All quantum mathematics, including quantum information, is built on amplitudes. Every other science uses probabilities; QM alone uses their square root. Why? This question has been asked for a century, but no one previously has proposed an answer. We will present cylindrical helices moving toward a particle source, which particles follow backwards. Consider Feynman's book QED. He speaks of amplitudes moving through space like the hand of a spinning clock. This hand is a complex vector; it traces a cylindrical helix in Cartesian space. The Theory of Elementary Waves reverses the direction, so that Feynman's clock faces move toward the particle source and particles follow amplitudes (quantum waves) backwards. This contradicts wave-particle duality. We will present empirical evidence that wave-particle duality is wrong about the direction of particles versus waves. This involves a paradigm shift, and paradigm shifts are always controversial. We believe that our model is the only proposal ever made for the physical foundations of probability amplitudes. We will show that our ``probability amplitudes'' in physical nature form a Hilbert vector space with adjoints and an inner product, and support both linear algebra and Dirac notation.

  14. Variational methods for direct/inverse problems of atmospheric dynamics and chemistry

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena

    2013-04-01

    We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. Accurate matching of the numerical schemes has to be provided along the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve these problems we have developed a new, enhanced set of cost-effective algorithms. A matched description of the multi-scale processes is provided by a specific choice of the variational principle functionals for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. This approach allows us to consider separately, for example, the space-time problems of atmospheric chemistry within decomposition schemes for the integral identity sum analogs of the variational principle at each time step and in each 3D finite volume. To enhance efficiency, the set of chemical reactions is divided into subsets related to the operators of production and destruction. The idea of Euler's integrating factors is then applied within the local adjoint problem technique [1]-[3]. The analytical solutions of such adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result we avoid the construction and inversion of preconditioning operators containing the Jacobian matrices which arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step, but on different stages of the "global" splitting scheme, the system of atmospheric dynamic equations is solved.
For convection - diffusion equations for all state functions in the integrated models we have developed the monotone and stable discrete-analytical numerical schemes [1]-[3] conserving the positivity of the chemical substance concentrations and possessing the properties of energy and mass balance that are postulated in the general variational principle for integrated models. All algorithms for solution of transport, diffusion and transformation problems are direct (without iterations). The work is partially supported by the Programs No 4 of Presidium RAS and No 3 of Mathematical Department of RAS, by RFBR project 11-01-00187 and Integrating projects of SD RAS No 8 and 35. Our studies are in the line with the goals of COST Action ES1004. References Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications// Journal of computational and applied mathematics, 2009, v. 226, 319-330. Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods// Numerical Analysis and Applications, 2012, V. 5, pp 326-341. V. Penenko, E. Tsvetova. Variational methods for constructing the monotone approximations for atmospheric chemistry models //Numerical Analysis and Applications, 2013 (in press).
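
    The integrating-factor construction mentioned above can be shown on the simplest possible case, a scalar production-destruction equation dc/dt = P - Dc with constant coefficients (a toy illustration of the idea, not the authors' full discrete-analytical scheme):

```python
import math

# Integrating-factor ("discrete-analytical") step for  dc/dt = P - D*c :
# multiplying by exp(D t) and integrating over one step gives
#   c_{n+1} = c_n * exp(-D dt) + (P/D) * (1 - exp(-D dt)),
# which is exact for constant P, D and keeps c positive for ANY step size.
# All numbers below are arbitrary illustrative values.
P, D, dt, c0 = 4.0, 2.0, 0.5, 10.0
c, t = c0, 0.0
for _ in range(20):
    c = c * math.exp(-D * dt) + (P / D) * (1.0 - math.exp(-D * dt))
    t += dt

c_exact = P / D + (c0 - P / D) * math.exp(-D * t)
print(c, c_exact)   # both relax to the steady state P/D = 2
```

    An explicit Euler step with D*dt > 1 would overshoot and can go negative; the integrating-factor step cannot, which is the positivity property claimed for the chemistry schemes above.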

  15. The Variation Theorem Applied to H-2+: A Simple Quantum Chemistry Computer Project

    ERIC Educational Resources Information Center

    Robiette, Alan G.

    1975-01-01

    Describes a student project which requires limited knowledge of Fortran and only minimal computing resources. The results illustrate such important principles of quantum mechanics as the variation theorem and the virial theorem. Presents sample calculations and the subprogram for energy calculations. (GS)
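
    The cited project treats H-2+ numerically; the same variation-theorem lesson appears in closed form for the hydrogen atom (atomic units), where the trial function exp(-alpha*r) gives the well-known energy expression evaluated below:

```python
import numpy as np

# Variation theorem in its simplest closed form (hydrogen atom, atomic
# units; the article's project treats H-2+ instead): for the trial function
# exp(-alpha*r), <T> = alpha^2/2 and <V> = -alpha, so
#   E(alpha) = alpha**2 / 2 - alpha >= E_exact = -0.5 Hartree.
alpha = np.linspace(0.1, 3.0, 2901)
E = alpha**2 / 2 - alpha
i = E.argmin()
print(alpha[i], E[i])      # minimum near alpha = 1, E = -0.5 (the exact value)

# virial theorem check at the optimum: <V> = -2 <T>
T, V = alpha[i]**2 / 2, -alpha[i]
print(V, -2 * T)
```

    Every trial energy on the grid lies above the exact ground state, and the virial relation holds only at the variational minimum, the two principles the student project is built around.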

  16. Adaptive force produced by stress-induced regulation of random variation intensity.

    PubMed

    Shimansky, Yury P

    2010-08-01

    The Darwinian theory of life evolution is capable of explaining the majority of related phenomena. At the same time, the mechanisms of optimizing traits beneficial to a population as a whole but not directly to an individual remain largely unclear. There are also significant problems with explaining the phenomenon of punctuated equilibrium. From another perspective, multiple mechanisms for the regulation of the rate of genetic mutations according to environmental stress have been discovered, but their precise functional role is not yet well understood. Here a novel mathematical paradigm called the Kinetic-Force Principle (KFP), which can serve as a general basis for biologically plausible optimization methods, is introduced and its rigorous derivation is provided. Based on this principle, it is shown that, if the rate of random changes in a biological system is proportional, even only roughly, to the amount of environmental stress, a virtual force is created, acting in the direction of stress relief. It is demonstrated that KFP can provide important insights into solving the above problems. Evidence is presented in support of a hypothesis that nature employs KFP to accelerate adaptation in biological systems. A detailed comparison between KFP and the principle of variation and natural selection is presented and their complementarity is revealed. It is concluded that KFP is not a competing alternative, but a powerful addition to the principle of variation and natural selection. It is also shown that KFP can be used in multiple ways for the adaptation of individual biological organisms.
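
    The central claim, that stress-proportional variation intensity produces a net force toward stress relief, can be seen in a deliberately crude one-variable simulation (the model below is an illustration, not the paper's derivation):

```python
import random

# Crude illustration of the kinetic-force idea: a scalar trait x receives
# zero-mean random kicks whose size is proportional to the "stress" |x|.
# Each kick is unbiased, but states near x = 0 are quieter and stickier,
# so the trait drifts toward the stress-free point.  The stress measure
# and gain k are arbitrary choices for this toy model.
random.seed(42)
x, k = 10.0, 0.5          # initial trait value, noise-per-stress gain
for _ in range(2000):
    x += random.gauss(0.0, k * abs(x))   # variation intensity ~ stress
print(abs(x))             # ends far below the starting stress of 10
```

    No explicit force pushes x toward zero; the drift emerges purely from the stress-dependent noise intensity, which is the "virtual force" of the abstract.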

  17. Importance of parametrizing constraints in quantum-mechanical variational calculations

    NASA Technical Reports Server (NTRS)

    Chung, Kwong T.; Bhatia, A. K.

    1992-01-01

In variational calculations of quantum mechanics, constraints are sometimes imposed explicitly on the wave function. These constraints, which are deduced by physical arguments, are often not uniquely defined. In this work, the advantage of parametrizing constraints and letting the variational principle determine the best possible constraint for the problem is pointed out. Examples are carried out to show the surprising effectiveness of the variational method if constraints are parametrized. It is also shown that misleading results may be obtained if a constraint is not parametrized.

  18. Eliminating Undesirable Variation in Neonatal Practice: Balancing Standardization and Customization.

    PubMed

    Balakrishnan, Maya; Raghavan, Aarti; Suresh, Gautham K

    2017-09-01

    Consistency of care and elimination of unnecessary and harmful variation are underemphasized aspects of health care quality. This article describes the prevalence and patterns of practice variation in health care and neonatology; discusses the potential role of standardization as a solution to eliminating wasteful and harmful practice variation, particularly when it is founded on principles of evidence-based medicine; and proposes ways to balance standardization and customization of practice to ultimately improve the quality of neonatal care. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Phonetic Variation and Interactional Contingencies in Simultaneous Responses

    ERIC Educational Resources Information Center

    Walker, Gareth

    2016-01-01

    An auspicious but unexplored environment for studying phonetic variation in naturalistic interaction is where two or more participants say the same thing at the same time. Working with a core dataset built from the multimodal Augmented Multi-party Interaction corpus, the principles of conversation analysis were followed to analyze the sequential…

  20. Interindividual variation in DNA methylation at a putative POMC metastable epiallele is associated with obesity

USDA-ARS's Scientific Manuscript database

    The estimated heritability of human BMI is close to 75%, but identified genetic variants explain only a small fraction of interindividual body-weight variation. Inherited epigenetic variants identified in mouse models named "metastable epialleles" could in principle explain this "missing heritabilit...

  1. Gluing Ladder Feynman Diagrams into Fishnets

    DOE PAGES

    Basso, Benjamin; Dixon, Lance J.

    2017-08-14

We use integrability at weak coupling to compute fishnet diagrams for four-point correlation functions in planar Φ4 theory. Our results are always multilinear combinations of ladder integrals, which are in turn built out of classical polylogarithms. The Steinmann relations provide a powerful constraint on such linear combinations, which leads to a natural conjecture for any fishnet diagram as the determinant of a matrix of ladder integrals.

  2. DOE Research and Development Accomplishments Alfred Nobel Laureates

    Science.gov Websites

An alphabetical and chronological listing of DOE-associated Nobel laureates, including Richard P. Feynman (Physics, 1965), Donald A. Glaser (Physics, 1960), Sheldon L. Glashow (Physics, 1979), Val L. Fitch (Physics, 1980), Alan Heeger (Chemistry, 2000), and Alexei A. Abrikosov (Physics, 2003).

  3. "It Has to Go down a Little, in Order to Go around"--Revisiting Feynman on the Gyroscope

    ERIC Educational Resources Information Center

    Kostov, Svilen; Hammer, Daniel

    2011-01-01

    In this paper we show that with the help of accessible, teaching-quality equipment, some interesting and important details of the motion of a gyroscope, which are typically overlooked in introductory courses, can be measured and compared to theory. We begin by deriving a simple relation between the "dip angle" of a gyroscope released from rest and…

  4. Science, Technology, and the Quest for International Influence

    DTIC Science & Technology

    2009-09-01

    accept technical arguments on sensitive issues like protection of the Amazon rainforest . In 2009, non- governmental organizations, a primary channel of...biodiversity in the Amazon , or petroleum in the pre-salt layer off Brazil‘s coast complicated matters. After all, Richard Feynman, before visiting...defended its sovereign right to utilize its natural resources, including its enormous tropical rainforest , for natural development.79 Even as Brazil

  5. Polynomial complexity despite the fermionic sign

    NASA Astrophysics Data System (ADS)

    Rossi, R.; Prokof'ev, N.; Svistunov, B.; Van Houcke, K.; Werner, F.

    2017-04-01

    It is commonly believed that in unbiased quantum Monte Carlo approaches to fermionic many-body problems, the infamous sign problem generically implies prohibitively large computational times for obtaining thermodynamic-limit quantities. We point out that for convergent Feynman diagrammatic series evaluated with a recently introduced Monte Carlo algorithm (see Rossi R., arXiv:1612.05184), the computational time increases only polynomially with the inverse error on thermodynamic-limit quantities.

  6. Representation of Renormalization Group Functions By Nonsingular Integrals in a Model of the Critical Dynamics of Ferromagnets: The Fourth Order of The ɛ-Expansion

    NASA Astrophysics Data System (ADS)

    Adzhemyan, L. Ts.; Vorob'eva, S. E.; Ivanova, E. V.; Kompaniets, M. V.

    2018-04-01

    Using the representation for renormalization group functions in terms of nonsingular integrals, we calculate the dynamical critical exponents in the model of critical dynamics of ferromagnets in the fourth order of the ɛ-expansion. We calculate the Feynman diagrams using the sector decomposition technique generalized to critical dynamics problems.

  7. Stopping powers and cross sections due to two-photon processes in relativistic nucleus-nucleus collisions

    NASA Technical Reports Server (NTRS)

    Cheung, Wang K.; Norbury, John W.

    1994-01-01

The effects of electromagnetic-production processes due to two-photon exchange in nucleus-nucleus collisions are discussed. Feynman diagrams for two-photon exchange are evaluated using quantum electrodynamics. The total cross section and stopping power for projectile and target nuclei of identical charge are found to be significant for heavy nuclei above a few GeV per nucleon incident energy.

  8. Observations of Breather Solitons in a Nonlinear Vibratory Lattice

    DTIC Science & Technology

    1992-03-01

abundant (Christiansen 1988) and applications are still under development. One application is in fiber optic communications, where the self-localized...were clearly two-dimensional. It may be that this degeneracy prevents the formation of breathers. 73 LIST OF REFERENCES Christiansen, P., 1988...1982, Solitons and Nonlinear Wave Equations, Academic Press. Feynman, R., Leighton, R., and Sands, M., 1965, Lectures on Physics, Vol. III, Addison

  9. Analysis of Local Variations in Free Field Seismic Ground Motion.

    DTIC Science & Technology

    1981-01-01

analysis) can conveniently account for material damping through the introduction of complex moduli into the equations of motion. This method can...determined, and the total response is obtained by superposition. This technique, however, cannot properly account for the spatial variation of damping...2.9. Most available data only consider the variation of shear modulus and damping ratio with shear strain amplitude. In principle, two moduli and two

  10. A Variational Reduction and the Existence of a Fully Localised Solitary Wave for the Three-Dimensional Water-Wave Problem with Weak Surface Tension

    NASA Astrophysics Data System (ADS)

    Buffoni, Boris; Groves, Mark D.; Wahlén, Erik

    2017-12-01

Fully localised solitary waves are travelling-wave solutions of the three-dimensional gravity-capillary water wave problem which decay to zero in every horizontal spatial direction. Their existence has been predicted on the basis of numerical simulations and model equations (in which context they are usually referred to as `lumps'), and a mathematically rigorous existence theory for strong surface tension (Bond number β > 1/3) has recently been given. In this article we present an existence theory for the physically more realistic case 0 < β < 1/3. A classical variational principle for fully localised solitary waves is reduced to a locally equivalent variational principle featuring a perturbation of the functional associated with the Davey-Stewartson equation. A nontrivial critical point of the reduced functional is found by minimising it over its natural constraint set.

  11. A Variational Reduction and the Existence of a Fully Localised Solitary Wave for the Three-Dimensional Water-Wave Problem with Weak Surface Tension

    NASA Astrophysics Data System (ADS)

    Buffoni, Boris; Groves, Mark D.; Wahlén, Erik

    2018-06-01

Fully localised solitary waves are travelling-wave solutions of the three-dimensional gravity-capillary water wave problem which decay to zero in every horizontal spatial direction. Their existence has been predicted on the basis of numerical simulations and model equations (in which context they are usually referred to as `lumps'), and a mathematically rigorous existence theory for strong surface tension (Bond number β > 1/3) has recently been given. In this article we present an existence theory for the physically more realistic case 0 < β < 1/3. A classical variational principle for fully localised solitary waves is reduced to a locally equivalent variational principle featuring a perturbation of the functional associated with the Davey-Stewartson equation. A nontrivial critical point of the reduced functional is found by minimising it over its natural constraint set.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Luning; Neuscamman, Eric

We present a modification to variational Monte Carlo’s linear method optimization scheme that addresses a critical memory bottleneck while maintaining compatibility with both the traditional ground state variational principle and our recently-introduced variational principle for excited states. For wave function ansatzes with tens of thousands of variables, our modification reduces the required memory per parallel process from tens of gigabytes to hundreds of megabytes, making the methodology a much better fit for modern supercomputer architectures in which data communication and per-process memory consumption are primary concerns. We verify the efficacy of the new optimization scheme in small molecule tests involving both the Hilbert space Jastrow antisymmetric geminal power ansatz and real space multi-Slater Jastrow expansions. Satisfied with its performance, we have added the optimizer to the QMCPACK software package, with which we demonstrate on a hydrogen ring a prototype approach for making systematically convergent, non-perturbative predictions of Mott-insulators’ optical band gaps.

  13. Variational theorems for superimposed motions in elasticity, with application to beams

    NASA Technical Reports Server (NTRS)

    Doekmeci, M. C.

    1976-01-01

    Variational theorems are presented for a theory of small motions superimposed on large static deformations and governing equations for prestressed beams on the basis of 3-D theory of elastodynamics. First, the principle of virtual work is modified through Friedrichs's transformation so as to describe the initial stress problem of elastodynamics. Next, the modified principle together with a chosen displacement field is used to derive a set of 1-D macroscopic governing equations of prestressed beams. The resulting equations describe all the types of superimposed motions in elastic beams, and they include all the effects of transverse shear and normal strains, and the rotatory inertia. The instability of the governing equations is discussed briefly.

  14. Improved techniques for outgoing wave variational principle calculations of converged state-to-state transition probabilities for chemical reactions

    NASA Technical Reports Server (NTRS)

    Mielke, Steven L.; Truhlar, Donald G.; Schwenke, David W.

    1991-01-01

    Improved techniques and well-optimized basis sets are presented for application of the outgoing wave variational principle to calculate converged quantum mechanical reaction probabilities. They are illustrated with calculations for the reactions D + H2 yields HD + H with total angular momentum J = 3 and F + H2 yields HF + H with J = 0 and 3. The optimization involves the choice of distortion potential, the grid for calculating half-integrated Green's functions, the placement, width, and number of primitive distributed Gaussians, and the computationally most efficient partition between dynamically adapted and primitive basis functions. Benchmark calculations with 224-1064 channels are presented.

  15. A Free Energy Principle for Biological Systems

    PubMed Central

Friston, Karl

    2012-01-01

This paper describes a free energy principle that tries to explain the ability of biological systems to resist a natural tendency to disorder. It appeals to circular causality of the sort found in synergetic formulations of self-organization (e.g., the slaving principle) and models of coupled dynamical systems, using nonlinear Fokker-Planck equations. Here, circular causality is induced by separating the states of a random dynamical system into external and internal states, where external states are subject to random fluctuations and internal states are not. This reduces the problem to finding some (deterministic) dynamics of the internal states that ensure the system visits a limited number of external states; in other words, the measure of its (random) attracting set, or the Shannon entropy of the external states, is small. We motivate a solution using a principle of least action based on variational free energy (from statistical physics) and establish the conditions under which it is formally equivalent to the information bottleneck method. This approach has proved useful in understanding the functional architecture of the brain. The generality of variational free energy minimisation and corresponding information theoretic formulations may speak to interesting applications beyond the neurosciences; e.g., in molecular or evolutionary biology. PMID:23204829

  16. Classroom Experiments: Teaching Specific Topics or Promoting the Economic Way of Thinking?

    ERIC Educational Resources Information Center

    Emerson, Tisha L. N.; English, Linda K.

    2016-01-01

    The authors' data contain inter- and intra-class variations in experiments to which students in a principles of microeconomics course were exposed. These variations allowed the estimation of the effect on student achievement from the experimental treatment generally, as well as effects associated with participation in specific experiments. The…

  17. General stochastic variational formulation for the oligopolistic market equilibrium problem with excesses

    NASA Astrophysics Data System (ADS)

    Barbagallo, Annamaria; Di Meglio, Guglielmo; Mauro, Paolo

    2017-07-01

    The aim of the paper is to study, in a Hilbert space setting, a general random oligopolistic market equilibrium problem in presence of both production and demand excesses and to characterize the random Cournot-Nash equilibrium principle by means of a stochastic variational inequality. Some existence results are presented.

  18. Variational Approach to Monte Carlo Renormalization Group

    NASA Astrophysics Data System (ADS)

    Wu, Yantao; Car, Roberto

    2017-12-01

We present a Monte Carlo method for computing the renormalized coupling constants and the critical exponents within renormalization theory. The scheme, which derives from a variational principle, overcomes critical slowing down by means of a bias potential that renders the coarse grained variables uncorrelated. The two-dimensional Ising model is used to illustrate the method.

  19. Action principle for Coulomb collisions in plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirvijoki, Eero

    In this study, an action principle for Coulomb collisions in plasmas is proposed. Although no natural Lagrangian exists for the Landau-Fokker-Planck equation, an Eulerian variational formulation is found considering the system of partial differential equations that couple the distribution function and the Rosenbluth-MacDonald-Judd potentials. Conservation laws are derived after generalizing the energy-momentum stress tensor for second order Lagrangians and, in the case of a test-particle population in a given plasma background, the action principle is shown to correspond to the Langevin equation for individual particles.

  20. Action principle for Coulomb collisions in plasmas

    DOE PAGES

    Hirvijoki, Eero

    2016-09-14

    In this study, an action principle for Coulomb collisions in plasmas is proposed. Although no natural Lagrangian exists for the Landau-Fokker-Planck equation, an Eulerian variational formulation is found considering the system of partial differential equations that couple the distribution function and the Rosenbluth-MacDonald-Judd potentials. Conservation laws are derived after generalizing the energy-momentum stress tensor for second order Lagrangians and, in the case of a test-particle population in a given plasma background, the action principle is shown to correspond to the Langevin equation for individual particles.

  1. Maximum principle for a stochastic delayed system involving terminal state constraints.

    PubMed

    Wen, Jiaqiang; Shi, Yufeng

    2017-01-01

We investigate a stochastic optimal control problem where the controlled system is described by a stochastic differential delayed equation; however, at the terminal time, the state is constrained in a convex set. We first introduce an equivalent backward delayed system described by a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main obtained result.

  2. Dirac structures in vakonomic mechanics

    NASA Astrophysics Data System (ADS)

    Jiménez, Fernando; Yoshimura, Hiroaki

    2015-08-01

In this paper, we explore dynamics of the nonholonomic system called vakonomic mechanics in the context of Lagrange-Dirac dynamical systems using a Dirac structure and its associated Hamilton-Pontryagin variational principle. We first show the link between vakonomic mechanics and nonholonomic mechanics from the viewpoints of Dirac structures as well as Lagrangian submanifolds. Namely, we clarify that Lagrangian submanifold theory cannot properly represent nonholonomic mechanics, but can represent vakonomic mechanics instead. Second, in order to represent vakonomic mechanics, we employ the space TQ ×V∗, where a vakonomic Lagrangian is defined from a given Lagrangian (possibly degenerate) subject to nonholonomic constraints. Then, we show how implicit vakonomic Euler-Lagrange equations can be formulated by the Hamilton-Pontryagin variational principle for the vakonomic Lagrangian on the extended Pontryagin bundle (TQ ⊕T∗ Q) ×V∗. Associated with this variational principle, we establish a Dirac structure on (TQ ⊕T∗ Q) ×V∗ in order to define an intrinsic vakonomic Lagrange-Dirac system. Furthermore, we also establish another construction for the vakonomic Lagrange-Dirac system using a Dirac structure on T∗ Q ×V∗, where we introduce a vakonomic Dirac differential. Finally, we illustrate our theory of vakonomic Lagrange-Dirac systems by some examples such as the vakonomic skate and the vertical rolling coin.

  3. A principle of organization which facilitates broad Lamarckian-like adaptations by improvisation.

    PubMed

    Soen, Yoav; Knafo, Maor; Elgart, Michael

    2015-12-02

    During the lifetime of an organism, every individual encounters many combinations of diverse changes in the somatic genome, epigenome and microbiome. This gives rise to many novel combinations of internal failures which are unique to each individual. How any individual can tolerate this high load of new, individual-specific scenarios of failure is not clear. While stress-induced plasticity and hidden variation have been proposed as potential mechanisms of tolerance, the main conceptual problem remains unaddressed, namely: how largely non-beneficial random variation can be rapidly and safely organized into net benefits to every individual. We propose an organizational principle which explains how every individual can alleviate a high load of novel stressful scenarios using many random variations in flexible and inherently less harmful traits. Random changes which happen to reduce stress, benefit the organism and decrease the drive for additional changes. This adaptation (termed 'Adaptive Improvisation') can be further enhanced, propagated, stabilized and memorized when beneficial changes reinforce themselves by auto-regulatory mechanisms. This principle implicates stress not only in driving diverse variations in cells tissues and organs, but also in organizing these variations into adaptive outcomes. Specific (but not exclusive) examples include stress reduction by rapid exchange of mobile genetic elements (or exosomes) in unicellular, and rapid changes in the symbiotic microorganisms of animals. In all cases, adaptive changes can be transmitted across generations, allowing rapid improvement and assimilation in a few generations. We provide testable predictions derived from the hypothesis. The hypothesis raises a critical, but thus far overlooked adaptation problem and explains how random variation can self-organize to confer a wide range of individual-specific adaptations beyond the existing outcomes of natural selection. 
It portrays gene regulation as an inseparable synergy between natural selection and adaptation by improvisation. The latter provides a basis for Lamarckian adaptation that is not limited to a specific mechanism and readily accounts for the remarkable resistance of tumors to treatment.

  4. First-principles investigations into the thermodynamics of cation disorder and its impact on electronic structure and magnetic properties of spinel Co(Cr1-x Mn x )2O4

    NASA Astrophysics Data System (ADS)

    Das, Debashish; Ghosh, Subhradip

    2017-02-01

Cation disorder over different crystallographic sites in spinel oxides is known to affect their properties. Recent experiments on Mn-doped multiferroic CoCr2O4 indicate that a possible distribution of Mn atoms among tetrahedrally and octahedrally coordinated sites in the spinel lattice gives rise to different variations in the structural parameters and saturation magnetisations in different concentration regimes of Mn atoms substituting the Cr. A composition-dependent magnetic compensation behaviour points to role conversions of the magnetic constituents. In this work, we have investigated the thermodynamics of cation disorder in the Co(Cr1-xMnx)2O4 system and its consequences on the structural, electronic and magnetic properties, using results from first-principles electronic structure calculations. We have computed the variations in the cation disorder as a function of Mn concentration and temperature, and found that at the annealing temperature of the experiment many of the systems exhibit cation disorder. Our results support the interpretations of the experimental results regarding the qualitative variations in the sub-lattice occupancies and the associated magnetisation behaviour with composition. We have analysed the variations in the structural, magnetic and electronic properties of this system with composition and degree of cation disorder, starting from the variations in their electronic structures and using ideas from crystal field theory. Our study provides a complete microscopic picture of the effects responsible for the composition-dependent behavioural differences in the properties of this system. This work lays down a general framework, based upon results from first-principles calculations, to understand and analyse substitutional magnetic spinel oxides A(B1-xCx)2O4 in the presence of cation disorder.

  5. The multiscale coarse-graining method. II. Numerical implementation for coarse-grained molecular models

    PubMed Central

    Noid, W. G.; Liu, Pu; Wang, Yanting; Chu, Jhih-Wei; Ayton, Gary S.; Izvekov, Sergei; Andersen, Hans C.; Voth, Gregory A.

    2008-01-01

The multiscale coarse-graining (MS-CG) method [S. Izvekov and G. A. Voth, J. Phys. Chem. B 109, 2469 (2005); J. Chem. Phys. 123, 134105 (2005)] employs a variational principle to determine an interaction potential for a CG model from simulations of an atomically detailed model of the same system. The companion paper proved that, if no restrictions regarding the form of the CG interaction potential are introduced and if the equilibrium distribution of the atomistic model has been adequately sampled, then the MS-CG variational principle determines the exact many-body potential of mean force (PMF) governing the equilibrium distribution of CG sites generated by the atomistic model. In practice, though, CG force fields are not completely flexible, but only include particular types of interactions between CG sites, e.g., nonbonded forces between pairs of sites. If the CG force field depends linearly on the force field parameters, then the vector-valued functions that relate the CG forces to these parameters determine a set of basis vectors that span a vector subspace of CG force fields. The companion paper introduced a distance metric for the vector space of CG force fields and proved that the MS-CG variational principle determines the CG force field that is within that vector subspace and that is closest to the force field determined by the many-body PMF. The present paper applies the MS-CG variational principle for parametrizing molecular CG force fields and derives a linear least squares problem for the parameter set determining the optimal approximation to this many-body PMF. Linear systems of equations for these CG force field parameters are derived and analyzed in terms of equilibrium structural correlation functions. Numerical calculations for a one-site CG model of methanol and a molecular CG model of the EMIM+/NO3− ionic liquid are provided to illustrate the method. PMID:18601325
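
    The linear least squares structure described above can be sketched in a toy force-matching setting. The example below is a generic illustration, not the MS-CG implementation: the two basis functions and the synthetic "atomistic" reference forces are invented for the demonstration, and the resulting 2×2 normal equations are solved by hand.

```python
import random

random.seed(0)
a_true, b_true = 3.0, -1.5           # "exact" force-field parameters (synthetic)
basis1 = lambda r: 1.0 / r**2        # two assumed basis functions of pair distance
basis2 = lambda r: r

# synthetic "atomistic" reference forces sampled at random pair distances
samples = []
for _ in range(500):
    r = random.uniform(1.0, 2.0)
    f = a_true * basis1(r) + b_true * basis2(r) + random.gauss(0.0, 0.01)
    samples.append((r, f))

# normal equations for min over (a, b) of sum (a*b1(r) + b*b2(r) - f)^2
S11 = S12 = S22 = t1 = t2 = 0.0
for r, f in samples:
    b1, b2 = basis1(r), basis2(r)
    S11 += b1 * b1; S12 += b1 * b2; S22 += b2 * b2
    t1 += b1 * f;   t2 += b2 * f
det = S11 * S22 - S12 * S12
a_fit = (t1 * S22 - t2 * S12) / det   # least squares solution
b_fit = (t2 * S11 - t1 * S12) / det
```

    Because the model force is linear in (a, b), the fit recovers the generating parameters up to the noise level; a restricted basis, as in the paper, would instead yield the closest force field within the spanned subspace.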

  6. Off-shell gluon production in interaction of a projectile with 2 or 3 targets

    NASA Astrophysics Data System (ADS)

    Braun, M. A.; Salykin, M. Yu.

    2017-07-01

    Within the effective QCD action for the Regge kinematics, the amplitudes for virtual gluon emission are studied in collision of a projectile with two and three targets. It is demonstrated that all non-Feynman singularities cancel between induced vertices and rescattering contributions. Formulas simplify considerably in a special gauge, which is a straightforward generalization of the light-cone gauge for emission of real gluons.

  7. The extent to which path-integral models account for evanescent (tunneling) and complex (near-field) waves

    NASA Astrophysics Data System (ADS)

    Ranfagni, Anedio; Mugnai, Daniela; Cacciari, Ilaria

    2018-05-01

    The usefulness of a stochastic approach in determining time scales in tunneling processes (mainly, but not only, in the microwave range) is reconsidered and compared with a different approach to these kinds of processes, based on Feynman's transition elements. This latter method is found to be particularly suitable for interpreting situations in the near field, as results from some experimental cases considered here.

  8. Analysis of a gauged model with a spin-1/2 field directly coupled to a Rarita-Schwinger spin-3/2 field

    NASA Astrophysics Data System (ADS)

    Adler, Stephen L.

    2018-02-01

    We give a detailed analysis of an Abelianized gauge field model in which a Rarita-Schwinger spin-3/2 field is directly coupled to a spin-1/2 field. The model permits a perturbative expansion in powers of the gauge field coupling, and from the Feynman rules for the model we calculate the chiral anomaly.

  9. Velocity Noise in Space Shuttle and ISS GPS from the Ionosphere

    NASA Technical Reports Server (NTRS)

    Kramer, Leonard

    2004-01-01

A viewgraph presentation on ionospheric noise effects in the velocities reported by Space Shuttle and International Space Station (ISS) Global Positioning System (GPS) receivers is shown. The topics include: 1) Scintillation in MAGR/S GPS used for Shuttle; 2) Geographic Distribution of Scintillation; 3) Diurnal Variability; 4) Feynman's interpretation of interference; 5) Angle between line of sight and S/C velocity; and 6) Space Station GPS

  10. K-Means Clustering to Study How Student Reasoning Lines Can Be Modified by a Learning Activity Based on Feynman's Unifying Approach

    ERIC Educational Resources Information Center

    Battaglia, Onofrio Rosario; Di Paola, Benedetto; Fazio, Claudio

    2017-01-01

    Research in Science Education has shown that often students need to learn how to identify differences and similarities between descriptive and explicative models. The development and use of explicative skills in the field of thermal science has always been a difficult objective to reach. A way to develop analogical reasoning is to use in Science…

  11. Born approximation in linear-time invariant system

    NASA Astrophysics Data System (ADS)

    Gumjudpai, Burin

    2017-09-01

An alternative way of finding the LTI system's solution with the Born approximation is investigated. We use the Born approximation in the LTI system and in the transformed LTI system in the form of a Helmholtz equation. General solutions are considered as infinite series or Feynman graphs. Slow-roll approximations are explored. By transforming the LTI system into a Helmholtz equation, an approximate general solution can be found for any given form of force and its initial values.
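
    The infinite-series solution mentioned above has the structure of a Neumann series, x = b + Kb + K²b + …, which converges when the "interaction" K is a contraction; truncating after the Kb term is the analogue of the first Born approximation. The sketch below is a generic linear-system illustration (the matrix and source are arbitrary, not the paper's Helmholtz calculation): it sums the series and checks it against the defining equation (I − K)x = b.

```python
N = 3
K = [[0.20, 0.10, 0.00],   # "interaction" matrix with norm < 1, so the series converges
     [0.00, 0.30, 0.10],
     [0.10, 0.00, 0.20]]
b = [1.0, 2.0, 0.5]        # "free" (zeroth-order) solution / source term

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(N)) for i in range(N)]

# sum the Neumann/Born series x = b + K b + K^2 b + ...
x, term = [0.0] * N, list(b)
for _ in range(60):
    x = [x[i] + term[i] for i in range(N)]
    term = matvec(K, term)

born1 = [b[i] + matvec(K, b)[i] for i in range(N)]   # first Born approximation

# residual of (I - K) x = b vanishes as the series converges
residual = [x[i] - matvec(K, x)[i] - b[i] for i in range(N)]
```

    The first Born approximation already captures the gross solution; the higher-order terms it omits shrink geometrically with the norm of K.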

  12. On the Feynman-Hellmann theorem in quantum field theory and the calculation of matrix elements

    DOE PAGES

    Bouchard, Chris; Chang, Chia Cheng; Kurth, Thorsten; ...

    2017-07-12

In this paper, we show that the Feynman-Hellmann theorem can be derived from the long-Euclidean-time limit of correlation functions determined with functional derivatives of the partition function. Using this insight, we fully develop an improved method for computing matrix elements of external currents utilizing only two-point correlation functions. Our method applies to matrix elements of any external bilinear current, including nonzero momentum transfer, flavor-changing, and two or more current insertion matrix elements. The ability to identify and control all the systematic uncertainties in the analysis of the correlation functions stems from the unique time dependence of the ground-state matrix elements and the fact that all excited states and contact terms are Euclidean-time dependent. We demonstrate the utility of our method with a calculation of the nucleon axial charge using gradient-flowed domain-wall valence quarks on the $N_f=2+1+1$ MILC highly improved staggered quark ensemble with lattice spacing and pion mass of approximately 0.15 fm and 310 MeV respectively. We show full control over excited-state systematics with the new method and obtain a value of $g_A = 1.213(26)$ with a quark-mass-dependent renormalization coefficient.
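
    The theorem named in the title can be stated compactly. For a perturbed Hamiltonian H(λ) = H + λJ with eigenstate |n(λ)⟩, and for a two-point function C_λ(t) dominated at large Euclidean time t by the ground state, one has (schematic notation of my own, not the paper's):

```latex
\frac{\partial E_n(\lambda)}{\partial\lambda}
    = \langle n(\lambda)\,|\,J\,|\,n(\lambda)\rangle ,
\qquad
\left.\frac{\partial}{\partial\lambda}\,\ln C_\lambda(t)\right|_{\lambda=0}
    \;\longrightarrow\; -\,t\,\langle 0|J|0\rangle + \text{const}
    \quad (t\to\infty),
```

    since C_λ(t) ~ |Z|² e^(−E₀(λ)t) at large t. The matrix element ⟨0|J|0⟩ is thus read off from the linear-in-t behaviour of the λ-derivative of the correlator, using only two-point functions, as the abstract describes.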

  13. Pinch technique and the Batalin-Vilkovisky formalism

    NASA Astrophysics Data System (ADS)

    Binosi, Daniele; Papavassiliou, Joannis

    2002-07-01

    In this paper we take the first step towards a nondiagrammatic formulation of the pinch technique. In particular we carry out a systematic identification of the parts of the one-loop and two-loop Feynman diagrams that are exchanged during the pinching process in terms of unphysical ghost Green's functions; the latter appear in the standard Slavnov-Taylor identity satisfied by the tree-level and one-loop three-gluon vertex. This identification allows for the consistent generalization of the intrinsic pinch technique to two loops, through the collective treatment of entire sets of diagrams, instead of the laborious algebraic manipulation of individual graphs, and sets the stage for the generalization of the method to all orders. We show that the task of comparing the effective Green's functions obtained by the pinch technique with those computed in the background field method Feynman gauge is significantly facilitated when employing the powerful quantization framework of Batalin and Vilkovisky. This formalism allows for the derivation of a set of useful nonlinear identities, which express the background field method Green's functions in terms of the conventional (quantum) ones and auxiliary Green's functions involving the background source and the gluonic antifield; these latter Green's functions are subsequently related by means of a Schwinger-Dyson type of equation to the ghost Green's functions appearing in the aforementioned Slavnov-Taylor identity.

  14. Elastic and inelastic electrons in the double-slit experiment: A variant of Feynman's which-way set-up.

    PubMed

    Frabboni, Stefano; Gazzadi, Gian Carlo; Grillo, Vincenzo; Pozzi, Giulio

    2015-07-01

    Modern nanotechnology tools allowed us to prepare slits of 90 nm width and 450 nm spacing in a screen almost completely opaque to 200 keV electrons. Then, by covering both slits with a layer of amorphous material and carrying out the experiment in a conventional transmission electron microscope equipped with an energy filter, we can demonstrate that the diffraction pattern, taken by selecting the elastically scattered electrons, shows the presence of interference fringes, but with a bimodal envelope which can be accounted for by taking into account the non-constant thickness of the deposited layer. However, the intensity of the inelastically scattered electrons in the diffraction plane is very broad and at the limit of detectability. Therefore the experiment was repeated using an aluminum film and a microscope also equipped with a Schottky field emission gun. It was thus possible to observe also the image due to the inelastically scattered electrons, which shows no interference phenomena in either the Fraunhofer or the Fresnel regime. If we assume that inelastic scattering through the thin layer covering the slits provides the dissipative process of interaction responsible for the localization mechanism, then these experiments can be considered a variant of the Feynman which-way thought experiment. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Form of the manifestly covariant Lagrangian

    NASA Astrophysics Data System (ADS)

    Johns, Oliver Davis

    1985-10-01

    The preferred form for the manifestly covariant Lagrangian function of a single, charged particle in a given electromagnetic field is the subject of some disagreement in the textbooks. Some authors use a ``homogeneous'' Lagrangian and others use a ``modified'' form in which the covariant Hamiltonian function is made to be nonzero. We argue in favor of the ``homogeneous'' form. We show that the covariant Lagrangian theories can be understood only if one is careful to distinguish quantities evaluated on the varied (in the sense of the calculus of variations) world lines from quantities evaluated on the unvaried world lines. By making this distinction, we are able to derive the Hamilton-Jacobi and Klein-Gordon equations from the ``homogeneous'' Lagrangian, even though the covariant Hamiltonian function is identically zero on all world lines. The derivation of the Klein-Gordon equation in particular gives Lagrangian theoretical support to the derivations found in standard quantum texts, and is also shown to be consistent with the Feynman path-integral method. We conclude that the ``homogeneous'' Lagrangian is a completely adequate basis for covariant Lagrangian theory both in classical and quantum mechanics. The article also explores the analogy with the Fermat theorem of optics, and illustrates a simple invariant notation for the Lagrangian and other four-vector equations.
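
    For reference, the "homogeneous" Lagrangian the article favors is conventionally written as follows (standard textbook form, with β an arbitrary world-line parameter; signs depend on the metric convention):

```latex
L(x,\dot{x}) \;=\; -\,mc\,\sqrt{g_{\mu\nu}\,\dot{x}^{\mu}\dot{x}^{\nu}}
\;-\;\frac{q}{c}\,A_{\mu}(x)\,\dot{x}^{\mu},
\qquad \dot{x}^{\mu} \equiv \frac{dx^{\mu}}{d\beta}.
```

    Because this L is homogeneous of degree one in the velocities, Euler's theorem gives H = p_μ ẋ^μ − L = 0 identically, which is exactly the vanishing covariant Hamiltonian discussed in the abstract.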

  16. Physics Without Physics. The Power of Information-theoretical Principles

    NASA Astrophysics Data System (ADS)

    D'Ariano, Giacomo Mauro

    2017-01-01

    David Finkelstein was very fond of the new information-theoretic paradigm of physics advocated by John Archibald Wheeler and Richard Feynman. Only recently, however, the paradigm has concretely shown its full power, with the derivation of quantum theory (Chiribella et al., Phys. Rev. A 84:012311, 2011; D'Ariano et al., 2017) and of free quantum field theory (D'Ariano and Perinotti, Phys. Rev. A 90:062106, 2014; Bisio et al., Phys. Rev. A 88:032301, 2013; Bisio et al., Ann. Phys. 354:244, 2015; Bisio et al., Ann. Phys. 368:177, 2016) from informational principles. The paradigm has opened for the first time the possibility of avoiding physical primitives in the axioms of the physical theory, allowing a re-foundation of the whole physics over logically solid grounds. In addition to such methodological value, the new information-theoretic derivation of quantum field theory is particularly interesting for establishing a theoretical framework for quantum gravity, with the idea of obtaining gravity itself as emergent from the quantum information processing, as also suggested by the role played by information in the holographic principle (Susskind, J. Math. Phys. 36:6377, 1995; Bousso, Rev. Mod. Phys. 74:825, 2002). In this paper I review how free quantum field theory is derived without using mechanical primitives, including space-time, special relativity, Hamiltonians, and quantization rules. The theory is simply provided by the simplest quantum algorithm encompassing a countable set of quantum systems whose network of interactions satisfies the three following simple principles: homogeneity, locality, and isotropy. The inherent discrete nature of the informational derivation leads to an extension of quantum field theory in terms of a quantum cellular automata and quantum walks. 
    A simple heuristic argument sets the scale to the Planck one, and the currently observed regime where discreteness is not visible is the so-called "relativistic regime" of small wavevectors, which holds for all energies ever tested (and even much larger), where the usual free quantum field theory is perfectly recovered. In the present quantum discrete theory the Einstein relativity principle can be restated, without using space-time, in terms of invariance of the eigenvalue equation of the automaton/walk under change of representations. Distortions of the Poincaré group emerge at the Planck scale, whereas special relativity is perfectly recovered in the relativistic regime. Discreteness, on the other hand, has some advantages over the continuum theory: 1) it contains the continuum theory as a special regime; 2) it leads to some additional features with a GR flavor: the existence of an upper bound for the particle mass (with physical interpretation as the Planck mass), and a global de Sitter invariance; 3) it provides its own physical standards for space, time, and mass within a purely mathematical, dimensionless context. The paper ends with the future perspectives of this project, and with an Appendix containing biographical notes about my friendship with David Finkelstein, to whom this paper is dedicated.

  17. There Is More Variation "within" than "across" Domains: An Interview with Paul A. Kirschner about Applying Cognitive Psychology-Based Instructional Design Principles in Mathematics Teaching and Learning

    ERIC Educational Resources Information Center

    Kirschner, Paul A.; Verschaffel, Lieven; Star, Jon; Van Dooren, Wim

    2017-01-01

    In this interview we asked Paul A. Kirschner about his comments and reflections regarding the idea to apply cognitive psychology-based instructional design principles to mathematics education and some related issues. With a main focus on cognitive psychology, educational psychology, educational technology and instructional design, this…

  18. Solar variability, weather, and climate

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Advances in the understanding of possible effects of solar variations on weather and climate are most likely to emerge by addressing the subject in terms of fundamental physical principles of atmospheric sciences and solar-terrestrial physics. The limits of variability of solar inputs to the atmosphere, and the depth in the atmosphere to which these variations have significant effects, are determined.

  19. Heredity vs. Environment: The Effects of Genetic Variation with Age

    ERIC Educational Resources Information Center

    Gourlay, N.

    1978-01-01

    Major problems in the field are presented through a brief review of Burt's work and a critical account of the Hawaiian and British schools of biometrical genetics. The merits and demerits of Christopher Jencks' study are also discussed. There follows an account of the principle of genetic variation with age, a new concept to the…

  20. Existence and stability, and discrete BB and rank conditions, for general mixed-hybrid finite elements in elasticity

    NASA Technical Reports Server (NTRS)

    Xue, W.-M.; Atluri, S. N.

    1985-01-01

    In this paper, all possible forms of mixed-hybrid finite element methods that are based on multi-field variational principles are examined as to the conditions for existence, stability, and uniqueness of their solutions. The reasons as to why certain 'simplified hybrid-mixed methods' in general, and the so-called 'simplified hybrid-displacement method' in particular (based on the so-called simplified variational principles), become unstable, are discussed. A comprehensive discussion of the 'discrete' BB-conditions, and the rank conditions, of the matrices arising in mixed-hybrid methods, is given. Some recent studies aimed at the assurance of such rank conditions, and the related problem of the avoidance of spurious kinematic modes, are presented.

  1. The Logarithmic Tail of Néel Walls

    NASA Astrophysics Data System (ADS)

    Melcher, Christof

    We study the multiscale problem of a parametrized planar 180° rotation of magnetization states in a thin ferromagnetic film. In an appropriate scaling, and when the film thickness is comparable to the Bloch line width, the underlying variational principle involves a reduced stray-field operator that approximates (-Δ)^{1/2} as the quality factor Q tends to zero. We show that the associated Néel wall profile u exhibits a very long logarithmic tail. The proof relies on limiting elliptic regularity methods on the basis of the associated Euler-Lagrange equation and symmetrization arguments on the basis of the variational principle. Finally we study the renormalized limit behavior as Q tends to zero.

  2. Electromagnetic finite elements based on a four-potential variational principle

    NASA Technical Reports Server (NTRS)

    Schuler, James J.; Felippa, Carlos A.

    1991-01-01

    Electromagnetic finite elements based on a variational principle that uses the electromagnetic four-potential as a primary variable are derived. This choice is used to construct elements suitable for downstream coupling with mechanical and thermal finite elements for the analysis of electromagnetic/mechanical systems that involve superconductors. The main advantages of the four-potential as a basis for finite element formulation are that the number of degrees of freedom per node remains modest as the dimensionality of the problem increases, that jump discontinuities on interfaces are naturally accommodated, and that statics as well as dynamics may be treated without any a priori approximations. The new elements are tested on an axisymmetric problem under steady state forcing conditions. The results are in excellent agreement with analytical solutions.

  3. Variational principle for scattering of light by dielectric particles

    NASA Technical Reports Server (NTRS)

    Yung, Y. L.

    1978-01-01

    Consideration is given to the work of Purcell and Pennypacker (1973), where a dielectric particle is taken to be an aggregate of N polarizable elements mounted on a cubic lattice. The simultaneous equations which result from the scattering problem are presented. This theory has been discussed in the case of nonspherical and inhomogeneous objects whose dimensions are smaller than or comparable to the wavelength of incident light. A more precise numerical treatment is derived to allow further progress. The variational principle is invoked, and the practical limit for the current version of the scheme is a dipole array on the order of 10,000 atoms. Limits to the scattering parameter due to the phase difference between neighboring atoms are discussed.
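
    The simultaneous equations of the Purcell-Pennypacker picture can be sketched in a drastically simplified scalar, quasi-static form. This is a hedged illustration with invented parameters; the real discrete-dipole method uses the full retarded dipole tensor:

```python
import numpy as np

# Hedged scalar sketch of the coupled-dipole idea (quasi-static, 1-D chain).
# Each polarizable element obeys p_i = alpha * (E_inc_i + sum_{j != i} G_ij p_j),
# which rearranges to the linear system (I - alpha G) p = alpha E_inc.
N = 20
alpha = 0.05                      # polarizability of one element (arbitrary units)
x = np.arange(N, dtype=float)     # lattice positions, unit spacing
E_inc = np.ones(N)                # uniform incident field

# quasi-static scalar interaction ~ 1/r^3 between elements i and j
r = np.abs(x[:, None] - x[None, :])
G = np.zeros((N, N))
mask = r > 0
G[mask] = 1.0 / r[mask] ** 3

# solve the simultaneous equations for all dipole moments at once
p = np.linalg.solve(np.eye(N) - alpha * G, alpha * E_inc)
print(p.shape)
```

    The same linear-system structure is what limits the practical array size: the dense matrix grows quadratically with the number of dipoles.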

  4. Rotational stellar structures based on the Lagrangian variational principle

    NASA Astrophysics Data System (ADS)

    Yasutake, Nobutoshi; Fujisawa, Kotaro; Yamada, Shoichi

    2017-06-01

    A new method for computing multi-dimensional stellar structures is proposed in this study. For stellar evolution calculations, the Henyey method is the de facto standard, but it basically assumes spherical symmetry. One of the difficulties for deformed stellar-evolution calculations is tracing the potentially complex movements of each fluid element. Our new method, on the other hand, is very well suited to following such movements, since it is based on Lagrangian coordinates. The scheme is also based on the variational principle, as adopted in studies of the pasta structures inside neutron stars. Our scheme could be a major breakthrough for evolution calculations of any type of deformed star: proto-planets, proto-stars, proto-neutron stars, etc.

  5. Dynamic behavior of the mercury damper

    NASA Technical Reports Server (NTRS)

    Crout, P. D.; Newkirk, H. L.

    1971-01-01

    The dynamic behavior of the mercury nutation damper is investigated. Particular attention is paid to the eccentric annular mercury configuration, which is the final continuous ring phase that occurs in the operation of all mercury dampers. In this phase, damping is poorest, and the system is very nearly linear. During the investigation, the hydrodynamic problem is treated as three dimensional, and extensive use is made of a variational principle of least viscous frictional power loss. A variational principle of least constraint is also used to advantage. Formulas for calculating the behavior of the mercury damper are obtained. Some confirmatory experiments were performed with transparent ring channels on a laboratory gyroscope. Selected movie frames taken during wobble damping are shown along with the results of film measurements.

  6. How does a scanning ribosomal particle move along the 5'-untranslated region of eukaryotic mRNA? Brownian Ratchet model.

    PubMed

    Spirin, Alexander S

    2009-11-17

    A model of the ATP-dependent unidirectional movement of the 43S ribosomal initiation complex (=40S ribosomal subunit + eIF1 + eIF1A + eIF2.GTP.Met-tRNA(i) + eIF3) during scanning of the 5'-untranslated region of eukaryotic mRNA is proposed. The model is based on the principles of molecular Brownian ratchet machines and explains several enigmatic data concerning the scanning complex. In this model, the one-dimensional diffusion of the ribosomal initiation complex along the mRNA chain is rectified into the net-unidirectional 5'-to-3' movement by the Feynman ratchet-and-pawl mechanism. The proposed mechanism is organized by the heterotrimeric protein eIF4F (=eIF4A + eIF4E + eIF4G), attached to the scanning ribosomal particle via eIF3, and the RNA-binding protein eIF4B that is postulated to play the role of the pawl. The energy for the useful work of the ratchet-and-pawl mechanism is supplied from ATP hydrolysis induced by the eIF4A subunit: ATP binding and its hydrolysis alternately change the affinities of eIF4A for eIF4B and for mRNA, resulting in the restriction of backward diffusional sliding of the 43S ribosomal complex along the mRNA chain, while stochastic movements ahead are allowed.
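
    The rectification mechanism described here can be caricatured in a few lines: unbiased diffusion plus a pawl that stochastically blocks backward steps yields a net 5'-to-3' drift. The parameters below are purely illustrative, not fitted to the eIF4 machinery:

```python
import random

# Hedged toy model of a Feynman ratchet-and-pawl rectifying 1-D diffusion.
# The particle diffuses symmetrically, but the "pawl" blocks a backward step
# with probability p_block, producing net forward motion.
random.seed(1)

def ratchet_walk(steps, p_block=0.8):
    pos = 0
    for _ in range(steps):
        step = random.choice((-1, +1))      # unbiased thermal diffusion
        if step == -1 and random.random() < p_block:
            continue                         # pawl engages: backward step blocked
        pos += step
    return pos

drift = sum(ratchet_walk(1000) for _ in range(200)) / 200
print(drift > 0)   # net forward drift despite unbiased stepping
```

    Note that a physical ratchet needs free energy input (here implicit in the pawl, as ATP hydrolysis in the model) to avoid violating the second law.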

  7. Stationary variational estimates for the effective response and field fluctuations in nonlinear composites

    NASA Astrophysics Data System (ADS)

    Ponte Castañeda, Pedro

    2016-11-01

    This paper presents a variational method for estimating the effective constitutive response of composite materials with nonlinear constitutive behavior. The method is based on a stationary variational principle for the macroscopic potential in terms of the corresponding potential of a linear comparison composite (LCC) whose properties are the trial fields in the variational principle. When used in combination with estimates for the LCC that are exact to second order in the heterogeneity contrast, the resulting estimates for the nonlinear composite are also guaranteed to be exact to second order in the contrast. In addition, the new method allows full optimization with respect to the properties of the LCC, leading to estimates that are fully stationary and exhibit no duality gaps. As a result, the effective response and field statistics of the nonlinear composite can be estimated directly from the appropriately optimized linear comparison composite. By way of illustration, the method is applied to a porous, isotropic, power-law material, and the results are found to compare favorably with earlier bounds and estimates. However, the basic ideas of the method are expected to work for broad classes of composite materials, whose effective response can be given appropriate variational representations, including more general elasto-plastic and soft hyperelastic composites and polycrystals.

  8. Modeling Human Decision Processes in Command and Control

    DTIC Science & Technology

    1983-02-14

    principle, calculus of variations, least squares, etc. ... ALPHATECH, INC. ... to equate what a human does with what he should do. Employing this principle of bounded rationality, the normative-descriptive ... mechanism to describe these salient task features. In essence, the SHOR paradigm is derived from the stimulus-response (S-R) principle ...

  9. Schwinger-variational-principle theory of collisions in the presence of multiple potentials

    NASA Astrophysics Data System (ADS)

    Robicheaux, F.; Giannakeas, P.; Greene, Chris H.

    2015-08-01

    A theoretical method for treating collisions in the presence of multiple potentials is developed by employing the Schwinger variational principle. The current treatment agrees with the local (regularized) frame transformation theory and extends its capabilities. Specifically, the Schwinger variational approach gives results without the divergences that need to be regularized in other methods. Furthermore, it provides a framework to identify the origin of these singularities and possibly improve the local frame transformation. We have used the method to obtain the scattering parameters for different confining potentials symmetric in x, y. The method is also used to treat photodetachment processes in the presence of various confining potentials, thereby highlighting effects of the infinitely many closed channels. Two general features predicted are the vanishing of the total photoabsorption probability at every channel threshold and the occurrence of resonances below the channel thresholds for negative scattering lengths. In addition, the case of negative-ion photodetachment in the presence of uniform magnetic fields is also considered, where unique features emerge at large scattering lengths.

  10. Renormalization group and Ward identities for infrared QED4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mastropietro, Vieri

    2007-10-15

    A regularized version of Euclidean QED4 in the Feynman gauge is considered, with a fixed ultraviolet cutoff, photon mass of the size of the cutoff, and any value, including zero, of the electron mass. We will prove that the Schwinger functions are expressed by convergent series for small values of the charge and verify the Ward identities, up to corrections which are small for momentum scales far from the ultraviolet cutoff.

  11. Free field theory as a string theory?

    NASA Astrophysics Data System (ADS)

    Gopakumar, Rajesh

    2004-11-01

    An approach to systematically implement open-closed string duality for free large N gauge theories is summarised. We show how the relevant closed string moduli space emerges from a reorganisation of the Feynman diagrams contributing to free field correlators. We also indicate why the resulting integrand on moduli space has the right features to be that of a string theory on AdS. To cite this article: R. Gopakumar, C. R. Physique 5 (2004).

  12. MPL-A program for computations with iterated integrals on moduli spaces of curves of genus zero

    NASA Astrophysics Data System (ADS)

    Bogner, Christian

    2016-06-01

    We introduce the Maple program MPL for computations with multiple polylogarithms. The program is based on homotopy invariant iterated integrals on moduli spaces M0,n of curves of genus 0 with n ordered marked points. It includes the symbol map and procedures for the analytic computation of period integrals on M0,n. It supports the automated computation of a certain class of Feynman integrals.

  13. Coherent nonlinear optical studies of elementary processes in biological complexes: diagrammatic techniques based on the wave function versus the density matrix

    PubMed Central

    Biggs, Jason D.; Voll, Judith A.; Mukamel, Shaul

    2012-01-01

    Two types of diagrammatic approaches for the design and simulation of nonlinear optical experiments (closed-time path loops based on the wave function and double-sided Feynman diagrams for the density matrix) are presented and compared. We give guidelines for the assignment of relevant pathways and provide rules for the interpretation of existing nonlinear experiments in carotenoids. PMID:22753822

  14. A quantum description of linear, and non-linear optical interactions in arrays of plasmonic nanoparticles

    NASA Astrophysics Data System (ADS)

    Arabahmadi, Ehsan; Ahmadi, Zabihollah; Rashidian, Bizhan

    2018-06-01

    A quantum theory describing the interaction of photons and plasmons in one- and two-dimensional arrays is presented. Ohmic losses and inter-band transitions are not considered. We use a macroscopic approach and quantum field theory methods, including the S-matrix expansion and Feynman diagrams, for this purpose. Non-linear interactions are also studied, and ways of increasing the probability of such interactions, together with their applications, are discussed.

  15. Landau singularities from the amplituhedron

    DOE PAGES

    Dennen, T.; Prlina, I.; Spradlin, M.; ...

    2017-06-28

    We propose a simple geometric algorithm for determining the complete set of branch points of amplitudes in planar N = 4 super-Yang-Mills theory directly from the amplituhedron, without resorting to any particular representation in terms of local Feynman integrals. This represents a step towards translating integrands directly into integrals. In particular, the algorithm provides information about the symbol alphabets of general amplitudes. We illustrate the algorithm applied to the one- and two-loop MHV amplitudes.

  16. Review of computer simulations of isotope effects on biochemical reactions: From the Bigeleisen equation to Feynman's path integral.

    PubMed

    Wong, Kin-Yiu; Xu, Yuqing; Xu, Liang

    2015-11-01

    Enzymatic reactions are integral components in many biological functions and malfunctions. The iconic structure of each reaction path for elucidating the reaction mechanism in detail is the molecular structure of the rate-limiting transition state (RLTS). But the RLTS is very hard for experimentalists to capture or visualize. Despite the lack of an explicit molecular structure of the RLTS in experiment, we can still trace out the unique "fingerprints" of the RLTS by measuring the isotope effects on the reaction rate. This set of "fingerprints" is considered a most direct probe of the RLTS. By contrast, in computer simulations, the molecular structures of a number of TS can often be precisely visualized on the computer screen; however, theoreticians are not sure which TS is the actual rate-limiting one. As a result, this is an excellent stage setting for a perfect "marriage" between experiment and theory for determining the structure of the RLTS, along with the reaction mechanism, i.e., experimentalists are responsible for the "fingerprinting", whereas theoreticians are responsible for providing candidates that match the "fingerprints". In this Review, the origin of isotope effects on a chemical reaction is discussed from the perspectives of the classical and quantum worlds, respectively (e.g., the origins of inverse kinetic isotope effects, and of all equilibrium isotope effects, are purely quantum). The conventional Bigeleisen equation for isotope effect calculations, as well as its refined version in the framework of Feynman's path integral and Kleinert's variational perturbation (KP) theory for systematically incorporating anharmonicity and (non-parabolic) quantum tunneling, are also presented.
In addition, the outstanding interplay between theory and experiment for successfully deducing the RLTS structures and the reaction mechanisms is demonstrated by applications on biochemical reactions, namely models of bacterial squalene-to-hopene polycyclization and RNA 2'-O-transphosphorylation. For all these applications, we used our recently-developed path-integral method based on the KP theory, called automated integration-free path-integral (AIF-PI) method, to perform ab initio path-integral calculations of isotope effects. As opposed to the conventional path-integral molecular dynamics (PIMD) and Monte Carlo (PIMC) simulations, values calculated from our AIF-PI path-integral method can be as precise as (not as accurate as) the numerical precision of the computing machine. Lastly, comments are made on the general challenges in theoretical modeling of candidates matching the experimental "fingerprints" of RLTS. This article is part of a Special Issue entitled: Enzyme Transition States from Theory and Experiment. Copyright © 2015 Elsevier B.V. All rights reserved.
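
    The zero-point-energy origin of a primary H/D kinetic isotope effect, the simplest limit of the Bigeleisen treatment discussed above, can be illustrated with a back-of-envelope estimate. This is a hedged sketch with no tunneling correction; the C-H stretch frequency is a typical literature value, not a number from this Review:

```python
import math

# Semiclassical sketch: if a C-H stretch is lost at the transition state,
# the rate ratio k_H/k_D ~ exp(Delta ZPE / kB T) from zero-point energy alone.
h = 6.62607015e-34            # Planck constant, J s
kB = 1.380649e-23             # Boltzmann constant, J/K
c = 2.99792458e10             # speed of light, cm/s
T = 300.0                     # temperature, K

nu_CH = 2900.0                # typical C-H stretch, cm^-1
nu_CD = nu_CH / math.sqrt(2)  # harmonic ~1/sqrt(mu) mass scaling (approximate)

def zpe(nu_cm):               # zero-point energy of one harmonic mode, J
    return 0.5 * h * c * nu_cm

kie = math.exp((zpe(nu_CH) - zpe(nu_CD)) / (kB * T))
print(round(kie, 1))          # -> 7.7, a typical primary-KIE magnitude
```

    Measured KIEs well above this semiclassical ceiling are the usual experimental signature of the quantum tunneling that the path-integral methods in the Review are designed to capture.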

  17. Numerical realization of the variational method for generating self-trapped beams

    NASA Astrophysics Data System (ADS)

    Duque, Erick I.; Lopez-Aguayo, Servando; Malomed, Boris A.

    2018-03-01

    We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
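
    The Rayleigh-Ritz optimization underlying the method can be illustrated on a much simpler problem with a known answer: a Gaussian trial function for the 1-D harmonic oscillator (ħ = m = ω = 1), with the energy functional evaluated and minimized numerically rather than analytically. This is a sketch of the principle, not the authors' 2-D nonlinear solver:

```python
import numpy as np

# Rayleigh-Ritz sketch: minimize E[psi] = <psi|H|psi>/<psi|psi> over a
# one-parameter Gaussian family; the exact ground state has width 1, E0 = 0.5.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def energy(width):
    psi = np.exp(-x**2 / (2 * width**2))         # unnormalized Gaussian trial
    dpsi = np.gradient(psi, dx)
    kinetic = 0.5 * np.sum(dpsi**2) * dx         # (1/2) int |psi'|^2
    potential = 0.5 * np.sum(x**2 * psi**2) * dx  # (1/2) int x^2 |psi|^2
    norm = np.sum(psi**2) * dx
    return (kinetic + potential) / norm

widths = np.linspace(0.5, 2.0, 301)
energies = [energy(w) for w in widths]
best = widths[int(np.argmin(energies))]
print(round(best, 2), round(min(energies), 3))   # optimum near width 1, E near 0.5
```

    Replacing the quadrature-and-scan step with automated numerical integration and differentiation over richer trial families is essentially the generalization the paper develops.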

  18. Variational divergence in wave scattering theory with Kirchhoffean trial functions

    NASA Technical Reports Server (NTRS)

    Bird, J. F.

    1986-01-01

    In a recent study of variational improvement of the Kirchhoff approximation for electromagnetic scattering by rough surfaces, a key ingredient in the variational principle was found to diverge for important configurations (e.g., backscatter) if the polarization had any vertical component. The cause and a cure of this divergence are discussed here. The divergence is demonstrated to occur for arbitrary perfectly conducting scatterers, and its universal characteristics are determined by means of a general divergence criterion that is derived. A variational cure for the divergence is prescribed, and it is tested successfully on a standard scattering model.

  19. Time-dependent variational principle in matrix-product state manifolds: Pitfalls and potential

    NASA Astrophysics Data System (ADS)

    Kloss, Benedikt; Lev, Yevgeny Bar; Reichman, David

    2018-01-01

    We study the applicability of the time-dependent variational principle in matrix-product state manifolds for the long time description of quantum interacting systems. By studying integrable and nonintegrable systems for which the long time dynamics are known, we demonstrate that convergence of long time observables is subtle and needs to be examined carefully. Remarkably, for the disordered nonintegrable system we consider, the long time dynamics are in good agreement with the rigorously obtained short time behavior and with previously obtained numerically exact results, suggesting that, at least in this case, the apparent convergence of this approach is reliable. Our study indicates that, while great care must be exercised in establishing the convergence of the method, it may still be asymptotically accurate for a class of disordered nonintegrable quantum systems.

  20. Gaussian-based techniques for quantum propagation from the time-dependent variational principle: Formulation in terms of trajectories of coupled classical and quantum variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalashilin, Dmitrii V.; Burghardt, Irene

    2008-08-28

    In this article, two coherent-state based methods of quantum propagation, namely, coupled coherent states (CCS) and Gaussian-based multiconfiguration time-dependent Hartree (G-MCTDH), are put on the same formal footing, using a derivation from a variational principle in Lagrangian form. By this approach, oscillations of the classical-like Gaussian parameters and oscillations of the quantum amplitudes are formally treated in an identical fashion. We also suggest a new approach denoted here as coupled coherent states trajectories (CCST), which completes the family of Gaussian-based methods. Using the same formalism for all related techniques allows their systematization and a straightforward comparison of their mathematical structure and cost.

  1. Boundary term in metric f ( R) gravity: field equations in the metric formalism

    NASA Astrophysics Data System (ADS)

    Guarnizo, Alejandro; Castañeda, Leonardo; Tejeiro, Juan M.

    2010-11-01

    The main goal of this paper is to obtain, in a straightforward form, the field equations in metric f(R) gravity, using elementary variational principles and adding a boundary term in the action, instead of the usual treatment in an equivalent scalar-tensor approach. We start with a brief review of the Einstein-Hilbert action, together with the Gibbons-York-Hawking boundary term, which is mentioned in some of the literature but is generally missing. Next we present in detail the field equations in metric f(R) gravity, including the discussion about boundaries, and we compare with the Gibbons-York-Hawking term in General Relativity. We notice that this boundary term is necessary in order to have a well-defined extremal action principle under metric variation.
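
    For reference, the action with its boundary term takes the standard form found in the literature (notation: h the induced boundary metric, K the trace of the extrinsic curvature, f'(R) = df/dR; for f(R) = R this reduces to the Gibbons-York-Hawking term of General Relativity):

```latex
S \;=\; \frac{1}{2\kappa}\int_{\mathcal{M}} d^{4}x\,\sqrt{-g}\,f(R)
\;+\; \frac{1}{\kappa}\oint_{\partial\mathcal{M}} d^{3}y\,\sqrt{|h|}\,f'(R)\,K .
```

    The boundary integral cancels the normal-derivative variations of the metric generated by the bulk term, which is why the extremal action principle is well defined only once it is included.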

  2. Coupled Structural, Thermal, Phase-change and Electromagnetic Analysis for Superconductors, Volume 2

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Park, K. C.; Militello, C.; Schuler, J. J.

    1996-01-01

    Described are the theoretical development and computer implementation of reliable and efficient methods for the analysis of coupled mechanical problems that involve the interaction of mechanical, thermal, phase-change and electromagnetic subproblems. The focus application has been the modeling of superconductivity and associated quantum-state phase-change phenomena. In support of this objective the work has addressed the following issues: (1) development of variational principles for finite elements, (2) finite element modeling of the electromagnetic problem, (3) coupling of thermal and mechanical effects, and (4) computer implementation and solution of the superconductivity transition problem. The main accomplishments have been: (1) the development of the theory of parametrized and gauged variational principles, (2) the application of those principles to the construction of electromagnetic, thermal and mechanical finite elements, (3) the coupling of electromagnetic finite elements with thermal and superconducting effects, and (4) the first detailed finite element simulations of bulk superconductors, in particular the Meissner effect and the nature of the normal conducting boundary layer. The theoretical development is described in two volumes. Volume 1 describes mostly formulation-specific problems. Volume 2 describes generalizations of those formulations.

  3. Highly accurate symplectic element based on two variational principles

    NASA Astrophysics Data System (ADS)

    Qing, Guanghui; Tian, Jia

    2018-02-01

    For the stability of numerical results, the mathematical theory of classical mixed methods is relatively complex. However, generalized mixed methods are automatically stable, and their construction is simple and straightforward. In this paper, based on the seminal idea of the generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle with the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous-equation approach was also suggested. Numerical experimentation shows that the accuracy of the stress results of NCSE8 is nearly the same as that of displacement methods, and they are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element computer program, higher accuracy, and wide applicability to various linear elasticity problems with compressible and nearly incompressible materials. NCSE8 may prove even more advantageous for fracture problems owing to its better stress accuracy.

  4. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1989-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  5. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1990-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  6. Weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1991-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.
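
    The lowest-order discretization of such a mixed displacement-momentum weak form is closely related to the implicit midpoint rule; that reduction is an assumption made here for illustration, not a claim about the authors' element. A minimal time-marching sketch for a harmonic oscillator shows the symplectic character as exact conservation of the quadratic energy:

```python
# Illustrative time-marching sketch (not the paper's finite element): the
# implicit midpoint rule for q' = p/m, p' = -k q, a symplectic one-step
# scheme related to the lowest-order mixed (q, p) weak form.  For this
# linear problem the implicit stage can be solved in closed form.
m, k = 1.0, 1.0
dt, nsteps = 0.01, 10_000
q, p = 1.0, 0.0

def energy(q, p):
    return p*p / (2*m) + 0.5*k*q*q

E0 = energy(q, p)
A, B = dt / (2*m), dt * k / 2
for _ in range(nsteps):
    # midpoint update: q1 = q + A(p + p1), p1 = p - B(q + q1), solved exactly
    p1 = ((1 - A*B) * p - 2*B*q) / (1 + A*B)
    q = q + A * (p + p1)
    p = p1

drift = abs(energy(q, p) - E0)
print(drift)  # midpoint conserves quadratic invariants: drift ~ round-off
```

    The midpoint rule, like Gauss collocation methods generally, conserves quadratic invariants exactly, which is why the energy drift above is at the level of floating-point round-off rather than growing with time.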

  7. Ultimate computing. Biomolecular consciousness and nano Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hameroff, S.R.

    1987-01-01

    The book advances the premise that the cytoskeleton is the cell's nervous system, the biological controller/computer. If indeed cytoskeletal dynamics in the nanoscale (billionth meter, billionth second) are the texture of intracellular information processing, emerging "NanoTechnologies" (scanning tunneling microscopy, Feynman machines, von Neumann replicators, etc.) should enable direct monitoring, decoding and interfacing between biological and technological information devices. This in turn could result in important biomedical applications and perhaps a merger of mind and machine: Ultimate Computing.

  8. Remarks on a New Possible Discretization Scheme for Gauge Theories

    NASA Astrophysics Data System (ADS)

    Magnot, Jean-Pierre

    2018-03-01

    We propose here a new discretization method for a class of continuum gauge theories whose action functionals are polynomials in the curvature. Based on the notion of holonomy, this discretization procedure appears gauge-invariant for discretized analogs of Yang-Mills theories, and hence gauge-fixing is fully rigorous for these discretized action functionals. Heuristic steps are deferred to the quantization procedure via Feynman integrals, and the meaning of the heuristic infinite-dimensional Lebesgue integral is questioned.

  9. New developments in FeynCalc 9.0

    NASA Astrophysics Data System (ADS)

    Shtabovenko, Vladyslav; Mertig, Rolf; Orellana, Frederik

    2016-10-01

    In this note we report on the new version of FeynCalc, a Mathematica package for symbolic semi-automatic evaluation of Feynman diagrams and algebraic expressions in quantum field theory. The main features of version 9.0 are: improved tensor reduction and partial fractioning of loop integrals, new functions for using FeynCalc together with tools for the reduction of scalar loop integrals using integration-by-parts (IBP) identities, a better interface to FeynArts, and support for SU(N) generators with explicit fundamental indices.

  10. Feynman perturbation expansion for the price of coupon bond options and swaptions in quantum finance. II. Empirical

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Liang, Cui

    2007-01-01

    The quantum finance pricing formulas for coupon bond options and swaptions derived by Baaquie [Phys. Rev. E 75, 016703 (2006)] are reviewed. We empirically study the swaption market and propose an efficient computational procedure for analyzing the data. Empirical results of the swaption price, volatility, and swaption correlation are compared with the predictions of quantum finance. The quantum finance model generates the market swaption price to over 90% accuracy.

  11. Editorial, Forum and Book Reviews

    NASA Astrophysics Data System (ADS)

    Caulfield, H. J.

    1983-12-01

    In his usual delightful fashion, Professor Richard Feynman recently recounted stories, insights, and observations from his life in science during a one hour interview on U.S. public television. All of what he said was enjoyable, but I think he erred in at least one judgment. He expressed disdain for organizations that form committees to determine who is worthy of an honor. With due deference to his insight, let me offer my own analysis in support of another view.

  12. Bi-local holography in the SYK model

    DOE PAGES

    Jevicki, Antal; Suzuki, Kenta; Yoon, Junggi

    2016-07-01

    We discuss the large N rules of the Sachdev-Ye-Kitaev model and the bi-local representation of the holography of this theory. This is done by establishing the 1/N Feynman rules in terms of bi-local propagators and vertices, which can be evaluated following the recent procedure of Polchinski and Rosenhaus. These rules can then be interpreted as Witten-type diagrams of the dual AdS theory, which we are able to define at the IR fixed point and away from it.

  13. Remarks on a New Possible Discretization Scheme for Gauge Theories

    NASA Astrophysics Data System (ADS)

    Magnot, Jean-Pierre

    2018-07-01

    We propose here a new discretization method for a class of continuum gauge theories whose action functionals are polynomials in the curvature. Based on the notion of holonomy, this discretization procedure appears gauge-invariant for discretized analogs of Yang-Mills theories, and hence gauge-fixing is fully rigorous for these discretized action functionals. Heuristic steps are deferred to the quantization procedure via Feynman integrals, and the meaning of the heuristic infinite-dimensional Lebesgue integral is questioned.

  14. A rederivation of the conformal anomaly for spin-1/2

    NASA Astrophysics Data System (ADS)

    Godazgar, Hadi; Nicolai, Hermann

    2018-05-01

    We rederive the conformal anomaly for spin-1/2 fermions by a genuine Feynman graph calculation, which has not been available so far. Although our calculation merely confirms a result that has been known for a long time, the derivation is new, and thus furnishes a method to investigate more complicated cases (in particular concerning the significance of the quantum trace of the stress tensor in non-conformal theories) where there remain several outstanding and unresolved issues.

  15. Feynman perturbation expansion for the price of coupon bond options and swaptions in quantum finance. II. Empirical.

    PubMed

    Baaquie, Belal E; Liang, Cui

    2007-01-01

    The quantum finance pricing formulas for coupon bond options and swaptions derived by Baaquie [Phys. Rev. E 75, 016703 (2006)] are reviewed. We empirically study the swaption market and propose an efficient computational procedure for analyzing the data. Empirical results of the swaption price, volatility, and swaption correlation are compared with the predictions of quantum finance. The quantum finance model generates the market swaption price to over 90% accuracy.

  16. Weak measurements measure probability amplitudes (and very little else)

    NASA Astrophysics Data System (ADS)

    Sokolovski, D.

    2016-04-01

    Conventional quantum mechanics describes a pre- and post-selected system in terms of virtual (Feynman) paths via which the final state can be reached. In the absence of probabilities, a weak measurement (WM) determines the probability amplitudes for the paths involved. The weak values (WV) can be identified with these amplitudes, or their linear combinations. This allows us to explain the "unusual" properties of the WV, and avoid the "paradoxes" often associated with the WM.
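
    The textbook weak-value formula that underlies the discussion is A_w = ⟨φ|A|ψ⟩ / ⟨φ|ψ⟩ for pre-selected |ψ⟩ and post-selected |φ⟩. A minimal qubit sketch (states and parameters chosen here purely for illustration) shows the "unusual" behaviour: with nearly orthogonal pre- and post-selection the weak value of σz leaves its eigenvalue range [-1, +1].

```python
# Sketch of the standard weak-value formula A_w = <phi|A|psi> / <phi|psi>
# for a single qubit with A = sigma_z (eigenvalues +1 and -1).
import math

def weak_value(phi, psi, A):
    """Weak value for 2-component states phi, psi and a 2x2 matrix A."""
    Apsi = [A[0][0]*psi[0] + A[0][1]*psi[1],
            A[1][0]*psi[0] + A[1][1]*psi[1]]
    num = phi[0].conjugate()*Apsi[0] + phi[1].conjugate()*Apsi[1]
    den = phi[0].conjugate()*psi[0] + phi[1].conjugate()*psi[1]
    return num / den

sigma_z = [[1.0, 0.0], [0.0, -1.0]]
theta = 0.1                                       # small pre/post overlap
psi = [math.cos(math.pi/4), math.sin(math.pi/4)]  # pre-selected state
phi = [math.cos(3*math.pi/4 - theta),             # nearly orthogonal
       math.sin(3*math.pi/4 - theta)]             # post-selected state
wv = weak_value(phi, psi, sigma_z)
print(wv)   # approximately -cot(theta): far outside the eigenvalue range
```

    For these states the weak value works out to -cot(θ) exactly, so shrinking the overlap angle θ makes it arbitrarily large, without any "paradox" once the value is read as a ratio of path amplitudes.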

  17. Proton and antiproton production in deep inelastic muon-nucleon scattering at 280 GeV

    NASA Astrophysics Data System (ADS)

    Arneodo, M.; Arvidson, A.; Aubert, J. J.; Badelek, B.; Beaufays, J.; Bee, C. P.; Benchouk, C.; Berghoff, G.; Bird, I.; Blum, D.; Böhm, E.; de Bouard, X.; Brasse, F. W.; Braun, H.; Broll, C.; Brown, S.; Brück, H.; Calen, H.; Chima, J. S.; Ciborowski, J.; Clifft, R.; Coignet, G.; Combley, F.; Coughlan, J.; D'Agostini, G.; Dahlgren, S.; Dengler, F.; Derado, I.; Dreyer, T.; Drees, J.; Düren, M.; Eckardt, V.; Edwards, A.; Edwards, M.; Ernst, T.; Eszes, G.; Favier, J.; Ferrero, M. I.; Figiel, J.; Flauger, W.; Foster, J.; Gabathuler, E.; Gajewski, J.; Gamet, R.; Gayler, J.; Geddes, N.; Grafström, P.; Grard, F.; Haas, J.; Hagberg, E.; Hasert, F. J.; Hayman, P.; Heusse, P.; Jaffré, M.; Jacholkowska, A.; Janata, F.; Jansco, G.; Johnson, A. S.; Kabuss, E. M.; Kellner, G.; Korbel, V.; Krüger, A.; Krüger, J.; Kullander, S.; Landgraf, U.; Lanske, D.; Loken, J.; Long, K.; Maire, M.; Malecki, P.; Manz, A.; Maselli, S.; Mohr, W.; Montanet, F.; Montgomery, H. E.; Nagy, E.; Nassalski, J.; Norton, P. R.; Oakham, F. G.; Osborne, A. M.; Pascaud, C.; Pawlik, B.; Payre, P.; Peroni, C.; Peschel, H.; Pessard, H.; Pettingale, J.; Pietrzyk, B.; Poensgen, B.; Pötsch, M.; Renton, P.; Ribarics, P.; Rith, K.; Rondio, E.; Sandacz, A.; Scheer, M.; Schlagböhmer, A.; Schiemann, H.; Schmitz, N.; Schneegans, M.; Scholz, M.; Schouten, M.; Schröder, T.; Schultze, K.; Sloan, T.; Stier, H. E.; Studt, M.; Taylor, G. N.; Thénard, J. M.; Thompson, J. C.; de La Torre, A.; Toth, J.; Urban, L.; Wallucks, W.; Whalley, M.; Wheeler, S.; Williams, W. S. C.; Wimpenny, S. J.; Windmolders, R.; Wolf, G.

    1987-12-01

    New results on proton and antiproton production in the target and current fragmentation regions of high energy muon-nucleon scattering are presented. Proton and antiproton production is investigated as a function of Feynman x and rapidity. No significant difference is observed between production on hydrogen and deuterium targets. Correlations between pp, p̄p and p̄p̄ pairs are analysed and the results are compared with the predictions of the Lund fragmentation model.

  18. Feynman-diagrams approach to the quantum Rabi model for ultrastrong cavity QED: stimulated emission and reabsorption of virtual particles dressing a physical excitation

    NASA Astrophysics Data System (ADS)

    Di Stefano, Omar; Stassi, Roberto; Garziano, Luigi; Frisk Kockum, Anton; Savasta, Salvatore; Nori, Franco

    2017-05-01

    In quantum field theory, bare particles are dressed by a cloud of virtual particles to form physical particles. The virtual particles affect properties such as the mass and charge of the physical particles, and it is only these modified properties that can be measured in experiments, not the properties of the bare particles. The influence of virtual particles is prominent in the ultrastrong-coupling regime of cavity quantum electrodynamics (QED), which has recently been realised in several condensed-matter systems. In some of these systems, the effective interaction between atom-like transitions and the cavity photons can be switched on or off by external control pulses. This offers unprecedented possibilities for exploring quantum vacuum fluctuations and the relation between physical and bare particles. We consider a single three-level quantum system coupled to an optical resonator. Here we show that, by applying external electromagnetic pulses of suitable amplitude and frequency, each virtual photon dressing a physical excitation in cavity-QED systems can be converted into a physical observable photon, and back again. In this way, the hidden relationship between the bare and the physical excitations can be unravelled and becomes experimentally testable. The conversion between virtual and physical photons can be clearly pictured using Feynman diagrams with cut loops.

  19. 100th anniversary of the birth of E M Lifshitz (Scientific session of the Physical Sciences Division of the Russian Academy of Sciences, 26 March 2015)

    NASA Astrophysics Data System (ADS)

    2015-09-01

    A scientific session of the Physical Sciences Division of the Russian Academy of Sciences dedicated to the 100th anniversary of the birth of Academician E M Lifshitz was held in the conference hall of the Institute of Physical Problems, RAS, on 26 March 2015. The agenda of the session announced on the website www.gpad.ac.ru of the PSD RAS contains the reports: (1) Khalatnikov I M (Landau Institute for Theoretical Physics, RAS, Moscow) "Problem of singularity in cosmology"; (2) Kats E I (Landau Institute for Theoretical Physics, RAS, Moscow) "Van der Waals, Casimir, and Lifshitz forces in soft matter"; (3) Volovik G E (Landau Institute for Theoretical Physics, RAS, Moscow) "Superfluids in rotation: Onsager-Feynman vortices and Landau-Lifshitz vortex sheets." Papers written on the basis of oral presentations 1-3 are published below. • Stochastic cosmology, perturbation theories, and Lifshitz gravity, I M Khalatnikov, A Yu Kamenshchik Physics-Uspekhi, 2015, Volume 58, Number 9, Pages 878-891 • Van der Waals, Casimir, and Lifshitz forces in soft matter, E I Kats Physics-Uspekhi, 2015, Volume 58, Number 9, Pages 892-896 • Superfluids in rotation: Landau-Lifshitz vortex sheets vs Onsager-Feynman vortices, G E Volovik Physics-Uspekhi, 2015, Volume 58, Number 9, Pages 897-905

  20. Calculating massive 3-loop graphs for operator matrix elements by the method of hyperlogarithms

    NASA Astrophysics Data System (ADS)

    Ablinger, Jakob; Blümlein, Johannes; Raab, Clemens; Schneider, Carsten; Wißbrock, Fabian

    2014-08-01

    We calculate convergent 3-loop Feynman diagrams containing a single massive loop equipped with twist τ=2 local operator insertions corresponding to spin N. They contribute to the massive operator matrix elements in QCD describing the massive Wilson coefficients for deep-inelastic scattering at large virtualities. Diagrams of this kind can be computed using an extended version of the method of hyperlogarithms, originally designed for massless Feynman diagrams without operators. The method is applied to Benz- and V-type graphs, belonging to the genuine 3-loop topologies. In the case of the V-type graphs with five massive propagators, new types of nested sums and iterated integrals emerge. The sums are given in terms of finite binomially and inverse-binomially weighted generalized cyclotomic sums, while the one-dimensionally iterated integrals are based on a set of ∼30 square-root valued letters. We also derive the asymptotic representations of the nested sums and present the solution for N ∈ ℂ. Integrals with a power-like divergence ∝ a^N (a ∈ ℝ, a > 1) in N-space for large values of N emerge. They still possess a representation in x-space, which is given in terms of root-valued iterated integrals in the present case. The method of hyperlogarithms is also used to calculate higher moments for crossed box graphs with different operator insertions.
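
    The flavour of "binomially weighted" sums with computable closed forms and asymptotics can be conveyed with a much simpler cousin; the specific sum below is an assumption chosen for illustration and is not one of the paper's nested sums.

```python
# Illustrative sketch (simpler than the paper's sums): a finite binomially
# weighted sum, its known closed form, and its leading large-N asymptotics:
#   S(N) = sum_{i=0}^{N} C(2i,i)/4^i = (2N+1) C(2N,N)/4^N ~ 2 sqrt(N/pi).
import math

def binomial_sum(N):
    """Return (S(N), t_N) with t_i = C(2i,i)/4^i built by recursion,
    avoiding the float overflow of computing C(2N,N) and 4^N directly."""
    s, t = 0.0, 1.0
    for i in range(N):
        s += t
        t *= (2*i + 1) / (2*i + 2)   # ratio t_{i+1}/t_i = (2i+1)/(2i+2)
    return s + t, t

N = 2000
S, tN = binomial_sum(N)
closed = (2*N + 1) * tN              # closed form of the same sum
asym = 2.0 * math.sqrt(N / math.pi)  # leading asymptotic behaviour
print(S, closed, asym)
```

    The closed form agrees with the direct summation to round-off, and the leading asymptotics is already accurate to a few parts in 10^4 at N = 2000, which is the sense in which "asymptotic representations of the nested sums" are useful in practice.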

  1. Shrunk loop theorem for the topology probabilities of closed Brownian (or Feynman) paths on the twice punctured plane

    NASA Astrophysics Data System (ADS)

    Giraud, O.; Thain, A.; Hannay, J. H.

    2004-02-01

    The shrunk loop theorem proved here is an integral identity which facilitates the calculation of the relative probability (or probability amplitude) of any given topology that a free, closed Brownian (or Feynman) path of a given 'duration' might have on the twice punctured plane (plane with two marked points). The result is expressed as a 'scattering' series of integrals of increasing dimensionality based on the maximally shrunk version of the path. Physically, this applies in different contexts: (i) the topology probability of a closed ideal polymer chain on a plane with two impassable points, (ii) the trace of the Schrödinger Green function, and thence spectral information, in the presence of two Aharonov-Bohm fluxes and (iii) the same with two branch points of a Riemann surface instead of fluxes. Our theorem starts from the Stovicek scattering expansion for the Green function in the presence of two Aharonov-Bohm flux lines, which itself is based on the famous Sommerfeld one puncture point solution of 1896 (the one puncture case has much easier topology, just one winding number). Stovicek's expansion itself can supply the results at the expense of choosing a base point on the loop and then integrating it away. The shrunk loop theorem eliminates this extra two-dimensional integration, distilling the topology from the geometry.
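
    The easy one-puncture version of this counting (just one winding number) can be sketched by Monte Carlo; the loop length, step size and puncture position below are illustrative assumptions, and this is a direct simulation, not the paper's scattering expansion.

```python
# Monte Carlo sketch: sample discrete closed random-walk loops ("Brownian
# bridges") in the plane and histogram their winding number around a single
# puncture.  The shrunk-loop machinery addresses the much harder
# two-puncture version of this topology counting.
import math, random

random.seed(1)

def bridge(n, sigma):
    """Closed n-step random loop: a Gaussian walk with its drift removed."""
    xs, ys = [0.0], [0.0]
    for _ in range(n):
        xs.append(xs[-1] + random.gauss(0.0, sigma))
        ys.append(ys[-1] + random.gauss(0.0, sigma))
    dx, dy = xs[-1] / n, ys[-1] / n   # subtract drift so the loop closes
    return [(x - i*dx, y - i*dy) for i, (x, y) in enumerate(zip(xs, ys))]

def winding(loop, px, py):
    """Integer winding number of the closed loop around the point (px, py)."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(loop, loop[1:]):
        d = math.atan2(y1 - py, x1 - px) - math.atan2(y0 - py, x0 - px)
        d -= 2.0*math.pi * round(d / (2.0*math.pi))   # wrap to (-pi, pi]
        total += d
    return round(total / (2.0*math.pi))

trials, counts = 2000, {}
for _ in range(trials):
    w = winding(bridge(200, 0.3), 1.0, 0.0)
    counts[w] = counts.get(w, 0) + 1
print({w: c / trials for w, c in sorted(counts.items())})
```

    Each sampled loop contributes to exactly one topology class, so the histogram estimates the relative probabilities that the theorem computes analytically in the two-puncture case.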

  2. Effective response of classical, auxetic and chiral magnetoelastic materials by use of a new variational principle

    NASA Astrophysics Data System (ADS)

    Danas, K.

    2017-08-01

    This work provides a rigorous analysis of the effective response, i.e., average magnetization and magnetostriction, of magnetoelastic composites that are subjected to overall magnetic and mechanical loads. It clarifies the differences between a coupled magnetomechanical analysis in which one applies an Eulerian (current) magnetic field and an electroactive one where the Lagrangian (reference) electric field is usually applied. For this, we propose an augmented vector potential variational formulation to carry out numerical periodic homogenization studies of magnetoelastic solids at finite strains and magnetic fields. We show that the developed variational principle can be used for bottom-up design of microstructures with desired magnetomechanical coupling by properly canceling out the macro-geometry and specimen shape effects. To achieve that, we properly treat the average Maxwell stresses arising from the medium surrounding the magnetoelastic representative volume element (RVE), while at the same time we impose a uniform average Eulerian and not Lagrangian magnetic field. The developed variational principle is then used to study a large number of ideal as well as more realistic two-dimensional microstructures. We study the effect of particle volume fraction, particle distribution and particle shape and orientation upon the effective magnetoelastic response at finite strains. We also consider unstructured isotropic microstructures based on random adsorption algorithms and we carry out a convergence study of the representativity of the proposed unit cells. Finally, three-phase two-dimensional auxetic microstructures are analyzed. The first consists of a periodic distribution of voids and particle chains in a polymer matrix, while the second takes advantage of particle shape and chirality to produce negative and positive swelling by proper change of the chirality and the applied magnetic field.

  3. Design principles for elementary gene circuits: Elements, methods, and examples

    NASA Astrophysics Data System (ADS)

    Savageau, Michael A.

    2001-03-01

    The control of gene expression involves complex circuits that exhibit enormous variation in design. For years the most convenient explanation for these variations was historical accident. According to this view, evolution is a haphazard process in which many different designs are generated by chance; there are many ways to accomplish the same thing, and so no further meaning can be attached to such different but equivalent designs. In recent years a more satisfying explanation based on design principles has been found for at least certain aspects of gene circuitry. By design principle we mean a rule that characterizes some biological feature exhibited by a class of systems such that discovery of the rule allows one not only to understand known instances but also to predict new instances within the class. The central importance of gene regulation in modern molecular biology provides strong motivation to search for more of these underlying design principles. The search is in its infancy and there are undoubtedly many design principles that remain to be discovered. The focus of this three-part review will be the class of elementary gene circuits in bacteria. The first part reviews several elements of design that enter into the characterization of elementary gene circuits in prokaryotic organisms. Each of these elements exhibits a variety of realizations whose meaning is generally unclear. The second part reviews mathematical methods used to represent, analyze, and compare alternative designs. Emphasis is placed on particular methods that have been used successfully to identify design principles for elementary gene circuits. 
The third part reviews four design principles that make specific predictions regarding (1) two alternative modes of gene control, (2) three patterns of coupling gene expression in elementary circuits, (3) two types of switches in inducible gene circuits, and (4) the realizability of alternative gene circuits and their response to phased environmental cues. In each case, the predictions are supported by experimental evidence. These results are important for understanding the function, design, and evolution of elementary gene circuits.

  4. Multimodal electromechanical model of piezoelectric transformers by Hamilton's principle.

    PubMed

    Nadal, Clement; Pigache, Francois

    2009-11-01

    This work deals with a general energetic approach to establish an accurate electromechanical model of a piezoelectric transformer (PT). Hamilton's principle is used to obtain the equations of motion for free vibrations. The modal characteristics (mass, stiffness, primary and secondary electromechanical conversion factors) are also deduced. Then, to illustrate this general electromechanical method, the variational principle is applied to both homogeneous and nonhomogeneous Rosen-type PT models. A comparison of modal parameters, mechanical displacements, and electrical potentials are presented for both models. Finally, the validity of the electrodynamical model of nonhomogeneous Rosen-type PT is confirmed by a numerical comparison based on a finite elements method and an experimental identification.

  5. Stochastic digital holography for visualizing inside strongly refracting transparent objects.

    PubMed

    Desse, Jean-Michel; Picart, Pascal

    2015-01-01

    This paper presents a digital holographic method to visualize and measure refractive index variations, convection currents, or thermal gradients, occurring inside a transparent and refracting object. The proof of principle is provided through the visualization of refractive index variation inside a lighting bulb. Comparison with transmission and reflection holography is also provided. A very good agreement is obtained, thus validating the proposed approach.

  6. Green's formula and variational principles for cosmic-ray transport with application to rotating and shearing flows

    NASA Technical Reports Server (NTRS)

    Webb, G. M.; Jokipii, J. R.; Morfill, G. E.

    1994-01-01

    Green's theorem and Green's formula for the diffusive cosmic-ray transport equation in relativistic flows are derived. Green's formula gives the solution of the transport equation in terms of the Green's function of the adjoint transport equation, and in terms of distributed sources throughout the region R of interest, plus terms involving the particle intensity and streaming on the boundary. The adjoint transport equation describes the time-reversed particle transport. An Euler-Lagrange variational principle is then obtained for both the mean scattering frame distribution function f, and its adjoint f†. Variations of the variational functional with respect to f† yield the transport equation, whereas variations of f yield the adjoint transport equation. The variational principle, when combined with Noether's theorem, yields the conservation law associated with Green's theorem. An investigation of the transport equation for steady, azimuthal, rotating flows suggests the introduction of a new independent variable H to replace the comoving frame momentum variable p'. For the case of rigid rotating flows, H is conserved and is shown to be analogous to the Hamiltonian for a bead on a rigidly rotating wire. The variable H corresponds to a balance between the centrifugal force and the particle inertia in the rotating frame. The physical interpretation of H includes a discussion of nonrelativistic and special relativistic rotating flows as well as the cases of azimuthal, differentially rotating flows about Schwarzschild and Kerr black holes. Green's formula is then applied to the problem of the acceleration of ultra-high-energy cosmic rays by galactic rotation. The model for galactic rotation assumes an angular velocity law Ω = Ω₀(ω₀/ω), where ω denotes radial distance from the axis of rotation. 
Green's functions for the galactic rotation problem are used to investigate the spectrum of accelerated particles arising from monoenergetic and truncated power-law sources. We conclude that it is possible to accelerate particles beyond the knee by galactic rotation, but not in sufficient number to adequately explain the observed spectrum.
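
    The bead-on-a-rotating-wire analogy invoked above can be checked with a toy integration (units and parameters assumed here, not the cosmic-ray calculation): for a bead on a wire rotating rigidly at angular velocity Ω, the radial motion obeys r̈ = Ω²r, and H = p_r²/2m − ½mΩ²r² is conserved along the motion.

```python
# Toy check of the bead-on-a-rotating-wire analogy: radial dynamics
# r'' = Omega^2 r conserves H = (1/2) m v^2 - (1/2) m Omega^2 r^2,
# the balance between particle inertia and the centrifugal term.
m, Omega = 1.0, 0.7
dt, nsteps = 1e-4, 20_000
r, v = 1.0, 0.0

def H(r, v):
    return 0.5*m*v*v - 0.5*m*Omega**2*r*r

H0 = H(r, v)
for _ in range(nsteps):          # velocity-Verlet integration
    v_half = v + 0.5*dt*Omega**2*r
    r += dt*v_half
    v = v_half + 0.5*dt*Omega**2*r

drift = abs(H(r, v) - H0)
print(drift)   # small: H is conserved along the motion
```

    Note that the bead spirals outward (r grows like cosh Ωt) even though H stays fixed, mirroring how particles can gain momentum while H remains a constant of the motion.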

  7. Black hole thermodynamics from a variational principle: asymptotically conical backgrounds

    DOE PAGES

    An, Ok Song; Cvetič, Mirjam; Papadimitriou, Ioannis

    2016-03-14

    The variational problem of gravity theories is directly related to black hole thermodynamics. For asymptotically locally AdS backgrounds it is known that holographic renormalization results in a variational principle in terms of equivalence classes of boundary data under the local asymptotic symmetries of the theory, which automatically leads to finite conserved charges satisfying the first law of thermodynamics. We show that this connection holds well beyond asymptotically AdS black holes. In particular, we formulate the variational problem for N = 2 STU supergravity in four dimensions with boundary conditions corresponding to those obeyed by the so called ‘subtracted geometries’. We show that such boundary conditions can be imposed covariantly in terms of a set of asymptotic second class constraints, and we derive the appropriate boundary terms that render the variational problem well posed in two different duality frames of the STU model. This allows us to define finite conserved charges associated with any asymptotic Killing vector and to demonstrate that these charges satisfy the Smarr formula and the first law of thermodynamics. Moreover, by uplifting the theory to five dimensions and then reducing on a 2-sphere, we provide a precise map between the thermodynamic observables of the subtracted geometries and those of the BTZ black hole. Finally, surface terms play a crucial role in this identification.
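
    For reference, the two relations the finite conserved charges are shown to satisfy take the familiar form for a four-dimensional black hole of mass M, entropy S, angular momentum J and charge Q (written here in the standard conventions; the subtracted-geometry versions in the paper carry the same structure):

```latex
\delta M = T\,\delta S + \Omega\,\delta J + \Phi\,\delta Q
\quad \text{(first law)}, \qquad
M = 2TS + 2\Omega J + \Phi Q
\quad \text{(Smarr formula)}.
```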

  9. Stress Management: Tai Chi

    MedlinePlus

    ... in constant motion. Tai chi has many different styles. Each style may subtly emphasize various tai chi principles and methods. There are variations within each style. Some styles may focus on health maintenance, while ...

  10. Variational method of determining effective moduli of polycrystals with tetragonal symmetry

    USGS Publications Warehouse

    Meister, R.; Peselnick, L.

    1966-01-01

    Variational principles have been applied to aggregates of randomly oriented pure-phase polycrystals having tetragonal symmetry. The bounds of the effective elastic moduli obtained in this way show a substantial improvement over the bounds obtained by means of the Voigt and Reuss assumptions. The Hill average is found to be a good approximation in most cases when compared to the bounds found from the variational method. The new bounds reduce in their limits to the Voigt and Reuss values. © 1966 The American Institute of Physics.

  11. Numerical realization of the variational method for generating self-trapped beams.

    PubMed

    Duque, Erick I; Lopez-Aguayo, Servando; Malomed, Boris A

    2018-03-19

    We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
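    The record's Rayleigh-Ritz optimization principle can be illustrated in one dimension with a deliberately simple stand-in problem (a harmonic oscillator and a Gaussian trial function, both our assumptions, not the paper's 2-D soliton setting): evaluate the Rayleigh quotient numerically on a grid and scan the trial parameter for its minimum.

    ```python
    import numpy as np

    def rayleigh_quotient(sigma, x):
        """Variational energy <psi|H|psi>/<psi|psi> for the Gaussian trial
        function psi(x) = exp(-x^2/(2 sigma^2)) in the 1-D harmonic
        oscillator H = -(1/2) d^2/dx^2 + (1/2) x^2 (dimensionless units)."""
        dx = x[1] - x[0]
        psi = np.exp(-x**2 / (2.0 * sigma**2))
        d2psi = np.gradient(np.gradient(psi, x), x)   # numerical psi''
        h_psi = -0.5 * d2psi + 0.5 * x**2 * psi       # H acting on psi
        return np.sum(psi * h_psi) * dx / (np.sum(psi * psi) * dx)

    x = np.linspace(-8.0, 8.0, 4001)
    sigmas = np.linspace(0.5, 2.0, 151)
    energies = [rayleigh_quotient(s, x) for s in sigmas]
    best = float(sigmas[int(np.argmin(energies))])
    # The minimum of the Rayleigh quotient falls at sigma = 1 with E = 1/2,
    # which for this toy Hamiltonian coincides with the exact ground state.
    ```

    The same scan-and-minimize pattern, with the analytic Lagrangian integrals replaced by quadratures, is the essence of a numerical variational method.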

  12. Synthesis: Intertwining product and process

    NASA Technical Reports Server (NTRS)

    Weiss, David M.

    1990-01-01

    Synthesis is a proposed systematic process for rapidly creating different members of a program family. Family members are described by variations in their requirements. Requirements variations are mapped to variations on a standard design to generate production quality code and documentation. The approach is made feasible by using principles underlying design for change. Synthesis incorporates ideas from rapid prototyping, application generators, and domain analysis. The goals of Synthesis and the Synthesis process are discussed. The technology needed and the feasibility of the approach are also briefly discussed. The status of current efforts to implement Synthesis methodologies is presented.

  13. Book Review:

    NASA Astrophysics Data System (ADS)

    Das, Ashok

    2007-01-01

    It is not usual for someone to write a book on someone else's Ph.D. thesis, but then Feynman was not a usual physicist. He was without doubt one of the most original physicists of the twentieth century, who has strongly influenced the developments in quantum field theory through his many ingenious contributions. The path integral approach to quantum theories is one such contribution which pervades almost all areas of physics. What is astonishing is that he developed this idea as a graduate student for his Ph.D. thesis, which has been printed, for the first time, in the present book along with two other related articles. The early developments in quantum theory, by Heisenberg and Schrödinger, were based on the Hamiltonian formulation, where one starts with the Hamiltonian description of a classical system and then promotes the classical observables to noncommuting quantum operators. However, Dirac had already stressed in an article in 1932 (this article is also reproduced in the present book) that the Lagrangian is more fundamental than the Hamiltonian, at least from the point of view of relativistic invariance, and he wondered how the Lagrangian may enter into the quantum description. He had developed this idea through his 'transformation matrix' theory and had even hinted at how the action of the classical theory may enter such a description. However, although the brief paper by Dirac contained the basic essential ideas, it did not fully develop the idea of a Lagrangian description in detail in the functional language. Feynman, on the other hand, was interested in the electromagnetic interactions of the electron from a completely different point of view rooted in a theory involving action-at-a-distance. His theory (along with John Wheeler) did not have a Hamiltonian description and, in order to quantize such a theory, he needed an alternative formulation of quantum mechanics.
When the article by Dirac was brought to his attention, he immediately realized what he was looking for and developed fully what is known today as the path integral approach to quantum theories. Although his main motivation was in the study of theories involving the concept of action-at-a-distance, as he emphasizes in his thesis, his formulation of quantum theories applies to any theory in general. The thesis develops quite systematically and in detail all the concepts of functionals necessary for this formulation. The motivation and the physical insights are described in the brilliant 'Feynman' style. It is incredible that even at that young age, the signs of his legendary teaching style were evident in his presentation of the material in the thesis. The path integral approach is now something that every graduate student in theoretical physics is supposed to know. There are several books on the subject, even one by Feynman himself (and Hibbs). Nonetheless, the thesis provides a very good background for the way these ideas came about. The two companion articles, although available in print, also give a complete picture of the development of this line of thinking. The helpful introductory remarks by the editor also put things in the proper historical perspective. This book would be very helpful to anyone interested in the development of modern ideas in physics.

  14. Computational fluid mechanics utilizing the variational principle of modeling damping seals

    NASA Technical Reports Server (NTRS)

    Abernathy, J. M.; Farmer, R.

    1985-01-01

    An analysis for modeling damping seals for use in Space Shuttle main engine turbomachinery is being produced. Development of a computational fluid mechanics code for turbulent, incompressible flow is required.

  15. Colors of the Sky.

    ERIC Educational Resources Information Center

    Bohren, Craig F.; Fraser, Alistair B.

    1985-01-01

    Explains the physical principles which result in various colors of the sky. Topics addressed include: blueness, mystical properties of water vapor, ozone, fluctuation theory of scattering, variation of purity and brightness, and red sunsets and sunrises. (DH)

  16. Variational principles for stochastic fluid dynamics

    PubMed Central

    Holm, Darryl D.

    2015-01-01

    This paper derives stochastic partial differential equations (SPDEs) for fluid dynamics from a stochastic variational principle (SVP). The paper proceeds by taking variations in the SVP to derive stochastic Stratonovich fluid equations; writing their Itô representation; and then investigating the properties of these stochastic fluid models in comparison with each other, and with the corresponding deterministic fluid models. The circulation properties of the stochastic Stratonovich fluid equations are found to closely mimic those of the deterministic ideal fluid models. As with deterministic ideal flows, motion along the stochastic Stratonovich paths also preserves the helicity of the vortex field lines in incompressible stochastic flows. However, these Stratonovich properties are not apparent in the equivalent Itô representation, because they are disguised by the quadratic covariation drift term arising in the Stratonovich to Itô transformation. This term is a geometric generalization of the quadratic covariation drift term already found for scalar densities in Stratonovich's famous 1966 paper. The paper also derives motion equations for two examples of stochastic geophysical fluid dynamics; namely, the Euler–Boussinesq and quasi-geostrophic approximations. PMID:27547083
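    The quadratic covariation drift term invoked in this record is easiest to see in the scalar textbook case (our illustration, not the paper's geometric fluid version):

    ```latex
    dX_t = b(X_t)\circ \mathrm{d}W_t
    \qquad\Longleftrightarrow\qquad
    \mathrm{d}X_t = \tfrac{1}{2}\,b(X_t)\,b'(X_t)\,\mathrm{d}t
                  + b(X_t)\,\mathrm{d}W_t .
    ```

    The extra ½ b b′ dt drift on the Itô side is exactly the term that disguises the Stratonovich circulation and helicity properties; in the fluid setting it acquires the geometric generalization the abstract describes.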

  17. Nonlinear flap-lag axial equations of a rotating beam

    NASA Technical Reports Server (NTRS)

    Kaza, K. R. V.; Kvaternik, R. G.

    1977-01-01

    It is possible to identify essentially four approaches by which analysts have established either the linear or nonlinear governing equations of motion for a particular problem related to the dynamics of rotating elastic bodies. The approaches include the effective applied load artifice in combination with a variational principle and the use of Newton's second law, written as D'Alembert's principle, applied to the deformed configuration. A third approach is a variational method in which nonlinear strain-displacement relations and a first-degree displacement field are used. The method introduced by Vigneron (1975) for deriving the linear flap-lag equations of a rotating beam constitutes the fourth approach. The reported investigation shows that all four approaches make use of the geometric nonlinear theory of elasticity. An alternative method for deriving the nonlinear coupled flap-lag-axial equations of motion is also discussed.

  18. First-principle study of effect of variation of `x' on the band alignment in CZTS1-xSex

    NASA Astrophysics Data System (ADS)

    Ghemud, Vipul; Kshirsagar, Anjali

    2018-04-01

    The present work concentrates on the electronic structure study of CZTS1-xSex alloy with x ranging from 0 to 1. For the alloy study, we have carried out first-principles calculations employing the generalized gradient approximation for structural optimization and further a hybrid functional approach to compare the optical band gap with that obtained from the experiments. A systematic increase in the lattice parameters with lowering of the band gap from 1.52 eV to 1.04 eV is seen with increasing Se concentration from 0 to 100%; however, the lowering of the valence band edge and conduction band edge is not linear with the concentration variation. Our results indicate that the lowering of the band gap is a result of increased Cu:d and Se:p hybridization with increasing `x'.
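    As a hedged illustration of the reported nonlinear trend, alloy gaps are often modeled as a linear interpolation between end-member gaps minus a bowing term. Only the two end-member gaps below come from the record; the bowing parameter value is hypothetical.

    ```python
    def band_gap(x, eg_s=1.52, eg_se=1.04, bowing=0.10):
        """Gap of the CZTS(1-x)Se(x) alloy (eV): linear interpolation
        between the end-member gaps reported in the record, minus a
        bowing term b*x*(1-x) that makes the composition dependence
        nonlinear. The bowing parameter here is hypothetical."""
        return (1.0 - x) * eg_s + x * eg_se - bowing * x * (1.0 - x)

    # End members are reproduced exactly; at mid-composition the gap
    # dips below the linear interpolation by bowing/4.
    ```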

  19. A first-principles model for orificed hollow cathode operation

    NASA Technical Reports Server (NTRS)

    Salhi, A.; Turchi, P. J.

    1992-01-01

    A theoretical model describing orificed hollow cathode discharge is presented. The approach adopted is based on a purely analytical formulation founded on first principles. The present model predicts the emission surface temperature and plasma properties such as electron temperature, number densities, and plasma potential. In general, good agreement between theory and experiment is obtained. Comparison of the results with the available related experimental data shows a maximum difference of 10 percent in emission surface temperature, 20 percent in electron temperature, and 35 percent in plasma potential. For the variation of the electron number density with the discharge current, a maximum discrepancy of 36 percent is obtained. However, for the variation with the cathode internal pressure, the predicted electron number density is higher than the experimental data by a maximum factor of 2.

  20. Vacuum energy in Einstein-Gauss-Bonnet anti-de Sitter gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kofinas, Georgios; Olea, Rodrigo

    2006-10-15

    A finite action principle for Einstein-Gauss-Bonnet anti-de Sitter gravity is achieved by supplementing the bulk Lagrangian by a suitable boundary term, whose form substantially differs in odd and even dimensions. For even dimensions, this term is given by the boundary contribution in the Euler theorem with a coupling constant fixed, demanding the spacetime to have constant (negative) curvature in the asymptotic region. For odd dimensions, the action is stationary under a boundary condition on the variation of the extrinsic curvature. A well-posed variational principle leads to an appropriate definition of energy and other conserved quantities using the Noether theorem, and to a correct description of black hole thermodynamics. In particular, this procedure assigns a nonzero energy to anti-de Sitter spacetime in all odd dimensions.

  1. Emergency medicine: an operations management view.

    PubMed

    Soremekun, Olan A; Terwiesch, Christian; Pines, Jesse M

    2011-12-01

    Operations management (OM) is the science of understanding and improving business processes. For the emergency department (ED), OM principles can be used to reduce and alleviate the effects of crowding. A fundamental principle of OM is the waiting time formula, which has clear implications in the ED given that waiting time is fundamental to patient-centered emergency care. The waiting time formula consists of the activity time (how long it takes to complete a process), the utilization rate (the proportion of time a particular resource such as staff is working), and two measures of variation: the variation in patient interarrival times and the variation in patient processing times. Understanding the waiting time formula is important because it presents the fundamental parameters that can be managed to reduce waiting times and length of stay. An additional useful OM principle that is applicable to the ED is the efficient frontier. The efficient frontier compares the performance of EDs with respect to two dimensions: responsiveness (i.e., 1/wait time) and utilization rates. Some EDs may be "on the frontier," maximizing their responsiveness at their given utilization rates. However, most EDs likely have opportunities to move toward the frontier. Increasing capacity is a movement along the frontier, and to truly move toward the frontier (i.e., improving responsiveness at a fixed capacity), we articulate three possible options: eliminating waste, reducing variability, or increasing flexibility. When conceptualizing ED crowding interventions, these are the major strategies to consider. © 2011 by the Society for Academic Emergency Medicine.
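    The record names the ingredients of the waiting time formula without writing it out. A standard closed form built from exactly these ingredients is Kingman's single-server approximation, used here as a hedged stand-in (the article may present a variant); the function name and numbers are ours.

    ```python
    def kingman_wait(utilization, cv_arrival, cv_service, activity_time):
        """Kingman's heavy-traffic approximation for the mean wait at a
        single server: W ~ [u/(1-u)] * [(Ca^2 + Cs^2)/2] * activity_time,
        where Ca and Cs are the coefficients of variation of interarrival
        and processing times."""
        if not 0 <= utilization < 1:
            raise ValueError("utilization must be in [0, 1)")
        return (utilization / (1.0 - utilization)
                * (cv_arrival**2 + cv_service**2) / 2.0
                * activity_time)

    # Waiting explodes as utilization -> 1, and halving both coefficients
    # of variation cuts the wait fourfold at fixed utilization:
    w_busy = kingman_wait(0.90, 1.0, 1.0, 30.0)     # ~ 270 minutes
    w_low_var = kingman_wait(0.90, 0.5, 0.5, 30.0)  # ~ 67.5 minutes
    ```

    The comparison shows why "reducing variability" is listed alongside adding capacity as a route toward the efficient frontier.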

  2. Variation effect on the insecticide activity of DDT analogues. A chemometric approach

    NASA Astrophysics Data System (ADS)

    Itoh, S.; Nagashima, U.

    2002-08-01

    We investigated the effect of structural variation on the insecticide activity of DDT analogues by using first-principles electronic structure calculations and neural network analysis. It was found that the charge distribution at specific atomic sites in the DDT molecule is related to toxicity. This approach can contribute to designing new insecticides and new processes for rendering DDT analogues harmless.

  3. Optimal Control of Evolution Mixed Variational Inclusions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alduncin, Gonzalo, E-mail: alduncin@geofisica.unam.mx

    2013-12-15

    Optimal control problems of primal and dual evolution mixed variational inclusions, in reflexive Banach spaces, are studied. The solvability analysis of the mixed state systems is established via duality principles. The optimality analysis is performed in terms of perturbation conjugate duality methods, and proximation penalty-duality algorithms to mixed optimality conditions are further presented. Applications to nonlinear diffusion constrained problems as well as quasistatic elastoviscoplastic bilateral contact problems exemplify the theory.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morante, S., E-mail: morante@roma2.infn.it; Rossi, G.C., E-mail: rossig@roma2.infn.it; Centro Fermi-Museo Storico della Fisica e Centro Studi e Ricerche E. Fermi, Compendio del Viminale, Piazza del Viminale 1, I-00184 Rome

    We give a novel and simple proof of the DFT expression for the interatomic force field that drives the motion of atoms in classical Molecular Dynamics, based on the observation that the ground state electronic energy, seen as a functional of the external potential, is the Legendre transform of the Hohenberg–Kohn functional, which in turn is a functional of the electronic density. We show in this way that the so-called Hellmann–Feynman analytical formula, currently used in numerical simulations, actually provides the exact expression of the interatomic force.
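    In standard notation (ours, not necessarily the authors'), the exact interatomic force the record refers to is the Hellmann–Feynman expression: because the ground-state energy E[v] is the Legendre transform of the Hohenberg–Kohn functional, its functional derivative with respect to the external potential is the density, so

    ```latex
    \mathbf{F}_I
      \;=\; -\frac{\mathrm{d}E}{\mathrm{d}\mathbf{R}_I}
      \;=\; -\int n(\mathbf{r})\,
         \frac{\partial v_{\mathrm{ext}}(\mathbf{r};\{\mathbf{R}\})}
              {\partial \mathbf{R}_I}\,\mathrm{d}^3 r
        \;-\;\frac{\partial E_{\mathrm{ion\text{-}ion}}}{\partial \mathbf{R}_I},
    ```

    with no additional terms from the implicit dependence of the density on the ionic positions.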

  5. Stochastic Calculus and Differential Equations for Physics and Finance

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2013-02-01

    1. Random variables and probability distributions; 2. Martingales, Markov, and nonstationarity; 3. Stochastic calculus; 4. Ito processes and Fokker-Planck equations; 5. Selfsimilar Ito processes; 6. Fractional Brownian motion; 7. Kolmogorov's PDEs and Chapman-Kolmogorov; 8. Non Markov Ito processes; 9. Black-Scholes, martingales, and Feynman-Kac; 10. Stochastic calculus with martingales; 11. Statistical physics and finance, a brief history of both; 12. Introduction to new financial economics; 13. Statistical ensembles and time series analysis; 14. Econometrics; 15. Semimartingales; References; Index.

  6. The Feynman-Vernon Influence Functional Approach in QED

    NASA Astrophysics Data System (ADS)

    Biryukov, Alexander; Shleenkov, Mark

    2016-10-01

    In the path integral approach we describe the evolution of interacting electromagnetic and fermionic fields using the density matrix formalism. The equation for the density matrix and the transition probability of the fermionic field are obtained by averaging over the electromagnetic-field influence functional. We obtain a formula for calculating the electromagnetic-field influence functional for arbitrary initial and final states, and derive it explicitly when both states are the vacuum. We also present the Lagrangian for a relativistic fermionic field under the influence of the electromagnetic field vacuum.

  7. About Schrödinger Equation on Fractals Curves Imbedding in R 3

    NASA Astrophysics Data System (ADS)

    Golmankhaneh, Alireza Khalili; Golmankhaneh, Ali Khalili; Baleanu, Dumitru

    2015-04-01

    In this paper we introduce quantum mechanics on a fractal time-space. In the suggested formalism, time and space vary on a Cantor set and a von Koch curve, respectively. Using the Feynman path method in quantum mechanics and F α -calculus, we find the Schrödinger equation on fractal time-space. The fractal Hamiltonian and momentum operators are indicated. Moreover, the continuity equation and the probability density are given in view of F α -calculus.

  8. Landau singularities and symbology: One- and two-loop MHV amplitudes in SYM theory

    DOE PAGES

    Dennen, Tristan; Spradlin, Marcus; Volovich, Anastasia

    2016-03-14

    We apply the Landau equations, whose solutions parameterize the locus of possible branch points, to the one- and two-loop Feynman integrals relevant to MHV amplitudes in planar N = 4 super-Yang-Mills theory. We then identify which of the Landau singularities appear in the symbols of the amplitudes, and which do not. Finally, we observe that all of the symbol entries in the two-loop MHV amplitudes are already present as Landau singularities of one-loop pentagon integrals.

  9. Yang-Mills gauge conditions from Witten's open string field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Haidong; Siegel, Warren

    2007-02-15

    We construct the Zinn-Justin-Batalin-Vilkovisky action for tachyons and gauge bosons from Witten's 3-string vertex of the bosonic open string without gauge fixing. Through canonical transformations, we find the off-shell, local, gauge-covariant action up to 3-point terms, satisfying the usual field theory gauge transformations. Perturbatively, it can be extended to higher-point terms. It also gives a new gauge condition in field theory which corresponds to the Feynman-Siegel gauge on the world-sheet.

  10. High energy behavior of gravity at large N

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canfora, F.

    2006-09-15

    A first step in the analysis of the renormalizability of gravity at large N is carried out. Suitable resummations of planar diagrams give rise to a theory in which there is only a finite number of primitive, superficially divergent, Feynman diagrams. The mechanism is similar to the one which makes the 3D Gross-Neveu model renormalizable at large N. The connections with gravitational confinement and Kawai-Lewellen-Tye relations are briefly analyzed. Some potential problems in fulfilling the Zinn-Justin equations are pointed out.

  11. The Origin of Complex Quantum Amplitudes

    NASA Astrophysics Data System (ADS)

    Goyal, Philip; Knuth, Kevin H.; Skilling, John

    2009-12-01

    Physics is real. Measurement produces real numbers. Yet quantum mechanics uses complex arithmetic, in which √-1 is necessary but mysteriously relates to nothing else. By applying the same sort of symmetry arguments that Cox [1, 2] used to justify probability calculus, we are now able to explain this puzzle. The dual device/object nature of observation requires us to describe the world in terms of pairs of real numbers about which we never have full knowledge. These pairs combine according to complex arithmetic, using Feynman's rules.

  12. One-loop Parke-Taylor factors for quadratic propagators from massless scattering equations

    NASA Astrophysics Data System (ADS)

    Gomez, Humberto; Lopez-Arcos, Cristhiam; Talavera, Pedro

    2017-10-01

    In this paper we reconsider the Cachazo-He-Yuan construction (CHY) of the so called scattering amplitudes at one-loop, in order to obtain quadratic propagators. In theories with colour ordering the key ingredient is the redefinition of the Parke-Taylor factors. After classifying all the possible one-loop CHY-integrands we conjecture a new one-loop amplitude for the massless Bi-adjoint Φ3 theory. The prescription directly reproduces the quadratic propagators of the traditional Feynman approach.

  13. A Maple package for computing Gröbner bases for linear recurrence relations

    NASA Astrophysics Data System (ADS)

    Gerdt, Vladimir P.; Robertz, Daniel

    2006-04-01

    A Maple package for computing Gröbner bases of linear difference ideals is described. The underlying algorithm is based on Janet and Janet-like monomial divisions associated with finite difference operators. The package can be used, for example, for automatic generation of difference schemes for linear partial differential equations and for reduction of multiloop Feynman integrals. These two possible applications are illustrated by simple examples of the Laplace equation and a one-loop scalar integral of propagator type.

  14. Measurement of isotope abundance variations in nature by gravimetric spiking isotope dilution analysis (GS-IDA).

    PubMed

    Chew, Gina; Walczyk, Thomas

    2013-04-02

    Subtle variations in the isotopic composition of elements carry unique information about physical and chemical processes in nature and are now exploited widely in diverse areas of research. Reliable measurement of natural isotope abundance variations is among the biggest challenges in inorganic mass spectrometry as they are highly sensitive to methodological bias. For decades, double spiking of the sample with a mix of two stable isotopes has been considered the reference technique for measuring such variations both by multicollector-inductively coupled plasma mass spectrometry (MC-ICPMS) and multicollector-thermal ionization mass spectrometry (MC-TIMS). However, this technique can only be applied to elements having at least four stable isotopes. Here we present a novel approach that requires measurement of three isotope signals only and which is more robust than the conventional double spiking technique. This became possible by gravimetric mixing of the sample with an isotopic spike in different proportions and by applying principles of isotope dilution for data analysis (GS-IDA). The potential and principle use of the technique is demonstrated for Mg in human urine using MC-TIMS for isotopic analysis. Mg is an element inaccessible to double spiking methods as it consists of three stable isotopes only and shows great potential for metabolically induced isotope effects waiting to be explored.
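    The GS-IDA specifics are in the paper, but the underlying isotope-dilution mass balance it builds on can be sketched: the measured isotope ratio of a sample/spike mixture is the amount-weighted blend of the two source ratios, which can be inverted for the sample contribution. The function name and numbers below are hypothetical illustrations, not data from the study.

    ```python
    def sample_isotope_amount(r_sample, r_spike, r_mix, spike_amount):
        """Classic isotope-dilution mass balance, written in terms of the
        amount of the reference isotope contributed by each source. Each
        R is a (spike isotope)/(reference isotope) ratio, and the measured
        mixture ratio is the amount-weighted blend
            R_mix = (A_s*R_s + A_sp*R_sp) / (A_s + A_sp),
        which solved for the sample contribution A_s gives:"""
        return spike_amount * (r_spike - r_mix) / (r_mix - r_sample)

    # Hypothetical Mg-like numbers: natural ratio 0.127, enriched spike 9.0.
    a_spike = 2.0
    a_sample_true = 5.0
    r_mix = (a_sample_true * 0.127 + a_spike * 9.0) / (a_sample_true + a_spike)
    recovered = sample_isotope_amount(0.127, 9.0, r_mix, a_spike)
    # recovered round-trips to 5.0, the assumed sample contribution.
    ```

    GS-IDA's novelty, per the abstract, is doing this gravimetrically at several mixing proportions so that three isotope signals suffice.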

  15. Boltzmann, Darwin and Directionality theory

    NASA Astrophysics Data System (ADS)

    Demetrius, Lloyd A.

    2013-09-01

    Boltzmann’s statistical thermodynamics is a mathematical theory which relates the macroscopic properties of aggregates of interacting molecules with the laws of their interaction. The theory is based on the concept thermodynamic entropy, a statistical measure of the extent to which energy is spread throughout macroscopic matter. Macroscopic evolution of material aggregates is quantitatively explained in terms of the principle: Thermodynamic entropy increases as the composition of the aggregate changes under molecular collision. Darwin’s theory of evolution is a qualitative theory of the origin of species and the adaptation of populations to their environment. A central concept in the theory is fitness, a qualitative measure of the capacity of an organism to contribute to the ancestry of future generations. Macroscopic evolution of populations of living organisms can be qualitatively explained in terms of a neo-Darwinian principle: Fitness increases as the composition of the population changes under variation and natural selection. Directionality theory is a quantitative model of the Darwinian argument of evolution by variation and selection. This mathematical theory is based on the concept evolutionary entropy, a statistical measure which describes the rate at which an organism appropriates energy from the environment and reinvests this energy into survivorship and reproduction. According to directionality theory, microevolutionary dynamics, that is evolution by mutation and natural selection, can be quantitatively explained in terms of a directionality principle: Evolutionary entropy increases when the resources are diverse and of constant abundance; but decreases when the resource is singular and of variable abundance. 
This report reviews the analytical and empirical support for directionality theory, and invokes the microevolutionary dynamics of variation and selection to delineate the principles which govern macroevolutionary dynamics of speciation and extinction. We also elucidate the relation between thermodynamic entropy, which pertains to the extent of energy spreading and sharing within inanimate matter, and evolutionary entropy, which refers to the rate of energy appropriation from the environment and allocation within living systems. We show that the entropic principle of thermodynamics is the limit as R→0, M→∞ (where R denotes the resource production rate and M denotes the population size) of the entropic principle of evolution. We exploit this relation between the thermodynamic and evolutionary tenets to propose a physico-chemical model of the transition from inanimate matter, which is under thermodynamic selection, to living systems, which are subject to evolutionary selection. Life-history variation and the evolution of senescence, the evolutionary dynamics of speciation and extinction, evolutionary trends in body size, the origin of sporadic forms of cancer and neurological diseases, and the evolution of cooperation are important recent applications of directionality theory. These applications, which draw from the medical sciences and sociobiology, appeal to methods which lie outside the formalism described in this report.
A companion review, Demetrius and Gundlach (submitted for publication), gives an account of these applications. An important aspect of this report pertains to the connection between statistical mechanics and evolutionary theory and its implications for understanding the processes which underlie the emergence of living systems from inanimate matter, a problem which has recently attracted considerable attention (Morowitz 1992; Eigen 1992; Dyson 2000; Pross 2012). The connection between the two disciplines can be addressed by appealing to certain extremal principles which are considered the mainstay of the respective theories. The extremal principle in statistical mechanics can be stated as follows:

  16. An analytical derivation of MC-SCF vibrational wave functions for the quantum dynamical simulation of multiple proton transfer reactions: Initial application to protonated water chains

    NASA Astrophysics Data System (ADS)

    Drukker, Karen; Hammes-Schiffer, Sharon

    1997-07-01

    This paper presents an analytical derivation of a multiconfigurational self-consistent-field (MC-SCF) solution of the time-independent Schrödinger equation for nuclear motion (i.e. vibrational modes). This variational MC-SCF method is designed for the mixed quantum/classical molecular dynamics simulation of multiple proton transfer reactions, where the transferring protons are treated quantum mechanically while the remaining degrees of freedom are treated classically. This paper presents a proof that the Hellmann-Feynman forces on the classical degrees of freedom are identical to the exact forces (i.e. the Pulay corrections vanish) when this MC-SCF method is used with an appropriate choice of basis functions. This new MC-SCF method is applied to multiple proton transfer in a protonated chain of three hydrogen-bonded water molecules. The ground state and the first three excited state energies and the ground state forces agree well with full configuration interaction calculations. Sample trajectories are obtained using adiabatic molecular dynamics methods, and nonadiabatic effects are found to be insignificant for these sample trajectories. The accuracy of the excited states will enable this MC-SCF method to be used in conjunction with nonadiabatic molecular dynamics methods. This application differs from previous work in that it is a real-time quantum dynamical nonequilibrium simulation of multiple proton transfer in a chain of water molecules.

  17. Wannier-function-based constrained DFT with nonorthogonality-correcting Pulay forces in application to the reorganization effects in graphene-adsorbed pentacene

    NASA Astrophysics Data System (ADS)

    Roychoudhury, Subhayan; O'Regan, David D.; Sanvito, Stefano

    2018-05-01

Pulay terms arise in the Hellmann-Feynman forces in electronic-structure calculations when one employs a basis set made of localized orbitals that move with their host atoms. If the total energy of the system depends on a subspace population defined in terms of the localized orbitals across multiple atoms, then unconventional Pulay terms will emerge due to the variation of the orbital nonorthogonality with ionic translation. Here, we derive the required exact expressions for such terms, which cannot be eliminated by orbital orthonormalization. We have implemented these corrected ionic forces within the linear-scaling density functional theory (DFT) package onetep, and we have used constrained DFT to calculate the reorganization energy of a pentacene molecule adsorbed on a graphene flake. The calculations are performed by including ensemble DFT, corrections for periodic boundary conditions, and empirical van der Waals interactions. For this system we find that tensorially invariant population analysis yields an adsorbate subspace population that is very close to integer-valued when based upon nonorthogonal Wannier functions, and also, though less precisely, when based upon pseudoatomic functions. Thus, orbitals can provide a very effective population analysis for constrained DFT. Our calculations show that the reorganization energy of the adsorbed pentacene is typically lower than that of pentacene in the gas phase. We attribute this effect to steric hindrance.

  18. Path integral Monte Carlo and the electron gas

    NASA Astrophysics Data System (ADS)

    Brown, Ethan W.

Path integral Monte Carlo is a proven method for accurately simulating quantum mechanical systems at finite-temperature. By stochastically sampling Feynman's path integral representation of the quantum many-body density matrix, path integral Monte Carlo includes non-perturbative effects like thermal fluctuations and particle correlations in a natural way. Over the past 30 years, path integral Monte Carlo has been successfully employed to study the low density electron gas, high-pressure hydrogen, and superfluid helium. For systems where the role of Fermi statistics is important, however, traditional path integral Monte Carlo simulations have an exponentially decreasing efficiency with decreased temperature and increased system size. In this thesis, we work towards improving this efficiency, both through approximate and exact methods, as specifically applied to the homogeneous electron gas. We begin with a brief overview of the current state of atomic simulations at finite-temperature before we delve into a pedagogical review of the path integral Monte Carlo method. We then spend some time discussing the one major issue preventing exact simulation of Fermi systems, the sign problem. Afterwards, we introduce a way to circumvent the sign problem in PIMC simulations through a fixed-node constraint. We then apply this method to the homogeneous electron gas at a large swath of densities and temperatures in order to map out the warm-dense matter regime. The electron gas can be a representative model for a host of real systems, from simple metals to stellar interiors. However, its most common use is as input into density functional theory. To this end, we aim to build an accurate representation of the electron gas from the ground state to the classical limit and examine its use in finite-temperature density functional formulations. The latter half of this thesis focuses on possible routes beyond the fixed-node approximation.
As a first step, we utilize the variational principle inherent in the path integral Monte Carlo method to optimize the nodal surface. By using an ansatz resembling a free particle density matrix, we make a unique connection between a nodal effective mass and the traditional effective mass of many-body quantum theory. We then propose and test several alternate nodal ansatzes and apply them to single atomic systems. Finally, we propose a method to tackle the sign problem head on, by leveraging the relatively simple structure of permutation space. Using this method, we find we can perform exact simulations of the electron gas and 3He that were previously impossible.
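The fixed-node and nodal-optimization machinery above all rests on the same discretized imaginary-time path integral. As a toy illustration of that discretization (deterministic grid propagation rather than the Monte Carlo sampling the thesis uses; the units, grid sizes, and tolerances below are illustrative choices, not taken from the thesis), one can recover the harmonic-oscillator partition function by chaining short-time density matrices:

```python
import numpy as np

def harmonic_Z(beta=1.0, n_slices=16, x_max=6.0, n_grid=401):
    """Partition function of a 1D harmonic oscillator (hbar = m = omega = 1)
    from the discretized imaginary-time path integral."""
    x = np.linspace(-x_max, x_max, n_grid)
    dx = x[1] - x[0]
    tau = beta / n_slices                    # imaginary-time step
    V = 0.5 * x**2                           # harmonic potential on the grid
    # Primitive (Trotter) short-time density matrix rho(x, x'; tau)
    rho = (np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * tau))
           / np.sqrt(2.0 * np.pi * tau)
           * np.exp(-0.5 * tau * (V[:, None] + V[None, :])))
    # Close the ring of n_slices propagators: Z = Tr[(rho * dx)^M]
    return np.trace(np.linalg.matrix_power(rho * dx, n_slices))

beta = 1.0
Z_pi = harmonic_Z(beta)
Z_exact = 1.0 / (2.0 * np.sinh(beta / 2.0))   # exact harmonic-oscillator Z
```

The primitive (Trotter) factorization error shrinks as the number of time slices grows, which is the same convergence knob a production PIMC code turns.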

  19. First-Principles Momentum-Dependent Local Ansatz Wavefunction and Momentum Distribution Function Bands of Iron

    NASA Astrophysics Data System (ADS)

    Kakehashi, Yoshiro; Chandra, Sumal

    2016-04-01

    We have developed a first-principles local ansatz wavefunction approach with momentum-dependent variational parameters on the basis of the tight-binding LDA+U Hamiltonian. The theory goes beyond the first-principles Gutzwiller approach and quantitatively describes correlated electron systems. Using the theory, we find that the momentum distribution function (MDF) bands of paramagnetic bcc Fe along high-symmetry lines show a large deviation from the Fermi-Dirac function for the d electrons with eg symmetry and yield the momentum-dependent mass enhancement factors. The calculated average mass enhancement m*/m = 1.65 is consistent with low-temperature specific heat data as well as recent angle-resolved photoemission spectroscopy (ARPES) data.

  20. Effect of Automatic Processing on Specification of Problem Solutions for Computer Programs.

    DTIC Science & Technology

    1981-03-01

Number 7 ± 2" item limitation on human short-term memory capability (Miller, 1956) should be a guiding principle in program design. Yourdon and...input either a single example solution or multiple example solutions in sequence. If a participant's P1 has a low value - near 0 - it may be concluded... Principles in Experimental Design, Winer, 1971). 55 Table 12 ANOVA Results For Performance Measure 2 Sb DF MS F Source of Variation Between Subjects

  1. The research of statistical properties of colorimetric features of screens with a three-component color formation principle

    NASA Astrophysics Data System (ADS)

    Zharinov, I. O.; Zharinov, O. O.

    2017-12-01

The research addresses the quantitative analysis of the influence of technological variation of the screen color profile parameters on the chromaticity coordinates of the displayed image. Mathematical expressions are proposed which approximate the two-dimensional distribution of chromaticity coordinates of an image displayed on a screen with a three-component color formation principle. These expressions point the way toward correction techniques that improve the reproducibility of the colorimetric features of displays.
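The chromaticity coordinates in question are the projective CIE coordinates x = X/(X+Y+Z), y = Y/(X+Y+Z). A minimal sketch of how primary-matrix variation propagates into chromaticity (the sRGB/D65 matrix and the ±1% perturbation level are assumptions for illustration, not the screens or tolerances studied in the paper):

```python
import numpy as np

# RGB -> XYZ matrix of the display primaries. This is the standard sRGB/D65
# matrix, used here purely as an assumed example profile.
M_RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def chromaticity(rgb, matrix=M_RGB_TO_XYZ):
    """CIE chromaticity coordinates (x, y) of a linear-RGB display color."""
    X, Y, Z = matrix @ np.asarray(rgb, dtype=float)
    s = X + Y + Z
    return X / s, Y / s

x_w, y_w = chromaticity([1.0, 1.0, 1.0])      # display white point

# Model "technological variation": perturb every matrix entry by up to +/-1%
# and record how far the white point drifts in the (x, y) plane.
rng = np.random.default_rng(0)
shifts = []
for _ in range(200):
    perturbed = M_RGB_TO_XYZ * (1.0 + 0.01 * rng.uniform(-1.0, 1.0, size=(3, 3)))
    xs, ys = chromaticity([1.0, 1.0, 1.0], perturbed)
    shifts.append(float(np.hypot(xs - x_w, ys - y_w)))
```

The spread of `shifts` plays the role of the two-dimensional chromaticity distribution whose approximation the record describes.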

  2. In defence of moral imperialism: four equal and universal prima facie principles.

    PubMed

    Dawson, A; Garrard, E

    2006-04-01

    Raanan Gillon is a noted defender of the four principles approach to healthcare ethics. His general position has always been that these principles are to be considered to be both universal and prima facie in nature. In recent work, however, he has made two claims that seem to present difficulties for this view. His first claim is that one of these four principles, respect for autonomy, has a special position in relation to the others: he holds that it is first among equals. We argue that this claim makes little sense if the principles are to retain their prima facie nature. His second claim is that cultural variation can play an independent normative role in the construction of our moral judgments. This, he argues, enables us to occupy a middle ground between what he sees as the twin pitfalls of moral relativism and (what he calls) moral imperialism. We argue that there is no such middle ground, and while Gillon ultimately seems committed to relativism, it is some form of moral imperialism (in the form of moral objectivism) that will provide the only satisfactory construal of the four principles as prima facie universal moral principles.

  3. In defence of moral imperialism: four equal and universal prima facie principles

    PubMed Central

    Dawson, A; Garrard, E

    2006-01-01

    Raanan Gillon is a noted defender of the four principles approach to healthcare ethics. His general position has always been that these principles are to be considered to be both universal and prima facie in nature. In recent work, however, he has made two claims that seem to present difficulties for this view. His first claim is that one of these four principles, respect for autonomy, has a special position in relation to the others: he holds that it is first among equals. We argue that this claim makes little sense if the principles are to retain their prima facie nature. His second claim is that cultural variation can play an independent normative role in the construction of our moral judgments. This, he argues, enables us to occupy a middle ground between what he sees as the twin pitfalls of moral relativism and (what he calls) moral imperialism. We argue that there is no such middle ground, and while Gillon ultimately seems committed to relativism, it is some form of moral imperialism (in the form of moral objectivism) that will provide the only satisfactory construal of the four principles as prima facie universal moral principles. PMID:16574872

  4. Dynamics, morphogenesis and convergence of evolutionary quantum Prisoner's Dilemma games on networks

    PubMed Central

    Yong, Xi

    2016-01-01

    The authors proposed a quantum Prisoner's Dilemma (PD) game as a natural extension of the classic PD game to resolve the dilemma. Here, we establish a new Nash equilibrium principle of the game, propose the notion of convergence and discover the convergence and phase-transition phenomena of the evolutionary games on networks. We investigate the many-body extension of the game or evolutionary games in networks. For homogeneous networks, we show that entanglement guarantees a quick convergence of super cooperation, that there is a phase transition from the convergence of defection to the convergence of super cooperation, and that the threshold for the phase transitions is principally determined by the Nash equilibrium principle of the game, with an accompanying perturbation by the variations of structures of networks. For heterogeneous networks, we show that the equilibrium frequencies of super-cooperators are divergent, that entanglement guarantees emergence of super-cooperation and that there is a phase transition of the emergence with the threshold determined by the Nash equilibrium principle, accompanied by a perturbation by the variations of structures of networks. Our results explore systematically, for the first time, the dynamics, morphogenesis and convergence of evolutionary games in interacting and competing systems. PMID:27118882
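For readers unfamiliar with the baseline, the "dilemma" the quantum extension resolves is visible already in the classical payoff table: defection is the unique best response to every move, yet mutual cooperation pays more. A minimal sketch (conventional payoff values, assumed for illustration; the entangled quantum strategies of the paper are beyond this snippet):

```python
# Conventional Prisoner's Dilemma payoffs (T, R, P, S) = (5, 3, 1, 0),
# assumed for illustration. Keys are (my move, opponent's move).
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(opponent_move):
    """Move maximizing my payoff against a fixed opponent move."""
    return max("CD", key=lambda m: PAYOFF[(m, opponent_move)])

# Defection is the best response to either move, so (D, D) is the unique
# Nash equilibrium -- even though (C, C) would pay both players more.
nash = (best_response("C"), best_response("D"))
```

Entanglement in the quantum version changes exactly this equilibrium structure, which is what drives the convergence and phase-transition phenomena described above.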

  5. Coupled Structural, Thermal, Phase-Change and Electromagnetic Analysis for Superconductors. Volume 1

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Park, K. C.; Militello, C.; Schuler, J. J.

    1996-01-01

Described are the theoretical development and computer implementation of reliable and efficient methods for the analysis of coupled mechanical problems that involve the interaction of mechanical, thermal, phase-change and electromagnetic subproblems. The focus application has been the modeling of superconductivity and associated quantum-state phase-change phenomena. In support of this objective the work has addressed the following issues: (1) development of variational principles for finite elements, (2) finite element modeling of the electromagnetic problem, (3) coupling of thermal and mechanical effects, and (4) computer implementation and solution of the superconductivity transition problem. The main accomplishments have been: (1) the development of the theory of parametrized and gauged variational principles, (2) the application of those principles to the construction of electromagnetic, thermal and mechanical finite elements, (3) the coupling of electromagnetic finite elements with thermal and superconducting effects, and (4) the first detailed finite element simulations of bulk superconductors, in particular the Meissner effect and the nature of the normal conducting boundary layer. The theoretical development is described in two volumes. This volume, Volume 1, describes mostly formulations for specific problems. Volume 2 describes generalization of those formulations.

  6. Ab-initio study on the absorption spectrum of color change sapphire based on first-principles calculations with considering lattice relaxation-effect

    NASA Astrophysics Data System (ADS)

    Novita, Mega; Nagoshi, Hikari; Sudo, Akiho; Ogasawara, Kazuyoshi

    2018-01-01

In this study, we investigated the α-Al2O3:V3+ material, the so-called color change sapphire, based on first-principles calculations without referring to any experimental parameter. The molecular orbital (MO) structure was estimated by one-electron MO calculations using the discrete variational-Xα (DV-Xα) method. Next, the absorption spectra were estimated by many-electron calculations using the discrete variational multi-electron (DVME) method. The effect of lattice relaxation on the crystal structures was estimated based on first-principles band structure calculations. We performed geometry optimizations on pure α-Al2O3 and on α-Al2O3 with the impurity V3+ ion using the Cambridge Serial Total Energy Package (CASTEP) code. The effect of energy corrections such as the configuration dependence correction and the correlation correction was also investigated in detail. The results revealed that the structural change in α-Al2O3:V3+ resulting from the geometry optimization improved the calculated absorption spectra. The combination of the lattice relaxation-effect and the energy correction-effect further improves the agreement with experiment.

  7. Thermodynamic framework to assess low abundance DNA mutation detection by hybridization.

    PubMed

    Willems, Hanny; Jacobs, An; Hadiwikarta, Wahyu Wijaya; Venken, Tom; Valkenborg, Dirk; Van Roy, Nadine; Vandesompele, Jo; Hooyberghs, Jef

    2017-01-01

    The knowledge of genomic DNA variations in patient samples has a high and increasing value for human diagnostics in its broadest sense. Although many methods and sensors to detect or quantify these variations are available or under development, the number of underlying physico-chemical detection principles is limited. One of these principles is the hybridization of sample target DNA versus nucleic acid probes. We introduce a novel thermodynamics approach and develop a framework to exploit the specific detection capabilities of nucleic acid hybridization, using generic principles applicable to any platform. As a case study, we detect point mutations in the KRAS oncogene on a microarray platform. For the given platform and hybridization conditions, we demonstrate the multiplex detection capability of hybridization and assess the detection limit using thermodynamic considerations; DNA containing point mutations in a background of wild type sequences can be identified down to at least 1% relative concentration. In order to show the clinical relevance, the detection capabilities are confirmed on challenging formalin-fixed paraffin-embedded clinical tumor samples. This enzyme-free detection framework contains the accuracy and efficiency to screen for hundreds of mutations in a single run with many potential applications in molecular diagnostics and the field of personalised medicine.
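The hybridization detection principle can be caricatured with a two-state equilibrium model: a single mismatch adds a free-energy penalty that lowers the duplex binding constant, so a mutant-specific probe gains signal over the wild-type background. All numbers below (temperature, free energies, concentrations, and the additive-binding treatment) are assumptions for illustration, not the paper's fitted thermodynamic values:

```python
import math

R_GAS = 1.987e-3   # gas constant, kcal / (mol K)
T_HYB = 318.0      # assumed hybridization temperature, K

def bound_fraction(dG, conc):
    """Langmuir fraction of probe sites hybridized at target concentration
    `conc` (mol/L), for duplex free energy dG (kcal/mol, negative = stable)."""
    K = math.exp(-dG / (R_GAS * T_HYB))
    return K * conc / (1.0 + K * conc)

# Assumed illustrative energetics: perfect match at -14 kcal/mol, single
# mismatch penalized by +3 kcal/mol.
dG_match, ddG_mismatch = -14.0, 3.0
c_total = 1e-9                    # 1 nM total target
frac_mutant = 0.01                # 1% mutant in a wild-type background

# On a mutant-specific probe, mutant targets bind as perfect matches while
# wild-type targets bind as single mismatches (crudely treated as additive).
signal = (bound_fraction(dG_match, frac_mutant * c_total)
          + bound_fraction(dG_match + ddG_mismatch, (1.0 - frac_mutant) * c_total))
# Pure wild-type sample on the same probe: mismatch binding only.
background = bound_fraction(dG_match + ddG_mismatch, c_total)
```

Even at 1% relative abundance the mutant contribution lifts the probe signal measurably above the wild-type background in this toy model, which is the qualitative basis of the detection limit assessed in the record.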

  8. Divergent conservation laws in hyperbolic thermoelasticity

    NASA Astrophysics Data System (ADS)

    Murashkin, E. V.; Radayev, Y. N.

    2018-05-01

    The present study is devoted to the problem of formulation of conservation laws in divergent form for hyperbolic thermoelastic continua. The field formalism is applied to study the problem. A natural density of thermoelastic action and the corresponding variational least action principle are formulated. A special form of the first variation of the action is employed to obtain 4-covariant divergent conservation laws. Differential field equations and constitutive laws are derived from a special form of the first variation of the action integral. The objectivity of constitutive equations is provided by the rotationally invariant forms of the Lagrangian employed.

  9. Variational method of determining effective moduli of polycrystals: (A) hexagonal symmetry, (B) trigonal symmetry

    USGS Publications Warehouse

    Peselnick, L.; Meister, R.

    1965-01-01

Variational principles of anisotropic elasticity have been applied to aggregates of randomly oriented pure-phase polycrystals having hexagonal symmetry and trigonal symmetry. The bounds of the effective elastic moduli obtained in this way show a considerable improvement over the bounds obtained by means of the Voigt and Reuss assumptions. The Hill average is found to be in most cases a good approximation when compared to the bounds found from the variational method. The new bounds reduce in their limits to the Voigt and Reuss values. © 1965 The American Institute of Physics.
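The Voigt and Reuss assumptions referred to above are uniform-strain and uniform-stress averages, which bound the true effective moduli from above and below, with the Hill average taken as their mean. A sketch of the standard computation from a 6×6 stiffness matrix (the hexagonal elastic constants used are illustrative, roughly zinc-like values, not data from the paper):

```python
import numpy as np

# Illustrative hexagonal single-crystal stiffness constants in GPa
# (roughly zinc-like; not data from the paper).
C11, C12, C13, C33, C44 = 161.0, 34.2, 50.1, 61.0, 38.3

C = np.zeros((6, 6))
C[0, 0] = C[1, 1] = C11
C[2, 2] = C33
C[3, 3] = C[4, 4] = C44
C[5, 5] = 0.5 * (C11 - C12)       # hexagonal symmetry relation
C[0, 1] = C[1, 0] = C12
C[0, 2] = C[2, 0] = C[1, 2] = C[2, 1] = C13

S = np.linalg.inv(C)              # compliance matrix

# Voigt (uniform strain) averages from the stiffnesses ...
K_V = (C[0, 0] + C[1, 1] + C[2, 2] + 2.0 * (C[0, 1] + C[0, 2] + C[1, 2])) / 9.0
G_V = (C[0, 0] + C[1, 1] + C[2, 2] - (C[0, 1] + C[0, 2] + C[1, 2])
       + 3.0 * (C[3, 3] + C[4, 4] + C[5, 5])) / 15.0
# ... and Reuss (uniform stress) averages from the compliances.
K_R = 1.0 / (S[0, 0] + S[1, 1] + S[2, 2] + 2.0 * (S[0, 1] + S[0, 2] + S[1, 2]))
G_R = 15.0 / (4.0 * (S[0, 0] + S[1, 1] + S[2, 2])
              - 4.0 * (S[0, 1] + S[0, 2] + S[1, 2])
              + 3.0 * (S[3, 3] + S[4, 4] + S[5, 5]))

# The true polycrystal moduli lie between the bounds; Hill takes the mean.
K_hill, G_hill = 0.5 * (K_V + K_R), 0.5 * (G_V + G_R)
```

The variational bounds of the paper tighten this Voigt-Reuss interval; the code above reproduces only the classical starting point they improve upon.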

  10. An MHD variational principle that admits reconnection

    NASA Technical Reports Server (NTRS)

    Rilee, M. L.; Sudan, R. N.; Pfirsch, D.

    1997-01-01

The variational approach of Pfirsch and Sudan's averaged magnetohydrodynamics (MHD) to the stability of a line-tied current layer is summarized. The effect of line-tying on current sheets that might arise in line-tied magnetic flux tubes is assessed by estimating the growth rates of a resistive instability using a variational method. The results show that this method provides a potentially new technique to gauge the stability of nearly ideal magnetohydrodynamic systems. The primary implication for the stability of solar coronal structures is that tearing modes are probably constantly at work removing magnetic shear from the solar corona.

  11. Geomagnetic field models incorporating physical constraints on the secular variation

    NASA Technical Reports Server (NTRS)

    Constable, Catherine; Parker, Robert L.

    1993-01-01

This proposal has been concerned with methods for constructing geomagnetic field models that incorporate physical constraints on the secular variation. The principal goal that has been accomplished is the development of flexible algorithms designed to test whether the frozen flux approximation is adequate to describe the available geomagnetic data and their secular variation throughout this century. These have been applied to geomagnetic data from both the early and middle part of this century and convincingly demonstrate that there is no need to invoke violations of the frozen flux hypothesis in order to satisfy the available geomagnetic data.

  12. Analysis of Thermal Track Buckling in the Lateral Plane

    DOT National Transportation Integrated Search

    1976-09-01

    The post-buckling equilibrium states are determined analytically. To obtain a consistent formulation of the problem, use is made of the principle of virtual displacements and the variational calculus for variable matching points. The obtained formula...

  13. Mathematical Learning Disabilities in Special Populations: Phenotypic Variation and Cross-Disorder Comparisons

    PubMed Central

    Dennis, Maureen; Berch, Daniel B.; Mazzocco, Michèle M.M.

    2011-01-01

    What is mathematical learning disability (MLD)? The reviews in this special issue adopt different approaches to defining the construct of MLD. Collectively, they demonstrate the current status of efforts to establish a consensus definition and the challenges faced in this endeavor. In this commentary, we reflect upon the proposed pathways to mathematical learning difficulties and disabilities presented across the reviews. Specifically we consider how each of the reviews contributes to identifying the MLD phenotype by specifying the range of assets and deficits in mathematics, identifying sources of individual variation, and characterizing the natural progression of MLD over the life course. We show how principled comparisons across disorders address issues about the cognitive and behavioral co-morbidities of MLD, and whether commonalities in brain dysmorphology are associated with common mathematics performance profiles. We project the status of MLD research ten years hence with respect to theoretical gains, advances in methodology, and principled intervention studies. PMID:19213019

  14. Tests of Mach's Principle With a Mechanical Oscillator

    NASA Technical Reports Server (NTRS)

    Millis, Marc G. (Technical Monitor); Cramer, John G.; Fey, Curran W.; Casissi, Damon V.

    2004-01-01

James F. Woodward has made a prediction, based on Sciama's formulation of Mach's Principle in the framework of general relativity, that in the presence of an energy flow the inertial mass of an object may undergo sizable variations, changing as the second time derivative of the energy. We describe an attempt to test for the predicted effect with a charging capacitor, using a technique that does not require an unbalanced force or any local violation of Newton's 3rd law of motion. We attempt to observe: (1) the gravitational effect of the varying mass and (2) the effect of the mass variation on a driven harmonic oscillator with the charging capacitor as the oscillating mass. We report on the predicted effect, the design and implementation of the measurement apparatus, and initial experience with the apparatus. At this time, however, we will not report on observations of the presence or absence of the Woodward effect.

  15. Path Integrals for Electronic Densities, Reactivity Indices, and Localization Functions in Quantum Systems

    PubMed Central

    Putz, Mihai V.

    2009-01-01

The density matrix theory, the ancestor of density functional theory, provides the immediate framework for Path Integral (PI) development, allowing the canonical density to be extended to many-electronic systems through the density functional closure relationship. Yet, the use of the path integral formalism for electronic density prescription presents several advantages: it assures the inner quantum mechanical description of the system by parameterized paths; averages the quantum fluctuations; behaves as the propagator for time-space evolution of quantum information; resembles the Schrödinger equation; and allows quantum statistical description of the system through partition function computing. In this framework, four levels of path integral formalism were presented: the Feynman quantum mechanical, the semiclassical, the Feynman-Kleinert effective classical, and the Fokker-Planck non-equilibrium ones. In each case the density matrix and/or the canonical density were rigorously defined and presented. The practical specializations for quantum free and harmonic motions, for statistical high and low temperature limits, the smearing justification for Bohr’s quantum stability postulate with the paradigmatic Hydrogen atomic excursion, along with the quantum chemical calculation of semiclassical electronegativity and hardness, of chemical action and Mulliken electronegativity, as well as the Markovian generalizations of Becke-Edgecombe electronic localization functions – all advocate for the reliability of assuming the PI formalism of quantum mechanics as a versatile one, suited for analytical and/or computational modeling of a variety of fundamental physical and chemical reactivity concepts characterizing the (density driven) many-electronic systems. PMID:20087467

  16. Path integrals for electronic densities, reactivity indices, and localization functions in quantum systems.

    PubMed

    Putz, Mihai V

    2009-11-10

The density matrix theory, the ancestor of density functional theory, provides the immediate framework for Path Integral (PI) development, allowing the canonical density to be extended to many-electronic systems through the density functional closure relationship. Yet, the use of the path integral formalism for electronic density prescription presents several advantages: it assures the inner quantum mechanical description of the system by parameterized paths; averages the quantum fluctuations; behaves as the propagator for time-space evolution of quantum information; resembles the Schrödinger equation; and allows quantum statistical description of the system through partition function computing. In this framework, four levels of path integral formalism were presented: the Feynman quantum mechanical, the semiclassical, the Feynman-Kleinert effective classical, and the Fokker-Planck non-equilibrium ones. In each case the density matrix and/or the canonical density were rigorously defined and presented. The practical specializations for quantum free and harmonic motions, for statistical high and low temperature limits, the smearing justification for Bohr's quantum stability postulate with the paradigmatic Hydrogen atomic excursion, along with the quantum chemical calculation of semiclassical electronegativity and hardness, of chemical action and Mulliken electronegativity, as well as the Markovian generalizations of Becke-Edgecombe electronic localization functions - all advocate for the reliability of assuming the PI formalism of quantum mechanics as a versatile one, suited for analytical and/or computational modeling of a variety of fundamental physical and chemical reactivity concepts characterizing the (density driven) many-electronic systems.

  17. Efficient geometry optimization by Hellmann-Feynman forces with the anti-Hermitian contracted Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Foley, Jonathan J.; Mazziotti, David A.

    2010-10-01

    An efficient method for geometry optimization based on solving the anti-Hermitian contracted Schrödinger equation (ACSE) is presented. We formulate a reduced version of the Hellmann-Feynman theorem (HFT) in terms of the two-electron reduced Hamiltonian operator and the two-electron reduced density matrix (2-RDM). The HFT offers a considerable reduction in computational cost over methods which rely on numerical derivatives. While previous geometry optimizations with numerical gradients required 2M evaluations of the ACSE where M is the number of nuclear degrees of freedom, the HFT requires only a single ACSE calculation of the 2-RDM per gradient. Synthesizing geometry optimization techniques with recent extensions of the ACSE theory to arbitrary electronic and spin states provides an important suite of tools for accurately determining equilibrium and transition-state structures of ground- and excited-state molecules in closed- and open-shell configurations. The ability of the ACSE to balance single- and multi-reference correlation is particularly advantageous in the determination of excited-state geometries where the electronic configurations differ greatly from the ground-state reference. Applications are made to closed-shell molecules N2, CO, H2O, the open-shell molecules B2 and CH, and the excited state molecules N2, B2, and BH. We also study the HCN ↔ HNC isomerization and the geometry optimization of hydroxyurea, a molecule which has a significant role in the treatment of sickle-cell anaemia.
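The computational advantage claimed above is generic to analytic gradients: a finite-difference gradient costs two energy evaluations per nuclear degree of freedom, while a Hellmann-Feynman-type gradient comes from a single calculation. A toy sketch of gradient-based geometry optimization on a one-dimensional Morse surface (the potential and its parameters are assumptions for illustration, not ACSE quantities):

```python
import math

# Assumed Morse-potential parameters (arbitrary illustrative values):
# well depth D, width a, equilibrium bond length r0.
D, a, r0 = 0.17, 1.0, 1.4

def energy(r):
    return D * (1.0 - math.exp(-a * (r - r0)))**2

def gradient(r):
    """Analytic dE/dr -- one cheap evaluation replaces two finite differences."""
    e = math.exp(-a * (r - r0))
    return 2.0 * D * a * e * (1.0 - e)

r = 2.0                            # starting geometry (stretched bond)
for _ in range(500):               # steepest descent with a fixed step size
    g = gradient(r)
    if abs(g) < 1e-8:              # converged
        break
    r -= 2.0 * g
```

The descent converges to the equilibrium bond length using one gradient call per step; in the ACSE setting the analogous gradient is assembled from the 2-RDM rather than from a closed-form derivative.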

  18. Scalar formalism for non-Abelian gauge theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hostler, L.C.

    1986-09-01

The gauge field theory of an N-dimensional multiplet of spin-1/2 particles is investigated using the Klein-Gordon-type wave equation [Π·(1+iσ)·Π + m²]Φ = 0, with Π_μ ≡ ∂/∂x_μ − eA_μ, investigated before by a number of authors, to describe the fermions. Here Φ is a 2×1 Pauli spinor, and σ represents a Lorentz spin tensor whose components σ_μν are ordinary 2×2 Pauli spin matrices. Feynman rules for the scalar formalism for non-Abelian gauge theory are derived starting from the conventional field theory of the multiplet and converting it to the new description. The equivalence of the new and the old formalisms for arbitrary radiative processes is thereby established. The conversion to the scalar formalism is accomplished in a novel way by working in terms of the path integral representation of the generating functional of the vacuum τ-functions, τ(2,1,···3···) ≡ ⟨0|T(Ψ_in(2) Ψ̄_in(1) ··· A_μ(3)_in ··· S)|0⟩, where Ψ_in is a Heisenberg operator belonging to a 4N×1 Dirac wave function of the multiplet. The Feynman rules obtained generalize earlier results for the Abelian case of quantum electrodynamics.

  19. A variational approach to niche construction.

    PubMed

    Constant, Axel; Ramstead, Maxwell J D; Veissière, Samuel P L; Campbell, John O; Friston, Karl J

    2018-04-01

    In evolutionary biology, niche construction is sometimes described as a genuine evolutionary process whereby organisms, through their activities and regulatory mechanisms, modify their environment such as to steer their own evolutionary trajectory, and that of other species. There is ongoing debate, however, on the extent to which niche construction ought to be considered a bona fide evolutionary force, on a par with natural selection. Recent formulations of the variational free-energy principle as applied to the life sciences describe the properties of living systems, and their selection in evolution, in terms of variational inference. We argue that niche construction can be described using a variational approach. We propose new arguments to support the niche construction perspective, and to extend the variational approach to niche construction to current perspectives in various scientific fields. © 2018 The Authors.

  20. A variational approach to niche construction

    PubMed Central

    Ramstead, Maxwell J. D.; Veissière, Samuel P. L.; Campbell, John O.; Friston, Karl J.

    2018-01-01

    In evolutionary biology, niche construction is sometimes described as a genuine evolutionary process whereby organisms, through their activities and regulatory mechanisms, modify their environment such as to steer their own evolutionary trajectory, and that of other species. There is ongoing debate, however, on the extent to which niche construction ought to be considered a bona fide evolutionary force, on a par with natural selection. Recent formulations of the variational free-energy principle as applied to the life sciences describe the properties of living systems, and their selection in evolution, in terms of variational inference. We argue that niche construction can be described using a variational approach. We propose new arguments to support the niche construction perspective, and to extend the variational approach to niche construction to current perspectives in various scientific fields. PMID:29643221
