Sample records for path integral quantization

  1. Topological charge quantization via path integration: An application of the Kustaanheimo-Stiefel transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inomata, A.; Junker, G.; Wilson, R.

    1993-08-01

    The unified treatment of the Dirac monopole, the Schwinger monopole, and the Aharonov-Bohm problem by Barut and Wilson is revisited via a path integral approach. The Kustaanheimo-Stiefel transformation of space and time is utilized to calculate the path integral for a charged particle in the singular vector potential. In the process of dimensional reduction, a topological charge quantization rule is derived, which contains Dirac's quantization condition as a special case. 32 refs.
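
    For orientation, the special case mentioned at the end is Dirac's condition in its standard textbook form (quoted here for context, not taken from this record): for an electric charge e and a magnetic charge g, in Gaussian units,

      eg / (ħc) = n/2 ,   n ∈ ℤ ,

    so the product of the two charges is quantized in half-integer multiples of ħc.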

  2. Feynman formulae and phase space Feynman path integrals for tau-quantization of some Lévy-Khintchine type Hamilton functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butko, Yana A., E-mail: yanabutko@yandex.ru, E-mail: kinderknecht@math.uni-sb.de; Grothaus, Martin, E-mail: grothaus@mathematik.uni-kl.de; Smolyanov, Oleg G., E-mail: Smolyanov@yandex.ru

    2016-02-15

    Evolution semigroups generated by pseudo-differential operators are considered. These operators are obtained by different (parameterized by a number τ) procedures of quantization from a certain class of functions (or symbols) defined on the phase space. This class contains Hamilton functions of particles with variable mass in magnetic and potential fields and more general symbols given by the Lévy-Khintchine formula. The considered semigroups are represented as limits of n-fold iterated integrals when n tends to infinity. Such representations are called Feynman formulae. Some of these representations are constructed with the help of another pseudo-differential operator, obtained by the same procedure of quantization; such representations are called Hamiltonian Feynman formulae. Some representations are based on integral operators with elementary kernels; these are called Lagrangian Feynman formulae. Lagrangian Feynman formulae provide approximations of evolution semigroups, suitable for direct computations and numerical modeling of the corresponding dynamics. Hamiltonian Feynman formulae allow one to represent the considered semigroups by means of Feynman path integrals. In the article, a family of phase space Feynman pseudomeasures corresponding to different procedures of quantization is introduced. The considered evolution semigroups are represented as phase space Feynman path integrals with respect to these Feynman pseudomeasures, i.e., different quantizations correspond to Feynman path integrals with the same integrand but with respect to different pseudomeasures. This answers Berezin’s problem of distinguishing a procedure of quantization in the language of Feynman path integrals. Moreover, the obtained Lagrangian Feynman formulae also allow one to calculate these phase space Feynman path integrals and to connect them with some functional integrals with respect to probability measures.
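
    To illustrate what a Lagrangian Feynman formula looks like in the simplest case (the heat semigroup generated by H = -(1/2)Δ + V; a standard example, not one of the Lévy-Khintchine symbols treated in the paper):

      (e^{-tH} f)(x_0) = lim_{n→∞} ∫_{ℝ^d} ⋯ ∫_{ℝ^d} ∏_{k=1}^{n} (2πt/n)^{-d/2} exp[ -n|x_k - x_{k-1}|²/(2t) - (t/n) V(x_k) ] f(x_n) dx_1 ⋯ dx_n ,

    i.e. the semigroup is approximated by n-fold iterated integrals with an elementary Gaussian kernel, and the n → ∞ limit is the Feynman formula.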

  3. Quantization of Simple Parametrized Systems

    NASA Astrophysics Data System (ADS)

    Ruffini, Giulio

    1995-01-01

    I study the canonical formulation and quantization of some simple parametrized systems using Dirac's formalism and the Becchi-Rouet-Stora-Tyutin (BRST) extended phase space method. These systems include the parametrized particle and minisuperspace. Using Dirac's formalism I first analyze for each case the construction of the classical reduced phase space. There are two separate features of these systems that may make this construction difficult: (a) Because of the boundary conditions used, the actions are not gauge invariant at the boundaries. (b) The constraints may have a disconnected solution space. The relativistic particle and minisuperspace are affected by both features, while the non-relativistic particle displays only the first. I first show that a change of gauge fixing is equivalent to a canonical transformation in the reduced phase space, thus resolving the problems associated with the first feature above. Then I consider the quantization of these systems using several approaches: Dirac's method, Dirac-Fock quantization, and the BRST formalism. In the cases of the relativistic particle and minisuperspace I consider first the quantization of one branch of the constraint at a time and then discuss the backgrounds in which it is possible to quantize simultaneously both branches. I motivate and define the inner product, and obtain, for example, the Klein-Gordon inner product for the relativistic case. Then I show how to construct phase space path integral representations for amplitudes in these approaches -- the Batalin-Fradkin-Vilkovisky (BFV) and the Faddeev path integrals -- from which one can then derive the path integrals in coordinate space -- the Faddeev-Popov path integral and the geometric path integral. In particular I establish the connection between the Hilbert space representation and the range of the lapse in the path integrals. I also examine the class of paths that contribute in the path integrals and how they affect space-time covariance, concluding that it is consistent to take paths that move forward in time only when there is no electric field. The key elements in this analysis are the space-like paths and the behavior of the action under the non-trivial (Z_2) element of the reparametrization group.
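
    For reference, the Klein-Gordon inner product mentioned above has the standard form (a textbook expression, quoted only as context):

      (φ, ψ) = i ∫_Σ d³x [ φ*(x) ∂_t ψ(x) - (∂_t φ*(x)) ψ(x) ] ,

    evaluated on a spacelike surface Σ of constant t; it is conserved for solutions of the Klein-Gordon equation but is not positive definite, which is what makes the choice of Hilbert space for the relativistic particle delicate.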

  4. Gauge fixing and BFV quantization

    NASA Astrophysics Data System (ADS)

    Rogers, Alice

    2000-01-01

    Non-singularity conditions are established for the Batalin-Fradkin-Vilkovisky (BFV) gauge-fixing fermion which are sufficient for it to lead to the correct path integral for a theory with constraints canonically quantized in the BFV approach. The conditions ensure that the anticommutator of this fermion with the BRST charge regularizes the path integral by regularizing the trace over non-physical states in each ghost sector. The results are applied to the quantization of a system which has a Gribov problem, using a non-standard form of the gauge-fixing fermion.
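
    Schematically, the path integral whose correctness these conditions guarantee has the standard Fradkin-Vilkovisky form (a generic textbook expression quoted for context; conventions vary):

      Z_Ψ = ∫ Dq Dp Dη DP exp{ i ∫ dt [ p_i q̇^i + P_a η̇^a - H_BRST - {Q, Ψ} ] } ,

    where Q is the BRST charge and Ψ the gauge-fixing fermion; the Fradkin-Vilkovisky theorem states that Z_Ψ is independent of Ψ provided the trace over the extended (ghost) state space is properly regularized, which is exactly the role of the non-singularity conditions above.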

  5. Quantization of simple parametrized systems

    NASA Astrophysics Data System (ADS)

    Ruffini, G.

    2005-11-01

    I study the canonical formulation and quantization of some simple parametrized systems, including the non-relativistic parametrized particle and the relativistic parametrized particle. Using Dirac's formalism I construct for each case the classical reduced phase space and study the dependence on the gauge fixing used. Two separate features of these systems can make this construction difficult: the actions are not invariant at the boundaries, and the constraints may have disconnected solution spaces. The relativistic particle is affected by both, while the non-relativistic particle displays only the first. Analyzing the role of canonical transformations in the reduced phase space, I show that a change of gauge fixing is equivalent to a canonical transformation. In the relativistic case, quantization of one branch of the constraint at a time is applied, and I analyze the electromagnetic backgrounds in which it is possible to quantize simultaneously both branches and still obtain a covariant unitary quantum theory. To preserve unitarity and space-time covariance, second quantization is needed unless there is no electric field. I motivate a definition of the inner product in all these cases and derive the Klein-Gordon inner product for the relativistic case. I construct phase space path integral representations for amplitudes for the BFV and the Faddeev path integrals, from which the path integrals in coordinate space (Faddeev-Popov and geometric path integrals) are derived.

  6. Application of path-integral quantization to indistinguishable particle systems topologically confined by a magnetic field

    NASA Astrophysics Data System (ADS)

    Jacak, Janusz E.

    2018-01-01

    We demonstrate an original development of path-integral quantization in the case of a multiply connected configuration space of indistinguishable charged particles on a 2D manifold and exposed to a strong perpendicular magnetic field. The system turns out to be exceptionally homotopy-rich, and the structure of the homotopy essentially depends on the magnetic field strength, resulting in multiloop trajectories under specific conditions. We have proved, by a generalization of the Bohr-Sommerfeld quantization rule, that the size of a magnetic field flux quantum grows for multiloop orbits like (2k+1)hc/e with the number of loops k. Utilizing this property for electrons on the 2D substrate jellium, we have derived upon the path integration a complete FQHE hierarchy in excellent agreement with experiments. The path integral has been next developed into a sum over configurations displaying various patterns of trajectory homotopies (topological configurations), which, in the nonstationary case of quantum kinetics, reproduces some formerly unclear details in the longitudinal resistivity observed in experiments.
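
    For comparison, the single-loop version of the rule being generalized here is Onsager's flux quantization for cyclotron orbits (a standard result, quoted for orientation, in Gaussian units):

      B S_n = (n + 1/2) hc/e ,   n = 0, 1, 2, … ,

    so each single-loop orbit of area S_n encloses flux in units of hc/e; the multiloop rule stated above replaces this unit by (2k+1)hc/e for orbits with loop number k.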

  7. Path-integral representation for the relativistic particle propagators and BFV quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fradkin, E.S.; Gitman, D.M.

    1991-11-15

    The path-integral representations for the propagators of scalar and spinor fields in an external electromagnetic field are derived. The Hamiltonian form of such expressions can be interpreted in the sense of Batalin-Fradkin-Vilkovisky quantization of one-particle theory. The Lagrangian representation as derived allows one to extract in a natural way the expressions for the corresponding gauge-invariant (reparametrization- and supergauge-invariant) actions for pointlike scalar and spinning particles. At the same time, the measure and ranges of integrations, admissible gauge conditions, and boundary conditions can be exactly established.
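
    A schematic Euclidean version of the scalar-propagator representation being discussed (the standard worldline form, shown only to fix ideas; signs, metric and normalization conventions differ from the paper's Hamiltonian construction):

      G(x, y) = ∫_0^∞ dT e^{-T m²} ∫_{x(0)=y}^{x(T)=x} Dx(τ) exp{ - ∫_0^T dτ [ (1/4) ẋ² + i e A·ẋ ] } ,

    where the integral over the proper time T plays the role of the einbein (Lagrange-multiplier) integration and the reparametrization gauge has already been fixed.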

  8. Simplified path integral for supersymmetric quantum mechanics and type-A trace anomalies

    NASA Astrophysics Data System (ADS)

    Bastianelli, Fiorenzo; Corradini, Olindo; Iacconi, Laura

    2018-05-01

    Particles in a curved space are classically described by a nonlinear sigma model action that can be quantized through path integrals. The latter require a precise regularization to deal with the derivative interactions arising from the nonlinear kinetic term. Recently, for maximally symmetric spaces, simplified path integrals have been developed: they allow one to trade the nonlinear kinetic term for a purely quadratic kinetic term (linear sigma model). This happens at the expense of introducing a suitable effective scalar potential, which contains the information on the curvature of the space. The simplified path integral provides an appreciable gain in the efficiency of perturbative calculations. Here we extend the construction to models with N = 1 supersymmetry on the worldline, which are applicable to the first quantized description of a Dirac fermion. As an application we use the simplified worldline path integral to compute the type-A trace anomaly of a Dirac fermion in d dimensions up to d = 16.

  9. Universe creation from the third-quantized vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGuigan, M.

    1989-04-15

    Third quantization leads to a Hilbert space containing a third-quantized vacuum in which no universes are present as well as multiuniverse states. We consider the possibility of universe creation for the special case where the universe emerges in a no-particle state. The probability of such a creation is computed from both the path-integral and operator formalisms.

  10. Universe creation from the third-quantized vacuum

    NASA Astrophysics Data System (ADS)

    McGuigan, Michael

    1989-04-01

    Third quantization leads to a Hilbert space containing a third-quantized vacuum in which no universes are present as well as multiuniverse states. We consider the possibility of universe creation for the special case where the universe emerges in a no-particle state. The probability of such a creation is computed from both the path-integral and operator formalisms.

  11. Instant-Form and Light-Front Quantization of Field Theories

    NASA Astrophysics Data System (ADS)

    Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James

    2018-05-01

    In this work we consider the instant-form and light-front quantization of some field theories. As an example, we consider a class of gauged non-linear sigma models with different regularizations. In particular, we present the path integral quantization of the gauged non-linear sigma model in the Faddeevian regularization. We also make a comparison of the possible differences between the instant-form and light-front quantizations at appropriate places.

  12. Quantization of Non-Lagrangian Systems

    NASA Astrophysics Data System (ADS)

    Kochan, Denis

    A novel method for quantization of non-Lagrangian (open) systems is proposed. It is argued that the essential object, which provides both classical and quantum evolution, is a certain canonical two-form defined in extended velocity space. In this setting classical dynamics is recovered from a stringy-type variational principle, which employs umbilical surfaces instead of histories of the system. Quantization is then accomplished in accordance with the introduced variational principle. The path integral for the transition probability amplitude (propagator) is rearranged into a surface functional integral. In the standard case of closed (Lagrangian) systems the presented method reduces to the standard Feynman approach. The inverse problem of the calculus of variations, the problem of quantization ambiguity and quantum mechanics in the presence of friction are analyzed in detail.

  13. Superfield quantization

    NASA Astrophysics Data System (ADS)

    Batalin, I. A.; Bering, K.; Damgaard, P. H.

    1998-03-01

    We present a superfield formulation of the quantization program for theories with first-class constraints. An exact operator formulation is given, and we show how to set up a phase-space path integral entirely in terms of superfields. BRST transformations and canonical transformations enter on equal footing, and they allow us to establish a superspace analog of the BFV theorem. We also present a formal derivation of the Lagrangian superfield analogue of the field-antifield formalism by an integration over half of the phase-space variables.

  14. Course 4: Anyons

    NASA Astrophysics Data System (ADS)

    Myrheim, J.

    Contents 1 Introduction 1.1 The concept of particle statistics 1.2 Statistical mechanics and the many-body problem 1.3 Experimental physics in two dimensions 1.4 The algebraic approach: Heisenberg quantization 1.5 More general quantizations 2 The configuration space 2.1 The Euclidean relative space for two particles 2.2 Dimensions d=1,2,3 2.3 Homotopy 2.4 The braid group 3 Schroedinger quantization in one dimension 4 Heisenberg quantization in one dimension 4.1 The coordinate representation 5 Schroedinger quantization in dimension d ≥ 2 5.1 Scalar wave functions 5.2 Homotopy 5.3 Interchange phases 5.4 The statistics vector potential 5.5 The N-particle case 5.6 Chern-Simons theory 6 The Feynman path integral for anyons 6.1 Eigenstates for position and momentum 6.2 The path integral 6.3 Conjugation classes in SN 6.4 The non-interacting case 6.5 Duality of Feynman and Schroedinger quantization 7 The harmonic oscillator 7.1 The two-dimensional harmonic oscillator 7.2 Two anyons in a harmonic oscillator potential 7.3 More than two anyons 7.4 The three-anyon problem 8 The anyon gas 8.1 The cluster and virial expansions 8.2 First and second order perturbative results 8.3 Regularization by periodic boundary conditions 8.4 Regularization by a harmonic oscillator potential 8.5 Bosons and fermions 8.6 Two anyons 8.7 Three anyons 8.8 The Monte Carlo method 8.9 The path integral representation of the coefficients GP 8.10 Exact and approximate polynomials 8.11 The fourth virial coefficient of anyons 8.12 Two polynomial theorems 9 Charged particles in a constant magnetic field 9.1 One particle in a magnetic field 9.2 Two anyons in a magnetic field 9.3 The anyon gas in a magnetic field 10 Interchange phases and geometric phases 10.1 Introduction to geometric phases 10.2 One particle in a magnetic field 10.3 Two particles in a magnetic field 10.4 Interchange of two anyons in potential wells 10.5 Laughlin's theory of the fractional quantum Hall effect

  15. Reduced Order Podolsky Model

    NASA Astrophysics Data System (ADS)

    Thibes, Ronaldo

    2017-02-01

    We perform the canonical and path integral quantizations of a lower-order derivative model describing Podolsky's generalized electrodynamics. The physical content of the model shows an auxiliary massive vector field coupled to the usual electromagnetic field. The equivalence with Podolsky's original model is studied at the classical and quantum levels. Concerning the dynamical time evolution, we obtain a theory with two first-class and two second-class constraints in phase space. We calculate explicitly the corresponding Dirac brackets involving both vector fields. We use the Senjanovic procedure to implement the second-class constraints and the Batalin-Fradkin-Vilkovisky path integral quantization scheme to deal with the symmetries generated by the first-class constraints. The physical interpretation of the results turns out to be simpler due to the reduced derivative order permeating the equations of motion, Dirac brackets and effective action.
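
    For context, the higher-derivative theory whose reduced-order version is quantized here is usually defined by the Podolsky Lagrangian (standard form, quoted for orientation; the parameter a sets the mass scale 1/a of the auxiliary massive mode):

      L_P = -(1/4) F_{μν} F^{μν} + (a²/2) ∂_μ F^{μν} ∂^λ F_{λν} ,

    which the reduced-order model rewrites as ordinary electrodynamics coupled to an auxiliary massive vector field, thereby lowering the derivative order of the equations of motion.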

  16. Light-cone quantization of two dimensional field theory in the path integral approach

    NASA Astrophysics Data System (ADS)

    Cortés, J. L.; Gamboa, J.

    1999-05-01

    A quantization condition due to the boundary conditions and the compactification of the light-cone space-time coordinate x^- is identified at the level of the classical equations for the right-handed fermionic field in two dimensions. A detailed analysis of the implications of implementing this quantization condition at the quantum level is presented. In the case of the Thirring model one has selection rules on the excitations as a function of the coupling, and in the case of the Schwinger model a double integer structure of the vacuum is derived in the light-cone frame. Two different quantized chiral Schwinger models are found, one of them without a θ-vacuum structure. A generalization of the quantization condition to theories with several fermionic fields and to higher dimensions is presented.
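
    For readers unfamiliar with the notation, the light-cone coordinates and the origin of such a quantization condition can be summarized as follows (standard definitions, not specific to this paper):

      x^± = (x^0 ± x^1)/√2 ,

    and if x^- is compactified on a circle of length L, the conjugate momentum p_+ of any field mode is restricted to the discrete values 2πn/L (periodic boundary conditions) or 2π(n+1/2)/L (antiperiodic), n ∈ ℤ, which is the type of condition whose consequences for the Thirring and Schwinger models are analyzed above.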

  17. Methods of Contemporary Gauge Theory

    NASA Astrophysics Data System (ADS)

    Makeenko, Yuri

    2002-08-01

    Preface; Part I. Path Integrals: 1. Operator calculus; 2. Second quantization; 3. Quantum anomalies from path integral; 4. Instantons in quantum mechanics; Part II. Lattice Gauge Theories: 5. Observables in gauge theories; 6. Gauge fields on a lattice; 7. Lattice methods; 8. Fermions on a lattice; 9. Finite temperatures; Part III. 1/N Expansion: 10. O(N) vector models; 11. Multicolor QCD; 12. QCD in loop space; 13. Matrix models; Part IV. Reduced Models: 14. Eguchi-Kawai model; 15. Twisted reduced models; 16. Non-commutative gauge theories.

  18. Methods of Contemporary Gauge Theory

    NASA Astrophysics Data System (ADS)

    Makeenko, Yuri

    2005-11-01

    Preface; Part I. Path Integrals: 1. Operator calculus; 2. Second quantization; 3. Quantum anomalies from path integral; 4. Instantons in quantum mechanics; Part II. Lattice Gauge Theories: 5. Observables in gauge theories; 6. Gauge fields on a lattice; 7. Lattice methods; 8. Fermions on a lattice; 9. Finite temperatures; Part III. 1/N Expansion: 10. O(N) vector models; 11. Multicolor QCD; 12. QCD in loop space; 13. Matrix models; Part IV. Reduced Models: 14. Eguchi-Kawai model; 15. Twisted reduced models; 16. Non-commutative gauge theories.

  19. On the Path Integral in Non-Commutative (nc) Qft

    NASA Astrophysics Data System (ADS)

    Dehne, Christoph

    2008-09-01

    As is generally known, different quantization schemes applied to field theory on NC spacetime lead to Feynman rules with different physical properties if time does not commute with space. In particular, the Feynman rules that are derived from the path integral corresponding to the T*-product (the so-called naïve Feynman rules) violate the causal time ordering property. Within the Hamiltonian approach to quantum field theory, we show that we can (formally) modify the time ordering encoded in the above path integral. The resulting Feynman rules are identical to those obtained in the canonical approach via the Gell-Mann-Low formula (with T-ordering). They thus preserve unitarity and causal time ordering.
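
    The canonical benchmark that the modified path-integral time ordering is matched against is the Gell-Mann-Low formula with T-ordering (standard form, quoted for context):

      ⟨Ω| T φ(x_1) ⋯ φ(x_n) |Ω⟩ = ⟨0| T { φ_I(x_1) ⋯ φ_I(x_n) exp( i ∫ d^d x L_I[φ_I] ) } |0⟩ / ⟨0| T exp( i ∫ d^d x L_I[φ_I] ) |0⟩ ,

    where here the interaction Lagrangian L_I contains the noncommutative (star-product) vertices; the claim above is that a suitably modified time ordering in the path integral reproduces exactly these Feynman rules.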

  20. The uniform quantized electron gas revisited

    NASA Astrophysics Data System (ADS)

    Lomba, Enrique; Høye, Johan S.

    2017-11-01

    In this article we continue and extend our recent work on the correlation energy of the quantized electron gas of uniform density at temperature T = 0. As before, we utilize the methods, properties, and results obtained by means of classical statistical mechanics. These were extended to quantized systems via the Feynman path integral formalism. The latter translates the quantum problem into a classical polymer problem in four dimensions. Again, the well known RPA (random phase approximation) is recovered as a basic result which we then modify and improve upon. Here we analyze the condition of thermodynamic self-consistency. Our numerical calculations exhibit a remarkable agreement with well known results of a standard parameterization of Monte Carlo correlation energies.

  1. Three paths toward the quantum angle operator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gazeau, Jean Pierre, E-mail: gazeau@apc.univ-paris7.fr; Szafraniec, Franciszek Hugon, E-mail: franciszek.szafraniec@uj.edu.pl

    2016-12-15

    We examine mathematical questions around the angle (or phase) operator associated with a number operator through a short list of basic requirements. We implement three methods of construction of the quantum angle. The first one is based on operator theory and parallels the definition of the angle for the upper half-circle through its cosine, completed by a sign inversion. The two other methods are integral quantizations generalizing in a certain sense the Berezin–Klauder approaches. One method pertains to Weyl–Heisenberg integral quantization of the plane viewed as the phase space of the motion on the line. It depends on a family of “weight” functions on the plane. The third method rests upon coherent state quantization of the cylinder viewed as the phase space of the motion on the circle. The construction of these coherent states depends on a family of probability distributions on the line.
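
    The common template behind the two integral-quantization methods mentioned above is the Berezin–Klauder coherent-state map (generic form, quoted for orientation; the paper's constructions generalize the measure and the family of states):

      f ↦ A_f = ∫ f(z) |z⟩⟨z| dμ(z) ,   with   ∫ |z⟩⟨z| dμ(z) = 1  (resolution of the identity),

    so a classical function f on the phase space (the plane for the Weyl–Heisenberg case, the cylinder for the motion on the circle) is sent to an operator A_f; the angle operator is obtained by quantizing the angle function itself.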

  2. Koopman-von Neumann formulation of classical Yang-Mills theories: I

    NASA Astrophysics Data System (ADS)

    Carta, P.; Gozzi, E.; Mauro, D.

    2006-03-01

    In this paper we present the Koopman-von Neumann (KvN) formulation of classical non-Abelian gauge field theories. In particular we shall explore the functional (or classical path integral) counterpart of the KvN method. In the quantum path integral quantization of Yang-Mills theories concepts like gauge-fixing and Faddeev-Popov determinant appear in a quite natural way. We will prove that these same objects are needed also in this classical path integral formulation for Yang-Mills theories. We shall also explore the classical path integral counterpart of the BFV formalism and build all the associated universal and gauge charges. These last are quite different from the analog quantum ones and we shall show the relation between the two. This paper lays the foundation of this formalism which, due to the many auxiliary fields present, is rather heavy. Applications to specific topics outlined in the paper will appear in later publications.

  3. The Spin-Foam Approach to Quantum Gravity.

    PubMed

    Perez, Alejandro

    2013-01-01

    This article reviews the present status of the spin-foam approach to the quantization of gravity. Special attention is paid to the pedagogical presentation of the recently introduced new models for four-dimensional quantum gravity. The models are motivated by a suitable implementation of the path integral quantization of the Plebanski formulation of gravity on a simplicial regularization. The article also includes a self-contained treatment of 2+1 gravity. The simple nature of the latter provides the basis and a perspective for the analysis of both conceptual and technical issues that remain open in four dimensions.

  4. Treatment of constraints in the stochastic quantization method and covariantized Langevin equation

    NASA Astrophysics Data System (ADS)

    Ikegami, Kenji; Kimura, Tadahiko; Mochizuki, Riuji

    1993-04-01

    We study the treatment of the constraints in the stochastic quantization method. We improve the treatment of the stochastic consistency condition proposed by Namiki et al. by suitably taking into account the Ito calculus. We then obtain an improved Langevin equation and the Fokker-Planck equation which naturally leads to the correct path integral quantization of the constrained system as the stochastic equilibrium state. This treatment is applied to an O(N) non-linear σ model, and it is shown that singular terms appearing in the improved Langevin equation cancel out the δ^n(0) divergences at one-loop order. We also ascertain that the above Langevin equation, rewritten in terms of independent variables, is actually equivalent to the one in the general-coordinate-transformation covariant and vielbein-rotation invariant formalism.
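
    For context, the unconstrained starting point of the stochastic quantization method is the Parisi-Wu Langevin equation (standard form; the paper's improvement concerns how constraints and the Ito calculus modify it):

      ∂φ(x, τ)/∂τ = - δS[φ]/δφ(x, τ) + η(x, τ) ,   ⟨η(x, τ) η(x', τ')⟩ = 2 δ(x - x') δ(τ - τ') ,

    where τ is the fictitious stochastic time; equal-time correlations of φ in the τ → ∞ equilibrium limit reproduce Euclidean path-integral expectation values with weight e^{-S}.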

  5. Nine formulations of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Styer, Daniel F.; Balkin, Miranda S.; Becker, Kathryn M.; Burns, Matthew R.; Dudley, Christopher E.; Forth, Scott T.; Gaumer, Jeremy S.; Kramer, Mark A.; Oertel, David C.; Park, Leonard H.; Rinkoski, Marie T.; Smith, Clait T.; Wotherspoon, Timothy D.

    2002-03-01

    Nine formulations of nonrelativistic quantum mechanics are reviewed. These are the wavefunction, matrix, path integral, phase space, density matrix, second quantization, variational, pilot wave, and Hamilton-Jacobi formulations. Also mentioned are the many-worlds and transactional interpretations. The various formulations differ dramatically in mathematical and conceptual overview, yet each one makes identical predictions for all experimental results.

  6. On coherent states for the simplest quantum groups

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav

    1991-01-01

    The coherent states for the simplest quantum groups (q-Heisenberg-Weyl, SU_q(2) and the discrete series of representations of SU_q(1,1)) are introduced and their properties investigated. The corresponding analytic representations, path integrals, and q-deformation of Berezin's quantization on ℂ, a sphere, and the Lobatchevsky plane are discussed.

  7. The quantization of the chiral Schwinger model based on the BFT-BFV formalism II

    NASA Astrophysics Data System (ADS)

    Park, Mu-In; Park, Young-Jai; Yoon, Sean J.

    1998-12-01

    We apply an improved version of the Batalin-Fradkin-Tyutin Hamiltonian method to the a = 1 chiral Schwinger model, which is much more nontrivial than the a>1 one. Furthermore, through the path integral quantization, we newly resolve the problem of the nontrivial δ-function as well as that of the unwanted Fourier parameter in the measure. As a result, we explicitly obtain the fully gauge invariant partition function, which includes a new type of Wess-Zumino term irrelevant to the gauge symmetry as well as the usual WZ action.

  8. From conformal blocks to path integrals in the Vaidya geometry

    NASA Astrophysics Data System (ADS)

    Anous, Tarek; Hartman, Thomas; Rovai, Antonin; Sonner, Julian

    2017-09-01

    Correlators in conformal field theory are naturally organized as a sum over conformal blocks. In holographic theories, this sum must reorganize into a path integral over bulk fields and geometries. We explore how these two sums are related in the case of a point particle moving in the background of a 3d collapsing black hole. The conformal block expansion is recast as a sum over paths of the first-quantized particle moving in the bulk geometry. Off-shell worldlines of the particle correspond to subdominant contributions in the Euclidean conformal block expansion, but these same operators must be included in order to correctly reproduce complex saddles in the Lorentzian theory. During thermalization, a complex saddle dominates under certain circumstances; in this case, the CFT correlator is not given by the Virasoro identity block in any channel, but can be recovered by summing heavy operators. This effectively converts the conformal block expansion in CFT from a sum over intermediate states to a sum over channels that mimics the bulk path integral.
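
    The two expansions being compared can be summarized schematically (generic CFT expressions, not formulas from the paper): a four-point correlator decomposes into conformal blocks as

      ⟨O_1(x_1) O_2(x_2) O_3(x_3) O_4(x_4)⟩ = Σ_p C_{12p} C_{34p} F_p(x_i) ,

    with the sum running over intermediate primaries p and F_p the corresponding block, while holography requires the same quantity to arise from a bulk path integral over fields and geometries; the paper shows how, for a probe particle in the collapsing-black-hole background, the sum over p reorganizes into a sum over bulk worldlines.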

  9. ER = EPR and non-perturbative action integrals for quantum gravity

    NASA Astrophysics Data System (ADS)

    Alsaleh, Salwa; Alasfar, Lina

    In this paper, we construct and calculate non-perturbative path integrals in a multiply-connected spacetime. This is done by summing over homotopy classes of paths. The topology of the spacetime is defined by Einstein-Rosen bridges (ERB) forming from the entanglement of quantum foam described by virtual black holes. As these “bubbles” are entangled, they are connected by Planckian ERBs because of the ER = EPR conjecture. Hence, the spacetime will possess a large first Betti number B1. For any compact 2-surface in the spacetime, the topology (in particular the homotopy) of that surface is non-trivial due to the large number of Planckian ERBs that define homotopy through this surface. The quantization of spacetime with this topology — along with the proper choice of the 2-surfaces — is conjectured to allow non-perturbative path integrals of quantum gravity theory over the spacetime manifold.

  10. Perturbative Yang-Mills theory without Faddeev-Popov ghost fields

    NASA Astrophysics Data System (ADS)

    Huffel, Helmuth; Markovic, Danijel

    2018-05-01

    A modified Faddeev-Popov path integral density for the quantization of Yang-Mills theory in the Feynman gauge is discussed, where contributions of the Faddeev-Popov ghost fields are replaced by multi-point gauge field interactions. An explicit calculation to O (g2) shows the equivalence of the usual Faddeev-Popov scheme and its modified version.

  11. Prebifurcation periodic ghost orbits in semiclassical quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kus, M.; Haake, F.; Delande, D.

    1993-10-04

    Classical periodic orbits are stationary-phase points in path integral representations of quantum propagators. We show that complex solutions of the stationary-phase equation, not corresponding to real classical periodic orbits, give additional contributions to the propagator which can be important, especially near bifurcations. We reveal the existence and relevance of such periodic ghost orbits for a kicked top.
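
    The underlying statement can be written schematically (textbook semiclassics, included only to fix the terminology of "ghost orbits"):

      K = ∫ Dx e^{iS[x]/ħ} ≈ Σ_{δS=0} A_cl e^{iS_cl/ħ} ,

    where the stationary-phase condition δS = 0 is solved, for the trace of the propagator, by classical periodic orbits; near a bifurcation, complex solutions of the same equation (periodic "ghost" orbits, with complex action) can contribute with comparable weight and must be retained in the semiclassical trace formula.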

  12. Uniform quantized electron gas

    NASA Astrophysics Data System (ADS)

    Høye, Johan S.; Lomba, Enrique

    2016-10-01

    In this work we study the correlation energy of the quantized electron gas of uniform density at temperature T = 0. To do so we utilize methods from classical statistical mechanics. The basis for this is the Feynman path integral for the partition function of quantized systems. With this representation the quantum mechanical problem can be interpreted as, and is equivalent to, a classical polymer problem in four dimensions where the fourth dimension is imaginary time. Thus methods, results, and properties obtained in the statistical mechanics of classical fluids can be utilized. From this viewpoint we recover the well known RPA (random phase approximation). Then, to improve it, we modify the RPA by requiring the corresponding correlation function to be such that electrons with equal spins cannot be at the same position. Numerical evaluations are compared with well known results of a standard parameterization of Monte Carlo correlation energies.
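
    The classical-polymer mapping invoked above is the standard discretized path-integral identity for the partition function (textbook form for a single particle in one dimension; the electron-gas case adds Coulomb interactions and spin statistics):

      Z = Tr e^{-βH} = lim_{P→∞} ( mP / 2πβħ² )^{P/2} ∫ dx_1 ⋯ dx_P exp{ - Σ_{k=1}^{P} [ mP (x_{k+1} - x_k)² / (2βħ²) + (β/P) V(x_k) ] } ,   x_{P+1} = x_1 ,

    so each quantum particle becomes a closed ring polymer of P beads extended along imaginary time, and classical liquid-state methods can be applied to the resulting polymer fluid.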

  13. Relativistic top: An application of the BFV quantization procedure for systems with degenerate constraints

    NASA Astrophysics Data System (ADS)

    Nielsen, N. K.; Quaade, U. J.

    1995-07-01

    The physical phase space of the relativistic top, as defined by Hansson and Regge, is expressed in terms of canonical coordinates of the Poincaré group manifold. The system is described in the Hamiltonian formalism by the mass-shell condition and constraints that reduce the number of spin degrees of freedom. The constraints are second class and are modified into a set of first class constraints by adding combinations of gauge-fixing functions. The Batalin-Fradkin-Vilkovisky method is then applied to quantize the system in the path integral formalism in Hamiltonian form. It is finally shown that different gauge choices produce different equivalent forms of the constraints.

  14. New Spin Foam Models of Quantum Gravity

    NASA Astrophysics Data System (ADS)

    Miković, A.

    We give a brief and critical review of the Barrett-Crane spin foam models of quantum gravity. Then we describe two new spin foam models which are obtained by direct quantization of General Relativity and do not have some of the drawbacks of the Barrett-Crane models. These are the model of spin foam invariants for the embedded spin networks in loop quantum gravity and the spin foam model based on the integration of the tetrads in the path integral for the Palatini action.

  15. From conformal blocks to path integrals in the Vaidya geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anous, Tarek; Hartman, Thomas; Rovai, Antonin

    Correlators in conformal field theory are naturally organized as a sum over conformal blocks. In holographic theories, this sum must reorganize into a path integral over bulk fields and geometries. We explore how these two sums are related in the case of a point particle moving in the background of a 3d collapsing black hole. The conformal block expansion is recast as a sum over paths of the first-quantized particle moving in the bulk geometry. Off-shell worldlines of the particle correspond to subdominant contributions in the Euclidean conformal block expansion, but these same operators must be included in order to correctly reproduce complex saddles in the Lorentzian theory. During thermalization, a complex saddle dominates under certain circumstances; in this case, the CFT correlator is not given by the Virasoro identity block in any channel, but can be recovered by summing heavy operators. This effectively converts the conformal block expansion in CFT from a sum over intermediate states to a sum over channels that mimics the bulk path integral.

  16. From conformal blocks to path integrals in the Vaidya geometry

    DOE PAGES

    Anous, Tarek; Hartman, Thomas; Rovai, Antonin; ...

    2017-09-04

    Correlators in conformal field theory are naturally organized as a sum over conformal blocks. In holographic theories, this sum must reorganize into a path integral over bulk fields and geometries. We explore how these two sums are related in the case of a point particle moving in the background of a 3d collapsing black hole. The conformal block expansion is recast as a sum over paths of the first-quantized particle moving in the bulk geometry. Off-shell worldlines of the particle correspond to subdominant contributions in the Euclidean conformal block expansion, but these same operators must be included in order to correctly reproduce complex saddles in the Lorentzian theory. During thermalization, a complex saddle dominates under certain circumstances; in this case, the CFT correlator is not given by the Virasoro identity block in any channel, but can be recovered by summing heavy operators. This effectively converts the conformal block expansion in CFT from a sum over intermediate states to a sum over channels that mimics the bulk path integral.

  17. Path integral solution for a Klein-Gordon particle in vector and scalar deformed radial Rosen-Morse-type potentials

    NASA Astrophysics Data System (ADS)

    Khodja, A.; Kadja, A.; Benamira, F.; Guechi, L.

    2017-12-01

    The problem of a Klein-Gordon particle moving in equal vector and scalar Rosen-Morse-type potentials is solved in the framework of Feynman's path integral approach. Explicit path integration leads to a closed form for the radial Green's function associated with different shapes of the potentials. For q ≤ -1 and (1/2α) ln|q| < r < ∞, it is shown that the quantization conditions for the bound state energy levels E_{nr} are transcendental equations which can be solved numerically. Three special cases, namely the standard radial Manning-Rosen potential (|q| = 1), the standard radial Rosen-Morse potential (V_2 → -V_2, q = 1) and the radial Eckart potential (V_1 → -V_1, q = 1), are also briefly discussed.
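
    For orientation, the "deformed" potentials referred to above are usually built from q-deformed hyperbolic functions; in one common convention (an assumption here, since conventions differ between papers),

      sinh_q(x) = (e^x - q e^{-x})/2 ,   cosh_q(x) = (e^x + q e^{-x})/2 ,   tanh_q(x) = sinh_q(x)/cosh_q(x) ,

    and the deformed Rosen-Morse-type potential is obtained by replacing tanh and cosh by their q-deformed counterparts, with the ordinary potential recovered at q = 1; the sign and size of q control where the deformed functions vanish, which is the origin of the restriction on the radial domain quoted above.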

  18. Nilpotent symmetries in supergroup field cosmology

    NASA Astrophysics Data System (ADS)

    Upadhyay, Sudhaker

    2015-06-01

    In this paper, we study the gauge invariance of third quantized supergroup field cosmology, which is a model for the multiverse. Further, we propose both the infinitesimal (usual) and the finite superfield-dependent BRST symmetry transformations which leave the effective theory invariant. The effects of finite superfield-dependent BRST transformations on the path integral (the so-called void functional in the case of third quantization) are implemented. Within the finite superfield-dependent BRST formulation, the finite superfield-dependent BRST transformations with a specific parameter switch the void functional from one gauge to another. We establish this result for the most general gauge with the help of explicit calculations, which hold for all possible sets of gauge choices at both the classical and the quantum levels.

  19. Quantum mechanical free energy profiles with post-quantization restraints: Binding free energy of the water dimer over a broad range of temperatures

    NASA Astrophysics Data System (ADS)

    Bishop, Kevin P.; Roy, Pierre-Nicholas

    2018-03-01

    Free energy calculations are a crucial part of understanding chemical systems but are often computationally expensive for all but the simplest of systems. Various enhanced sampling techniques have been developed to improve the efficiency of these calculations in numerical simulations. However, the majority of these approaches have been applied using classical molecular dynamics. There are many situations where nuclear quantum effects impact the system of interest and a classical description fails to capture these details. In this work, path integral molecular dynamics has been used in conjunction with umbrella sampling, and it has been observed that correct results are only obtained when the umbrella sampling potential is applied to a single path integral bead post quantization. This method has been validated against a Lennard-Jones benchmark system before being applied to the more complicated water dimer system over a broad range of temperatures. Free energy profiles are obtained, and these are utilized in the calculation of the second virial coefficient as well as the change in free energy from the separated water monomers to the dimer. Comparisons to experimental and ground state calculation values from the literature are made for the second virial coefficient at higher temperature and the dissociation energy of the dimer in the ground state.
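
    A minimal illustration of the "restraint on a single bead after quantization" idea, as a sketch only: the following toy 1D path-integral Monte Carlo code was written for this summary, and the bead count, spring constant, window centre and Metropolis sampler are all illustrative assumptions, not the authors' setup (which uses path integral molecular dynamics for the water dimer).

      # Toy path-integral Monte Carlo for one particle in a harmonic well,
      # with a harmonic umbrella restraint applied to ONE bead only,
      # i.e. added to the discretized (quantized) ring-polymer action.
      import numpy as np

      rng = np.random.default_rng(0)

      P = 16                    # number of path-integral beads (assumed)
      beta = 4.0                # inverse temperature, reduced units (hbar = m = kB = 1)
      tau = beta / P            # imaginary-time slice
      omega = 1.0               # frequency of the external harmonic potential
      k_umb, x0 = 5.0, 1.0      # umbrella spring constant and window centre (assumed)

      def potential(x):
          return 0.5 * omega**2 * x**2

      def ring_action(path):
          """Primitive-approximation Euclidean action of the closed ring polymer."""
          spring = np.sum((path - np.roll(path, -1))**2) / (2.0 * tau)
          pot = tau * np.sum(potential(path))
          bias = 0.5 * k_umb * (path[0] - x0)**2   # restraint on a single bead
          return spring + pot + bias

      path = np.zeros(P)
      S = ring_action(path)
      samples = []
      for step in range(200000):
          i = rng.integers(P)                      # pick one bead at random
          trial = path.copy()
          trial[i] += rng.normal(scale=0.3)        # local displacement move
          S_new = ring_action(trial)
          if rng.random() < np.exp(min(0.0, S - S_new)):   # Metropolis accept/reject
              path, S = trial, S_new
          if step > 20000 and step % 50 == 0:
              samples.append(path[0])

      print("mean position of the restrained bead:", np.mean(samples))

    The only point of the sketch is where the bias enters: it is added to the quantized ring-polymer action and acts on a single bead's coordinate, rather than being applied to the classical coordinate before quantization.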

  20. Quantum mechanical free energy profiles with post-quantization restraints: Binding free energy of the water dimer over a broad range of temperatures.

    PubMed

    Bishop, Kevin P; Roy, Pierre-Nicholas

    2018-03-14

    Free energy calculations are a crucial part of understanding chemical systems but are often computationally expensive for all but the simplest of systems. Various enhanced sampling techniques have been developed to improve the efficiency of these calculations in numerical simulations. However, the majority of these approaches have been applied using classical molecular dynamics. There are many situations where nuclear quantum effects impact the system of interest and a classical description fails to capture these details. In this work, path integral molecular dynamics has been used in conjunction with umbrella sampling, and it has been observed that correct results are only obtained when the umbrella sampling potential is applied to a single path integral bead post quantization. This method has been validated against a Lennard-Jones benchmark system before being applied to the more complicated water dimer system over a broad range of temperatures. Free energy profiles are obtained, and these are utilized in the calculation of the second virial coefficient as well as the change in free energy from the separated water monomers to the dimer. Comparisons to experimental and ground state calculation values from the literature are made for the second virial coefficient at higher temperature and the dissociation energy of the dimer in the ground state.

  1. Proceedings of the 1993 Particle Accelerator Conference Held in Washington, DC on May 17-20, 1993. Volume 3

    DTIC Science & Technology

    1993-05-20

  2. Gravitational Scattering Amplitudes and Closed String Field Theory in the Proper-Time Gauge

    NASA Astrophysics Data System (ADS)

    Lee, Taejin

    2018-01-01

    We construct a covariant closed string field theory by extending recent works on the covariant open string field theory in the proper-time gauge. Rewriting the string scattering amplitudes generated by the closed string field theory in terms of the Polyakov string path integrals, we identify the Fock space representations of the closed string vertices. We show that the Fock space representations of the closed string field theory may be completely factorized into those of the open string field theory. It implies that the well known Kawai-Lewellen-Tye (KLT) relations of the first quantized string theory may be promoted to the second quantized closed string theory. We explicitly calculate the scattering amplitudes of three gravitons by using the closed string field theory in the proper-time gauge.
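
    The simplest instance of the KLT relation referred to above is the three-point case (schematic, up to coupling and normalization conventions): the three-graviton amplitude factorizes into a product of two color-stripped gauge-theory (open-string) three-point amplitudes,

      M_3^{grav}(1, 2, 3) ∝ A_3(1, 2, 3) × Ã_3(1, 2, 3) ,

    and the statement in the abstract is that this first-quantized factorization lifts to the level of the second-quantized (string field theory) vertices in the proper-time gauge.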

  3. Batalin-Vilkovisky quantization and generalizations

    NASA Astrophysics Data System (ADS)

    Bering, Klaus

    Gauge theories play an important role in modern physics. Whenever a gauge symmetry is present, one should provide a manifestly gauge independent formalism. It turns out that the BRST symmetry plays a prominent part in providing the gauge independence. The importance of gauge independence in the Hamiltonian Batalin-Fradkin-Fradkina-Vilkovisky formalism and in the Lagrangian Batalin-Vilkovisky formalism is stressed. Parallels are drawn between the various theories. A Hamiltonian path integral that takes into account quantum ordering effects arising in the operator formalism should be written with the help of the star-multiplication or the Moyal bracket. It is generally believed that this leads to higher order quantum corrections in the corresponding Lagrangian path integral. A higher order Lagrangian path integral based on a nilpotent higher order odd Laplacian is proposed. A new gauge independence mechanism that adapts to the higher order formalism, and that bypasses the problem of constructing a BRST transformation of the path integral in the higher order case, is developed. The new gauge mechanism is closely related to the cohomology of the odd Laplacian operator. Various cohomology aspects of the odd Laplacian are investigated. Whereas, for instance, the role of the ghost-cohomology properties of the BFV-BRST charge has been emphasized by several authors, the cohomology of the odd Laplacian is in general not well known.
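
    For context, the central objects of the Lagrangian Batalin-Vilkovisky formalism mentioned above are the odd Laplacian and the quantum master equation, given here in their standard lowest-order forms (sign and ordering conventions vary, and the thesis concerns precisely their higher-order generalizations):

      Δ = Σ_A (-1)^{ε_A} ∂/∂φ^A ∂/∂φ*_A ,   (1/2)(W, W) = iħ ΔW ,   equivalently   Δ e^{iW/ħ} = 0 ,

    where φ^A are the fields, φ*_A the antifields, ( , ) the antibracket and ε_A the Grassmann parity; gauge independence of the path integral under deformations of the gauge-fixing surface follows from this equation, and the higher-order odd Laplacians discussed above modify Δ while keeping a nilpotent structure.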

  4. Canonical field anticommutators in the extended gauged Rarita-Schwinger theory

    NASA Astrophysics Data System (ADS)

    Adler, Stephen L.; Henneaux, Marc; Pais, Pablo

    2017-10-01

    We reexamine canonical quantization of the gauged Rarita-Schwinger theory using the extended theory, incorporating a dimension 1/2 auxiliary spin-1/2 field Λ , in which there is an exact off-shell gauge invariance. In Λ =0 gauge, which reduces to the original unextended theory, our results agree with those found by Johnson and Sudarshan, and later verified by Velo and Zwanziger, which give a canonical Rarita-Schwinger field Dirac bracket that is singular for small gauge fields. In gauge covariant radiation gauge, the Dirac bracket of the Rarita-Schwinger fields is nonsingular, but does not correspond to a positive semidefinite anticommutator, and the Dirac bracket of the auxiliary fields has a singularity of the same form as found in the unextended theory. These results indicate that gauged Rarita-Schwinger theory is somewhat pathological, and cannot be canonically quantized within a conventional positive semidefinite metric Hilbert space. We leave open the questions of whether consistent quantizations can be achieved by using an indefinite metric Hilbert space, by path integral methods, or by appropriate couplings to conventional dimension 3/2 spin-1/2 fields.

  5. Comparative study of the requantization of the time-dependent mean field for the dynamics of nuclear pairing

    NASA Astrophysics Data System (ADS)

    Ni, Fang; Nakatsukasa, Takashi

    2018-04-01

    To describe quantal collective phenomena, it is useful to requantize the time-dependent mean-field dynamics. We study the time-dependent Hartree-Fock-Bogoliubov (TDHFB) theory for the two-level pairing Hamiltonian, and compare results of different quantization methods. The one constructing microscopic wave functions, using the TDHFB trajectories fulfilling the Einstein-Brillouin-Keller quantization condition, turns out to be the most accurate. The method is based on the stationary-phase approximation to the path integral. We also examine the performance of the collective model which assumes that the pairing gap parameter is the collective coordinate. The applicability of the collective model is limited for nuclear pairing with a small number of single-particle levels, because the pairing gap parameter represents only half of the pairing collective space.
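
    The Einstein-Brillouin-Keller condition used to select the TDHFB trajectories has the standard form (quoted for reference):

      ∮_{C_k} p · dq = 2πħ ( n_k + μ_k/4 ) ,   n_k = 0, 1, 2, … ,

    with one condition for each independent cycle C_k of the invariant torus and μ_k the Maslov index; requantizing the mean-field dynamics amounts to keeping only the TDHFB trajectories whose action integrals satisfy these conditions.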

  6. From Weyl to Born-Jordan quantization: The Schrödinger representation revisited

    NASA Astrophysics Data System (ADS)

    de Gosson, Maurice A.

    2016-03-01

    The ordering problem has been one of the long-standing and much-discussed questions in quantum mechanics from its very beginning. Nowadays, there is more or less a consensus among physicists that the right prescription is Weyl's rule, which is closely related to the Moyal-Wigner phase space formalism. We propose in this report an alternative approach by replacing Weyl quantization with the less well-known Born-Jordan quantization. This choice is actually natural if we want the Heisenberg and Schrödinger pictures of quantum mechanics to be mathematically equivalent. It turns out that, in addition, Born-Jordan quantization can be recovered from Feynman's path integral approach provided that one uses short-time propagators arising from correct formulas for the short-time action, as observed by Makri and Miller. These observations lead to a slightly different quantum mechanics, exhibiting some unexpected features, and this without affecting the main existing theory; for instance quantizations of physical Hamiltonian functions are the same as in the Weyl correspondence. The differences are in fact of a more subtle nature; for instance, the quantum observables will not correspond in a one-to-one fashion to classical ones, and the dequantization of a Born-Jordan quantum operator is less straightforward than that of the corresponding Weyl operator. The use of Born-Jordan quantization moreover solves the "angular momentum dilemma", which already puzzled L. Pauling. Born-Jordan quantization has been known for some time (but not fully exploited) by mathematicians working in time-frequency analysis and signal analysis, but ignored by physicists. One of the aims of this report is to collect and synthesize these sporadic discussions, while analyzing the conceptual differences with Weyl quantization, which is also reviewed in detail. Another striking feature is that the Born-Jordan formalism leads to a redefinition of phase space quantum mechanics, where the usual Wigner distribution has to be replaced with a new quasi-distribution reducing interference effects.
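
    The difference between the two orderings can be seen on monomials (standard formulas, quoted to make the comparison concrete): for the classical observable q^m p^n,

      Weyl:         Op_W(q^m p^n)  = (1/2^n) Σ_{k=0}^{n} C(n,k) p̂^{n-k} q̂^m p̂^k ,
      Born-Jordan:  Op_BJ(q^m p^n) = (1/(n+1)) Σ_{k=0}^{n} p̂^{n-k} q̂^m p̂^k ,

    i.e. Weyl weights the orderings binomially while Born-Jordan weights them equally; the two agree on Hamiltonians of the form p²/2m + V(q), which is why, as stated above, quantizations of ordinary physical Hamiltonian functions are unchanged.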

  7. Wormholes and the cosmological constant problem.

    NASA Astrophysics Data System (ADS)

    Klebanov, I.

    The author reviews the cosmological constant problem and the recently proposed wormhole mechanism for its solution. Summation over wormholes in the Euclidean path integral for gravity turns all the coupling parameters into dynamical variables, sampled from a probability distribution. A formal saddle point analysis results in a distribution with a sharp peak at the cosmological constant equal to zero, which appears to solve the cosmological constant problem. He discusses the instabilities of the gravitational Euclidean path integral and the difficulties with its interpretation. He presents an alternate formalism for baby universes, based on the "third quantization" of the Wheeler-De Witt equation. This approach is analyzed in a minisuperspace model for quantum gravity, where it reduces to simple quantum mechanics. Once again, the coupling parameters become dynamical. Unfortunately, the a priori probability distribution for the cosmological constant and other parameters is typically a smooth function, with no sharp peaks.

  8. Comment on 'Controversy concerning the definition of quark and gluon angular momentum' by Elliot Leader [PRD 83, 096012 (2011)]

    NASA Astrophysics Data System (ADS)

    Lin, Huey-Wen; Liu, Keh-Fei

    2012-03-01

    It is argued by the author that the canonical form of the quark energy-momentum tensor with a partial derivative instead of the covariant derivative is the correct definition for the quark momentum and angular momentum fraction of the nucleon in covariant quantization. Although it is not manifestly gauge-invariant, its matrix elements in the nucleon will be nonvanishing and are gauge-invariant. We test this idea in the path-integral quantization by calculating correlation functions on the lattice with a gauge-invariant nucleon interpolation field and replacing the gauge link in the quark lattice momentum operator with unity, which corresponds to the partial derivative in the continuum. We find that the ratios of three-point to two-point functions are zero within errors for both the u and d quarks, contrary to the case without setting the gauge links to unity.

  9. Fractional corresponding operator in quantum mechanics and applications: A uniform fractional Schrödinger equation in form and fractional quantization methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiao; Science and Technology on Electronic Information Control Laboratory, 610036, Chengdu, Sichuan; Wei, Chaozhen

    2014-11-15

    In this paper we use the Dirac function to construct a fractional operator called the fractional corresponding operator, which is the general form of the momentum corresponding operator. Then we give a judging theorem for this operator, and with this judging theorem we prove that the R–L, G–L, Caputo and Riesz fractional derivative operators and the fractional derivative operator based on generalized functions, which are the most popular ones, coincide with the fractional corresponding operator. As a typical application, we use the fractional corresponding operator to construct a new fractional quantization scheme and then derive a uniform fractional Schrödinger equation in form. Additionally, we find that the five forms of the fractional Schrödinger equation belong to the particular cases. As another main result of this paper, we use the fractional corresponding operator to generalize the fractional quantization scheme based on the Lévy path integral and use it to derive the corresponding general form of the fractional Schrödinger equation, which consequently proves that these two quantization schemes are equivalent. Meanwhile, relations between the theory in fractional quantum mechanics and that in standard quantum mechanics are also discussed. As a physical example, we consider a particle in an infinite potential well. We give its wave functions and energy spectra in two ways and find that both results are the same.
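
    One of the "five forms" alluded to above is Laskin's space-fractional Schrödinger equation obtained from the Lévy path integral (standard form, shown for orientation):

      iħ ∂ψ(x, t)/∂t = D_α (-ħ² Δ)^{α/2} ψ(x, t) + V(x) ψ(x, t) ,   1 < α ≤ 2 ,

    where (-ħ²Δ)^{α/2} is the quantum Riesz fractional derivative and D_α a generalized diffusion coefficient; the ordinary Schrödinger equation is recovered at α = 2 with D_2 = 1/2m. The claim above is that such forms arise as particular cases of a single equation built from the fractional corresponding operator.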

  10. On the BRST Quantization of the Massless Bosonic Particle in Twistor-Like Formulation

    NASA Astrophysics Data System (ADS)

    Bandos, Igor; Maznytsia, Alexey; Rudychev, Igor; Sorokin, Dmitri

    We study some features of bosonic-particle path-integral quantization in a twistor-like approach by the use of the BRST-BFV-quantization prescription. In the course of the Hamiltonian analysis we observe links between various formulations of the twistor-like particle by performing a conversion of the Hamiltonian constraints of one formulation to another. A particular feature of the conversion procedure applied to turn the second-class constraints into first-class constraints is that the simplest Lorentz-covariant way to do this is to convert a full mixed set of the initial first- and second-class constraints rather than explicitly extracting and converting only the second-class constraints. Another novel feature of the conversion procedure applied below is that in the case of the D = 4 and D = 6 twistor-like particle the number of new auxiliary Lorentz-covariant coordinates, which one introduces to get a system of first-class constraints in an extended phase space, exceeds the number of independent second-class constraints of the original dynamical system. We calculate the twistor-like particle propagator in D = 3,4,6 space-time dimensions and show that it coincides with that of a conventional massless bosonic particle.

  11. Symmetries of relativistic world lines

    NASA Astrophysics Data System (ADS)

    Koch, Benjamin; Muñoz, Enrique; Reyes, Ignacio A.

    2017-10-01

    Symmetries are essential for a consistent formulation of many quantum systems. In this paper we discuss a fundamental symmetry, which is present for any Lagrangian term that involves ẋ². As a basic model that incorporates the fundamental symmetries of quantum gravity and string theory, we consider the Lagrangian action of the relativistic point particle. A path integral quantization for this seemingly simple system has long presented notorious problems. Here we show that those problems are overcome by taking into account the additional symmetry, leading directly to the exact Klein-Gordon propagator.
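
    The two standard forms of the action in question are (textbook expressions, written here with a mostly-minus metric convention; the paper's observation concerns an additional symmetry of such ẋ² terms):

      S = -m ∫ dτ sqrt( ẋ_μ ẋ^μ )    and    S = (1/2) ∫ dτ ( ẋ_μ ẋ^μ / e - e m² ) ,

    where e(τ) is the einbein; both are reparametrization invariant, and a consistent path-integral treatment is what leads, as stated above, to the exact Klein-Gordon propagator.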

  12. Quantum localization of classical mechanics

    NASA Astrophysics Data System (ADS)

    Batalin, Igor A.; Lavrov, Peter M.

    2016-07-01

    Quantum localization of classical mechanics within the BRST-BFV and BV (or field-antifield) quantization methods are studied. It is shown that a special choice of gauge fixing functions (or BRST-BFV charge) together with the unitary limit leads to Hamiltonian localization in the path integral of the BRST-BFV formalism. In turn, we find that a special choice of gauge fixing functions being proportional to extremals of an initial non-degenerate classical action together with a very special solution of the classical master equation result in Lagrangian localization in the partition function of the BV formalism.

  13. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as frequency-sensitive competitive learning, is used to build the vector quantizer codebook. To achieve real-time performance, a custom very-large-scale-integration application-specific integrated circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in better robustness to channel bit errors than methods that use variable-length codes.
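    As a rough illustration of the codebook-training idea described above, the following sketch implements a generic frequency-sensitive competitive learning update (the usage-count weighting, learning rate, and function names are assumptions of this sketch, not the paper's exact algorithm or its VLSI realization):

        import numpy as np

        def fscl_train(vectors, codebook_size, epochs=10, lr=0.05, seed=None):
            """Frequency-sensitive competitive learning for vector-quantizer
            codebook design (generic sketch)."""
            rng = np.random.default_rng(seed)
            # Initialize codewords from randomly chosen training vectors.
            codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].copy()
            wins = np.ones(codebook_size)  # usage counts ("frequency sensitivity")
            for _ in range(epochs):
                for x in vectors:
                    # Distortion scaled by how often each codeword has already won,
                    # so rarely used codewords stay competitive.
                    d = wins * np.sum((codebook - x) ** 2, axis=1)
                    j = int(np.argmin(d))
                    codebook[j] += lr * (x - codebook[j])  # move winner toward input
                    wins[j] += 1
            return codebook

        def quantize(vectors, codebook):
            """Map each input vector to the index of its nearest codeword."""
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            return d.argmin(axis=1)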

  14. Floquet Engineering of Optical Solenoids and Quantized Charge Pumping along Tailored Paths in Two-Dimensional Chern Insulators

    NASA Astrophysics Data System (ADS)

    Wang, Botao; Ünal, F. Nur; Eckardt, André

    2018-06-01

    The insertion of a local magnetic flux, as the one created by a thin solenoid, plays an important role in gedanken experiments of quantum Hall physics. By combining Floquet engineering of artificial magnetic fields with the ability of single-site addressing in quantum gas microscopes, we propose a scheme for the realization of such local solenoid-type magnetic fields in optical lattices. We show that it can be employed to manipulate and probe elementary excitations of a topological Chern insulator. This includes quantized adiabatic charge pumping along tailored paths inside the bulk, as well as the controlled population of edge modes.

  15. Connecting dissipation and noncommutativity: A Bateman system case study

    NASA Astrophysics Data System (ADS)

    Pal, Sayan Kumar; Nandi, Partha; Chakraborty, Biswajit

    2018-06-01

    We present an approach to the problem of quantization of the damped harmonic oscillator. To start with, we adopt the standard method of doubling the degrees of freedom of the system (Bateman form) and then, by introducing some new parameters, we get a generalized coupled set of equations from the Bateman form. Using the corresponding time-independent Lagrangian, quantum effects on a pair of Bateman oscillators embedded in an ambient noncommutative space (Moyal plane) are analyzed by using both path integral and canonical quantization schemes within the framework of the Hilbert-Schmidt operator formulation. Our method is distinct from those existing in the literature and where the ambient space was taken to be commutative. Our quantization shows that we end up again with a Bateman system except that the damping factor undergoes renormalization. Strikingly, the corresponding expression shows that the renormalized damping factor can be nonzero even if "bare" one is zero to begin with. In other words, noncommutativity can act as a source of dissipation. Conversely, the noncommutative parameter θ , taken to be a free one now, can be fine tuned to get a vanishing renormalized damping factor. This indicates in some sense a "duality" between dissipation and noncommutativity. Our results match the existing results in the commutative limit.
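    For orientation, the standard Bateman doubling referred to above (before the noncommutative deformation studied in the paper) can be summarized by the time-independent Lagrangian

        L = m\,\dot{x}\dot{y} + \frac{\gamma}{2}\left(x\dot{y} - \dot{x}y\right) - k\,xy,

    whose Euler-Lagrange equations are the damped oscillator m\ddot{x} + \gamma\dot{x} + kx = 0 together with its time-reversed (amplified) partner m\ddot{y} - \gamma\dot{y} + ky = 0.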

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyao, Tadahiro; Spohn, Herbert

    The retarded van der Waals potential, as first obtained by Casimir and Polder, is usually computed on the basis of nonrelativistic quantum electrodynamics. The Hamiltonian describes two infinitely heavy nuclei, charge e, separated by a distance R, and two spinless electrons, charge -e, nonrelativistically coupled to the quantized radiation field. Casimir and Polder used the dipole approximation and small coupling to the Maxwell field. We employ here the full Hamiltonian and determine the asymptotic strength of the leading -R⁻⁷ potential, which is valid for all e. Our computation is based on a path integral representation and expands in 1/R, rather than in e.

  17. Noise-Coupled Image Rejection Architecture of Complex Bandpass ΔΣAD Modulator

    NASA Astrophysics Data System (ADS)

    San, Hao; Kobayashi, Haruo

    This paper proposes a new technique for realizing an image-rejection function through a noise-coupling architecture in a complex bandpass ΔΣAD modulator. The complex bandpass ΔΣAD modulator processes only the input I and Q signals, not image signals, so the AD conversion can be realized with low power dissipation. It realizes an asymmetric noise-shaped spectrum, which is desirable for low-IF receiver applications. However, the performance of the complex bandpass ΔΣAD modulator suffers from mismatch between the internal analog I and Q paths. I/Q path mismatch causes an image signal, and the quantization noise of the mirror image band aliases into the desired signal band, which degrades the SQNDR (Signal to Quantization Noise and Distortion Ratio) of the modulator. In our proposed modulator architecture, an extra notch for image rejection is realized by a noise-coupled topology. We only add some passive capacitors and switches to the modulator; the additional integrator circuit built around an operational amplifier in the conventional image-rejection realization is not necessary. Therefore, the performance of the complex modulator can be raised without additional power dissipation. We have performed simulations with MATLAB to confirm the validity of the proposed architecture. The simulation results show that the proposed architecture achieves image rejection effectively and improves the SQNDR of the complex bandpass ΔΣAD modulator.

  18. Asymptotic Analysis of the Ponzano-Regge Model with Non-Commutative Metric Boundary Data

    NASA Astrophysics Data System (ADS)

    Oriti, Daniele; Raasakka, Matti

    2014-06-01

    We apply the non-commutative Fourier transform for Lie groups to formulate the non-commutative metric representation of the Ponzano-Regge spin foam model for 3d quantum gravity. The non-commutative representation allows to express the amplitudes of the model as a first order phase space path integral, whose properties we consider. In particular, we study the asymptotic behavior of the path integral in the semi-classical limit. First, we compare the stationary phase equations in the classical limit for three different non-commutative structures corresponding to the symmetric, Duflo and Freidel-Livine-Majid quantization maps. We find that in order to unambiguously recover discrete geometric constraints for non-commutative metric boundary data through the stationary phase method, the deformation structure of the phase space must be accounted for in the variational calculus. When this is understood, our results demonstrate that the non-commutative metric representation facilitates a convenient semi-classical analysis of the Ponzano-Regge model, which yields as the dominant contribution to the amplitude the cosine of the Regge action in agreement with previous studies. We also consider the asymptotics of the SU(2) 6j-symbol using the non-commutative phase space path integral for the Ponzano-Regge model, and explain the connection of our results to the previous asymptotic results in terms of coherent states.

  19. Integral Sliding Mode Fault-Tolerant Control for Uncertain Linear Systems Over Networks With Signals Quantization.

    PubMed

    Hao, Li-Ying; Park, Ju H; Ye, Dan

    2017-09-01

    In this paper, a new robust fault-tolerant compensation control method for uncertain linear systems over networks is proposed, where only quantized signals are assumed to be available. The approach is based on the integral sliding mode (ISM) method, in which two kinds of integral sliding surfaces are constructed. One is a continuous-state-dependent surface used for sliding-mode stability analysis, and the other is a quantization-state-dependent surface used for ISM controller design. A scheme that combines the adaptive ISM controller with a quantization-parameter adjustment strategy is then proposed. Using H∞ control analysis, it is shown that once the system is in the sliding mode, disturbance attenuation and fault tolerance are achieved from the initial time without requiring any fault information. Finally, the effectiveness of the proposed fault-tolerant ISM control schemes against quantization errors is demonstrated in simulation.
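    For reference, a generic integral sliding surface of the kind such designs build on (a textbook form; the paper's quantization-state-dependent surface differs in detail) can be written as

        s(t) = G\left[x(t) - x(0)\right] - G\int_0^t \left(A + BK\right)x(\tau)\,d\tau,

    so that s(0) = 0 and, while s(t) = 0 is maintained, the closed loop behaves like the nominal system \dot{x} = (A + BK)x plus only the unmatched part of the disturbance; G is a design matrix, typically chosen so that GB is invertible.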

  20. Thermal properties of graphene from path-integral simulations

    NASA Astrophysics Data System (ADS)

    Herrero, Carlos P.; Ramírez, Rafael

    2018-03-01

    Thermal properties of graphene monolayers are studied by path-integral molecular dynamics simulations, which take into account the quantization of vibrational modes in the crystalline membrane and allow one to consider anharmonic effects in these properties. This system was studied at temperatures in the range from 12 to 2000 K and zero external stress, by describing the interatomic interactions through the LCBOPII effective potential. We analyze the internal energy and specific heat and compare the results derived from the simulations with those yielded by a harmonic approximation for the vibrational modes. This approximation turns out to be rather precise up to temperatures of about 400 K. At higher temperatures, we observe an influence of the elastic energy due to the thermal expansion of the graphene sheet. Zero-point and thermal effects on the in-plane and "real" surface of graphene are discussed. The thermal expansion coefficient α of the real area is found to be positive at all temperatures, in contrast to the expansion coefficient αp of the in-plane area, which is negative at low temperatures and becomes positive for T ≳ 1000 K.

  1. Classical geometry to quantum behavior correspondence in a virtual extra dimension

    NASA Astrophysics Data System (ADS)

    Dolce, Donatello

    2012-09-01

    In the Lorentz invariant formalism of compact space-time dimensions the assumption of periodic boundary conditions represents a consistent semi-classical quantization condition for relativistic fields. In Dolce (2011) [18] we have shown, for instance, that the ordinary Feynman path integral is obtained from the interference between the classical paths with different winding numbers associated with the cyclic dynamics of the field solutions. By means of the boundary conditions, the kinematical information of interactions can be encoded on the relativistic geometrodynamics of the boundary, see Dolce (2012) [8]. Furthermore, such a purely four-dimensional theory is manifestly dual to an extra-dimensional field theory. The resulting correspondence between extra-dimensional geometrodynamics and ordinary quantum behavior can be interpreted in terms of AdS/CFT correspondence. By applying this approach to a simple Quark-Gluon-Plasma freeze-out model we obtain fundamental analogies with basic aspects of AdS/QCD phenomenology.

  2. Electronic and magnetic properties of magnetoelectric compound Ca2CoSi2O7: An ab initio study

    NASA Astrophysics Data System (ADS)

    Chakraborty, Jayita

    2018-05-01

    Detailed first-principles density functional theory calculations are carried out to investigate the electronic and magnetic properties of the magnetoelectric compound Ca2CoSi2O7. The magnetic properties of this system are analyzed by calculating various hopping integrals as well as exchange interactions and deriving the relevant spin Hamiltonian. The dominant exchange path is visualized by plotting Wannier functions. Only the intra-planar nearest-neighbor exchange interaction is strong in this system. The magnetocrystalline anisotropy is calculated, and the results reveal that the spin quantization axis lies in the ab plane.

  3. Theories of Matter, Space and Time, Volume 2; Quantum theories

    NASA Astrophysics Data System (ADS)

    Evans, N.; King, S. F.

    2018-06-01

    This book and its prequel Theories of Matter Space and Time: Classical Theories grew out of courses that we have both taught as part of the undergraduate degree program in Physics at Southampton University, UK. Our goal was to guide the full MPhys undergraduate cohort through some of the trickier areas of theoretical physics that we expect our undergraduates to master. Here we teach the student to understand first-quantized relativistic quantum theories. We first quickly review the basics of quantum mechanics, which should be familiar to the reader from a prior course. Then we link the Schrödinger equation to the principle of least action, introducing Feynman's path integral methods. Next, we present the relativistic wave equations of Klein, Gordon and Dirac. Finally, we convert Maxwell's equations of electromagnetism to a wave equation for photons and make contact with quantum electrodynamics (QED) at a first-quantized level. Between the two volumes we hope to move a student's understanding from their prior courses to a place where they are ready to embark on graduate-level courses on quantum field theory.

  4. Euclideanization of Maxwell-Chern-Simons theory

    NASA Astrophysics Data System (ADS)

    Bowman, Daniel Alan

    We quantize the theory of electromagnetism in 2 + 1-spacetime dimensions with the addition of the topological Chern-Simons term using an indefinite metric formalism. In the process, we also quantize the Proca and pure Maxwell theories, which are shown to be related to the Maxwell-Chern-Simons theory. Next, we Euclideanize these three theories, obtaining path space formulae and investigating Osterwalder-Schrader positivity in each case. Finally, we obtain a characterization of those Euclidean states that correspond to physical states in the relativistic theories.

  5. Low-Dimensional Nanostructures and a Semiclassical Approach for Teaching Feynman's Sum-over-Paths Quantum Theory

    ERIC Educational Resources Information Center

    Onorato, P.

    2011-01-01

    An introduction to quantum mechanics based on the sum-over-paths (SOP) method originated by Richard P. Feynman and developed by E. F. Taylor and coworkers is presented. The Einstein-Brillouin-Keller (EBK) semiclassical quantization rules are obtained following the SOP approach for bounded systems, and a general approach to the calculation of…

  6. Simulating biochemical physics with computers: 1. Enzyme catalysis by phosphotriesterase and phosphodiesterase; 2. Integration-free path-integral method for quantum-statistical calculations

    NASA Astrophysics Data System (ADS)

    Wong, Kin-Yiu

    We have simulated two enzymatic reactions with molecular dynamics (MD) and combined quantum mechanical/molecular mechanical (QM/MM) techniques. One reaction is the hydrolysis of the insecticide paraoxon catalyzed by phosphotriesterase (PTE). PTE is a bioremediation candidate for environments contaminated by toxic nerve gases (e.g., sarin) or pesticides. Based on the potential of mean force (PMF) and the structural changes of the active site during the catalysis, we propose a revised reaction mechanism for PTE. Another reaction is the hydrolysis of the second-messenger cyclic adenosine 3'-5'-monophosphate (cAMP) catalyzed by phosphodiesterase (PDE). Cyclic-nucleotide PDE is a vital protein in signal-transduction pathways and thus a popular target for inhibition by drugs (e.g., Viagra™). A two-dimensional (2-D) free-energy profile has been generated showing that the catalysis by PDE proceeds in a two-step SN2-type mechanism. Furthermore, to characterize a chemical reaction mechanism in experiment, a direct probe is measuring kinetic isotope effects (KIEs). KIEs primarily arise from internuclear quantum-statistical effects, e.g., quantum tunneling and quantization of vibration. To systematically incorporate the quantum-statistical effects during MD simulations, we have developed an automated integration-free path-integral (AIF-PI) method based on Kleinert's variational perturbation theory for the centroid density of Feynman's path integral. Using this analytic method, we have performed ab initio path-integral calculations to study the origin of KIEs on several series of proton-transfer reactions from carboxylic acids to aryl-substituted alpha-methoxystyrenes in water. In addition, we also demonstrate that the AIF-PI method can be used to systematically compute the exact value of the zero-point energy (beyond the harmonic approximation) by simply minimizing the centroid effective potential.

  7. Superfluidity or supersolidity as a consequence of off-diagonal long-range order

    NASA Astrophysics Data System (ADS)

    Shi, Yu

    2005-07-01

    We present a general derivation of the Hess-Fairbank effect, or nonclassical rotational inertia (NCRI), i.e., the refusal of the system to rotate with its container, as well as the quantization of angular momentum, as consequences of off-diagonal long-range order (ODLRO) in an interacting Bose system. Afterwards, the path integral formulation of the superfluid density is rederived without ignoring the centrifugal potential. Finally, and in particular, for a class of variational wave functions used for solid helium, treating the single-valuedness boundary condition carefully, we show that there is no ODLRO and, especially, demonstrate explicitly that NCRI cannot occur in the absence of defects, even though zero-point motion and exchange effects exist.

  8. Quenching of the Quantum Hall Effect in Graphene with Scrolled Edges

    NASA Astrophysics Data System (ADS)

    Cresti, Alessandro; Fogler, Michael M.; Guinea, Francisco; Castro Neto, A. H.; Roche, Stephan

    2012-04-01

    Edge nanoscrolls are shown to strongly influence transport properties of suspended graphene in the quantum Hall regime. The relatively long arclength of the scrolls in combination with their compact transverse size results in formation of many nonchiral transport channels in the scrolls. They short circuit the bulk current paths and inhibit the observation of the quantized two-terminal resistance. Unlike competing theoretical proposals, this mechanism of disrupting the Hall quantization in suspended graphene is not caused by ill-chosen placement of the contacts, singular elastic strains, or a small sample size.

  9. E-Invariant Quantized Motion of Valence Quarks

    NASA Astrophysics Data System (ADS)

    Kreymer, E. L.

    2018-06-01

    In sub-proton space wave processes are impossible. The analog of the Klein-Gordon equation in sub-proton space is elliptical and describes a stationary system with a constant number of particles. For dynamical processes, separation of variables is used and in each quantum of motion of the quark two states are distinguished: a localization state and a translation state with infinite velocity. Alternation of these states describes the motion of a quark. The mathematical expectations of the lifetimes of the localization states and the spatial extents of the translation states for a free quark and for a quark in a centrally symmetric potential are found. The action after one quantum of motion is equal to the Planck constant. The one-sided Laplace transform is used to determine the Green's function. Use of path integrals shows that the quantized trajectory of a quark is a broken line enveloping the classical trajectory of oscillation of the quark. Comparison of the calculated electric charge distribution in a proton with its experimental value gives satisfactory results. A hypothesis is formulated, according to which the three Grand Geometries of space correspond to the three main interactions of elementary particles.

  10. Covariant open bosonic string field theory on multiple D-branes in the proper-time gauge

    NASA Astrophysics Data System (ADS)

    Lee, Taejin

    2017-12-01

    We construct a covariant open bosonic string field theory on multiple D-branes, which reduces to a non-Abelian group Yang-Mills gauge theory in the zero-slope limit. Making use of the first quantized open bosonic string in the proper time gauge, we convert the string amplitudes given by the Polyakov path integrals on string world sheets into those of the second quantized theory. The world sheet diagrams generated by the constructed open string field theory are planar in contrast to those of the Witten's cubic string field theory. However, the constructed string field theory is yet equivalent to the Witten's cubic string field theory. Having obtained planar diagrams, we may adopt the light-cone string field theory technique to calculate the multi-string scattering amplitudes with an arbitrary number of external strings. We examine in detail the three-string vertex diagram and the effective four-string vertex diagrams generated perturbatively by the three-string vertex at tree level. In the zero-slope limit, the string scattering amplitudes are identified precisely as those of non-Abelian Yang-Mills gauge theory if the external states are chosen to be massless vector particles.

  11. Quantization of Poisson Manifolds from the Integrability of the Modular Function

    NASA Astrophysics Data System (ADS)

    Bonechi, F.; Ciccoli, N.; Qiu, J.; Tarlini, M.

    2014-10-01

    We discuss a framework for quantizing a Poisson manifold via the quantization of its symplectic groupoid, combining the tools of geometric quantization with the results of Renault's theory of groupoid C*-algebras. This setting allows very singular polarizations. In particular, we consider the case when the modular function is multiplicatively integrable, i.e., when the space of leaves of the polarization inherits a groupoid structure. If suitable regularity conditions are satisfied, then one can define the quantum algebra as the convolution algebra of the subgroupoid of leaves satisfying the Bohr-Sommerfeld conditions. We apply this procedure to the case of a family of Poisson structures on , seen as Poisson homogeneous spaces of the standard Poisson-Lie group SU( n + 1). We show that a bihamiltonian system on defines a multiplicative integrable model on the symplectic groupoid; we compute the Bohr-Sommerfeld groupoid and show that it satisfies the needed properties for applying Renault theory. We recover and extend Sheu's description of quantum homogeneous spaces as groupoid C*-algebras.

  12. Quantum circuit dynamics via path integrals: Is there a classical action for discrete-time paths?

    NASA Astrophysics Data System (ADS)

    Penney, Mark D.; Enshan Koh, Dax; Spekkens, Robert W.

    2017-07-01

    It is straightforward to compute the transition amplitudes of a quantum circuit using the sum-over-paths methodology when the gates in the circuit are balanced, where a balanced gate is one for which all non-zero transition amplitudes are of equal magnitude. Here we consider the question of whether, for such circuits, the relative phases of different discrete-time paths through the configuration space can be defined in terms of a classical action, as they are for continuous-time paths. We show how to do so for certain kinds of quantum circuits, namely, Clifford circuits where the elementary systems are continuous-variable systems or discrete systems of odd-prime dimension. These types of circuit are distinguished by having phase-space representations that serve to define their classical counterparts. For discrete systems, the phase-space coordinates are also discrete variables. We show that for each gate in the generating set, one can associate a symplectomorphism on the phase-space and to each of these one can associate a generating function, defined on two copies of the configuration space. For discrete systems, the latter association is achieved using tools from algebraic geometry. Finally, we show that if the action functional for a discrete-time path through a sequence of gates is defined using the sum of the corresponding generating functions, then it yields the correct relative phases for the path-sum expression. These results are likely to be relevant for quantizing physical theories where time is fundamentally discrete, characterizing the classical limit of discrete-time quantum dynamics, and proving complexity results for quantum circuits.

  13. Physical Projections in BRST Treatments of Reparametrization Invariant Theories

    NASA Astrophysics Data System (ADS)

    Marnelius, Robert; Sandström, Niclas

    Any regular quantum mechanical system may be cast into an Abelian gauge theory by simply reformulating it as a reparametrization invariant theory. We present a detailed study of the BRST quantization of such reparametrization invariant theories within a precise operator version of BRST which is related to the conventional BFV path integral formulation. Our treatments lead us to propose general rules for how physical wave functions and physical propagators are to be projected from the BRST singlets and propagators in the ghost extended BRST theory. These projections are performed by boundary conditions which are specified by the ingredients of BRST charge and precisely determined by the operator BRST. We demonstrate explicitly the validity of these rules for the considered class of models.

  14. TBA-like integral equations from quantized mirror curves

    NASA Astrophysics Data System (ADS)

    Okuyama, Kazumi; Zakany, Szabolcs

    2016-03-01

    Quantizing the mirror curve of certain toric Calabi-Yau (CY) three-folds leads to a family of trace class operators. The resolvent function of these operators is known to encode topological data of the CY. In this paper, we show that in certain cases, this resolvent function satisfies a system of non-linear integral equations whose structure is very similar to the Thermodynamic Bethe Ansatz (TBA) systems. This can be used to compute spectral traces, both exactly and as a semiclassical expansion. As a main example, we consider the system related to the quantized mirror curve of local P2. According to a recent proposal, the traces of this operator are determined by the refined BPS indices of the underlying CY. We use our non-linear integral equations to test that proposal.

  15. Spacetime Singularities in Quantum Gravity

    NASA Astrophysics Data System (ADS)

    Minassian, Eric A.

    2000-04-01

    Recent advances in 2+1 dimensional quantum gravity have provided tools to study the effects of quantization of spacetime on black hole and big bang/big crunch type singularities. I investigate effects of quantization of spacetime on singularities of the 2+1 dimensional BTZ black hole and the 2+1 dimensional torus universe. Hosoya has considered the BTZ black hole, and using a "quantum generalized affine parameter" (QGAP), has shown that, for some specific paths, quantum effects "smear" the singularities. Using gaussian wave functions as generic wave functions, I found that, for both BTZ black hole and the torus universe, there are families of paths that still reach the singularities with a finite QGAP, suggesting that singularities persist in quantum gravity. More realistic calculations, using modular invariant wave functions of Carlip and Nelson for the torus universe, offer further support for this conclusion. Currently work is in progress to study more realistic quantum gravity effects for BTZ black holes and other spacetime models.

  16. Channel estimation based on quantized MMP for FDD massive MIMO downlink

    NASA Astrophysics Data System (ADS)

    Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie

    2016-10-01

    In this paper, we consider channel estimation for massive MIMO systems operating in frequency division duplexing mode. By exploiting the sparsity of propagation paths in the massive MIMO channel, we develop a compressed sensing (CS) based channel estimator that can reduce the pilot overhead. Compared with conventional least squares (LS) and linear minimum mean square error (LMMSE) estimation, the proposed algorithm, which is based on quantized multipath matching pursuit (MMP), reduces the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods, including the LS, LMMSE, CoSaMP and conventional MMP estimators.
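    For intuition about the greedy sparse-recovery step such estimators rely on, here is a plain orthogonal matching pursuit sketch (illustrative only: MMP keeps several candidate support sets per iteration, and the paper additionally works with quantized measurements):

        import numpy as np

        def omp(A, y, sparsity):
            """Recover a sparse channel vector h from pilot observations
            y = A @ h + noise using orthogonal matching pursuit."""
            m, n = A.shape
            residual = y.copy()
            support = []
            for _ in range(sparsity):
                # Pick the column most correlated with the current residual.
                j = int(np.argmax(np.abs(A.conj().T @ residual)))
                if j not in support:
                    support.append(j)
                # Least-squares fit on the chosen support, then update the residual.
                h_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ h_s
            h = np.zeros(n, dtype=complex)
            h[support] = h_s
            return h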

  17. Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!

    NASA Astrophysics Data System (ADS)

    Nutku, Yavuz

    2003-07-01

    Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems.

  18. Dielectric properties of classical and quantized ionic fluids.

    PubMed

    Høye, Johan S

    2010-06-01

    We study time-dependent correlation functions of classical and quantum gases using methods of equilibrium statistical mechanics for systems of uniform as well as nonuniform densities. The basis for our approach is the path integral formalism of quantum mechanical systems. With this approach the statistical mechanics of a quantum mechanical system becomes the equivalent of a classical polymer problem in four dimensions where imaginary time is the fourth dimension. Several nontrivial results for quantum systems have been obtained earlier by this analogy. Here, we will focus upon the presence of a time-dependent electromagnetic pair interaction where the electromagnetic vector potential that depends upon currents, will be present. Thus both density and current correlations are needed to evaluate the influence of this interaction. Then we utilize that densities and currents can be expressed by polarizations by which the ionic fluid can be regarded as a dielectric one for which a nonlocal susceptibility is found. This nonlocality has as a consequence that we find no contribution from a possible transverse electric zero-frequency mode for the Casimir force between metallic plates. Further, we establish expressions for a leading correction to ab initio calculations for the energies of the quantized electrons of molecules where now retardation effects also are taken into account.

  19. Integrability, Quantization and Moduli Spaces of Curves

    NASA Astrophysics Data System (ADS)

    Rossi, Paolo

    2017-07-01

    The purpose of this paper is to present in an organic way a new approach to integrable (1+1)-dimensional field systems and their systematic quantization, emerging from the intersection theory of the moduli space of stable algebraic curves and, in particular, cohomological field theories, Hodge classes and double ramification cycles. These methods are an alternative to the traditional Witten-Kontsevich framework and its generalizations by Dubrovin and Zhang and, among other advantages, have the merit of encompassing quantum integrable systems. Most of this material originates from an ongoing collaboration with A. Buryak, B. Dubrovin and J. Guéré.

  20. Path Integral Simulation of the H/D Kinetic Isotope Effect in Monoamine Oxidase B Catalyzed Decomposition of Dopamine.

    PubMed

    Mavri, Janez; Matute, Ricardo A; Chu, Zhen T; Vianello, Robert

    2016-04-14

    Brain monoamines regulate many centrally mediated body functions and can cause adverse symptoms when they are out of balance. A starting point for addressing the challenges raised by the increasing burden of brain diseases is to understand, at the atomistic level, the catalytic mechanism of an essential amine-metabolizing enzyme, monoamine oxidase B (MAO B). Recently, we demonstrated that the rate-limiting step of the MAO B-catalyzed conversion of amines into imines is the hydride anion transfer from the substrate α-CH2 group to the N5 atom of the flavin cofactor moiety. In this article we simulated the effects of nuclear tunneling in MAO B-catalyzed dopamine decomposition by calculating the H/D kinetic isotope effect. We applied path integral quantization of the nuclear motion of the methylene group and the N5 atom of the flavin moiety, in conjunction with a QM/MM treatment at the empirical valence bond (EVB) level for the rest of the enzyme. The calculated H/D kinetic isotope effect of 12.8 ± 0.3 is in reasonable agreement with the available experimental data for closely related biogenic amines, which gives strong support to the proposed hydride mechanism. The results are discussed in the context of tunneling in enzyme centers and the advent of deuterated drugs into clinical practice.
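    For context, the reported number is a ratio of rate constants; in transition-state-theory language (assuming equal prefactors, an assumption of this sketch rather than of the paper's path-integral treatment) it corresponds to

        \mathrm{KIE} = \frac{k_{\mathrm{H}}}{k_{\mathrm{D}}} \approx \exp\!\left[-\frac{\Delta G^{\ddagger}_{\mathrm{H}} - \Delta G^{\ddagger}_{\mathrm{D}}}{k_B T}\right],

    so a KIE of about 12.8 near body temperature corresponds to an activation free-energy difference of roughly k_B T \ln(12.8) ≈ 1.6 kcal/mol between the H and D reactions.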

  1. Quantization of Space-like States in Lorentz-Violating Theories

    NASA Astrophysics Data System (ADS)

    Colladay, Don

    2018-01-01

    Lorentz violation frequently induces modified dispersion relations that can yield space-like states that impede the standard quantization procedures. In certain cases, an extended Hamiltonian formalism can be used to define observer-covariant normalization factors for field expansions and phase space integrals. These factors extend the theory to include non-concordant frames in which there are negative-energy states. This formalism provides a rigorous way to quantize certain theories containing space-like states and allows for the consistent computation of Cherenkov radiation rates in arbitrary frames and avoids singular expressions.

  2. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.; Vallely, D. P.

    1978-01-01

    This paper considers digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. A quantization error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. The program can be integrated into existing digital simulations of a system.
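    A minimal way to see the kind of effect being analyzed is to run a digital filter once in full precision and once with every arithmetic result rounded to a short mantissa, then compare the outputs. The sketch below does exactly that (the filter, wordlength, and input are assumptions of the sketch, not the paper's simulation program):

        import numpy as np

        def quantize_fp(x, mantissa_bits):
            """Round x to a reduced floating-point mantissa length
            (a crude model of a short-wordlength machine)."""
            m, e = np.frexp(x)                    # x = m * 2**e with 0.5 <= |m| < 1
            scale = 2.0 ** mantissa_bits
            return np.ldexp(np.round(m * scale) / scale, e)

        def first_order_filter(u, a=0.9, b=0.1, mantissa_bits=None):
            """y[k] = a*y[k-1] + b*u[k], optionally re-quantizing every result
            so the accumulated roundoff can be compared to the exact output."""
            y = np.zeros_like(u)
            for k in range(1, len(u)):
                acc = a * y[k - 1] + b * u[k]
                y[k] = acc if mantissa_bits is None else quantize_fp(acc, mantissa_bits)
            return y

        u = np.ones(200)
        err = first_order_filter(u) - first_order_filter(u, mantissa_bits=8)
        print("max accumulated quantization error:", np.max(np.abs(err)))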

  3. Spin foam models for quantum gravity

    NASA Astrophysics Data System (ADS)

    Perez, Alejandro

    The definition of a quantum theory of gravity is explored following Feynman's path-integral approach. The aim is to construct a well-defined version of the Wheeler-Misner-Hawking "sum over four-geometries" formulation of quantum general relativity (GR). This is done by exploiting the similarities between the formulation of GR in terms of tetrad-connection variables (Palatini formulation) and a simpler theory called BF theory. One can go from BF theory to GR by imposing certain constraints on the BF-theory configurations. BF theory contains only global degrees of freedom (it is a topological theory) and can be exactly quantized à la Feynman by introducing a discretization of the manifold. Using the path integral for BF theory, we define a path integration for GR by imposing the BF-to-GR constraints on the BF measure. The infinite degrees of freedom of gravity are restored in the process, and the restriction to a single discretization introduces a cutoff in the summed-over configurations. In order to capture all the degrees of freedom, a sum over discretizations is implemented. Both the implementation of the BF-to-GR constraints and the sum over discretizations are obtained by introducing an auxiliary field theory (AFT). Four-geometries in the path integral for GR are given by the Feynman diagrams of the AFT, which is in this sense dual to GR. Feynman diagrams correspond to 2-complexes labeled by unitary irreducible representations of the internal gauge group (corresponding to tetrad rotations in the connection to GR). A model for 4-dimensional Euclidean quantum gravity (QG) is defined which corresponds to a different normalization of the Barrett-Crane model. The model is perturbatively finite; divergences appearing in the Barrett-Crane model are cured by the new normalization. We extend our techniques to the Lorentzian sector, where we define two models for four-dimensional QG. The first one contains only time-like representations and is shown to be perturbatively finite. The second model contains both time-like and space-like representations. The spectrum of geometrical operators coincides with the prediction of the canonical approach of loop QG. At the moment, the convergence properties of the model are less well understood and remain for future investigation.

  4. Deriving the exact nonadiabatic quantum propagator in the mapping variable representation.

    PubMed

    Hele, Timothy J H; Ananth, Nandini

    2016-12-22

    We derive an exact quantum propagator for nonadiabatic dynamics in multi-state systems using the mapping variable representation, where classical-like Cartesian variables are used to represent both continuous nuclear degrees of freedom and discrete electronic states. The resulting Liouvillian is a Moyal series that, when suitably approximated, can allow for the use of classical dynamics to efficiently model large systems. We demonstrate that different truncations of the exact Liouvillian lead to existing approximate semiclassical and mixed quantum-classical methods and we derive an associated error term for each method. Furthermore, by combining the imaginary-time path-integral representation of the Boltzmann operator with the exact Liouvillian, we obtain an analytic expression for thermal quantum real-time correlation functions. These results provide a rigorous theoretical foundation for the development of accurate and efficient classical-like dynamics to compute observables such as electron transfer reaction rates in complex quantized systems.

  5. Principles of Discrete Time Mechanics

    NASA Astrophysics Data System (ADS)

    Jaroszkiewicz, George

    2014-04-01

    1. Introduction; 2. The physics of discreteness; 3. The road to calculus; 4. Temporal discretization; 5. Discrete time dynamics architecture; 6. Some models; 7. Classical cellular automata; 8. The action sum; 9. Worked examples; 10. Lee's approach to discrete time mechanics; 11. Elliptic billiards; 12. The construction of system functions; 13. The classical discrete time oscillator; 14. Type 2 temporal discretization; 15. Intermission; 16. Discrete time quantum mechanics; 17. The quantized discrete time oscillator; 18. Path integrals; 19. Quantum encoding; 20. Discrete time classical field equations; 21. The discrete time Schrodinger equation; 22. The discrete time Klein-Gordon equation; 23. The discrete time Dirac equation; 24. Discrete time Maxwell's equations; 25. The discrete time Skyrme model; 26. Discrete time quantum field theory; 27. Interacting discrete time scalar fields; 28. Space, time and gravitation; 29. Causality and observation; 30. Concluding remarks; Appendix A. Coherent states; Appendix B. The time-dependent oscillator; Appendix C. Quaternions; Appendix D. Quantum registers; References; Index.

  6. Yang-Baxter maps, discrete integrable equations and quantum groups

    NASA Astrophysics Data System (ADS)

    Bazhanov, Vladimir V.; Sergeev, Sergey M.

    2018-01-01

    For every quantized Lie algebra there exists a map from the tensor square of the algebra to itself, which by construction satisfies the set-theoretic Yang-Baxter equation. This map allows one to define an integrable discrete quantum evolution system on quadrilateral lattices, where local degrees of freedom (dynamical variables) take values in a tensor power of the quantized Lie algebra. The corresponding equations of motion admit the zero curvature representation. The commuting Integrals of Motion are defined in the standard way via the Quantum Inverse Problem Method, utilizing Baxter's famous commuting transfer matrix approach. All elements of the above construction have a meaningful quasi-classical limit. As a result one obtains an integrable discrete Hamiltonian evolution system, where the local equation of motion are determined by a classical Yang-Baxter map and the action functional is determined by the quasi-classical asymptotics of the universal R-matrix of the underlying quantum algebra. In this paper we present detailed considerations of the above scheme on the example of the algebra Uq (sl (2)) leading to discrete Liouville equations, however the approach is rather general and can be applied to any quantized Lie algebra.

  7. Huygens-Fresnel principle: Analyzing consistency at the photon level

    NASA Astrophysics Data System (ADS)

    Santos, Elkin A.; Castro, Ferney; Torres, Rafael

    2018-04-01

    Typically the use of the Rayleigh-Sommerfeld diffraction formula as a photon propagator is widely accepted due to the abundant experimental evidence that suggests that it works. However, a direct link between the propagation of the electromagnetic field in classical optics and the propagation of photons where the square of the probability amplitude describes the transverse probability of the photon detection is still an issue to be clarified. We develop a mathematical formulation for the photon propagation using the formalism of electromagnetic field quantization and the path-integral method, whose main feature is its similarity with a fractional Fourier transform (FRFT). Here we show that because of the close relation existing between the FRFT and the Fresnel diffraction integral, this propagator can be written as a Fresnel diffraction, which brings forward a discussion of the fundamental character of it at the photon level compared to the Huygens-Fresnel principle. Finally, we carry out an experiment of photon counting by a rectangular slit supporting the result that the diffraction phenomenon in the Fresnel approximation behaves as the actual classical limit.
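    For reference, the paraxial Fresnel diffraction integral that the photon propagator is compared against, written in one transverse dimension (a standard form, quoted only to fix what "Fresnel diffraction" means here), is

        U(x,z) = \frac{e^{ikz}}{\sqrt{i\lambda z}} \int_{-\infty}^{\infty} U(x',0)\, \exp\!\left[\frac{ik\,(x-x')^{2}}{2z}\right] dx', \qquad k = \frac{2\pi}{\lambda},

    and the close relation to the fractional Fourier transform mentioned in the abstract is that, up to scaling and a quadratic phase factor, free-space Fresnel propagation acts as a fractional Fourier transform whose order grows with the propagation distance.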

  8. 2-Step scalar deadzone quantization for bitplane image coding.

    PubMed

    Auli-Llinas, Francesc

    2013-12-01

    Modern lossy image coding systems generate a quality progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves same coding performance as that of USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
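    To make the baseline concrete, here is a sketch of plain uniform scalar deadzone quantization of wavelet coefficients (the step size and reconstruction midpoint are generic choices of this sketch; the paper's 2SDQ replaces the single step by two density-dependent step sizes and adds a rate-distortion adjustment):

        import numpy as np

        def usdq_quantize(coeffs, step):
            """Uniform scalar deadzone quantization as used in bitplane coders:
            sign and magnitude index, with a deadzone of width 2*step around zero."""
            signs = np.sign(coeffs)
            indices = np.floor(np.abs(coeffs) / step).astype(int)
            return signs, indices

        def usdq_dequantize(signs, indices, step, midpoint=0.5):
            """Reconstruct at (index + midpoint)*step; an index of 0 maps to 0."""
            mags = np.where(indices > 0, (indices + midpoint) * step, 0.0)
            return signs * mags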

  9. Quantized conductance observed during sintering of silver nanoparticles by intense terahertz pulses

    NASA Astrophysics Data System (ADS)

    Takano, Keisuke; Harada, Hirofumi; Yoshimura, Masashi; Nakajima, Makoto

    2018-04-01

    We show that silver nanoparticles, which are deposited on a terahertz-receiving antenna, can be sintered by intense terahertz pulse irradiation. The conductance of the silver nanoparticles between the antenna electrodes is measured under the terahertz pulse irradiation. The dispersant materials surrounding the nanoparticles are peeled off, and conduction paths are created. We reveal that, during sintering, quantum point contacts are formed, leading to quantized conductance between the electrodes with the conductance quantum, which reflects the formation of atomically thin wires. The terahertz electric pulses are sufficiently intense to activate electromigration, i.e., transfer of kinetic energy from the electrons to the silver atoms. The silver atoms move and atomically thin wires form under the intense terahertz pulse irradiation. These findings may inspire nanoscale structural processing by terahertz pulse irradiation.

  10. A Low Power Digital Accumulation Technique for Digital-Domain CMOS TDI Image Sensor.

    PubMed

    Yu, Changwei; Nie, Kaiming; Xu, Jiangtao; Gao, Jing

    2016-09-23

    In this paper, an accumulation technique suitable for digital-domain CMOS time delay integration (TDI) image sensors is proposed to reduce power consumption without degrading the imaging rate. Exploiting the slight variation of quantization codes among different pixel exposures of the same object, the pixel array is divided into two groups: one for coarse quantization of the high bits only, and the other for fine quantization of the low bits. The complete quantization codes are then composed from both the coarse and the fine quantization results. This equivalent operation reduces the total number of bits required for the quantization. In a 0.18 µm CMOS process, two versions of a 16-stage digital-domain CMOS TDI image sensor chain based on a 10-bit successive approximation register (SAR) analog-to-digital converter (ADC), with and without the proposed technique, are designed. The simulation results show that the average power consumption per slice of the two versions is 6.47 × 10⁻⁸ J/line and 7.4 × 10⁻⁸ J/line, respectively. Meanwhile, the linearity of the two versions is 99.74% and 99.99%, respectively.
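    As a toy illustration of recombining the two partial results described above (the bit widths below are assumptions of this sketch, not the chip's actual partition):

        def compose_code(coarse_high_bits, fine_low_bits, low_bit_width):
            """Combine a coarse quantization of the high bits with a fine
            quantization of the low bits into one output code."""
            return (coarse_high_bits << low_bit_width) | (fine_low_bits & ((1 << low_bit_width) - 1))

        # e.g. a 10-bit code from a 4-bit coarse result and a 6-bit fine result:
        code = compose_code(0b1011, 0b010110, 6)   # -> 0b1011010110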

  11. Is the Wheeler-DeWitt equation more fundamental than the Schrödinger equation?

    NASA Astrophysics Data System (ADS)

    Shestakova, Tatyana P.

    The Wheeler-DeWitt equation was proposed 50 years ago and until now it is the cornerstone of most approaches to quantization of gravity. One can find in the literature, the opinion that the Wheeler-DeWitt equation is even more fundamental than the basic equation of quantum theory, the Schrödinger equation. We still should remember that we are in the situation when no observational data can confirm or reject the fundamental status of the Wheeler-DeWitt equation, so we can give just indirect arguments in favor of or against it, grounded on mathematical consistency and physical relevance. I shall present the analysis of the situation and comparison of the standard Wheeler-DeWitt approach with the extended phase space approach to quantization of gravity. In my analysis, I suppose, first, that a future quantum theory of gravity must be applicable to all phenomena from the early universe to quantum effects in strong gravitational fields, in the latter case, the state of the observer (the choice of a reference frame) may appear to be significant. Second, I suppose that the equation for the wave function of the universe must not be postulated but derived by means of a mathematically consistent procedure, which exists in path integral quantization. When applying this procedure to any gravitating system, one should take into account features of gravity, namely, nontrivial spacetime topology and possible absence of asymptotic states. The Schrödinger equation has been derived early for cosmological models with a finite number of degrees of freedom, and just recently it has been found for the spherically symmetric model which is a simplest model with an infinite number of degrees of freedom. The structure of the Schrödinger equation and its general solution appears to be very similar in these cases. The obtained results give grounds to say that the Schrödinger equation retains its fundamental meaning in constructing quantum theory of gravity.

  12. Integrable generalizations of non-linear multiple three-wave interaction models

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav

    1989-07-01

    Integrable generalizations of multiple three-wave interaction models in terms of r-matrix formulation are investigated. The Lax representations, complete sets of first integrals in involution are constructed, the quantization leading to Gaudin's models is discussed.

  13. Riemann surfaces of complex classical trajectories and tunnelling splitting in one-dimensional systems

    NASA Astrophysics Data System (ADS)

    Harada, Hiromitsu; Mouchet, Amaury; Shudo, Akira

    2017-10-01

    The topology of complex classical paths is investigated to discuss quantum tunnelling splittings in one-dimensional systems. Here the Hamiltonian is assumed to be given as polynomial functions, so the fundamental group for the Riemann surface provides complete information on the topology of complex paths, which allows us to enumerate all the possible candidates contributing to the semiclassical sum formula for tunnelling splittings. This naturally leads to action relations among classically disjoined regions, revealing entirely non-local nature in the quantization condition. The importance of the proper treatment of Stokes phenomena is also discussed in Hamiltonians in the normal form.

  14. Particle localization, spinor two-valuedness, and Fermi quantization of tensor systems

    NASA Technical Reports Server (NTRS)

    Reifler, Frank; Morris, Randall

    1994-01-01

    Recent studies of particle localization show that square-integrable positive-energy bispinor fields in Minkowski space-time cannot be physically distinguished from constrained tensor fields. In this paper we generalize this result by characterizing all classical tensor systems that admit Fermi quantization as those having unitary Lie-Poisson brackets. Examples include Euler's tensor equation for a rigid body and Dirac's equation in tensor form.

  15. Functional integral for non-Lagrangian systems

    NASA Astrophysics Data System (ADS)

    Kochan, Denis

    2010-02-01

    A functional integral formulation of quantum mechanics for non-Lagrangian systems is presented. The approach, which we call "stringy quantization," is based solely on the classical equations of motion and is free of any ambiguity arising from a Lagrangian and/or Hamiltonian formulation of the theory. The functionality of the proposed method is demonstrated on several examples. Special attention is paid to the stringy quantization of systems with a general power-law friction force −κq̇^A. Results for A = 1 are compared with those obtained in the approaches of Caldirola-Kanai, Bateman, and Kostin. Relations to the Caldeira-Leggett model and to the Feynman-Vernon approach are discussed as well.

  16. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1973-01-01

    The results are reported of research into the effects of signal quantization on the operation of a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed and implemented by a digital computer program that is based on a digital simulation of the system. As output, the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.

  17. Direct Images, Fields of Hilbert Spaces, and Geometric Quantization

    NASA Astrophysics Data System (ADS)

    Lempert, László; Szőke, Róbert

    2014-04-01

    Geometric quantization often produces not one Hilbert space to represent the quantum states of a classical system but a whole family H s of Hilbert spaces, and the question arises if the spaces H s are canonically isomorphic. Axelrod et al. (J. Diff. Geo. 33:787-902, 1991) and Hitchin (Commun. Math. Phys. 131:347-380, 1990) suggest viewing H s as fibers of a Hilbert bundle H, introduce a connection on H, and use parallel transport to identify different fibers. Here we explore to what extent this can be done. First we introduce the notion of smooth and analytic fields of Hilbert spaces, and prove that if an analytic field over a simply connected base is flat, then it corresponds to a Hermitian Hilbert bundle with a flat connection and path independent parallel transport. Second we address a general direct image problem in complex geometry: pushing forward a Hermitian holomorphic vector bundle along a non-proper map . We give criteria for the direct image to be a smooth field of Hilbert spaces. Third we consider quantizing an analytic Riemannian manifold M by endowing TM with the family of adapted Kähler structures from Lempert and Szőke (Bull. Lond. Math. Soc. 44:367-374, 2012). This leads to a direct image problem. When M is homogeneous, we prove the direct image is an analytic field of Hilbert spaces. For certain such M—but not all—the direct image is even flat; which means that in those cases quantization is unique.

  18. Probing topology by "heating": Quantized circular dichroism in ultracold atoms.

    PubMed

    Tran, Duc Thanh; Dauphin, Alexandre; Grushin, Adolfo G; Zoller, Peter; Goldman, Nathan

    2017-08-01

    We reveal an intriguing manifestation of topology, which appears in the depletion rate of topological states of matter in response to an external drive. This phenomenon is presented by analyzing the response of a generic two-dimensional (2D) Chern insulator subjected to a circular time-periodic perturbation. Because of the system's chiral nature, the depletion rate is shown to depend on the orientation of the circular shake; taking the difference between the rates obtained from two opposite orientations of the drive, and integrating over a proper drive-frequency range, provides a direct measure of the topological Chern number (ν) of the populated band: This "differential integrated rate" is directly related to the strength of the driving field through the quantized coefficient η₀ = ν/ℏ², where h = 2πℏ is Planck's constant. Contrary to the integer quantum Hall effect, this quantized response is found to be nonlinear with respect to the strength of the driving field, and it explicitly involves interband transitions. We investigate the possibility of probing this phenomenon in ultracold gases and highlight the crucial role played by edge states in this effect. We extend our results to 3D lattices, establishing a link between depletion rates and the nonlinear photogalvanic effect predicted for Weyl semimetals. The quantized circular dichroism revealed in this work designates depletion rate measurements as a universal probe for topological order in quantum matter.

  19. Qiang-Dong proper quantization rule and its applications to exactly solvable quantum systems

    NASA Astrophysics Data System (ADS)

    Serrano, F. A.; Gu, Xiao-Yan; Dong, Shi-Hai

    2010-08-01

    We propose the proper quantization rule ∫_{x_A}^{x_B} k(x) dx − ∫_{x_{0A}}^{x_{0B}} k_0(x) dx = nπ, where k(x) = √(2M[E − V(x)])/ℏ. Here x_A and x_B are the two turning points determined by E = V(x), and n is the number of nodes of the wave function ψ(x). We obtain the exact solutions of solvable quantum systems by this rule and find that the energy spectrum of a solvable system can be determined from its ground-state energy alone. The previously complicated and tedious integral calculations involved in the exact quantization rule are greatly simplified. The beauty and simplicity of the rule come from its meaning: whenever the number of nodes of ϕ(x), or the number of nodes of the wave function ψ(x), increases by 1, the momentum integral ∫_{x_A}^{x_B} k(x) dx increases by π. We apply this proper quantization rule to solvable quantum systems such as the one-dimensional harmonic oscillator, the Morse potential and its generalization, the Hulthén potential, the Scarf II potential, the asymmetric trigonometric Rosen-Morse potential, the Pöschl-Teller type potentials, the Rosen-Morse potential, the Eckart potential, the harmonic oscillator in three dimensions, the hydrogen atom, and the Manning-Rosen potential in D dimensions.

  20. Feature Quantization and Pooling for Videos

    DTIC Science & Technology

    2014-05-01

  1. Ghost-gluon vertex in the presence of the Gribov horizon

    NASA Astrophysics Data System (ADS)

    Mintz, B. W.; Palhares, L. F.; Sorella, S. P.; Pereira, A. D.

    2018-02-01

    We consider Yang-Mills theories quantized in the Landau gauge in the presence of the Gribov horizon via the refined Gribov-Zwanziger (RGZ) framework. As the restriction of the gauge path integral to the Gribov region is taken into account, the resulting gauge field propagators display a nontrivial infrared behavior, being very close to the ones observed in lattice gauge field theory simulations. In this work, we explore a higher correlation function in the refined Gribov-Zwanziger theory: the ghost-gluon interaction vertex, at one-loop level. We show explicit compatibility with kinematical constraints, as required by the Ward identities of the theory, and obtain analytical expressions in the limit of vanishing gluon momentum. We find that the RGZ results are nontrivial in the infrared regime, being compatible with lattice Yang-Mills simulations in both SU(2) and SU(3), as well as with solutions from Schwinger-Dyson equations in different truncation schemes, Functional Renormalization Group analysis, and the renormalization group-improved Curci-Ferrari model.

  2. Optimization and quantization in gradient symbol systems: a framework for integrating the continuous and the discrete in cognition.

    PubMed

    Smolensky, Paul; Goldrick, Matthew; Mathis, Donald

    2014-08-01

    Mental representations have continuous as well as discrete, combinatorial properties. For example, while predominantly discrete, phonological representations also vary continuously; this is reflected by gradient effects in instrumental studies of speech production. Can an integrated theoretical framework address both aspects of structure? The framework we introduce here, Gradient Symbol Processing, characterizes the emergence of grammatical macrostructure from the Parallel Distributed Processing microstructure (McClelland, Rumelhart, & The PDP Research Group, 1986) of language processing. The mental representations that emerge, Distributed Symbol Systems, have both combinatorial and gradient structure. They are processed through Subsymbolic Optimization-Quantization, in which an optimization process favoring representations that satisfy well-formedness constraints operates in parallel with a distributed quantization process favoring discrete symbolic structures. We apply a particular instantiation of this framework, λ-Diffusion Theory, to phonological production. Simulations of the resulting model suggest that Gradient Symbol Processing offers a way to unify accounts of grammatical competence with both discrete and continuous patterns in language performance. Copyright © 2013 Cognitive Science Society, Inc.

  3. A Kalman Filter Implementation for Precision Improvement in Low-Cost GPS Positioning of Tractors

    PubMed Central

    Gomez-Gil, Jaime; Ruiz-Gonzalez, Ruben; Alonso-Garcia, Sergio; Gomez-Gil, Francisco Javier

    2013-01-01

    Low-cost GPS receivers provide geodetic positioning information using the NMEA protocol, usually with eight digits for latitude and nine digits for longitude. When these geodetic coordinates are converted into Cartesian coordinates, the positions fit in a quantization grid of some decimeters in size, the dimensions of which vary depending on the point of the terrestrial surface. The aim of this study is to reduce the quantization errors of some low-cost GPS receivers by using a Kalman filter. Kinematic tractor model equations were employed to particularize the filter, which was tuned by applying Monte Carlo techniques to eighteen straight trajectories, to select the covariance matrices that produced the lowest Root Mean Square Error in these trajectories. Filter performance was tested by using straight tractor paths, which were either simulated or real trajectories acquired by a GPS receiver. The results show that the filter can reduce the quantization error in distance by around 43%. Moreover, it reduces the standard deviation of the heading by 75%. Data suggest that the proposed filter can satisfactorily preprocess the low-cost GPS receiver data when used in an assistance guidance GPS system for tractors. It could also be useful to smooth tractor GPS trajectories that are sharpened when the tractor moves over rough terrain. PMID:24217355
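
    A minimal sketch of the idea, assuming a generic constant-velocity Kalman filter rather than the kinematic tractor model of the paper; the time step, noise covariances, and the 0.1 m quantization grid are assumed values chosen only for illustration.

        import numpy as np

        # Constant-velocity Kalman filter smoothing position measurements that sit
        # on a coarse quantization grid (illustrative parameters, not the paper's).
        dt, q, r = 0.2, 1e-3, 0.05**2
        F = np.array([[1, dt], [0, 1]])          # state: [position, velocity]
        H = np.array([[1.0, 0.0]])               # only position is measured
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        R = np.array([[r]])

        x = np.zeros((2, 1)); P = np.eye(2)
        true_pos = np.cumsum(np.full(200, 0.5 * dt))     # tractor moving at 0.5 m/s
        meas = np.round(true_pos / 0.1) * 0.1            # 0.1 m quantization grid (assumed)

        filtered = []
        for z in meas:
            x = F @ x; P = F @ P @ F.T + Q               # predict
            y = np.array([[z]]) - H @ x                  # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
            x = x + K @ y; P = (np.eye(2) - K @ H) @ P   # update
            filtered.append(x[0, 0])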

  4. A 1 GHz sample rate, 256-channel, 1-bit quantization, CMOS, digital correlator chip

    NASA Technical Reports Server (NTRS)

    Timoc, C.; Tran, T.; Wongso, J.

    1992-01-01

    This paper describes the development of a digital correlator chip with the following features: 1 Giga-sample/second; 256 channels; 1-bit quantization; 32-bit counters providing up to 4 seconds integration time at 1 GHz; and very low power dissipation per channel. The improvements in the performance-to-cost ratio of the digital correlator chip are achieved with a combination of systolic architecture, novel pipelined differential logic circuits, and standard 1.0 micron CMOS process.
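
    The hardware implements 1-bit correlation with gates and counters; the software sketch below only reproduces the arithmetic, using the standard Van Vleck correction for Gaussian signals. The signal parameters are assumed.

        import numpy as np

        # Software illustration of 1-bit (sign) correlation of two Gaussian signals.
        rng = np.random.default_rng(0)
        n = 1_000_000
        rho = 0.3                                          # true correlation coefficient
        x = rng.standard_normal(n)
        y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

        bx, by = np.sign(x), np.sign(y)                    # 1-bit quantization
        r1 = np.mean(bx * by)                              # 1-bit correlation estimate
        rho_est = np.sin(np.pi * r1 / 2)                   # Van Vleck correction (Gaussian inputs)
        print(rho_est)                                     # close to 0.3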

  5. The canonical quantization of chaotic maps on the torus

    NASA Astrophysics Data System (ADS)

    Rubin, Ron Shai

    In this thesis, a quantization method for classical maps on the torus is presented. The quantum algebra of observables is defined as the quantization of measurable functions on the torus with generators exp(2πix) and exp(2πip). The Hilbert space we use remains the infinite-dimensional L²(ℝ, dx). The dynamics is given by a unitary quantum propagator such that as ℏ → 0, the classical dynamics is recovered. We construct such a quantization for the Kronecker map, the cat map, the baker's map, the kick map, and the Harper map. For the cat map, we find for the propagator on the plane the same integral kernel conjectured in (HB) using semiclassical methods. We also define a quantum 'integral over phase space' as a trace over the quantum algebra. Using this definition, we proceed to define quantum ergodicity and mixing for maps on the torus. We prove that the quantum cat map and Kronecker map are both ergodic, but only the cat map is mixing, true to its classical origins. For Planck's constant satisfying the integrality condition h = 1/N, with N ∈ ℤ₊, we construct an explicit isomorphism between L²(ℝ, dx) and the Hilbert space of sections of an N-dimensional vector bundle over a θ-torus T² of boundary conditions. The basis functions are distributions in L²(ℝ, dx), given by an infinite comb of Dirac δ-functions. In Bargmann space these distributions take the form of Jacobi ϑ-functions. Transformations from the position to the momentum representation can be implemented via a finite N-dimensional discrete Fourier transform. With the θ-torus, we provide a connection between the finite-dimensional quantum maps given in the physics literature and the canonical quantization presented here, which is found in the language of pseudo-differential operators elsewhere in the mathematics literature. Specifically, at a fixed point of the dynamics on the θ-torus, we recover a finite-dimensional matrix propagator. We present this connection explicitly for several examples.
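
    For orientation only, here is the classical Arnold cat map on the unit torus, the prototypical chaotic map whose quantization is discussed above; this sketch does not reproduce the quantum construction, and the initial point and step count are arbitrary.

        import numpy as np

        # Classical Arnold cat map: apply the hyperbolic matrix A (det A = 1) and
        # wrap the result back onto the unit torus.
        A = np.array([[2, 1], [1, 1]])

        def cat_map(point, steps=10):
            pts = [np.asarray(point, dtype=float)]
            for _ in range(steps):
                pts.append((A @ pts[-1]) % 1.0)
            return np.array(pts)

        print(cat_map([0.1, 0.2]))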

  6. Probing topology by “heating”: Quantized circular dichroism in ultracold atoms

    PubMed Central

    Tran, Duc Thanh; Dauphin, Alexandre; Grushin, Adolfo G.; Zoller, Peter; Goldman, Nathan

    2017-01-01

    We reveal an intriguing manifestation of topology, which appears in the depletion rate of topological states of matter in response to an external drive. This phenomenon is presented by analyzing the response of a generic two-dimensional (2D) Chern insulator subjected to a circular time-periodic perturbation. Because of the system’s chiral nature, the depletion rate is shown to depend on the orientation of the circular shake; taking the difference between the rates obtained from two opposite orientations of the drive, and integrating over a proper drive-frequency range, provides a direct measure of the topological Chern number (ν) of the populated band: This “differential integrated rate” is directly related to the strength of the driving field through the quantized coefficient η0 = ν/ℏ², where h = 2πℏ is Planck’s constant. Contrary to the integer quantum Hall effect, this quantized response is found to be nonlinear with respect to the strength of the driving field, and it explicitly involves interband transitions. We investigate the possibility of probing this phenomenon in ultracold gases and highlight the crucial role played by edge states in this effect. We extend our results to 3D lattices, establishing a link between depletion rates and the nonlinear photogalvanic effect predicted for Weyl semimetals. The quantized circular dichroism revealed in this work designates depletion rate measurements as a universal probe for topological order in quantum matter. PMID:28835930

  7. Dirac’s magnetic monopole and the Kontsevich star product

    NASA Astrophysics Data System (ADS)

    Soloviev, M. A.

    2018-03-01

    We examine relationships between various quantization schemes for an electrically charged particle in the field of a magnetic monopole. Quantization maps are defined in invariant geometrical terms, appropriate to the case of nontrivial topology, and are constructed for two operator representations. In the first setting, the quantum operators act on the Hilbert space of sections of a nontrivial complex line bundle associated with the Hopf bundle, whereas the second approach uses instead a quaternionic Hilbert module of sections of a trivial quaternionic line bundle. We show that these two quantizations are naturally related by a bundle morphism and, as a consequence, induce the same phase-space star product. We obtain explicit expressions for the integral kernels of star-products corresponding to various operator orderings and calculate their asymptotic expansions up to the third order in the Planck constant ℏ. We also show that the differential form of the magnetic Weyl product corresponding to the symmetric ordering agrees completely with the Kontsevich formula for deformation quantization of Poisson structures and can be represented by Kontsevich’s graphs.

  8. An Implementation Method of the Fractional-Order PID Control System Considering the Memory Constraint and its Application to the Temperature Control of Heat Plate

    NASA Astrophysics Data System (ADS)

    Sasano, Koji; Okajima, Hiroshi; Matsunaga, Nobutomo

    Recently, fractional-order PID (FO-PID) control, an extension of PID control, has attracted attention. Because the FO-PID controller requires a high-order filter, it is difficult to realize on a digital computer with limited memory. For the implementation of FO-PID, approximations of the fractional integrator and differentiator are required. The short memory principle (SMP) is one of the effective approximation methods; however, it has the disadvantage that the filter approximated with the SMP cannot eliminate the steady-state error. To address this problem, we introduce a distributed implementation of the integrator and a dynamic quantizer to make efficient use of the permissible memory. The objective of this study is to clarify how to implement an accurate FO-PID controller with limited memory. In this paper, we propose an implementation method for FO-PID under a memory constraint using a dynamic quantizer, and we examine the trade-off between the approximation of the fractional elements and the quantized data size so that the response stays close to that of the ideal FO-PID. The effectiveness of the proposed method is evaluated by a numerical example and by experiments on the temperature control of a heat plate.
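
    A generic sketch of the short memory principle applied to a Grünwald-Letnikov fractional derivative, the kind of approximation the abstract refers to; this is not the authors' controller, and the order, step size, and memory length are assumed values.

        import numpy as np

        # Grünwald-Letnikov fractional derivative of order alpha, truncated with the
        # short memory principle (SMP): only the most recent samples are kept.
        def gl_weights(alpha, n_terms):
            w = np.empty(n_terms)
            w[0] = 1.0
            for j in range(1, n_terms):
                w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)   # recursive binomial weights
            return w

        def fractional_derivative(f, h, alpha, memory_length):
            n_terms = int(memory_length / h)                   # SMP truncation
            w = gl_weights(alpha, n_terms)
            out = np.zeros_like(f)
            for k in range(len(f)):
                m = min(k + 1, n_terms)
                out[k] = np.dot(w[:m], f[k::-1][:m]) / h**alpha
            return out

        t = np.linspace(0, 5, 501)
        print(fractional_derivative(np.sin(t), t[1] - t[0], 0.5, memory_length=2.0)[:5])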

  9. Quantized Lax Equations and Their Solutions

    NASA Astrophysics Data System (ADS)

    Jurčo, B.; Schlieker, M.

    Integrable systems on quantum groups are investigated. The Heisenberg equations possessing the Lax form are solved in terms of the solution to the factorization problem on the corresponding quantum group.

  10. Perspectives of Light-Front Quantized Field Theory: Some New Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srivastava, Prem P.

    1999-08-13

    A review of some basic topics in the light-front (LF) quantization of relativistic field theory is made. It is argued that the LF quantization is equally appropriate as the conventional one and that they lead, assuming the microcausality principle, to the same physical content. This is confirmed in the studies on the LF of the spontaneous symmetry breaking (SSB), of the degenerate vacua in the Schwinger model (SM) and Chiral SM (CSM), of the chiral boson theory, and of QCD in covariant gauges, among others. The discussion on the LF is more economical and more transparent than that found in the conventional equal-time quantized theory. The removal of the constraints on the LF phase space by following the Dirac method, in fact, results in a substantially reduced number of independent dynamical variables. Consequently, the descriptions of the physical Hilbert space and the vacuum structure, for example, become more tractable. In the context of the Dyson-Wick perturbation theory the relevant propagators in the front form theory are causal. The Wick rotation can then be performed to employ the Euclidean space integrals in momentum space. The lack of manifest covariance becomes tractable, and still more so if we employ, as discussed in the text, the Fourier transform of the fermionic field based on a special construction of the LF spinor. The fact that the hyperplanes x^± = 0 constitute characteristic surfaces of the hyperbolic partial differential equation is found irrelevant in the quantized theory; it seems sufficient to quantize the theory on one of the characteristic hyperplanes.

  11. Operator Ordering and Classical Soliton Path in Two-Dimensional N = 2 Supersymmetry with KÄHLER Potential

    NASA Astrophysics Data System (ADS)

    Motoyui, Nobuyuki; Yamada, Mitsuru

    We investigate a two-dimensional N = 2 supersymmetric model which consists of n chiral superfields with a Kähler potential. When defining quantum observables, one is always faced with the operator ordering problem. Among the various ways to fix the operator order, we rely upon supersymmetry. We demonstrate that the correct operator order is obtained by requiring closure of the super-Poincaré algebra when carrying out the canonical Dirac-bracket quantization. This is shown to remain true when the supersymmetry algebra acquires a central extension in the presence of a topological soliton. It is also shown that the path of the soliton is a straight line in the complex plane of the superpotential W and that a triangular mass inequality holds. One half of the supersymmetry is broken by the presence of the soliton.

  12. Nonperturbative quantization of the electroweak model's electrodynamic sector

    NASA Astrophysics Data System (ADS)

    Fry, M. P.

    2015-04-01

    Consider the Euclidean functional integral representation of any physical process in the electroweak model. Integrating out the fermion degrees of freedom introduces 24 fermion determinants. These multiply the Gaussian functional measures of the Maxwell, Z, W, and Higgs fields to give an effective functional measure. Suppose the functional integral over the Maxwell field is attempted first. This paper is concerned with the large amplitude behavior of the Maxwell effective measure. It is assumed that the large amplitude variation of this measure is insensitive to the presence of the Z, W, and H fields; they are assumed to be a subdominant perturbation of the large amplitude Maxwell sector. Accordingly, we need only examine the large amplitude variation of a single QED fermion determinant. To facilitate this the Schwinger proper time representation of this determinant is decomposed into a sum of three terms. The advantage of this is that the separate terms can be nonperturbatively estimated for a measurable class of large amplitude random fields in four dimensions. It is found that the QED fermion determinant grows faster than exp[c e² ∫d⁴x F_{μν}²], c > 0, in the absence of zero mode supporting random background potentials. This raises doubt on whether the QED fermion determinant is integrable with any Gaussian measure whose support does not include zero mode supporting potentials. Including zero mode supporting background potentials can result in a decaying exponential growth of the fermion determinant. This is prima facie evidence that Maxwellian zero modes are necessary for the nonperturbative quantization of QED and, by implication, for the nonperturbative quantization of the electroweak model.

  13. Single-user MIMO versus multi-user MIMO in distributed antenna systems with limited feedback

    NASA Astrophysics Data System (ADS)

    Schwarz, Stefan; Heath, Robert W.; Rupp, Markus

    2013-12-01

    This article investigates the performance of cellular networks employing distributed antennas in addition to the central antennas of the base station. Distributed antennas are likely to be implemented using remote radio units, which is enabled by a low latency and high bandwidth dedicated link to the base station. This facilitates coherent transmission from potentially all available antennas at the same time. Such distributed antenna system (DAS) is an effective way to deal with path loss and large-scale fading in cellular systems. DAS can apply precoding across multiple transmission points to implement single-user MIMO (SU-MIMO) and multi-user MIMO (MU-MIMO) transmission. The throughput performance of various SU-MIMO and MU-MIMO transmission strategies is investigated in this article, employing a Long-Term evolution (LTE) standard compliant simulation framework. The previously theoretically established cell-capacity improvement of MU-MIMO in comparison to SU-MIMO in DASs is confirmed under the practical constraints imposed by the LTE standard, even under the assumption of imperfect channel state information (CSI) at the base station. Because practical systems will use quantized feedback, the performance of different CSI feedback algorithms for DASs is investigated. It is shown that significant gains in the CSI quantization accuracy and in the throughput of especially MU-MIMO systems can be achieved with relatively simple quantization codebook constructions that exploit the available temporal correlation and channel gain differences.
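
    A minimal sketch of limited-feedback CSI quantization of the kind mentioned at the end of the abstract: the receiver selects the codebook entry with the largest correlation to its channel direction and feeds back only the index. The random codebook and channel here are assumptions for illustration, not the LTE codebooks used in the article.

        import numpy as np

        rng = np.random.default_rng(1)
        n_tx, bits = 4, 4
        # Random unit-norm codebook (assumed); real systems use standardized codebooks.
        codebook = rng.standard_normal((2**bits, n_tx)) + 1j * rng.standard_normal((2**bits, n_tx))
        codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

        h = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)
        h_dir = h / np.linalg.norm(h)

        corr = np.abs(codebook.conj() @ h_dir)      # |c^H h| for every codeword
        index = int(np.argmax(corr))                # feedback: just this index
        print(index, corr[index])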

  14. A constrained joint source/channel coder design and vector quantization of nonstationary sources

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.

    1993-01-01

    The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.
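
    For context, a generic LBG-style (k-means) codebook training loop over 4x4 image blocks, which is the baseline VQ the report builds on; this is not the recursively indexed scalar quantization (RISQ) extension itself, and the random stand-in image and codebook size are assumed.

        import numpy as np

        rng = np.random.default_rng(0)
        image = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in image

        # Split the image into 4x4 blocks and flatten each into a 16-dim vector.
        blocks = image.reshape(16, 4, 16, 4).transpose(0, 2, 1, 3).reshape(-1, 16)
        k = 32                                                      # codebook size (assumed)
        codebook = blocks[rng.choice(len(blocks), k, replace=False)].copy()

        for _ in range(20):                                         # Lloyd iterations
            d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(1)                                    # nearest codeword
            for j in range(k):
                members = blocks[labels == j]
                if len(members):
                    codebook[j] = members.mean(0)                   # update centroid

        compressed = labels            # one index per block instead of 16 pixels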

  15. Can chaos be observed in quantum gravity?

    NASA Astrophysics Data System (ADS)

    Dittrich, Bianca; Höhn, Philipp A.; Koslowski, Tim A.; Nelson, Mike I.

    2017-06-01

    Full general relativity is almost certainly 'chaotic'. We argue that this entails a notion of non-integrability: a generic general relativistic model, at least when coupled to cosmologically interesting matter, likely possesses neither differentiable Dirac observables nor a reduced phase space. It follows that the standard notion of observable has to be extended to include non-differentiable or even discontinuous generalized observables. These cannot carry Poisson-algebraic structures and do not admit a standard quantization; one thus faces a quantum representation problem of gravitational observables. This has deep consequences for a quantum theory of gravity, which we investigate in a simple model for a system with Hamiltonian constraint that fails to be completely integrable. We show that basing the quantization on standard topology precludes a semiclassical limit and can even prohibit any solutions to the quantum constraints. Our proposed solution to this problem is to refine topology such that a complete set of Dirac observables becomes continuous. In the toy model, it turns out that a refinement to a polymer-type topology, as e.g. used in loop gravity, is sufficient. Basing quantization of the toy model on this finer topology, we find a complete set of quantum Dirac observables and a suitable semiclassical limit. This strategy is applicable to realistic candidate theories of quantum gravity and thereby suggests a solution to a long-standing problem which implies ramifications for the very concept of quantization. Our work reveals a qualitatively novel facet of chaos in physics and opens up a new avenue of research on chaos in gravity which hints at deep insights into the structure of quantum gravity.

  16. Integral transforms of the quantum mechanical path integral: Hit function and path-averaged potential.

    PubMed

    Edwards, James P; Gerber, Urs; Schubert, Christian; Trejo, Maria Anabel; Weber, Axel

    2018-04-01

    We introduce two integral transforms of the quantum mechanical transition kernel that represent physical information about the path integral. These transforms can be interpreted as probability distributions on particle trajectories measuring respectively the relative contribution to the path integral from paths crossing a given spatial point (the hit function) and the likelihood of values of the line integral of the potential along a path in the ensemble (the path-averaged potential).

  17. Integral transforms of the quantum mechanical path integral: Hit function and path-averaged potential

    NASA Astrophysics Data System (ADS)

    Edwards, James P.; Gerber, Urs; Schubert, Christian; Trejo, Maria Anabel; Weber, Axel

    2018-04-01

    We introduce two integral transforms of the quantum mechanical transition kernel that represent physical information about the path integral. These transforms can be interpreted as probability distributions on particle trajectories measuring respectively the relative contribution to the path integral from paths crossing a given spatial point (the hit function) and the likelihood of values of the line integral of the potential along a path in the ensemble (the path-averaged potential).

  18. Homing by path integration when a locomotion trajectory crosses itself.

    PubMed

    Yamamoto, Naohide; Meléndez, Jayleen A; Menzies, Derek T

    2014-01-01

    Path integration is a process with which navigators derive their current position and orientation by integrating self-motion signals along a locomotion trajectory. It has been suggested that path integration becomes disproportionately erroneous when the trajectory crosses itself. However, there is a possibility that this previous finding was confounded by effects of the length of a traveled path and the amount of turns experienced along the path, two factors that are known to affect path integration performance. The present study was designed to investigate whether the crossover of a locomotion trajectory truly increases errors of path integration. In an experiment, blindfolded human navigators were guided along four paths that varied in their lengths and turns, and attempted to walk directly back to the beginning of the paths. Only one of the four paths contained a crossover. Results showed that errors yielded from the path containing the crossover were not always larger than those observed in other paths, and the errors were attributed solely to the effects of longer path lengths or greater degrees of turns. These results demonstrated that path crossover does not always cause significant disruption in path integration processes. Implications of the present findings for models of path integration are discussed.

  19. Optimization and Quantization in Gradient Symbol Systems: A Framework for Integrating the Continuous and the Discrete in Cognition

    ERIC Educational Resources Information Center

    Smolensky, Paul; Goldrick, Matthew; Mathis, Donald

    2014-01-01

    Mental representations have continuous as well as discrete, combinatorial properties. For example, while predominantly discrete, phonological representations also vary continuously; this is reflected by gradient effects in instrumental studies of speech production. Can an integrated theoretical framework address both aspects of structure? The…

  20. Room-Temperature Quantum Ballistic Transport in Monolithic Ultrascaled Al-Ge-Al Nanowire Heterostructures.

    PubMed

    Sistani, Masiar; Staudinger, Philipp; Greil, Johannes; Holzbauer, Martin; Detz, Hermann; Bertagnolli, Emmerich; Lugstein, Alois

    2017-08-09

    Conductance quantization at room temperature is a key requirement for utilizing ballistic transport in, e.g., high-performance, low-power-dissipation transistors operating at the upper limit of "on"-state conductance, or in multivalued logic gates. So far, studies of conductance quantization have been restricted to high-mobility materials at ultralow temperatures and have required sophisticated nanostructure formation techniques and precise lithography for contact formation. Utilizing a thermally induced exchange reaction between single-crystalline Ge nanowires and Al pads, we achieved monolithic Al-Ge-Al NW heterostructures with ultrasmall Ge segments contacted by self-aligned, quasi-one-dimensional crystalline Al leads. By integration in electrostatically modulated back-gated field-effect transistors, we demonstrate the first experimental observation of room-temperature quantum ballistic transport in Ge, favorable for integration in complementary metal-oxide-semiconductor platform technology.

  1. The topological particle and Morse theory

    NASA Astrophysics Data System (ADS)

    Rogers, Alice

    2000-09-01

    Canonical BRST quantization of the topological particle defined by a Morse function h is described. Stochastic calculus, using Brownian paths which implement the WKB method in a new way providing rigorous tunnelling results even in curved space, is used to give an explicit and simple expression for the matrix elements of the evolution operator for the BRST Hamiltonian. These matrix elements lead to a representation of the manifold cohomology in terms of critical points of h along lines developed by Witten (Witten E 1982 J. Diff. Geom. 17 661-92).

  2. Combining path integration and remembered landmarks when navigating without vision.

    PubMed

    Kalia, Amy A; Schrater, Paul R; Legge, Gordon E

    2013-01-01

    This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information.

  3. Combining Path Integration and Remembered Landmarks When Navigating without Vision

    PubMed Central

    Kalia, Amy A.; Schrater, Paul R.; Legge, Gordon E.

    2013-01-01

    This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information. PMID:24039742

  4. Path integration: effect of curved path complexity and sensory system on blindfolded walking.

    PubMed

    Koutakis, Panagiotis; Mukherjee, Mukul; Vallabhajosula, Srikant; Blanke, Daniel J; Stergiou, Nicholas

    2013-02-01

    Path integration refers to the ability to integrate continuous information of the direction and distance traveled by the system relative to the origin. Previous studies have investigated path integration through blindfolded walking along simple paths such as straight line and triangles. However, limited knowledge exists regarding the role of path complexity in path integration. Moreover, little is known about how information from different sensory input systems (like vision and proprioception) contributes to accurate path integration. The purpose of the current study was to investigate how sensory information and curved path complexity affect path integration. Forty blindfolded participants had to accurately reproduce a curved path and return to the origin. They were divided into four groups that differed in the curved path, circle (simple) or figure-eight (complex), and received either visual (previously seen) or proprioceptive (previously guided) information about the path before they reproduced it. The dependent variables used were average trajectory error, walking speed, and distance traveled. The results indicated that (a) both groups that walked on a circular path and both groups that received visual information produced greater accuracy in reproducing the path. Moreover, the performance of the group that received proprioceptive information and later walked on a figure-eight path was less accurate than their corresponding circular group. The groups that had the visual information also walked faster compared to the group that had proprioceptive information. Results of the current study highlight the roles of different sensory inputs while performing blindfolded walking for path integration. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-05-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  6. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-01-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  7. Probing the Topology of Density Matrices

    NASA Astrophysics Data System (ADS)

    Bardyn, Charles-Edouard; Wawer, Lukas; Altland, Alexander; Fleischhauer, Michael; Diehl, Sebastian

    2018-01-01

    The mixedness of a quantum state is usually seen as an adversary to topological quantization of observables. For example, exact quantization of the charge transported in a so-called Thouless adiabatic pump is lifted at any finite temperature in symmetry-protected topological insulators. Here, we show that certain directly observable many-body correlators preserve the integrity of topological invariants for mixed Gaussian quantum states in one dimension. Our approach relies on the expectation value of the many-body momentum-translation operator and leads to a physical observable—the "ensemble geometric phase" (EGP)—which represents a bona fide geometric phase for mixed quantum states, in the thermodynamic limit. In cyclic protocols, the EGP provides a topologically quantized observable that detects encircled spectral singularities ("purity-gap" closing points) of density matrices. While we identify the many-body nature of the EGP as a key ingredient, we propose a conceptually simple, interferometric setup to directly measure the latter in experiments with mesoscopic ensembles of ultracold atoms.

  8. Carbon nanotube-clamped metal atomic chain

    PubMed Central

    Tang, Dai-Ming; Yin, Li-Chang; Li, Feng; Liu, Chang; Yu, Wan-Jing; Hou, Peng-Xiang; Wu, Bo; Lee, Young-Hee; Ma, Xiu-Liang; Cheng, Hui-Ming

    2010-01-01

    Metal atomic chain (MAC) is an ultimate one-dimensional structure with unique physical properties, such as quantized conductance, colossal magnetic anisotropy, and quantized magnetoresistance. Therefore, MACs show great potential as possible components of nanoscale electronic and spintronic devices. However, MACs are usually suspended between two macroscale metallic electrodes; hence obvious technical barriers exist in the interconnection and integration of MACs. Here we report a carbon nanotube (CNT)-clamped MAC, where CNTs play the roles of both nanoconnector and electrodes. This nanostructure is prepared by in situ machining a metal-filled CNT, including peeling off carbon shells by spatially and elementally selective electron beam irradiation and further elongating the exposed metal nanorod. The microstructure and formation process of this CNT-clamped MAC are explored by both transmission electron microscopy observations and theoretical simulations. First-principles calculations indicate that strong covalent bonds are formed between the CNT and MAC. The electrical transport property of the CNT-clamped MAC was experimentally measured, and quantized conductance was observed. PMID:20427743

  9. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm x 5.96 mm, which includes an 80 x 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
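
    A toy software sketch of the predict-then-Golomb-Rice-encode idea described above; the previous-pixel predictor, the Rice parameter k, and the sample row are assumptions for illustration, whereas the chip performs these steps in mixed analog/digital hardware at the column level.

        # Map signed residuals to non-negative integers, then Rice-encode them.
        def zigzag(v):
            return 2 * v if v >= 0 else -2 * v - 1

        def rice_encode(value, k):
            q, r = value >> k, value & ((1 << k) - 1)
            return "1" * q + "0" + format(r, f"0{k}b")   # unary quotient, k-bit remainder

        row = [52, 55, 61, 66, 70, 61, 64, 73]           # one image row (assumed values)
        k = 2
        bits = format(row[0], "08b")                     # first pixel sent uncoded
        prev = row[0]
        for pixel in row[1:]:
            residual = pixel - prev                      # simple previous-pixel predictor
            bits += rice_encode(zigzag(residual), k)
            prev = pixel
        print(bits, len(bits), "bits vs", 8 * len(row), "raw")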

  10. Quantization of an electromagnetic field in two-dimensional photonic structures based on the scattering matrix formalism ( S-quantization)

    NASA Astrophysics Data System (ADS)

    Ivanov, K. A.; Nikolaev, V. V.; Gubaydullin, A. R.; Kaliteevski, M. A.

    2017-10-01

    Based on the scattering matrix formalism, we have developed a method of quantization of an electromagnetic field in two-dimensional photonic nanostructures (S-quantization in the two-dimensional case). In this method, the fields at the boundaries of the quantization box are expanded into a Fourier series and are related to each other by the scattering matrix of the system, which is the product of matrices describing the propagation of plane waves in empty regions of the quantization box and the scattering matrix of the photonic structure (or an arbitrary inhomogeneity). The quantization condition (similarly to the one-dimensional case) is formulated as follows: the eigenvalues of the scattering matrix are equal to unity, which corresponds to the fact that the set of waves that are incident on the structure (components of the expansion into the Fourier series) is equal to the set of waves that travel away from the structure (outgoing waves). The coefficients of the matrix of scattering through the inhomogeneous structure have been calculated using the following procedure: the structure is divided into parallel layers such that the permittivity in each layer varies only along the axis that is perpendicular to the layers. Using the Fourier transform, the Maxwell equations have been written in the form of a matrix that relates the Fourier components of the electric field at the boundaries of neighboring layers. The product of these matrices is the transfer matrix in the basis of the Fourier components of the electric field. Represented in a block form, it is composed of matrices that contain the reflection and transmission coefficients for the Fourier components of the field, which, in turn, constitute the scattering matrix. The developed method considerably simplifies the calculation scheme for the analysis of the behavior of the electromagnetic field in structures with a two-dimensional inhomogeneity. In addition, this method makes it possible to obviate difficulties that arise in the analysis of the Purcell effect because of the divergence of the integral describing the effective volume of the mode in open systems.

  11. Quantization of the Kadomtsev-Petviashvili equation

    NASA Astrophysics Data System (ADS)

    Kozlowski, K.; Sklyanin, E. K.; Torrielli, A.

    2017-08-01

    We propose a quantization of the Kadomtsev-Petviashvili equation on a cylinder equivalent to an infinite system of nonrelativistic one-dimensional bosons with the masses m = 1, 2, .... The Hamiltonian is Galilei-invariant and includes the split and merge terms Ψ^†_{m_1}Ψ^†_{m_2}Ψ_{m_1+m_2} and Ψ^†_{m_1+m_2}Ψ_{m_1}Ψ_{m_2} for all combinations of particles with masses m_1, m_2, and m_1 + m_2, for a special choice of coupling constants. We construct the Bethe eigenfunctions for the model and verify the consistency of the coordinate Bethe ansatz, and hence the quantum integrability of the model, up to the mass M = 8 sector.

  12. Quantization of the Szekeres system

    NASA Astrophysics Data System (ADS)

    Paliathanasis, A.; Zampeli, Adamantia; Christodoulakis, T.; Mustafa, M. T.

    2018-06-01

    We study the quantum corrections to the Szekeres system in the context of canonical quantization in the presence of symmetries. We start from an effective point-like Lagrangian with two integrals of motion, one corresponding to the Hamiltonian and the other to a second-rank Killing tensor. Imposing their quantum version on the wave function results in a solution which is then interpreted in the context of Bohmian mechanics. In this semiclassical approach, it is shown that there are no quantum corrections; thus the classical trajectories of the Szekeres system are not affected at this level. Finally, we define a probability function which shows that a stationary surface of the probability corresponds to a classical exact solution.

  13. Path Integration on the Upper Half-Plane

    NASA Astrophysics Data System (ADS)

    Kubo, R.

    1987-10-01

    Feynman's path integral is considered on the Poincaré upper half-plane. It is shown that the fundamental solution to the heat equation ∂f/∂t = Δ_H f can be expressed in terms of a path integral. A simple relation between the path integral and the Selberg trace formula is discussed briefly.
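
    For reference, the standard hyperbolic metric and Laplacian on the Poincaré upper half-plane {(x, y) : y > 0} that enter this heat equation are (textbook definitions, added here for context):

        \[
          ds^{2} = \frac{dx^{2}+dy^{2}}{y^{2}}, \qquad
          \Delta_{H} = y^{2}\!\left(\frac{\partial^{2}}{\partial x^{2}}
                                    +\frac{\partial^{2}}{\partial y^{2}}\right), \qquad
          \frac{\partial f}{\partial t} = \Delta_{H} f .
        \]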

  14. Thermal Simulations, Open Boundary Conditions and Switches

    NASA Astrophysics Data System (ADS)

    Burnier, Yannis; Florio, Adrien; Kaczmarek, Olaf; Mazur, Lukas

    2018-03-01

    SU(N) gauge theories on compact spaces have a non-trivial vacuum structure characterized by a countable set of topological sectors and their topological charge. In lattice simulations, every topological sector needs to be explored a number of times which reflects its weight in the path integral. Current lattice simulations are impeded by the so-called freezing of the topological charge problem. As the continuum is approached, energy barriers between topological sectors become well defined and the simulations get trapped in a given sector. A possible way out was introduced by Lüscher and Schaefer using open boundary conditions in the time extent. However, this solution cannot be used for thermal simulations, where the time direction is required to be periodic. In these proceedings, we present results obtained using open boundary conditions in space, at non-zero temperature. With these conditions, the topological charge is not quantized and the topological barriers are lifted. A downside of this method is the strong finite-size effects introduced by the boundary conditions. We also present some exploratory results which show how these conditions could be used on an algorithmic level to reshuffle the system and generate periodic configurations with non-zero topological charge.

  15. Path integration of head direction: updating a packet of neural activity at the correct speed using axonal conduction delays.

    PubMed

    Walters, Daniel; Stringer, Simon; Rolls, Edmund

    2013-01-01

    The head direction cell system is capable of accurately updating its current representation of head direction in the absence of visual input. This is known as the path integration of head direction. An important question is how the head direction cell system learns to perform accurate path integration of head direction. In this paper we propose a model of velocity path integration of head direction in which the natural time delay of axonal transmission between a linked continuous attractor network and competitive network acts as a timing mechanism to facilitate the correct speed of path integration. The model effectively learns a "look-up" table for the correct speed of path integration. In simulation, we show that the model is able to successfully learn two different speeds of path integration across two different axonal conduction delays, and without the need to alter any other model parameters. An implication of this model is that, by learning look-up tables for each speed of path integration, the model should exhibit a degree of robustness to damage. In simulations, we show that the speed of path integration is not significantly affected by degrading the network through removing a proportion of the cells that signal rotational velocity.

  16. Path Integration of Head Direction: Updating a Packet of Neural Activity at the Correct Speed Using Axonal Conduction Delays

    PubMed Central

    Walters, Daniel; Stringer, Simon; Rolls, Edmund

    2013-01-01

    The head direction cell system is capable of accurately updating its current representation of head direction in the absence of visual input. This is known as the path integration of head direction. An important question is how the head direction cell system learns to perform accurate path integration of head direction. In this paper we propose a model of velocity path integration of head direction in which the natural time delay of axonal transmission between a linked continuous attractor network and competitive network acts as a timing mechanism to facilitate the correct speed of path integration. The model effectively learns a “look-up” table for the correct speed of path integration. In simulation, we show that the model is able to successfully learn two different speeds of path integration across two different axonal conduction delays, and without the need to alter any other model parameters. An implication of this model is that, by learning look-up tables for each speed of path integration, the model should exhibit a degree of robustness to damage. In simulations, we show that the speed of path integration is not significantly affected by degrading the network through removing a proportion of the cells that signal rotational velocity. PMID:23526976

  17. Memory-efficient decoding of LDPC codes

    NASA Technical Reports Server (NTRS)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

    We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer with less than 0.1 dB quantization loss.
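
    A rough sketch of what message quantization in a decoder looks like: log-likelihood ratios are mapped to a small set of reconstruction levels before being exchanged between nodes. The 3-bit non-uniform levels below are assumed placeholders, not the mutual-information-optimized levels of this work, and a min-sum check-node update stands in for the full belief-propagation rule.

        import numpy as np

        levels = np.array([-6.0, -3.5, -1.8, -0.6, 0.6, 1.8, 3.5, 6.0])   # 3-bit levels (assumed)

        def quantize_llr(llr):
            idx = np.abs(llr[:, None] - levels[None, :]).argmin(axis=1)   # nearest level
            return levels[idx]

        def check_node_minsum(msgs):
            """Min-sum combination of already-quantized incoming messages."""
            sign = np.prod(np.sign(msgs))
            return sign * np.min(np.abs(msgs))

        incoming = quantize_llr(np.array([2.3, -0.9, 4.1, -5.2]))
        print(incoming, check_node_minsum(incoming))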

  18. Analog-to-digital conversion as a source of drifts in displacements derived from digital recordings of ground acceleration

    USGS Publications Warehouse

    Boore, D.M.

    2003-01-01

    Displacements obtained from double integration of digitally recorded ground accelerations often show drifts much larger than those expected for the true ground displacements. These drifts might be due to many things, including dynamic elastic ground tilt, inelastic ground deformation, hysteresis in the instruments, and cross feed due to misalignment of nominally orthogonal sensors. This article shows that even if those effects were not present, the analog-to-digital conversion (ADC) process can produce apparent "pulses" and offsets in the acceleration baseline if the ground motion is slowly varying compared with the quantization level of the digitization. Such slowly varying signals can be produced by constant offsets that do not coincide with a quantization level and by near- and intermediate-field terms in the wave field radiated from earthquakes. Double integration of these apparent pulses and offsets leads to drifts in the displacements similar to those found in processing real recordings. These effects decrease in importance as the resolution of the ADC process increases.
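
    A minimal numerical illustration of the mechanism described above: a slowly varying acceleration smaller than one quantization step is lost by the ADC, and double integration then drifts away from the true displacement. The sampling rate, quantization step, and offset are assumed values.

        import numpy as np

        dt, q = 0.01, 0.005                            # 100 sps, 0.005 m/s^2 per LSB (assumed)
        t = np.arange(0, 60, dt)
        true_acc = 0.002 * np.ones_like(t)             # signal below one quantization step
        recorded = np.round(true_acc / q) * q          # ADC rounds it to zero

        vel_true = np.cumsum(true_acc) * dt            # double integration of both records
        vel_rec = np.cumsum(recorded) * dt
        disp_true = np.cumsum(vel_true) * dt
        disp_rec = np.cumsum(vel_rec) * dt
        print(disp_true[-1] - disp_rec[-1])            # displacement drift after one minute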

  19. Vector quantizer based on brightness maps for image compression with the polynomial transform

    NASA Astrophysics Data System (ADS)

    Escalante-Ramirez, Boris; Moreno-Gutierrez, Mauricio; Silvan-Cardenas, Jose L.

    2002-11-01

    We present a vector quantization scheme acting on brightness fields, based on distance/distortion criteria that correspond to psycho-visual aspects of perception. These criteria quantify the sensorial distortion between vectors that represent either portions of a digital image or, alternatively, coefficients of a transform-based coding system. In the latter case, we use an image representation model, namely the Hermite transform, that is based on some of the main perceptual characteristics of the human visual system (HVS) and its response to light stimuli. Energy coding in the brightness domain, determination of local structure, codebook training, and local orientation analysis are all obtained by means of the Hermite transform. The paper is organized as follows. The first section briefly highlights the importance of newer and better compression algorithms and explains the most relevant characteristics of the HVS, including the advantages and disadvantages related to the behavior of human vision under ocular stimuli. The second section reviews vector quantization techniques, focusing on their performance in image processing, as a preview of the image vector quantizer actually constructed in the fifth section. The third section collects the most important results on brightness models and addresses the construction of so-called brightness maps (a quantification of human perception of the reflectance of visible objects) in a two-dimensional model. The fourth section treats the Hermite transform, a special case of polynomial transforms, in an applicable discrete form. As previous work has shown, the Hermite transform is a useful and practical tool to efficiently code the energy within an image block and to decide which kind of quantization (scalar or vector) should be applied to it; it can also be used to structurally classify the image block within a given lattice, which is intended as one of the main contributions of this work. The fifth section combines the proposals derived from these three topics into an image compression model that uses vector quantizers in the brightness-transformed domain to determine the most important structures by finding the energy distribution inside the Hermite domain. The sixth and last section shows results obtained while testing the coding-decoding model; the coding performance was evaluated in terms of compression ratio, SNR, and psycho-visual quality. Some conclusions derived from the research and possible unexplored paths are also given in this section.

  20. Photon induced non-linear quantized double layer charging in quaternary semiconducting quantum dots.

    PubMed

    Nair, Vishnu; Ananthoju, Balakrishna; Mohapatra, Jeotikanta; Aslam, M

    2018-03-15

    Room temperature quantized double layer charging was observed in 2 nm Cu₂ZnSnS₄ (CZTS) quantum dots. In addition to this, we observed a distinct non-linearity in the quantized double layer charging arising from UV light modulation of the double layer. UV light irradiation resulted in a 26% increase in the integral capacitance at the semiconductor-dielectric (CZTS-oleylamine) interface of the quantum dot without any change in its core size, suggesting that the cause is photocapacitive. The increasing charge separation at the semiconductor-dielectric interface due to highly stable and mobile photogenerated carriers causes larger electrostatic forces between the quantum dot and the electrolyte, leading to an enhanced double layer. This idea was supported by a decrease in the differential capacitance, possible due to an enhanced double layer. Furthermore, the UV-illumination-enhanced double layer gives an AC-excitation-dependent differential double layer capacitance, which confirms that the charging process is non-linear. This ultimately illustrates the utility of a colloidal quantum dot-electrolyte interface as a non-linear photocapacitor. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    PubMed

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.
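
    For context, a plain imaginary-time path-integral Monte Carlo sampler for a one-dimensional harmonic oscillator, the kind of sampling the modified potential above is evaluated with. This is a generic closed-chain (ring) sampler with assumed parameters, not the authors' open-chain scheme.

        import numpy as np

        rng = np.random.default_rng(0)
        beta, P, m, omega = 4.0, 32, 1.0, 1.0          # inverse temperature, bead number (assumed)
        tau = beta / P
        path = np.zeros(P)

        def action(x):
            kinetic = 0.5 * m * np.sum((x - np.roll(x, -1)) ** 2) / tau
            potential = tau * np.sum(0.5 * m * omega**2 * x**2)
            return kinetic + potential

        samples = []
        for step in range(20000):
            i = rng.integers(P)
            trial = path.copy()
            trial[i] += rng.normal(scale=0.5)          # single-bead move
            if rng.random() < np.exp(action(path) - action(trial)):   # Metropolis test
                path = trial
            if step > 5000:
                samples.append(np.mean(path**2))
        # Compare with the exact <x^2> = coth(beta*omega/2)/(2*m*omega) for hbar = 1.
        print(np.mean(samples))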

  2. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    NASA Astrophysics Data System (ADS)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  3. Book Review:

    NASA Astrophysics Data System (ADS)

    Parthasarathy, R.

    2005-06-01

    This book gives a clear exposition of quantum field theory at the graduate level and the contents could be covered in a two semester course or, with some effort, in a one semester course. The book is well organized, and subtle issues are clearly explained. The margin notes are very useful, and the problems given at the end of each chapter are relevant and help the student gain an insight into the subject. The solutions to these problems are given in chapter 12. Care is taken to keep the numerical factors and notation very clear. Chapter 1 gives a clear overview and typical scales in high energy physics. Chapter 2 presents an excellent account of the Lorentz group and its representations. The decomposition of Lorentz tensors under SO(3) and the subsequent spinorial representations are introduced with clarity. After giving the field representation for scalar, Weyl, Dirac, Majorana and vector fields, the Poincaré group is introduced. Representations of 1-particle states using m² and the Pauli-Lubanski vector, although standard, are treated lucidly. Classical field theory is introduced in chapter 3 and a careful treatment of the Noether theorem and the energy-momentum tensor is given. After covering real and complex scalar fields, the author impressively introduces the Dirac spinor via the Weyl spinor; Abelian gauge theory is also introduced. Chapter 4 contains the essentials of free field quantization of real and complex scalar fields, Dirac fields and massless Weyl fields. After a brief discussion of the CPT theorem, the quantization of the electromagnetic field is carried out both in the radiation gauge and the Lorentz gauge. The presentation of the Gupta-Bleuler method is particularly impressive; the margin notes on pages 85, 100 and 101 are invaluable. Chapter 5 considers the essentials of perturbation theory. The derivation of the LSZ reduction formula for scalar field theory is clearly expressed. Feynman rules are obtained for the λφ⁴ theory in detail and those of QED briefly. The basic idea of renormalization is explained using the λφ⁴ theory as an example. There is a very lucid discussion of the `running coupling' constant in section 5.9. Chapter 6 explains the use of the matrix elements, formally given in the previous chapter, to compute decay rates and cross sections. The exposition is such that the reader will have no difficulty in following the steps. However, bearing in mind the continuity of the other chapters, this material could have been consigned to an appendix. In the short chapter 7, the QED Lagrangian is shown to respect P, C and T invariance. One-loop divergences are described. Dimensional and Pauli-Villars regularization are introduced and explained, although there is no account of their use in evaluating a typical one-loop divergent integral. Chapter 8 describes the low energy limit of the Weinberg-Salam theory. Examples for μ⁻ → e⁻ ν̄_e ν_μ, π⁺ → l⁺ ν_l and K⁰ → π⁻ l⁺ ν_l are explicitly solved, although the serious reader should work them out independently. On page 197 the `V-A structure of the currents proposed by Feynman and Gell-Mann' is stated; the first such proposal was by E C G Sudarshan and R E Marshak. In chapter 9 the path integral quantization method is developed. After deriving the transition amplitude in quantum mechanics as a sum over all paths, it is demonstrated that the integration of functions in the path integral gives the expectation value of the time-ordered product of the corresponding operators; this is then applied to free real scalar field theory to obtain the Feynman propagator. Then the Euclidean formulation is introduced and its `tailor-made' role in critical phenomena is illustrated with the 2-d Ising model as an example, including the RG equation. Chapter 10 introduces Yang-Mills theory. After writing down the typical gauge-invariant Lagrangian and outlining the ingredients of QCD, the adjoint representation for fields is given. It could have been made complete by giving the Feynman rules for the cubic and quartic vertices for non-Abelian gauge fields, although the reader can obtain them from the last term in equation 10.27. In chapter 11, spontaneous symmetry breaking in quantum field theory is described. The difference between quantum mechanics and QFT with respect to degenerate vacua is clearly brought out by considering the tunnelling amplitude between degenerate vacua. This is very good, as this aspect is mostly overlooked in many textbooks. The Goldstone theorem is then illustrated by an example. The Higgs mechanism is explained in Abelian and non-Abelian (SU(2)) gauge theories and the situation in SU(2)×U(1) gauge theory is discussed. This book certainly covers most of the modern developments in quantum field theory. The reader will be able to follow the content and apply it to specific problems. The bibliography is certainly useful. It will be an asset to libraries in teaching and research institutions.

  4. Geometry, Heat Equation and Path Integrals on the Poincaré Upper Half-Plane

    NASA Astrophysics Data System (ADS)

    Kubo, R.

    1988-01-01

    Geometry, heat equation and Feynman's path integrals are studied on the Poincaré upper half-plane. The fundamental solution to the heat equation ∂f/∂t = Δ_H f is expressed in terms of a path integral defined on the upper half-plane. It is shown that Kac's statement that Feynman's path integral satisfies the Schrödinger equation is also valid for our case.
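
    For concreteness, the operator appearing in the heat equation above is the Laplace-Beltrami operator of the hyperbolic metric ds² = (dx² + dy²)/y² on the upper half-plane {(x, y) : y > 0}; in these coordinates it takes the standard form

      \Delta_H = y^{2}\left(\frac{\partial^{2}}{\partial x^{2}} + \frac{\partial^{2}}{\partial y^{2}}\right),
      \qquad \frac{\partial f}{\partial t} = \Delta_H f.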

  5. Bit Grooming: statistically accurate precision-preserving quantization with compression, evaluated in the netCDF Operators (NCO, v4.4.8+)

    NASA Astrophysics Data System (ADS)

    Zender, Charles S.

    2016-09-01

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80 and 5-65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
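
    The alternating shave/set rule described above can be sketched in a few lines of NumPy. This is an illustrative reimplementation under assumed conventions (the mapping from decimal digits to retained mantissa bits is simplified), not the NCO source code:

      import numpy as np

      def bit_groom(data, keep_bits):
          """Illustrative Bit Grooming: alternately zero ('shave') and one ('set')
          the least significant mantissa bits of consecutive float32 values,
          keeping `keep_bits` of the 23 explicit mantissa bits."""
          x = np.asarray(data, dtype=np.float32)
          bits = x.view(np.uint32).copy()
          drop = 23 - keep_bits                                       # mantissa bits to discard
          shave_mask = np.uint32((0xFFFFFFFF << drop) & 0xFFFFFFFF)   # zeros the tail
          set_mask = np.uint32((1 << drop) - 1)                       # ones the tail
          bits[0::2] &= shave_mask                                    # even elements: shave
          bits[1::2] |= set_mask                                      # odd elements: set
          return bits.view(np.float32)

      # Keep roughly 3 decimal digits (~10 mantissa bits) of a noisy field,
      # then hand the result to a lossless compressor such as DEFLATE.
      field = np.random.rand(8).astype(np.float32) + 273.0
      print(bit_groom(field, keep_bits=10))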

  6. Path integration on the hyperbolic plane with a magnetic field

    NASA Astrophysics Data System (ADS)

    Grosche, Christian

    1990-08-01

    In this paper I discuss the path integrals on three formulations of hyperbolic geometry, where a constant magnetic field B is included. These are: the pseudosphere Λ2, the Poincaré disc D, and the hyperbolic strip S. The corresponding path integrals can be reformulated in terms of the path integral for the modified Pöschl-Teller potential. The wave-functions and the energy spectrum for the discrete and continuous part of the spectrum are explicitly calculated in each case. First the results are compared for the limit B → 0 with previous calculations and second with the path integration on the Poincaré upper half-plane U. This work is a continuation of the path integral calculations for the free motion on the various formulations on the hyperbolic plane and for the case of constant magnetic field on the Poincaré upper half-plane U.

  7. Dissociable cognitive mechanisms underlying human path integration.

    PubMed

    Wiener, Jan M; Berthoz, Alain; Wolbers, Thomas

    2011-01-01

    Path integration is a fundamental mechanism of spatial navigation. In non-human species, it is assumed to be an online process in which a homing vector is updated continuously during an outward journey. In contrast, human path integration has been conceptualized as a configural process in which travelers store working memory representations of path segments, with the computation of a homing vector only occurring when required. To resolve this apparent discrepancy, we tested whether humans can employ different path integration strategies in the same task. Using a triangle completion paradigm, participants were instructed either to continuously update the start position during locomotion (continuous strategy) or to remember the shape of the outbound path and to calculate home vectors on basis of this representation (configural strategy). While overall homing accuracy was superior in the configural condition, participants were quicker to respond during continuous updating, strongly suggesting that homing vectors were computed online. Corroborating these findings, we observed reliable differences in head orientation during the outbound path: when participants applied the continuous updating strategy, the head deviated significantly from straight ahead in direction of the start place, which can be interpreted as a continuous motor expression of the homing vector. Head orientation-a novel online measure for path integration-can thus inform about the underlying updating mechanism already during locomotion. In addition to demonstrating that humans can employ different cognitive strategies during path integration, our two-systems view helps to resolve recent controversies regarding the role of the medial temporal lobe in human path integration.

  8. A new approach to barrier-top fission dynamics

    NASA Astrophysics Data System (ADS)

    Bertsch, G. F.; Mehlhaff, J. M.

    2016-06-01

    We proposed a calculational framework for describing induced fission that avoids the Bohr-Wheeler assumption of well-defined fission channels. The building blocks of our approach are configurations that form a discrete, orthogonal basis and can be characterized by both energy and shape. The dynamics is to be determined by interaction matrix elements between the states rather than by a Hill-Wheeler construction of a collective coordinate. Within our approach, several simple limits can be seen: diffusion; quantized conductance; and ordinary decay through channels. The specific proposal for the discrete basis is to use the Kπ quantum numbers of the axially symmetric Hartree-Fock approximation to generate the configurations. Fission paths would be determined by hopping from configuration to configuration via the residual interaction. We show as an example the configurations needed to describe a fictitious fission decay 32S → 16 O + 16 O. We also examine the geometry of the path for fission of 236U, measuring distances by the number of jumps needed to go to a new Kπ partition.

  9. Non-invasive, transdermal, path-selective and specific glucose monitoring via a graphene-based platform

    NASA Astrophysics Data System (ADS)

    Lipani, Luca; Dupont, Bertrand G. R.; Doungmene, Floriant; Marken, Frank; Tyrrell, Rex M.; Guy, Richard H.; Ilie, Adelina

    2018-06-01

    Currently, there is no available needle-free approach for diabetics to monitor glucose levels in the interstitial fluid. Here, we report a path-selective, non-invasive, transdermal glucose monitoring system based on a miniaturized pixel array platform (realized either by graphene-based thin-film technology, or screen-printing). The system samples glucose from the interstitial fluid via electroosmotic extraction through individual, privileged, follicular pathways in the skin, accessible via the pixels of the array. A proof of principle using mammalian skin ex vivo is demonstrated for specific and `quantized' glucose extraction/detection via follicular pathways, and across the hypo- to hyper-glycaemic range in humans. Furthermore, the quantification of follicular and non-follicular glucose extraction fluxes is clearly shown. In vivo continuous monitoring of interstitial fluid-borne glucose with the pixel array was able to track blood sugar in healthy human subjects. This approach paves the way to clinically relevant glucose detection in diabetics without the need for invasive, finger-stick blood sampling.

  10. BRST quantization of cosmological perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armendariz-Picon, Cristian; Şengör, Gizem

    2016-11-08

    BRST quantization is an elegant and powerful method to quantize theories with local symmetries. In this article we study the Hamiltonian BRST quantization of cosmological perturbations in a universe dominated by a scalar field, along with the closely related quantization method of Dirac. We describe how both formalisms apply to perturbations in a time-dependent background, and how expectation values of gauge-invariant operators can be calculated in the in-in formalism. Our analysis focuses mostly on the free theory. By appropriate canonical transformations we simplify and diagonalize the free Hamiltonian. BRST quantization in derivative gauges allows us to dramatically simplify the structure of the propagators, whereas Dirac quantization, which amounts to quantization in synchronous gauge, dispenses with the need to introduce ghosts and preserves the locality of the gauge-fixed action.

  11. Deformation of second and third quantization

    NASA Astrophysics Data System (ADS)

    Faizal, Mir

    2015-03-01

    In this paper, we will deform the second and third quantized theories by deforming the canonical commutation relations in such a way that they become consistent with the generalized uncertainty principle. Thus, we will first deform the second quantized commutator and obtain a deformed version of the Wheeler-DeWitt equation. Then we will further deform the third quantized theory by deforming the third quantized canonical commutation relation. This way we will obtain a deformed version of the third quantized theory for the multiverse.
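
    As an illustration of the kind of deformation meant here (the precise form used in the paper is not reproduced), a canonical commutator consistent with a generalized uncertainty principle is commonly written, to lowest order in the deformation parameter β, as

      [\hat x, \hat p] = i\hbar\left(1 + \beta\,\hat p^{\,2}\right)
      \quad\Longrightarrow\quad
      \Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^{2} + \beta\,\langle \hat p\rangle^{2}\right),

    and the paper applies an analogous deformation to the second- and third-quantized commutation relations.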

  12. Quantization selection in the high-throughput H.264/AVC encoder based on the RD

    NASA Astrophysics Data System (ADS)

    Pastuszak, Grzegorz

    2013-10-01

    In the hardware video encoder, the quantization stage is responsible for quality losses. On the other hand, it allows the bit rate to be reduced to the target one. If the mode selection is based on the rate-distortion criterion, the quantization can also be adjusted to obtain better compression efficiency. In particular, the use of a Lagrangian cost function with a given multiplier enables the encoder to select the most suitable quantization step, determined by the quantization parameter QP. Moreover, the quantization offset added before discarding the fractional value after quantization can be adjusted. In order to select the best quantization parameter and offset in real time, the HD/SD encoder should be implemented in hardware. In particular, the hardware architecture should embed transformation and quantization modules able to process the same residuals many times. In this work, such an architecture is used. Experimental results show what improvements in compression efficiency are achievable for Intra coding.
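
    A hedged sketch of the rate-distortion selection logic described above; the uniform quantizer, the parameter names, and the toy rate model are illustrative assumptions rather than the H.264/AVC reference behaviour:

      import numpy as np

      def quantize(coeffs, q_step, offset):
          """Uniform quantization with an adjustable rounding offset."""
          return np.sign(coeffs) * np.floor(np.abs(coeffs) / q_step + offset)

      def rd_cost(coeffs, q_step, offset, lam, rate_fn):
          """Lagrangian cost J = D + lambda * R for one (step, offset) candidate."""
          levels = quantize(coeffs, q_step, offset)
          recon = levels * q_step                      # inverse quantization
          distortion = np.sum((coeffs - recon) ** 2)   # SSD in the transform domain
          return distortion + lam * rate_fn(levels)

      def select_quantization(coeffs, candidates, lam, rate_fn):
          """Return the (step, offset) pair with the smallest Lagrangian cost."""
          return min(candidates, key=lambda c: rd_cost(coeffs, c[0], c[1], lam, rate_fn))

      # Toy usage: crude rate model that charges 4 bits per nonzero level.
      coeffs = np.random.randn(16) * 10.0
      candidates = [(q, f) for q in (2.5, 5.0, 10.0) for f in (1.0 / 6.0, 1.0 / 3.0)]
      print(select_quantization(coeffs, candidates, lam=8.0,
                                rate_fn=lambda lv: 4.0 * np.count_nonzero(lv)))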

  13. Remarks on a New Possible Discretization Scheme for Gauge Theories

    NASA Astrophysics Data System (ADS)

    Magnot, Jean-Pierre

    2018-03-01

    We propose here a new discretization method for a class of continuum gauge theories whose action functionals are polynomials in the curvature. Based on the notion of holonomy, this discretization procedure appears to be gauge-invariant for discretized analogs of Yang-Mills theories, and hence gauge fixing is fully rigorous for these discretized action functionals. The heuristic parts are forwarded to the quantization procedure via Feynman integrals, and the meaning of the heuristic infinite-dimensional Lebesgue integral is questioned.

  14. Remarks on a New Possible Discretization Scheme for Gauge Theories

    NASA Astrophysics Data System (ADS)

    Magnot, Jean-Pierre

    2018-07-01

    We propose here a new discretization method for a class of continuum gauge theories whose action functionals are polynomials in the curvature. Based on the notion of holonomy, this discretization procedure appears to be gauge-invariant for discretized analogs of Yang-Mills theories, and hence gauge fixing is fully rigorous for these discretized action functionals. The heuristic parts are forwarded to the quantization procedure via Feynman integrals, and the meaning of the heuristic infinite-dimensional Lebesgue integral is questioned.

  15. Full Spectrum Conversion Using Traveling Pulse Wave Quantization

    DTIC Science & Technology

    2017-03-01

    Kappes, Michael S.; Waltari, Mikko E. (IQ-Analog Corporation, San Diego, California). ...temporal-domain quantization technique called Traveling Pulse Wave Quantization (TPWQ). Full spectrum conversion is defined as the complete...pulse width measurements that are continuously generated, hence the name "traveling" pulse wave quantization. Our TPWQ-based ADC is composed of a

  16. A Dynamic Bayesian Observer Model Reveals Origins of Bias in Visual Path Integration.

    PubMed

    Lakshminarasimhan, Kaushik J; Petsalis, Marina; Park, Hyeshin; DeAngelis, Gregory C; Pitkow, Xaq; Angelaki, Dora E

    2018-06-20

    Path integration is a strategy by which animals track their position by integrating their self-motion velocity. To identify the computational origins of bias in visual path integration, we asked human subjects to navigate in a virtual environment using optic flow and found that they generally traveled beyond the goal location. Such a behavior could stem from leaky integration of unbiased self-motion velocity estimates or from a prior expectation favoring slower speeds that causes velocity underestimation. Testing both alternatives using a probabilistic framework that maximizes expected reward, we found that subjects' biases were better explained by a slow-speed prior than imperfect integration. When subjects integrate paths over long periods, this framework intriguingly predicts a distance-dependent bias reversal due to buildup of uncertainty, which we also confirmed experimentally. These results suggest that visual path integration in noisy environments is limited largely by biases in processing optic flow rather than by leaky integration. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Which way and how far? Tracking of translation and rotation information for human path integration.

    PubMed

    Chrastil, Elizabeth R; Sherrill, Katherine R; Hasselmo, Michael E; Stern, Chantal E

    2016-10-01

    Path integration, the constant updating of the navigator's knowledge of position and orientation during movement, requires both visuospatial knowledge and memory. This study aimed to develop a systems-level understanding of human path integration by examining the basic building blocks of path integration in humans. To achieve this goal, we used functional imaging to examine the neural mechanisms that support the tracking and memory of translational and rotational components of human path integration. Critically, and in contrast to previous studies, we examined movement in translation and rotation tasks with no defined end-point or goal. Navigators accumulated translational and rotational information during virtual self-motion. Activity in hippocampus, retrosplenial cortex (RSC), and parahippocampal cortex (PHC) increased during both translation and rotation encoding, suggesting that these regions track self-motion information during path integration. These results address current questions regarding distance coding in the human brain. By implementing a modified delayed match to sample paradigm, we also examined the encoding and maintenance of path integration signals in working memory. Hippocampus, PHC, and RSC were recruited during successful encoding and maintenance of path integration information, with RSC selective for tasks that required processing heading rotation changes. These data indicate distinct working memory mechanisms for translation and rotation, which are essential for updating neural representations of current location. The results provide evidence that hippocampus, PHC, and RSC flexibly track task-relevant translation and rotation signals for path integration and could form the hub of a more distributed network supporting spatial navigation. Hum Brain Mapp 37:3636-3655, 2016. © 2016 Wiley Periodicals, Inc.

  18. Multipurpose image watermarking algorithm based on multistage vector quantization.

    PubMed

    Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He

    2005-06-01

    The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility.
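
    To make the multistage structure concrete, here is a minimal two-stage vector quantization sketch in NumPy; the codebooks, block size, and the comments about where watermark bits could be tied in are illustrative assumptions, not the embedding rules of the paper:

      import numpy as np

      def vq_encode(vectors, codebook):
          """Return nearest-codeword indices and the residual vectors."""
          d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
          idx = np.argmin(d, axis=1)
          return idx, vectors - codebook[idx]

      rng = np.random.default_rng(0)
      blocks = rng.normal(size=(100, 16))              # e.g. flattened 4x4 image blocks
      stage1 = rng.normal(size=(32, 16))               # coarse codebook (stage 1)
      stage2 = rng.normal(scale=0.3, size=(32, 16))    # refinement codebook (stage 2)

      idx1, resid = vq_encode(blocks, stage1)   # a robust watermark could be tied to idx1
      idx2, _ = vq_encode(resid, stage2)        # a semi-fragile watermark could be tied to idx2
      recon = stage1[idx1] + stage2[idx2]
      print("reconstruction MSE:", np.mean((blocks - recon) ** 2))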

  19. Perfect discretization of reparametrization invariant path integrals

    NASA Astrophysics Data System (ADS)

    Bahr, Benjamin; Dittrich, Bianca; Steinhaus, Sebastian

    2011-05-01

    To obtain a well-defined path integral one often employs discretizations. In the case of gravity and reparametrization-invariant systems, the latter of which we consider here as a toy example, discretizations generically break diffeomorphism and reparametrization symmetry, respectively. This has severe implications, as these symmetries determine the dynamics of the corresponding system. Indeed we will show that a discretized path integral with reparametrization invariance is necessarily also discretization-independent and therefore uniquely determined by the corresponding continuum quantum mechanical propagator. We use this insight to develop an iterative method for constructing such a discretized path integral, akin to a Wilsonian RG flow. This allows us to address the problem of discretization ambiguities and of an anomaly-free path integral measure for such systems. The latter is needed to obtain a path integral that can act as a projector onto the physical states satisfying the quantum constraints. We will comment on implications for discrete quantum gravity models, such as spin foams.

  20. Quantizing and sampling considerations in digital phased-locked loops

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error, resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely finely quantized results. Attention is then given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely finely quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
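
    Under the uniform white-noise model referred to above, the quantizer error has variance Δ²/12 for step size Δ, which gives the effective SNR directly; a small illustrative sketch (the signal power and word length are made-up values):

      import numpy as np

      def effective_snr_db(signal_power, step):
          """Effective SNR when the quantizer error is modeled as uniform
          white noise with variance step**2 / 12."""
          noise_power = step ** 2 / 12.0
          return 10.0 * np.log10(signal_power / noise_power)

      # Unit-power phase-detector output, 6-bit quantizer spanning [-1, 1]
      step = 2.0 / 2 ** 6
      print(effective_snr_db(1.0, step))   # roughly 41 dB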

  1. Distinct roles of hippocampus and medial prefrontal cortex in spatial and nonspatial memory.

    PubMed

    Sapiurka, Maya; Squire, Larry R; Clark, Robert E

    2016-12-01

    In earlier work, patients with hippocampal damage successfully path integrated, apparently by maintaining spatial information in working memory. In contrast, rats with hippocampal damage were unable to path integrate, even when the paths were simple and working memory might have been expected to support performance. We considered possible ways to understand these findings. We tested rats with either hippocampal lesions or lesions of medial prefrontal cortex (mPFC) on three tasks of spatial or nonspatial memory: path integration, spatial alternation, and a nonspatial alternation task. Rats with mPFC lesions were impaired on both spatial and nonspatial alternation but performed normally on path integration. By contrast, rats with hippocampal lesions were impaired on path integration and spatial alternation but performed normally on nonspatial alternation. We propose that rodent neocortex is limited in its ability to construct a coherent spatial working memory of complex environments. Accordingly, in tasks such as path integration and spatial alternation, working memory cannot depend on neocortex alone. Rats may accomplish many spatial memory tasks by relying on long-term memory. Alternatively, they may accomplish these tasks within working memory through sustained coordination between hippocampus and other cortical brain regions such as mPFC, in the case of spatial alternation, or parietal cortex in the case of path integration. © 2016 Wiley Periodicals, Inc.

  2. Modeling and analysis of energy quantization effects on single electron inverter performance

    NASA Astrophysics Data System (ADS)

    Dan, Surya Shankar; Mahapatra, Santanu

    2009-08-01

    In this paper, for the first time, the effects of energy quantization on single electron transistor (SET) inverter performance are analyzed through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly changes the Coulomb blockade region and drain current of SET devices and thus affects the noise margin, power dissipation, and propagation delay of the SET inverter. A new analytical model for the noise margin of the SET inverter is proposed which includes the energy quantization effects. Using the noise margin as a metric, the robustness of the SET inverter is studied against the effects of energy quantization. A compact expression is developed for a novel parameter, the quantization threshold, which is introduced for the first time in this paper. The quantization threshold explicitly defines the maximum energy quantization that an SET inverter logic circuit can withstand before its noise margin falls below a specified tolerance level. It is found that an SET inverter designed with C_T:C_G = 1/3 (where C_T and C_G are the tunnel junction and gate capacitances, respectively) offers maximum robustness against energy quantization.

  3. Chern-Simons Term: Theory and Applications.

    NASA Astrophysics Data System (ADS)

    Gupta, Kumar Sankar

    1992-01-01

    We investigate the quantization and applications of Chern-Simons theories to several systems of interest. Elementary canonical methods are employed for the quantization of abelian and nonabelian Chern-Simons actions using ideas from gauge theories and quantum gravity. When the spatial slice is a disc, it yields quantum states at the edge of the disc carrying a representation of the Kac-Moody algebra. We next include sources in this model and their quantum states are shown to be those of a conformal family. Vertex operators for both abelian and nonabelian sources are constructed. The regularized abelian Wilson line is proved to be a vertex operator. The spin-statistics theorem is established for Chern-Simons dynamics using purely geometrical techniques. The Chern-Simons action is associated with exotic spin and statistics in 2 + 1 dimensions. We study several systems in which the Chern-Simons action affects the spin and statistics. The first class of systems we study consists of G/H models. The solitons of these models are shown to obey anyonic statistics in the presence of a Chern-Simons term. The second system deals with the effect of the Chern-Simons term in a model for high temperature superconductivity. The coefficient of the Chern-Simons term is shown to be quantized, one of its possible values giving fermionic statistics to the solitons of this model. Finally, we study a system of spinning particles interacting with 2 + 1 gravity, the latter being described by an ISO(2,1) Chern-Simons term. An effective action for the particles is obtained by integrating out the gauge fields. Next we construct operators which exchange the particles. They are shown to satisfy the braid relations. There are ambiguities in the quantization of this system which can be exploited to give anyonic statistics to the particles. We also point out that at the level of the first quantized theory, the usual spin-statistics relation need not apply to these particles.

  4. Kinetic isotope effects and how to describe them

    PubMed Central

    Karandashev, Konstantin; Xu, Zhen-Hao; Meuwly, Markus; Vaníček, Jiří; Richardson, Jeremy O.

    2017-01-01

    We review several methods for computing kinetic isotope effects in chemical reactions, including semiclassical and quantum instanton theory. These methods describe both the quantization of vibrational modes and tunneling, and are applied to the ⋅H + H2 and ⋅H + CH4 reactions. The absolute rate constants computed with the semiclassical instanton method, using both on-the-fly electronic structure calculations and fitted potential-energy surfaces, are also compared directly with exact quantum dynamics results. The error inherent in the instanton approximation is found to be relatively small and similar in magnitude to that introduced by using fitted surfaces. The kinetic isotope effect computed by the quantum instanton is even more accurate, and although it is computationally more expensive, the efficiency can be improved by path-integral acceleration techniques. We also test a simple approach for designing potential-energy surfaces for the example of proton transfer in malonaldehyde. The tunneling splittings are computed, and although they are found to deviate from experimental results, the ratio of the splitting to that of an isotopically substituted form is in much better agreement. We discuss the strengths and limitations of the potential-energy surface and based on our findings suggest ways in which it can be improved. PMID:29282447

  5. Thermal properties of graphene under tensile stress

    NASA Astrophysics Data System (ADS)

    Herrero, Carlos P.; Ramírez, Rafael

    2018-05-01

    Thermal properties of graphene display peculiar characteristics associated with the two-dimensional nature of this crystalline membrane. These properties can be changed and tuned in the presence of applied stresses, both tensile and compressive. Here, we study graphene monolayers under tensile stress by using path-integral molecular dynamics (PIMD) simulations, which allow one to take into account the quantization of vibrational modes and analyze the effect of anharmonicity on physical observables. The influence of the elastic energy due to strain in the crystalline membrane is studied for increasing tensile stress and for rising temperature (thermal expansion). We analyze the internal energy, enthalpy, and specific heat of graphene, and compare the results obtained from PIMD simulations with those given by a harmonic approximation for the vibrational modes. This approximation turns out to be precise at low temperatures, and deteriorates as temperature and pressure are increased. At low temperature, the specific heat changes as c_p ∼ T for stress-free graphene, and evolves to a dependence c_p ∼ T² as the tensile stress is increased. Structural and thermodynamic properties display non-negligible quantum effects, even at temperatures higher than 300 K. Moreover, differences in the behavior of the in-plane and real areas of graphene are discussed, along with their associated properties. These differences show up clearly in the corresponding compressibility and thermal expansion coefficient.

  6. Information theory-based decision support system for integrated design of multivariable hydrometric networks

    NASA Astrophysics Data System (ADS)

    Keum, Jongho; Coulibaly, Paulin

    2017-07-01

    Adequate and accurate hydrologic information from optimal hydrometric networks is an essential part of effective water resources management. Although the key hydrologic processes in the water cycle are interconnected, hydrometric networks (e.g., streamflow, precipitation, groundwater level) have been routinely designed individually. A decision support framework is proposed for integrated design of multivariable hydrometric networks. The proposed method is applied to design optimal precipitation and streamflow networks simultaneously. The epsilon-dominance hierarchical Bayesian optimization algorithm was combined with Shannon entropy of information theory to design and evaluate hydrometric networks. Specifically, the joint entropy from the combined networks was maximized to provide the most information, and the total correlation was minimized to reduce redundant information. To further optimize the efficiency between the networks, they were designed by maximizing the conditional entropy of the streamflow network given the information of the precipitation network. Compared to the traditional individual variable design approach, the integrated multivariable design method was able to determine more efficient optimal networks by avoiding the redundant stations. Additionally, four quantization cases were compared to evaluate their effects on the entropy calculations and the determination of the optimal networks. The evaluation results indicate that the quantization methods should be selected after careful consideration for each design problem since the station rankings and the optimal networks can change accordingly.
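
    A hedged sketch of the information measures at the core of this design approach, computed from quantized (equal-width binned) station records; the synthetic data, the binning, and the absence of any search over candidate stations are illustrative simplifications:

      import numpy as np

      def quantize(series, n_bins=10):
          """Discretize a continuous record into equal-width bins."""
          edges = np.linspace(series.min(), series.max(), n_bins + 1)
          return np.digitize(series, edges[1:-1])

      def entropy(*labels):
          """Joint Shannon entropy (bits) of one or more discrete series."""
          keys = np.stack(labels, axis=1)
          _, counts = np.unique(keys, axis=0, return_counts=True)
          p = counts / counts.sum()
          return -np.sum(p * np.log2(p))

      rng = np.random.default_rng(1)
      p_raw = rng.gamma(2.0, size=1000)
      precip = quantize(p_raw)                               # precipitation station
      flow = quantize(0.7 * p_raw + rng.normal(size=1000))   # correlated streamflow station

      H_p, H_q, H_joint = entropy(precip), entropy(flow), entropy(precip, flow)
      total_correlation = H_p + H_q - H_joint      # redundancy to be minimized
      H_flow_given_precip = H_joint - H_p          # conditional entropy to be maximized
      print(H_joint, total_correlation, H_flow_given_precip)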

  7. An Off-Grid Turbo Channel Estimation Algorithm for Millimeter Wave Communications.

    PubMed

    Han, Lingyi; Peng, Yuexing; Wang, Peng; Li, Yonghui

    2016-09-22

    The bandwidth shortage has motivated the exploration of the millimeter wave (mmWave) frequency spectrum for future communication networks. To compensate for the severe propagation attenuation in the mmWave band, massive antenna arrays can be adopted at both the transmitter and receiver to provide large array gains via directional beamforming. To achieve such array gains, channel estimation (CE) with high resolution and low latency is of great importance for mmWave communications. However, classic super-resolution subspace CE methods such as multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) cannot be applied here due to RF chain constraints. In this paper, an enhanced CE algorithm is developed for the off-grid problem that arises when quantizing the angles of the mmWave channel in the spatial domain; the off-grid problem refers to the scenario in which, with high probability, the angles do not lie on the quantization grid, which results in power leakage and a severe reduction of the CE performance. A new model is first proposed to formulate the off-grid problem. The new model divides the continuously-distributed angle into a quantized discrete grid part, referred to as the integral grid angle, and an offset part, termed the fractional off-grid angle. Accordingly, an iterative off-grid turbo CE (IOTCE) algorithm is proposed to renew and upgrade the CE between the integral grid part and the fractional off-grid part under the Turbo principle. By fully exploiting the sparse structure of mmWave channels, the integral grid part is estimated by a soft-decoding based compressed sensing (CS) method called improved turbo compressed channel sensing (ITCCS). It iteratively updates the soft information between the linear minimum mean square error (LMMSE) estimator and the sparsity combiner. Monte Carlo simulations are presented to evaluate the performance of the proposed method, and the results show that it enhances the angle detection resolution greatly.
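
    The integral-grid/fractional-offset split described above can be illustrated directly; the grid size, angle range, and sample value below are made-up, and the iterative turbo refinement itself is not reproduced:

      import numpy as np

      def split_angle(theta, n_grid):
          """Decompose an angle in [-pi/2, pi/2) into the nearest quantization-grid
          angle (integral part) and the remaining fractional off-grid offset."""
          grid = np.linspace(-np.pi / 2, np.pi / 2, n_grid, endpoint=False)
          step = np.pi / n_grid
          k = int(np.clip(np.round((theta - grid[0]) / step), 0, n_grid - 1))
          return k, theta - grid[k]          # integral grid index, fractional offset

      theta_true = 0.2379                    # an arrival angle that is not on the grid
      k, offset = split_angle(theta_true, n_grid=64)
      print(k, offset, abs(offset) <= np.pi / 64 / 2 + 1e-12)  # offset within half a grid step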

  8. Berezin-Toeplitz quantization and naturally defined star products for Kähler manifolds

    NASA Astrophysics Data System (ADS)

    Schlichenmaier, Martin

    2018-04-01

    For compact quantizable Kähler manifolds the Berezin-Toeplitz quantization schemes, both operator and deformation quantization (star product), are reviewed. The treatment includes Berezin's covariant symbols and the Berezin transform. The general compact quantizable case was treated by Bordemann-Meinrenken-Schlichenmaier, Schlichenmaier, and Karabegov-Schlichenmaier. For star products on Kähler manifolds, separation of variables, or equivalently star products of (anti-) Wick type, is a crucial property. As canonically defined star products, the Berezin-Toeplitz, Berezin, and geometric quantization star products are treated. It turns out that all three are equivalent, but different.

  9. Medial temporal lobe roles in human path integration.

    PubMed

    Yamamoto, Naohide; Philbeck, John W; Woods, Adam J; Gajewski, Daniel A; Arthur, Joeanna C; Potolicchio, Samuel J; Levy, Lucien; Caputy, Anthony J

    2014-01-01

    Path integration is a process in which observers derive their location by integrating self-motion signals along their locomotion trajectory. Although the medial temporal lobe (MTL) is thought to take part in path integration, the scope of its role for path integration remains unclear. To address this issue, we administered a variety of tasks involving path integration and other related processes to a group of neurosurgical patients whose MTL was unilaterally resected as therapy for epilepsy. These patients were unimpaired relative to neurologically intact controls in many tasks that required integration of various kinds of sensory self-motion information. However, the same patients (especially those who had lesions in the right hemisphere) walked farther than the controls when attempting to walk without vision to a previewed target. Importantly, this task was unique in our test battery in that it allowed participants to form a mental representation of the target location and anticipate their upcoming walking trajectory before they began moving. Thus, these results put forth a new idea that the role of MTL structures for human path integration may stem from their participation in predicting the consequences of one's locomotor actions. The strengths of this new theoretical viewpoint are discussed.

  10. Medial Temporal Lobe Roles in Human Path Integration

    PubMed Central

    Yamamoto, Naohide; Philbeck, John W.; Woods, Adam J.; Gajewski, Daniel A.; Arthur, Joeanna C.; Potolicchio, Samuel J.; Levy, Lucien; Caputy, Anthony J.

    2014-01-01

    Path integration is a process in which observers derive their location by integrating self-motion signals along their locomotion trajectory. Although the medial temporal lobe (MTL) is thought to take part in path integration, the scope of its role for path integration remains unclear. To address this issue, we administered a variety of tasks involving path integration and other related processes to a group of neurosurgical patients whose MTL was unilaterally resected as therapy for epilepsy. These patients were unimpaired relative to neurologically intact controls in many tasks that required integration of various kinds of sensory self-motion information. However, the same patients (especially those who had lesions in the right hemisphere) walked farther than the controls when attempting to walk without vision to a previewed target. Importantly, this task was unique in our test battery in that it allowed participants to form a mental representation of the target location and anticipate their upcoming walking trajectory before they began moving. Thus, these results put forth a new idea that the role of MTL structures for human path integration may stem from their participation in predicting the consequences of one's locomotor actions. The strengths of this new theoretical viewpoint are discussed. PMID:24802000

  11. Path integrals and the WKB approximation in loop quantum cosmology

    NASA Astrophysics Data System (ADS)

    Ashtekar, Abhay; Campiglia, Miguel; Henderson, Adam

    2010-12-01

    We follow the Feynman procedure to obtain a path integral formulation of loop quantum cosmology starting from the Hilbert space framework. Quantum geometry effects modify the weight associated with each path so that the effective measure on the space of paths is different from that used in the Wheeler-DeWitt theory. These differences introduce some conceptual subtleties in arriving at the WKB approximation. But the approximation is well defined and provides intuition for the differences between loop quantum cosmology and the Wheeler-DeWitt theory from a path integral perspective.

  12. Perfect discretization of path integrals

    NASA Astrophysics Data System (ADS)

    Steinhaus, Sebastian

    2012-05-01

    In order to obtain a well-defined path integral one often employs discretizations. In the case of General Relativity these generically break diffeomorphism symmetry, which has severe consequences since these symmetries determine the dynamics of the corresponding system. In this article we consider the path integral of reparametrization invariant systems as a toy example and present an improvement procedure for the discretized propagator. Fixed points and convergence of the procedure are discussed. Furthermore we show that a reparametrization invariant path integral implies discretization independence and acts as a projector onto physical states.

  13. Master equations and the theory of stochastic path integrals

    NASA Astrophysics Data System (ADS)

    Weber, Markus F.; Frey, Erwin

    2017-04-01

    This review provides a pedagogic and self-contained introduction to master equations and to their representation by path integrals. Since the 1930s, master equations have served as a fundamental tool to understand the role of fluctuations in complex biological, chemical, and physical systems. Despite their simple appearance, analyses of master equations most often rely on low-noise approximations such as the Kramers-Moyal or the system size expansion, or require ad-hoc closure schemes for the derivation of low-order moment equations. We focus on numerical and analytical methods going beyond the low-noise limit and provide a unified framework for the study of master equations. After deriving the forward and backward master equations from the Chapman-Kolmogorov equation, we show how the two master equations can be cast into either of four linear partial differential equations (PDEs). Three of these PDEs are discussed in detail. The first PDE governs the time evolution of a generalized probability generating function whose basis depends on the stochastic process under consideration. Spectral methods, WKB approximations, and a variational approach have been proposed for the analysis of the PDE. The second PDE is novel and is obeyed by a distribution that is marginalized over an initial state. It proves useful for the computation of mean extinction times. The third PDE describes the time evolution of a ‘generating functional’, which generalizes the so-called Poisson representation. Subsequently, the solutions of the PDEs are expressed in terms of two path integrals: a ‘forward’ and a ‘backward’ path integral. Combined with inverse transformations, one obtains two distinct path integral representations of the conditional probability distribution solving the master equations. We exemplify both path integrals in analysing elementary chemical reactions. Moreover, we show how a well-known path integral representation of averaged observables can be recovered from them. Upon expanding the forward and the backward path integrals around stationary paths, we then discuss and extend a recent method for the computation of rare event probabilities. Besides, we also derive path integral representations for processes with continuous state spaces whose forward and backward master equations admit Kramers-Moyal expansions. A truncation of the backward expansion at the level of a diffusion approximation recovers a classic path integral representation of the (backward) Fokker-Planck equation. One can rewrite this path integral in terms of an Onsager-Machlup function and, for purely diffusive Brownian motion, it simplifies to the path integral of Wiener. To make this review accessible to a broad community, we have used the language of probability theory rather than quantum (field) theory and do not assume any knowledge of the latter. The probabilistic structures underpinning various technical concepts, such as coherent states, the Doi-shift, and normal-ordered observables, are thereby made explicit.
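
    As a concrete instance of the first of these PDEs (the one obeyed by a probability generating function), consider the linear birth-death process with per-capita birth rate λ and death rate μ; this is a standard textbook example rather than one taken from the review:

      \frac{\partial G(x,t)}{\partial t} = (\lambda x - \mu)(x - 1)\,\frac{\partial G(x,t)}{\partial x},
      \qquad G(x,t) = \sum_{n \ge 0} P_n(t)\, x^{n},

    which follows from the master equation \dot P_n = \lambda (n-1) P_{n-1} + \mu (n+1) P_{n+1} - (\lambda + \mu) n P_n by multiplying with x^n and summing over n.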

  14. Master equations and the theory of stochastic path integrals.

    PubMed

    Weber, Markus F; Frey, Erwin

    2017-04-01

    This review provides a pedagogic and self-contained introduction to master equations and to their representation by path integrals. Since the 1930s, master equations have served as a fundamental tool to understand the role of fluctuations in complex biological, chemical, and physical systems. Despite their simple appearance, analyses of master equations most often rely on low-noise approximations such as the Kramers-Moyal or the system size expansion, or require ad-hoc closure schemes for the derivation of low-order moment equations. We focus on numerical and analytical methods going beyond the low-noise limit and provide a unified framework for the study of master equations. After deriving the forward and backward master equations from the Chapman-Kolmogorov equation, we show how the two master equations can be cast into either of four linear partial differential equations (PDEs). Three of these PDEs are discussed in detail. The first PDE governs the time evolution of a generalized probability generating function whose basis depends on the stochastic process under consideration. Spectral methods, WKB approximations, and a variational approach have been proposed for the analysis of the PDE. The second PDE is novel and is obeyed by a distribution that is marginalized over an initial state. It proves useful for the computation of mean extinction times. The third PDE describes the time evolution of a 'generating functional', which generalizes the so-called Poisson representation. Subsequently, the solutions of the PDEs are expressed in terms of two path integrals: a 'forward' and a 'backward' path integral. Combined with inverse transformations, one obtains two distinct path integral representations of the conditional probability distribution solving the master equations. We exemplify both path integrals in analysing elementary chemical reactions. Moreover, we show how a well-known path integral representation of averaged observables can be recovered from them. Upon expanding the forward and the backward path integrals around stationary paths, we then discuss and extend a recent method for the computation of rare event probabilities. Besides, we also derive path integral representations for processes with continuous state spaces whose forward and backward master equations admit Kramers-Moyal expansions. A truncation of the backward expansion at the level of a diffusion approximation recovers a classic path integral representation of the (backward) Fokker-Planck equation. One can rewrite this path integral in terms of an Onsager-Machlup function and, for purely diffusive Brownian motion, it simplifies to the path integral of Wiener. To make this review accessible to a broad community, we have used the language of probability theory rather than quantum (field) theory and do not assume any knowledge of the latter. The probabilistic structures underpinning various technical concepts, such as coherent states, the Doi-shift, and normal-ordered observables, are thereby made explicit.

  15. PathCase-SB architecture and database design

    PubMed Central

    2011-01-01

    Background: Integration of metabolic pathway resources and regulatory metabolic network models, and deploying new tools on the integrated platform, can help perform more effective and more efficient systems biology research on understanding the regulation in metabolic networks. Therefore, the tasks of (a) integrating under a single database environment regulatory metabolic networks and existing models, and (b) building tools to help with modeling and analysis are desirable and intellectually challenging computational tasks. Description: PathCase Systems Biology (PathCase-SB) is built and released. The PathCase-SB database provides data and an API for multiple user interfaces and software tools. The current PathCase-SB system provides a database-enabled framework and web-based computational tools towards facilitating the development of kinetic models for biological systems. PathCase-SB aims to integrate data of selected biological data sources on the web (currently, the BioModels database and KEGG), and to provide more powerful and/or new capabilities via the new web-based integrative framework. This paper describes architecture and database design issues encountered in PathCase-SB's design and implementation, and presents the current design of PathCase-SB's architecture and database. Conclusions: PathCase-SB architecture and database provide a highly extensible and scalable environment with easy and fast (real-time) access to the data in the database. PathCase-SB itself is already being used by researchers across the world. PMID:22070889

  16. Controlling neutron orbital angular momentum

    NASA Astrophysics Data System (ADS)

    Clark, Charles W.; Barankov, Roman; Huber, Michael G.; Arif, Muhammad; Cory, David G.; Pushin, Dmitry A.

    2015-09-01

    The quantized orbital angular momentum (OAM) of photons offers an additional degree of freedom and topological protection from noise. Photonic OAM states have therefore been exploited in various applications ranging from studies of quantum entanglement and quantum information science to imaging. The OAM states of electron beams have been shown to be similarly useful, for example in rotating nanoparticles and determining the chirality of crystals. However, although neutrons--as massive, penetrating and neutral particles--are important in materials characterization, quantum information and studies of the foundations of quantum mechanics, OAM control of neutrons has yet to be achieved. Here, we demonstrate OAM control of neutrons using macroscopic spiral phase plates that apply a `twist' to an input neutron beam. The twisted neutron beams are analysed with neutron interferometry. Our techniques, applied to spatially incoherent beams, demonstrate both the addition of quantum angular momenta along the direction of propagation, effected by multiple spiral phase plates, and the conservation of topological charge with respect to uniform phase fluctuations. Neutron-based studies of quantum information science, the foundations of quantum mechanics, and scattering and imaging of magnetic, superconducting and chiral materials have until now been limited to three degrees of freedom: spin, path and energy. The optimization of OAM control, leading to well defined values of OAM, would provide an additional quantized degree of freedom for such studies.

  17. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

    The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial-frequency-related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield the total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields the minimum bit rate for a given total perceptual error, or the minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
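
    A hedged sketch of the threshold-scaling and nonlinear pooling step described above; the flat threshold matrix and the Minkowski pooling exponent are illustrative placeholders, not the calibrated values of the model:

      import numpy as np

      def perceptual_error(dct_error, thresholds, beta=4.0):
          """Scale DCT quantization errors by per-frequency visual thresholds
          and pool them nonlinearly (Minkowski pooling) into a single number."""
          jnd_units = np.abs(dct_error) / thresholds   # errors in threshold units
          return np.sum(jnd_units ** beta) ** (1.0 / beta)

      # Toy usage on one 8x8 block: random quantization error, flat thresholds.
      rng = np.random.default_rng(2)
      err = rng.uniform(-0.5, 0.5, size=(8, 8))
      thr = np.full((8, 8), 1.5)
      print(perceptual_error(err, thr))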

  18. The path dependency theory: analytical framework to study institutional integration. The case of France.

    PubMed

    Trouvé, Hélène; Couturier, Yves; Etheridge, Francis; Saint-Jean, Olivier; Somme, Dominique

    2010-06-30

    The literature on integration indicates the need for an enhanced theorization of institutional integration. This article proposes path dependence as an analytical framework to study the systems in which integration takes place. PRISMA proposes a model for integrating health and social care services for older adults. This model was initially tested in Quebec. The PRISMA France study gave us an opportunity to analyze institutional integration in France. A qualitative approach was used. Analyses were based on semi-structured interviews with actors of all levels of decision-making, observations of advisory board meetings, and administrative documents. Our analyses revealed the complexity and fragmentation of institutional integration. The path dependency theory, which analyzes the change capacity of institutions by taking into account their historic structures, allows analysis of this situation. The path dependency to the Bismarckian system and the incomplete reforms of gerontological policies generate the coexistence and juxtaposition of institutional systems. In such a context, no institution has sufficient ability to determine gerontology policy and build institutional integration by itself. Using path dependence as an analytical framework helps to understand the reasons why institutional integration is critical to organizational and clinical integration, and the complex construction of institutional integration in France.

  19. Quantization and fractional quantization of currents in periodically driven stochastic systems. I. Average currents

    NASA Astrophysics Data System (ADS)

    Chernyak, Vladimir Y.; Klein, John R.; Sinitsyn, Nikolai A.

    2012-04-01

    This article studies Markovian stochastic motion of a particle on a graph with a finite number of nodes and periodically time-dependent transition rates that satisfy the detailed balance condition at any time. We show that under general conditions, the currents in the system on average become quantized or fractionally quantized for adiabatic driving at sufficiently low temperature. We develop the quantitative theory of this quantization and interpret it in terms of topological invariants. By applying the celebrated Kirchhoff theorem, we derive a general and explicit formula for the average generated current, which serves as an efficient tool for treating current quantization effects.
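
    A minimal numerical illustration of this setting, with every parameter value an assumption: a particle hops on a three-node ring with Arrhenius rates that satisfy detailed balance at every instant, the site energies and barriers are driven slowly and periodically, and the probability current on one edge is integrated over a driving period. For slower and colder driving the pumped charge per period should approach an integer, as described above; the moderate values used here are chosen only so that the example runs quickly.

        import numpy as np
        from scipy.integrate import solve_ivp

        T, period = 0.3, 500.0      # temperature and driving period (illustrative values)

        def site_and_barrier(t):
            phi = 2 * np.pi * t / period
            idx = np.arange(3)
            E = np.cos(phi - 2 * np.pi * idx / 3)                    # energy of site i
            B = 1.0 + 0.7 * np.cos(phi + 0.7 - 2 * np.pi * idx / 3)  # barrier on edge (i, i+1)
            return E, B

        def rhs(t, y):
            p = y[:3]
            E, B = site_and_barrier(t)
            W = np.zeros((3, 3))
            for i in range(3):
                j = (i + 1) % 3
                W[j, i] = np.exp(-(B[i] - E[i]) / T)   # rate i -> j over barrier B[i]
                W[i, j] = np.exp(-(B[i] - E[j]) / T)   # rate j -> i over the same barrier
            W -= np.diag(W.sum(axis=0))                # generator of the master equation
            dq = W[1, 0] * p[0] - W[0, 1] * p[1]       # net probability current on edge 0 -> 1
            return np.concatenate([W @ p, [dq]])

        sol = solve_ivp(rhs, [0.0, 3 * period], [1.0, 0.0, 0.0, 0.0],
                        dense_output=True, rtol=1e-6, atol=1e-9)
        q2, q3 = sol.sol(2 * period)[3], sol.sol(3 * period)[3]
        print("charge pumped across edge 0 -> 1 in the final period:", round(q3 - q2, 3))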

  20. Optimal Quantization Scheme for Data-Efficient Target Tracking via UWSNs Using Quantized Measurements.

    PubMed

    Zhang, Senlin; Chen, Huayan; Liu, Meiqin; Zhang, Qunfei

    2017-11-07

    Target tracking is one of the broad applications of underwater wireless sensor networks (UWSNs). However, as a result of the temporal and spatial variability of acoustic channels, underwater acoustic communications suffer from an extremely limited bandwidth. In order to reduce network congestion, it is important to shorten the length of the data transmitted from local sensors to the fusion center by quantization. Although quantization can reduce bandwidth cost, it also degrades tracking performance because of the information lost in quantization. To solve this problem, this paper proposes an optimal quantization-based target tracking scheme. It improves the tracking performance of low-bit quantized measurements by minimizing the additional covariance caused by quantization. The simulation demonstrates that our scheme performs much better than the conventional uniform quantization-based target tracking scheme and that increasing the data length affects our scheme only slightly: its tracking performance improves by only 4.4% when going from 2-bit to 3-bit quantization, which means our scheme depends only weakly on the number of data bits. Moreover, our scheme also depends only weakly on the number of participating sensors and can work well in sparse sensor networks. In a 6 × 6 × 6 sensor network, compared with a 4 × 4 × 4 sensor network, the number of participating sensors increases by 334.92%, while the tracking accuracy using 1-bit quantized measurements improves by only 50.77%. Overall, our optimal quantization-based target tracking scheme achieves data efficiency, which fits the requirements of low-bandwidth UWSNs.
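
    As a toy illustration of the bandwidth/accuracy trade-off (and not of the optimal scheme itself), the sketch below lets each sensor quantize its noisy scalar measurement uniformly to b bits and lets the fusion center average the dequantized values; the measurement range, noise level, and sensor count are all assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        x_true, noise_std, lo, hi = 3.2, 0.5, -10.0, 10.0   # target position, noise, sensor range

        def quantize(z, bits):
            levels = 2 ** bits
            step = (hi - lo) / levels
            idx = np.clip(np.floor((z - lo) / step), 0, levels - 1)
            return lo + (idx + 0.5) * step               # cell midpoint sent to the fusion center

        for bits in (1, 2, 3, 8):
            z = x_true + noise_std * rng.standard_normal((10000, 25))   # 25 sensors, 10000 trials
            est = quantize(z, bits).mean(axis=1)                        # naive fusion by averaging
            print(bits, "bits -> RMSE", np.sqrt(np.mean((est - x_true) ** 2)).round(3))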

  1. Trace identities and their semiclassical implications

    NASA Astrophysics Data System (ADS)

    Smilansky, Uzy

    2000-03-01

    The compatibility of the semiclassical quantization of area-preserving maps with some exact identities which follow from the unitarity of the quantum evolution operator is discussed. The quantum identities involve relations between traces of powers of the evolution operator. For classically integrable maps, the semiclassical approximation is shown to be compatible with the trace identities. This is done by the identification of stationary phase manifolds which give the main contributions to the result. The compatibility of the semiclassical quantization with the trace identities demonstrates the crucial importance of non-diagonal contributions. The same technique is not applicable for chaotic maps, and the compatibility of the semiclassical theory in this case remains unsettled. However, the trace identities are applied to maps which appear naturally in the theory of quantum graphs, revealing some features of the periodic orbit theory for these paradigms of quantum chaos.

  2. Two-path plasmonic interferometer with integrated detector

    DOEpatents

    Dyer, Gregory Conrad; Shaner, Eric A.; Aizin, Gregory

    2016-03-29

    An electrically tunable terahertz two-path plasmonic interferometer with an integrated detection element can down convert a terahertz field to a rectified DC signal. The integrated detector utilizes a resonant plasmonic homodyne mixing mechanism that measures the component of the plasma waves in-phase with an excitation field that functions as the local oscillator in the mixer. The plasmonic interferometer comprises two independently tuned electrical paths. The plasmonic interferometer enables a spectrometer-on-a-chip where the tuning of electrical path length plays an analogous role to that of physical path length in macroscopic Fourier transform interferometers.

  3. Bit Grooming: Statistically accurate precision-preserving quantization with compression, evaluated in the netCDF operators (NCO, v4.4.8+)

    DOE PAGES

    Zender, Charles S.

    2016-09-19

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25–80% and 5–65%, respectively, for single-precision values (the most common case for climate data) quantized to retain 1–5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1–2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
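
    The shave/set alternation at the heart of Bit Grooming is easy to sketch for float32 data. The mapping from decimal digits to retained mantissa bits below is a plausible assumption for illustration and is not necessarily the exact rule used in the NCO implementation.

        import numpy as np

        def bit_groom(values, keep_digits):
            """Alternately shave (zero) and set (one) the trailing mantissa bits of
            consecutive float32 values so that roughly keep_digits significant
            decimal digits survive (illustrative sketch)."""
            keep_bits = int(np.ceil(keep_digits * np.log2(10)))   # mantissa bits to keep
            drop_bits = 23 - keep_bits                            # float32 has 23 explicit mantissa bits
            if drop_bits <= 0:
                return np.asarray(values, dtype=np.float32)
            ints = np.ascontiguousarray(values, dtype=np.float32).view(np.uint32).copy()
            flat = ints.ravel()
            shave_mask = np.uint32((0xFFFFFFFF << drop_bits) & 0xFFFFFFFF)
            set_mask = np.uint32((1 << drop_bits) - 1)
            flat[0::2] &= shave_mask     # shave: round the magnitude toward zero
            flat[1::2] |= set_mask       # set: push the magnitude away from zero
            return ints.view(np.float32)

        x = np.array([1.2345678, 2.3456789, 3.4567891], dtype=np.float32)
        print(bit_groom(x, keep_digits=3))   # the bit-groomed values then compress well with DEFLATE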

  4. The path integral on the pseudosphere

    NASA Astrophysics Data System (ADS)

    Grosche, C.; Steiner, F.

    1988-02-01

    A rigorous path integral treatment for the d-dimensional pseudosphere Λd-1, a Riemannian manifold of constant negative curvature, is presented. The path integral formulation is based on a canonical approach using Weyl-ordering and the Hamiltonian path integral defined on midpoints. The time-dependent and energy-dependent Feynman kernels take different forms in the even- and odd-dimensional cases. The special case of the three-dimensional pseudosphere, which is analytically equivalent to the Poincaré upper half plane, the Poincaré disc, and the hyperbolic strip, is discussed in detail including the energy spectrum and the normalised wave-functions.

  5. Perceptual Optimization of DCT Color Quantization Matrices

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Statler, Irving C. (Technical Monitor)

    1994-01-01

    Many image compression schemes employ a block Discrete Cosine Transform (DCT) and uniform quantization. Acceptable rate/distortion performance depends upon proper design of the quantization matrix. In previous work, we showed how to use a model of the visibility of DCT basis functions to design quantization matrices for arbitrary display resolutions and color spaces. Subsequently, we showed how to optimize greyscale quantization matrices for individual images, for optimal rate/perceptual distortion performance. Here we describe extensions of this optimization algorithm to color images.

  6. Formulation of D-brane Dynamics

    NASA Astrophysics Data System (ADS)

    Evans, Thomas

    2012-03-01

    It is the purpose of this paper (within the context of STS rules and guidelines for a "research report") to formulate a statistical-mechanical form of D-brane dynamics. We first consider the path integral formulation of quantum mechanics and extend it to a path-integral formulation of D-brane mechanics, summing over all possible path integral sectors of R-R and NS charged states. We then investigate this generalization using a path-integral formulation that sums over all possible path integral sectors of R-R charged states, calculated from the mean probability tree-level amplitude of type I, IIA, and IIB strings, serving as a generalization of all strings described by D-branes. We use this generalization to study black holes in regimes where the initial D-brane description is legitimate, and extend it to examine information loss near regions of nonlocality on a non-ordinary event horizon. In these specific regimes we can formulate a path integral, describing D0-brane mechanics, that traces the dissipation of entropy through the event horizon. This is used to study the information paradox and to propose a resolution between the phenomena and the correct and expected quantum mechanical description, since the path integral over the entropy entering the event horizon effectively encodes the initial state in subtle correlations in the Hawking radiation.

  7. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1993-01-01

    The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.

  8. All-Optical Wavelength-Path Service With Quality Assurance by Multilayer Integration System

    NASA Astrophysics Data System (ADS)

    Yagi, Mikio; Tanaka, Shinya; Satomi, Shuichi; Ryu, Shiro; Asano, Shoichiro

    2006-09-01

    In the future all-optical network controlled by generalized multiprotocol label switching (GMPLS), the wavelength path between end nodes will change dynamically. This inevitably means that the fiber parameters along the wavelength path will also vary. This variation in fiber parameters influences the signal quality of high-speed transmission systems (bit rates over 40 Gb/s). Therefore, at path setup, the effects of the fiber parameters should be adequately compensated. Moreover, the path setup must be completed fast enough to meet the network-application demands. To realize the rapid setup of adequate paths, a multilayer integration system for all-optical wavelength-path quality assurance is proposed. This multilayer integration system is evaluated in a field trial. In the trial, the GMPLS control plane, measurement plane, and data plane coordinated to maintain the quality of a 40-Gb/s wavelength path that would otherwise be degraded by the influence of chromatic dispersion. It is also demonstrated that the multilayer integration system can assure the signal quality in the face of not only chromatic dispersion but also degradation in the optical signal-to-noise ratio by the use of a 2R regeneration system. Our experiments confirm that the proposed multilayer integration system is an essential part of future all-optical networks.

  9. Simultaneous Conduction and Valence Band Quantization in Ultrashallow High-Density Doping Profiles in Semiconductors

    NASA Astrophysics Data System (ADS)

    Mazzola, F.; Wells, J. W.; Pakpour-Tabrizi, A. C.; Jackman, R. B.; Thiagarajan, B.; Hofmann, Ph.; Miwa, J. A.

    2018-01-01

    We demonstrate simultaneous quantization of conduction band (CB) and valence band (VB) states in silicon using ultrashallow, high-density, phosphorus doping profiles (so-called Si:P δ layers). We show that, in addition to the well-known quantization of CB states within the dopant plane, the confinement of VB-derived states between the subsurface P dopant layer and the Si surface gives rise to a simultaneous quantization of VB states in this narrow region. We also show that the VB quantization can be explained using a simple particle-in-a-box model, and that the number and energy separation of the quantized VB states depend on the depth of the P dopant layer beneath the Si surface. Since the quantized CB states do not show a strong dependence on the dopant depth (but rather on the dopant density), it is straightforward to exhibit control over the properties of the quantized CB and VB states independently of each other by choosing the dopant density and depth accordingly, thus offering new possibilities for engineering quantum matter.
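
    The particle-in-a-box picture invoked above reduces to the single formula E_n = n²π²ħ²/(2 m* L²). The sketch below evaluates it for a few assumed well widths and an assumed heavy-hole effective mass, only to illustrate how the number and spacing of confined valence-band levels scale with the depth L of the dopant plane; none of the numbers are taken from the measurement.

        import numpy as np

        hbar = 1.054571817e-34      # J s
        m0 = 9.1093837015e-31       # kg
        m_eff = 0.49 * m0           # assumed heavy-hole effective mass in Si
        eV = 1.602176634e-19

        def box_levels(depth_nm, n_max=5):
            """Energies (in eV) of an infinite square well of width depth_nm."""
            L = depth_nm * 1e-9
            n = np.arange(1, n_max + 1)
            return n ** 2 * np.pi ** 2 * hbar ** 2 / (2 * m_eff * L ** 2) / eV

        for depth in (2.0, 4.0):    # assumed dopant-layer depths below the surface, in nm
            print(f"L = {depth} nm:", np.round(box_levels(depth), 3), "eV")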

  10. Path Integral Computation of Quantum Free Energy Differences Due to Alchemical Transformations Involving Mass and Potential.

    PubMed

    Pérez, Alejandro; von Lilienfeld, O Anatole

    2011-08-09

    Thermodynamic integration, perturbation theory, and λ-dynamics methods were applied to path integral molecular dynamics calculations to investigate free energy differences due to "alchemical" transformations. Several estimators were formulated to compute free energy differences in solvable model systems undergoing changes in mass and/or potential. Linear and nonlinear alchemical interpolations were used for the thermodynamic integration. We find improved convergence for the virial estimators, as well as for the thermodynamic integration over nonlinear interpolation paths. Numerical results for the perturbative treatment of changes in mass and electric field strength in model systems are presented. We used thermodynamic integration in ab initio path integral molecular dynamics to compute the quantum free energy difference of the isotope transformation in the Zundel cation. The performance of different free energy methods is discussed.
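
    For orientation, the thermodynamic integration step can be illustrated with a purely classical, exactly solvable toy: the free energy difference is the integral of ⟨dU/dλ⟩ along a linear interpolation between two harmonic potentials. The constants below are arbitrary, and this is not the path integral estimator used in the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        beta, k0, k1 = 1.0, 1.0, 4.0     # inverse temperature and the two force constants

        def mean_dU_dlam(lam, n=200000):
            k_lam = (1 - lam) * k0 + lam * k1                      # linear interpolation path
            x = rng.normal(0.0, np.sqrt(1.0 / (beta * k_lam)), n)  # exact Boltzmann samples
            return np.mean(0.5 * (k1 - k0) * x ** 2)               # dU/dlambda = (k1 - k0) x^2 / 2

        lams = np.linspace(0.0, 1.0, 11)
        vals = np.array([mean_dU_dlam(l) for l in lams])
        dA = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(lams))  # trapezoid rule over lambda
        exact = 0.5 * np.log(k1 / k0) / beta
        print("TI estimate:", round(float(dA), 4), " analytic:", round(exact, 4))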

  11. Path integrals, supersymmetric quantum mechanics, and the Atiyah-Singer index theorem for twisted Dirac

    NASA Astrophysics Data System (ADS)

    Fine, Dana S.; Sawin, Stephen

    2017-01-01

    Feynman's time-slicing construction approximates the path integral by a product, determined by a partition of a finite time interval, of approximate propagators. This paper formulates general conditions to impose on a short-time approximation to the propagator in a general class of imaginary-time quantum mechanics on a Riemannian manifold which ensure that these products converge. The limit defines a path integral which agrees pointwise with the heat kernel for a generalized Laplacian. The result is a rigorous construction of the propagator for supersymmetric quantum mechanics, with potential, as a path integral. Further, the class of Laplacians includes the square of the twisted Dirac operator, which corresponds to an extension of N = 1/2 supersymmetric quantum mechanics. General results on the rate of convergence of the approximate path integrals suffice in this case to derive the local version of the Atiyah-Singer index theorem.

  12. The path integral on the Poincaré upper half-plane with a magnetic field and for the Morse potential

    NASA Astrophysics Data System (ADS)

    Grosche, Christian

    1988-10-01

    Rigorous path integral treatments on the Poincaré upper half-plane with a magnetic field and for the Morse potential are presented. The calculation starts with the path integral on the Poincaré upper half-plane with a magnetic field. By a Fourier expansion and a non-linear transformation this problem is reformulated in terms of the path integral for the Morse potential. This latter problem can be reduced by an appropriate space-time transformation to the path integral for the harmonic oscillator with generalised angular momentum, a technique which has been developed in recent years. The well-known solution for the last problem enables one to give explicit expressions for the Feynman kernels for the Morse potential and for the Poincaré upper half-plane with magnetic field, respectively. The wavefunctions and the energy spectrum for the bound and scattering states are given, respectively.

  13. The path dependency theory: analytical framework to study institutional integration. The case of France

    PubMed Central

    Trouvé, Hélène; Couturier, Yves; Etheridge, Francis; Saint-Jean, Olivier; Somme, Dominique

    2010-01-01

    Background The literature on integration indicates the need for an enhanced theorization of institutional integration. This article proposes path dependence as an analytical framework to study the systems in which integration takes place. Purpose PRISMA proposes a model for integrating health and social care services for older adults. This model was initially tested in Quebec. The PRISMA France study gave us an opportunity to analyze institutional integration in France. Methods A qualitative approach was used. Analyses were based on semi-structured interviews with actors at all levels of decision-making, observations of advisory board meetings, and administrative documents. Results Our analyses revealed the complexity and fragmentation of institutional integration. The path dependency theory, which analyzes the change capacity of institutions by taking into account their historic structures, allows analysis of this situation. Path dependency on the Bismarckian system and the incomplete reforms of gerontological policies generate the coexistence and juxtaposition of institutional systems. In such a context, no institution has sufficient ability to determine gerontology policy and build institutional integration by itself. Conclusion Using path dependence as an analytical framework helps to understand the reasons why institutional integration is critical to organizational and clinical integration, and the complex construction of institutional integration in France. PMID:20689740

  14. Quarks, Symmetries and Strings - a Symposium in Honor of Bunji Sakita's 60th Birthday

    NASA Astrophysics Data System (ADS)

    Kaku, M.; Jevicki, A.; Kikkawa, K.

    1991-04-01

    The Table of Contents for the full book PDF is as follows: * Preface * Evening Banquet Speech * I. Quarks and Phenomenology * From the SU(6) Model to Uniqueness in the Standard Model * A Model for Higgs Mechanism in the Standard Model * Quark Mass Generation in QCD * Neutrino Masses in the Standard Model * Solar Neutrino Puzzle, Horizontal Symmetry of Electroweak Interactions and Fermion Mass Hierarchies * State of Chiral Symmetry Breaking at High Temperatures * Approximate |ΔI| = 1/2 Rule from a Perspective of Light-Cone Frame Physics * Positronium (and Some Other Systems) in a Strong Magnetic Field * Bosonic Technicolor and the Flavor Problem * II. Strings * Supersymmetry in String Theory * Collective Field Theory and Schwinger-Dyson Equations in Matrix Models * Non-Perturbative String Theory * The Structure of Non-Perturbative Quantum Gravity in One and Two Dimensions * Noncritical Virasoro Algebra of d < 1 Matrix Model and Quantized String Field * Chaos in Matrix Models? * On the Non-Commutative Symmetry of Quantum Gravity in Two Dimensions * Matrix Model Formulation of String Field Theory in One Dimension * Geometry of the N = 2 String Theory * Modular Invariance from Gauge Invariance in the Non-Polynomial String Field Theory * Stringy Symmetry and Off-Shell Ward Identities * q-Virasoro Algebra and q-Strings * Self-Tuning Fields and Resonant Correlations in 2d-Gravity * III. Field Theory Methods * Linear Momentum and Angular Momentum in Quaternionic Quantum Mechanics * Some Comments on Real Clifford Algebras * On the Quantum Group p-adics Connection * Gravitational Instantons Revisited * A Generalized BBGKY Hierarchy from the Classical Path-Integral * A Quantum Generated Symmetry: Group-Level Duality in Conformal and Topological Field Theory * Gauge Symmetries in Extended Objects * Hidden BRST Symmetry and Collective Coordinates * Towards Stochastically Quantizing Topological Actions * IV. Statistical Methods * A Brief Summary of the s-Channel Theory of Superconductivity * Neural Networks and Models for the Brain * Relativistic One-Body Equations for Planar Particles with Arbitrary Spin * Chiral Property of Quarks and Hadron Spectrum in Lattice QCD * Scalar Lattice QCD * Semi-Superconductivity of a Charged Anyon Gas * Two-Fermion Theory of Strongly Correlated Electrons and Charge-Spin Separation * Statistical Mechanics and Error-Correcting Codes * Quantum Statistics

  15. An adaptive vector quantization scheme

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1990-01-01

    Vector quantization is known to be an effective compression scheme to achieve a low bit rate so as to minimize communication channel bandwidth and also to reduce digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because it has a simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.
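
    A generic (non-adaptive) vector quantizer with an L1 distortion measure already shows why each codeword comparison needs only additions and subtractions; the random codebook below is purely illustrative and is not the adaptive codebook of the reported scheme.

        import numpy as np

        rng = np.random.default_rng(2)
        codebook = rng.normal(size=(64, 16))       # 64 codewords of dimension 16 (assumed sizes)

        def vq_encode(vectors):
            # L1 distance from every input vector to every codeword, then nearest index
            d = np.abs(vectors[:, None, :] - codebook[None, :, :]).sum(axis=2)
            return d.argmin(axis=1)

        def vq_decode(indices):
            return codebook[indices]

        data = rng.normal(size=(1000, 16))
        idx = vq_encode(data)                      # each vector is transmitted as a 6-bit index
        print("mean L1 distortion per vector:",
              np.abs(data - vq_decode(idx)).sum(axis=1).mean().round(3))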

  16. Discrimination of growth and water stress in wheat by various vegetation indices through a clear and a turbid atmosphere

    NASA Technical Reports Server (NTRS)

    Jackson, R. D.; Slater, P. M.; Pinter, P. J. (Principal Investigator)

    1982-01-01

    Reflectance data were obtained over a drought-stressed and a well-watered wheat plot with a hand-held radiometer having bands similar to the MSS bands of the LANDSAT satellites. Data for 48 clear days were interpolated to yield reflectance values for each day of the growing season, from planting until harvest. With an atmospheric path radiance model and LANDSAT-2 calibration data, the reflectance data were used to simulate LANDSAT digital counts (not quantized) for the four LANDSAT bands for each day of the growing season, through a clear (approximately 100 km meteorological range) and a turbid (approximately 10 km meteorological range) atmosphere. Several ratios and linear combinations of bands were calculated using the simulated data, then assessed for their relative ability to discriminate vegetative growth and plant stress through the two atmospheres. The results show that water stress was not detected by any of the indices until after growth was retarded, and that the sensitivity of the various indices to vegetation depended on plant growth stage and atmospheric path radiance.
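
    Two standard indices of the kind evaluated above, the near-infrared/red ratio and the normalized difference vegetation index, are simple functions of the band values; the digital counts below are invented solely to show the arithmetic.

        import numpy as np

        red = np.array([22.0, 18.0, 14.0, 12.0])   # red-band digital counts over the season (made up)
        nir = np.array([25.0, 38.0, 55.0, 40.0])   # near-infrared-band digital counts (made up)

        ratio = nir / red                           # simple band ratio
        ndvi = (nir - red) / (nir + red)            # normalized difference vegetation index
        print("ratio:", np.round(ratio, 2))
        print("NDVI: ", np.round(ndvi, 2))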

  17. Mnemonic discrimination relates to perforant path integrity: An ultra-high resolution diffusion tensor imaging study.

    PubMed

    Bennett, Ilana J; Stark, Craig E L

    2016-03-01

    Pattern separation describes the orthogonalization of similar inputs into unique, non-overlapping representations. This computational process is thought to serve memory by reducing interference and to be mediated by the dentate gyrus of the hippocampus. Using ultra-high in-plane resolution diffusion tensor imaging (hrDTI) in older adults, we previously demonstrated that integrity of the perforant path, which provides input to the dentate gyrus from entorhinal cortex, was associated with mnemonic discrimination, a behavioral outcome designed to load on pattern separation. The current hrDTI study assessed the specificity of this perforant path integrity-mnemonic discrimination relationship relative to other cognitive constructs (identified using a factor analysis) and white matter tracts (hippocampal cingulum, fornix, corpus callosum) in 112 healthy adults (20-87 years). Results revealed age-related declines in integrity of the perforant path and other medial temporal lobe (MTL) tracts (hippocampal cingulum, fornix). Controlling for global effects of brain aging, perforant path integrity related only to the factor that captured mnemonic discrimination performance. Comparable integrity-mnemonic discrimination relationships were also observed for the hippocampal cingulum and fornix. Thus, whereas perforant path integrity specifically relates to mnemonic discrimination, mnemonic discrimination may be mediated by a broader MTL network. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Three-Axis Superconducting Gravity Gradiometer

    NASA Technical Reports Server (NTRS)

    Paik, Ho Jung

    1987-01-01

    Gravity gradients can be measured even on accelerating platforms. The three-axis superconducting gravity gradiometer is based on flux quantization and the Meissner effect in superconductors and employs a superconducting quantum interference device as an amplifier. It incorporates several magnetically levitated proof masses. The gradiometer design integrates accelerometers for operation in differential mode. Its principal use is in commercial instruments for measurement of Earth-gravity gradients in geophysical surveying and exploration for oil.

  19. Sensory feedback in a bump attractor model of path integration.

    PubMed

    Poll, Daniel B; Nguyen, Khanh; Kilpatrick, Zachary P

    2016-04-01

    Mammalian spatial navigation systems utilize several different sensory information channels. This information is converted into a neural code that represents the animal's current position in space by engaging place cell, grid cell, and head direction cell networks. In particular, sensory landmark (allothetic) cues can be utilized in concert with an animal's knowledge of its own velocity (idiothetic) cues to generate a more accurate representation of position than path integration provides on its own (Battaglia et al. The Journal of Neuroscience 24(19):4541-4550 (2004)). We develop a computational model that merges path integration with feedback from external sensory cues that provide a reliable representation of spatial position along an annular track. Starting with a continuous bump attractor model, we explore the impact of synaptic spatial asymmetry and heterogeneity, which disrupt the position code of the path integration process. We use asymptotic analysis to reduce the bump attractor model to a single scalar equation whose potential represents the impact of asymmetry and heterogeneity. Such imperfections cause errors to build up when the network performs path integration, but these errors can be corrected by an external control signal representing the effects of sensory cues. We demonstrate that there is an optimal strength and decay rate of the control signal when cues appear either periodically or randomly. A similar analysis is performed when errors in path integration arise from dynamic noise fluctuations. Again, there is an optimal strength and decay of discrete control that minimizes the path integration error.
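
    The reduced, single-variable picture described above can be sketched directly: a position estimate on a ring is advanced by the velocity signal, corrupted by an assumed drift and noise that stand in for network imperfections, and periodically pulled toward the true position by a sensory cue. All constants are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        dt, steps, cue_every = 0.01, 20000, 200               # cue arrives every 200 steps
        v, drift, noise, cue_gain = 0.5, 0.03, 0.05, 0.5

        x_true, x_est, err = 0.0, 0.0, []
        for i in range(steps):
            x_true = (x_true + v * dt) % (2 * np.pi)
            dx = (v + drift) * dt + noise * np.sqrt(dt) * rng.standard_normal()
            if i % cue_every == 0:                            # sensory landmark cue
                dx += cue_gain * np.sin(x_true - x_est)       # corrective kick toward the cue
            x_est = (x_est + dx) % (2 * np.pi)
            err.append(np.angle(np.exp(1j * (x_est - x_true))))   # wrapped position error
        print("RMS path-integration error with cues:",
              round(float(np.sqrt(np.mean(np.square(err)))), 3))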

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serwer, Philip, E-mail: serwer@uthscsa.edu; Wright, Elena T.; Liu, Zheng

    DNA packaging of phages phi29, T3 and T7 sometimes produces incompletely packaged DNA with quantized lengths, based on gel electrophoretic band formation. We discover here a packaging ATPase-free, in vitro model for packaged DNA length quantization. We use directed evolution to isolate a five-site T3 point mutant that hyper-produces tail-free capsids with mature DNA (heads). Three tail gene mutations, but no head gene mutations, are present. A variable-length DNA segment leaks from some mutant heads, based on DNase I-protection assay and electron microscopy. The protected DNA segment has quantized lengths, based on restriction endonuclease analysis: six sharp bands of DNA missing 3.7–12.3% of the last end packaged. Native gel electrophoresis confirms quantized DNA expulsion and, after removal of external DNA, provides evidence that capsid radius is the quantization-ruler. Capsid-based DNA length quantization possibly evolved via selection for stalling that provides time for feedback control during DNA packaging and injection. Highlights: • We implement directed evolution- and DNA-sequencing-based phage assembly genetics. • We purify stable, mutant phage heads with a partially leaked mature DNA molecule. • Native gels and DNase-protection show leaked DNA segments to have quantized lengths. • Native gels after DNase I-removal of leaked DNA reveal the capsids to vary in radius. • Thus, we hypothesize leaked DNA quantization via variably quantized capsid radius.

  1. Dimensional quantization effects in the thermodynamics of conductive filaments

    NASA Astrophysics Data System (ADS)

    Niraula, D.; Grice, C. R.; Karpov, V. G.

    2018-06-01

    We consider the physical effects of dimensional quantization in conductive filaments that underlie operations of some modern electronic devices. We show that, as a result of quantization, a sufficiently thin filament acquires a positive charge. Several applications of this finding include the host material polarization, the stability of filament constrictions, the equilibrium filament radius, polarity in device switching, and quantization of conductance.

  2. Nearly associative deformation quantization

    NASA Astrophysics Data System (ADS)

    Vassilevich, Dmitri; Oliveira, Fernando Martins Costa

    2018-04-01

    We study several classes of non-associative algebras as possible candidates for deformation quantization in the direction of a Poisson bracket that does not satisfy Jacobi identities. We show that in fact alternative deformation quantization algebras require the Jacobi identities on the Poisson bracket and, under very general assumptions, are associative. At the same time, flexible deformation quantization algebras exist for any Poisson bracket.

  3. Dimensional quantization effects in the thermodynamics of conductive filaments.

    PubMed

    Niraula, D; Grice, C R; Karpov, V G

    2018-06-29

    We consider the physical effects of dimensional quantization in conductive filaments that underlie operations of some modern electronic devices. We show that, as a result of quantization, a sufficiently thin filament acquires a positive charge. Several applications of this finding include the host material polarization, the stability of filament constrictions, the equilibrium filament radius, polarity in device switching, and quantization of conductance.

  4. A Note on Feynman Path Integral for Electromagnetic External Fields

    NASA Astrophysics Data System (ADS)

    Botelho, Luiz C. L.

    2017-08-01

    We propose a Fresnel stochastic white noise framework to analyze the nature of the Feynman paths entering the Feynman path integral expression for the Feynman propagator of a particle moving quantum mechanically under an external time-independent electromagnetic potential.

  5. An Anatomically Constrained Model for Path Integration in the Bee Brain.

    PubMed

    Stone, Thomas; Webb, Barbara; Adden, Andrea; Weddig, Nicolai Ben; Honkanen, Anna; Templin, Rachel; Wcislo, William; Scimeca, Luca; Warrant, Eric; Heinze, Stanley

    2017-10-23

    Path integration is a widespread navigational strategy in which directional changes and distance covered are continuously integrated on an outward journey, enabling a straight-line return to home. Bees use vision for this task (a celestial-cue-based visual compass and an optic-flow-based visual odometer), but the underlying neural integration mechanisms are unknown. Using intracellular electrophysiology, we show that polarized-light-based compass neurons and optic-flow-based speed-encoding neurons converge in the central complex of the bee brain, and through block-face electron microscopy, we identify potential integrator cells. Based on plausible output targets for these cells, we propose a complete circuit for path integration and steering in the central complex, with anatomically identified neurons suggested for each processing step. The resulting model circuit is thus fully constrained biologically and provides a functional interpretation for many previously unexplained architectural features of the central complex. Moreover, we show that the receptive fields of the newly discovered speed neurons can support path integration for the holonomic motion (i.e., a ground velocity that is not precisely aligned with body orientation) typical of bee flight, a feature not captured in any previously proposed model of path integration. In a broader context, the model circuit presented provides a general mechanism for producing steering signals by comparing current and desired headings, suggesting a more basic function for central complex connectivity, from which path integration may have evolved. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Random multispace quantization as an analytic mechanism for BioHashing of biometric and random identity inputs.

    PubMed

    Teoh, Andrew B J; Goh, Alwyn; Ngo, David C L

    2006-12-01

    Biometric analysis for identity verification is becoming a widespread reality. Such implementations necessitate large-scale capture and storage of biometric data, which raises serious issues in terms of data privacy and (if such data is compromised) identity theft. These problems stem from the essential permanence of biometric data, which (unlike secret passwords or physical tokens) cannot be refreshed or reissued if compromised. Our previously presented biometric-hash framework prescribes the integration of external (password or token-derived) randomness with user-specific biometrics, resulting in bitstring outputs with security characteristics (i.e., noninvertibility) comparable to cryptographic ciphers or hashes. The resultant BioHashes are hence cancellable, i.e., straightforwardly revoked and reissued (via refreshed password or reissued token) if compromised. BioHashing furthermore enhances recognition effectiveness, which is explained in this paper as arising from the Random Multispace Quantization (RMQ) of biometric and external random inputs.

  7. Quantum spaces, central extensions of Lie groups and related quantum field theories

    NASA Astrophysics Data System (ADS)

    Poulain, Timothé; Wallet, Jean-Christophe

    2018-02-01

    Quantum spaces with su(2) noncommutativity can be modelled by using a family of SO(3)-equivariant differential *-representations. The quantization maps are determined from the combination of the Wigner theorem for SU(2) with the polar decomposition of the quantized plane waves. A tracial star-product, equivalent to the Kontsevich product for the Poisson manifold dual to su(2), is obtained from a subfamily of differential *-representations. Noncommutative (scalar) field theories free from UV/IR mixing and whose commutative limit coincides with the usual ϕ4 theory on ℝ3 are presented. A generalization of the construction to semi-simple, possibly non-simply connected Lie groups based on their central extensions by suitable abelian Lie groups is discussed. Based on a talk presented by Poulain T at the XXVth International Conference on Integrable Systems and Quantum symmetries (ISQS-25), Prague, June 6-10 2017.

  8. On the theory of quantum measurement

    NASA Technical Reports Server (NTRS)

    Haus, Hermann A.; Kaertner, Franz X.

    1994-01-01

    Many so-called paradoxes of quantum mechanics are clarified when the measurement equipment is treated as a quantized system. Every measurement involves nonlinear processes. Self-consistent formulations of nonlinear quantum optics are relatively simple. Hence optical measurements, such as the quantum nondemolition (QND) measurement of photon number, are particularly well suited for such a treatment. It shows that the so-called 'collapse of the wave function' is not needed for the interpretation of the measurement process. Coherence of the density matrix of the signal is progressively reduced with increasing accuracy of the photon number determination. If the QND measurement is incorporated into the double slit experiment, the contrast ratio of the fringes is found to decrease with increasing information on the photon number in one of the two paths.

  9. Noncommutative reading of the complex plane through Delone sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, S. Twareque; Balkova, Lubka; Gazeau, J. P.

    2009-04-15

    The Berezin-Klauder-Toeplitz ('anti-Wick') quantization or 'noncommutative reading' of the complex plane, viewed as the phase space of a particle moving on the line, is derived from the resolution of the unity provided by the standard (or Gaussian) coherent states. The construction of these states and their attractive properties are essentially based on the energy spectrum of the harmonic oscillator, that is, on the natural numbers. This work is an attempt to follow the same path by considering sequences of non-negative numbers which are not 'too far' from the natural numbers. In particular, we examine the consequences of such perturbations on the noncommutative reading of the complex plane in terms of its probabilistic, functional, and localization aspects.

  10. Spin coherent-state path integrals and the instanton calculus

    NASA Astrophysics Data System (ADS)

    Garg, Anupam; Kochetov, Evgueny; Park, Kee-Su; Stone, Michael

    2003-01-01

    We use an instanton approximation to the continuous-time spin coherent-state path integral to obtain the tunnel splitting of classically degenerate ground states. We show that provided the fluctuation determinant is carefully evaluated, the path integral expression is accurate to order O(1/j). We apply the method to the LMG model and to the molecular magnet Fe8 in a transverse field.

  11. Topological quantization in units of the fine structure constant.

    PubMed

    Maciejko, Joseph; Qi, Xiao-Liang; Drew, H Dennis; Zhang, Shou-Cheng

    2010-10-15

    Fundamental topological phenomena in condensed matter physics are associated with a quantized electromagnetic response in units of fundamental constants. Recently, it has been predicted theoretically that the time-reversal invariant topological insulator in three dimensions exhibits a topological magnetoelectric effect quantized in units of the fine structure constant α=e²/ℏc. In this Letter, we propose an optical experiment to directly measure this topological quantization phenomenon, independent of material details. Our proposal also provides a way to measure the half-quantized Hall conductances on the two surfaces of the topological insulator independently of each other.

  12. On the Dequantization of Fedosov's Deformation Quantization

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    2003-08-01

    To each natural deformation quantization on a Poisson manifold M we associate a Poisson morphism from the formal neighborhood of the zero section of the cotangent bundle to M to the formal neighborhood of the diagonal of the product M x M~, where M~ is a copy of M with the opposite Poisson structure. We call it dequantization of the natural deformation quantization. Then we "dequantize" Fedosov's quantization.

  13. Variational nature, integration, and properties of Newton reaction path

    NASA Astrophysics Data System (ADS)

    Bofill, Josep Maria; Quapp, Wolfgang

    2011-02-01

    The distinguished coordinate path and the reduced gradient following path or its equivalent formulation, the Newton trajectory, are analyzed and unified using the theory of calculus of variations. It is shown that their minimum character is related to the fact that the curve is located in a valley region. In this case, we say that the Newton trajectory is a reaction path with the category of minimum energy path. In addition to these findings a Runge-Kutta-Fehlberg algorithm to integrate these curves is also proposed.

  14. Variational nature, integration, and properties of Newton reaction path.

    PubMed

    Bofill, Josep Maria; Quapp, Wolfgang

    2011-02-21

    The distinguished coordinate path and the reduced gradient following path or its equivalent formulation, the Newton trajectory, are analyzed and unified using the theory of calculus of variations. It is shown that their minimum character is related to the fact that the curve is located in a valley region. In this case, we say that the Newton trajectory is a reaction path with the category of minimum energy path. In addition to these findings a Runge-Kutta-Fehlberg algorithm to integrate these curves is also proposed.
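
    On a toy two-dimensional surface a Newton trajectory can be followed by integrating the Branin differential equation dx/dt = ±Adj(Hessian)·gradient, whose solution curves keep the gradient direction fixed. The sketch below uses SciPy's embedded Runge-Kutta 5(4) pair as a stand-in for the Runge-Kutta-Fehlberg integrator proposed above; the surface, the starting point, and the sign choice are assumptions.

        import numpy as np
        from scipy.integrate import solve_ivp

        # toy surface E(x, y) = (x^2 - 1)^2 + 2 y^2 + 0.5 x y
        def grad(p):
            x, y = p
            return np.array([4 * x * (x ** 2 - 1) + 0.5 * y, 4 * y + 0.5 * x])

        def hess(p):
            x, y = p
            return np.array([[12 * x ** 2 - 4, 0.5], [0.5, 4.0]])

        def branin_rhs(t, p):
            h, g = hess(p), grad(p)
            adj = np.array([[h[1, 1], -h[0, 1]], [-h[1, 0], h[0, 0]]])  # adjugate of the 2x2 Hessian
            return -(adj @ g)       # minus sign chosen so the flow runs toward the nearby minimum

        p0 = np.array([0.6, 0.3])
        sol = solve_ivp(branin_rhs, [0.0, 3.0], p0, method='RK45', max_step=0.01)
        pts = sol.y.T
        gs = np.array([grad(p) for p in pts])
        norms = np.linalg.norm(gs, axis=1)
        keep = norms > 1e-6
        dirs = gs[keep] / norms[keep, None]
        g0 = grad(p0) / np.linalg.norm(grad(p0))
        print("endpoint (near the minimum of the toy surface):", np.round(pts[-1], 3))
        print("max deviation of the gradient direction along the curve:",
              float(np.max(1.0 - dirs @ g0)))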

  15. Path integration of head direction: updating a packet of neural activity at the correct speed using neuronal time constants.

    PubMed

    Walters, D M; Stringer, S M

    2010-07-01

    A key question in understanding the neural basis of path integration is how individual, spatially responsive, neurons may self-organize into networks that can, through learning, integrate velocity signals to update a continuous representation of location within an environment. It is of vital importance that this internal representation of position is updated at the correct speed, and in real time, to accurately reflect the motion of the animal. In this article, we present a biologically plausible model of velocity path integration of head direction that can solve this problem using neuronal time constants to effect natural time delays, over which associations can be learned through associative Hebbian learning rules. The model comprises a linked continuous attractor network and competitive network. In simulation, we show that the same model is able to learn two different speeds of rotation when implemented with two different values for the time constant, and without the need to alter any other model parameters. The proposed model could be extended to path integration of place in the environment, and path integration of spatial view.

  16. From classical to quantum and back: Hamiltonian adaptive resolution path integral, ring polymer, and centroid molecular dynamics

    NASA Astrophysics Data System (ADS)

    Kreis, Karsten; Kremer, Kurt; Potestio, Raffaello; Tuckerman, Mark E.

    2017-12-01

    Path integral-based methodologies play a crucial role for the investigation of nuclear quantum effects by means of computer simulations. However, these techniques are significantly more demanding than corresponding classical simulations. To reduce this numerical effort, we recently proposed a method, based on a rigorous Hamiltonian formulation, which restricts the quantum modeling to a small but relevant spatial region within a larger reservoir where particles are treated classically. In this work, we extend this idea and show how it can be implemented along with state-of-the-art path integral simulation techniques, including path-integral molecular dynamics, which allows for the calculation of quantum statistical properties, and ring-polymer and centroid molecular dynamics, which allow the calculation of approximate quantum dynamical properties. To this end, we derive a new integration algorithm that also makes use of multiple time-stepping. The scheme is validated via adaptive classical-path-integral simulations of liquid water. Potential applications of the proposed multiresolution method are diverse and include efficient quantum simulations of interfaces as well as complex biomolecular systems such as membranes and proteins.

  17. Enzymatic Kinetic Isotope Effects from Path-Integral Free Energy Perturbation Theory.

    PubMed

    Gao, J

    2016-01-01

    Path-integral free energy perturbation (PI-FEP) theory is presented to directly determine the ratio of quantum mechanical partition functions of different isotopologs in a single simulation. Furthermore, a double averaging strategy is used to carry out the practical simulation, separating the quantum mechanical path integral exactly into two separate calculations, one corresponding to a classical molecular dynamics simulation of the centroid coordinates, and another involving free-particle path-integral sampling over the classical, centroid positions. An integrated centroid path-integral free energy perturbation and umbrella sampling (PI-FEP/UM, or simply, PI-FEP) method along with bisection sampling was summarized, which provides an accurate and fast convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. The PI-FEP method is illustrated by a number of applications, to highlight the computational precision and accuracy, the rule of geometrical mean in kinetic isotope effects, enhanced nuclear quantum effects in enzyme catalysis, and protein dynamics on temperature dependence of kinetic isotope effects. © 2016 Elsevier Inc. All rights reserved.
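
    The free energy perturbation identity behind the method, exp(-βΔA) = ⟨exp(-βΔU)⟩₀, can be illustrated with a purely classical, exactly solvable toy; the sketch is a stand-in for intuition only and is not the PI-FEP estimator for isotopologs.

        import numpy as np

        rng = np.random.default_rng(6)
        beta, k0, k1 = 1.0, 1.0, 1.5        # inverse temperature and the two force constants

        x = rng.normal(0.0, np.sqrt(1.0 / (beta * k0)), 500000)   # exact samples of state 0
        dU = 0.5 * (k1 - k0) * x ** 2                             # perturbation energy U1 - U0
        dA_fep = -np.log(np.mean(np.exp(-beta * dU))) / beta      # exponential (Zwanzig) average
        dA_exact = 0.5 * np.log(k1 / k0) / beta
        print("FEP estimate:", round(float(dA_fep), 4), " analytic:", round(dA_exact, 4))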

  18. Architectural constraints are a major factor reducing path integration accuracy in the rat head direction cell system.

    PubMed

    Page, Hector J I; Walters, Daniel; Stringer, Simon M

    2015-01-01

    Head direction cells fire to signal the direction in which an animal's head is pointing. They are able to track head direction using only internally-derived information (path integration). In this simulation study, we investigate the factors that affect path integration accuracy. Specifically, two major limiting factors are identified: rise time, the time after stimulation it takes for a neuron to start firing, and the presence of symmetric non-offset within-layer recurrent collateral connectivity. On the basis of the latter, the important prediction is made that head direction cell regions directly involved in path integration will not contain this type of connectivity, giving a theoretical explanation for architectural observations. Increased neuronal rise time is found to slow path integration, and the slowing effect for a given rise time is found to be more severe in the context of short conduction delays. Further work is suggested on the basis of our findings, which represent a valuable contribution to the understanding of the head direction cell system.

  19. Feynman path integral application on deriving black-scholes diffusion equation for european option pricing

    NASA Astrophysics Data System (ADS)

    Utama, Briandhika; Purqon, Acep

    2016-08-01

    The path integral is a method for transforming a function from its initial condition to its final condition by multiplying the initial condition by a transition probability function known as the propagator. In its early development, several studies focused on applying this method only to problems in quantum mechanics. Nevertheless, the path integral can also be applied to other subjects with some modifications of the propagator function. In this study, we investigate the application of the path integral method to financial derivatives, namely stock options. The Black-Scholes model (Nobel 1997) was a founding anchor of option pricing studies. Although this model does not predict option prices perfectly, especially because of its sensitivity to major changes in the market, the Black-Scholes model is still a legitimate equation for pricing an option. The derivation of the Black-Scholes equation is difficult because it is a stochastic partial differential equation. The Black-Scholes equation shares a principle with the path integral: in Black-Scholes the share's initial price is transformed into its final price. The Black-Scholes propagator function is then derived by introducing a modified Lagrangian based on the Black-Scholes equation. Furthermore, we study the correlation between the analytical path integral solution and the Monte Carlo numerical solution to find the similarity between these two methods.
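
    The comparison mentioned at the end of the abstract is easy to reproduce in miniature: price a European call with the closed-form Black-Scholes formula and with a Monte Carlo average of discounted payoffs over terminal prices drawn from the geometric-Brownian-motion propagator. The parameter values below are arbitrary illustrations.

        import numpy as np
        from math import log, sqrt, exp
        from scipy.stats import norm

        S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0   # spot, strike, rate, volatility, maturity

        def bs_call(S0, K, r, sigma, T):
            d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
            d2 = d1 - sigma * sqrt(T)
            return S0 * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

        rng = np.random.default_rng(4)
        z = rng.standard_normal(1_000_000)
        ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)  # sample the propagator
        mc = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()
        print("analytic:", round(bs_call(S0, K, r, sigma, T), 4), " Monte Carlo:", round(float(mc), 4))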

  20. Vortex and half-vortex dynamics in a nonlinear spinor quantum fluid

    PubMed Central

    Dominici, Lorenzo; Dagvadorj, Galbadrakh; Fellows, Jonathan M.; Ballarini, Dario; De Giorgi, Milena; Marchetti, Francesca M.; Piccirillo, Bruno; Marrucci, Lorenzo; Bramati, Alberto; Gigli, Giuseppe; Szymańska, Marzena H.; Sanvitto, Daniele

    2015-01-01

    Vortices are archetypal objects that recur in the universe across the scale of complexity, from subatomic particles to galaxies and black holes. Their appearance is connected with spontaneous symmetry breaking and phase transitions. In Bose-Einstein condensates and superfluids, vortices are both point-like and quantized quasiparticles. We use a two-dimensional (2D) fluid of polaritons, bosonic particles constituted by hybrid photonic and electronic oscillations, to study quantum vortex dynamics. Polaritons benefit from easiness of wave function phase detection, a spinor nature sustaining half-integer vorticity, strong nonlinearity, and tuning of the background disorder. We can directly generate by resonant pulsed excitations a polariton condensate carrying either a full or half-integer vortex as initial condition and follow their coherent evolution using ultrafast imaging on the picosecond scale. The observations highlight a rich phenomenology, such as the spiraling of the half-vortex and the joint path of the twin charges of a full vortex, until the moment of their splitting. Furthermore, we observe the ordered branching into newly generated secondary couples, associated with the breaking of radial and azimuthal symmetries. This allows us to devise the interplay of nonlinearity and sample disorder in shaping the fluid and driving the vortex dynamics. In addition, our observations suggest that phase singularities may be seen as fundamental particles whose quantized events span from pair creation and recombination to 2D+t topological vortex strings. PMID:26665174

  1. Vortex and half-vortex dynamics in a nonlinear spinor quantum fluid.

    PubMed

    Dominici, Lorenzo; Dagvadorj, Galbadrakh; Fellows, Jonathan M; Ballarini, Dario; De Giorgi, Milena; Marchetti, Francesca M; Piccirillo, Bruno; Marrucci, Lorenzo; Bramati, Alberto; Gigli, Giuseppe; Szymańska, Marzena H; Sanvitto, Daniele

    2015-12-01

    Vortices are archetypal objects that recur in the universe across the scale of complexity, from subatomic particles to galaxies and black holes. Their appearance is connected with spontaneous symmetry breaking and phase transitions. In Bose-Einstein condensates and superfluids, vortices are both point-like and quantized quasiparticles. We use a two-dimensional (2D) fluid of polaritons, bosonic particles constituted by hybrid photonic and electronic oscillations, to study quantum vortex dynamics. Polaritons benefit from easiness of wave function phase detection, a spinor nature sustaining half-integer vorticity, strong nonlinearity, and tuning of the background disorder. We can directly generate by resonant pulsed excitations a polariton condensate carrying either a full or half-integer vortex as initial condition and follow their coherent evolution using ultrafast imaging on the picosecond scale. The observations highlight a rich phenomenology, such as the spiraling of the half-vortex and the joint path of the twin charges of a full vortex, until the moment of their splitting. Furthermore, we observe the ordered branching into newly generated secondary couples, associated with the breaking of radial and azimuthal symmetries. This allows us to devise the interplay of nonlinearity and sample disorder in shaping the fluid and driving the vortex dynamics. In addition, our observations suggest that phase singularities may be seen as fundamental particles whose quantized events span from pair creation and recombination to 2D+t topological vortex strings.

  2. Integrability of conformal fishnet theory

    NASA Astrophysics Data System (ADS)

    Gromov, Nikolay; Kazakov, Vladimir; Korchemsky, Gregory; Negro, Stefano; Sizov, Grigory

    2018-01-01

    We study integrability of fishnet-type Feynman graphs arising in planar four-dimensional bi-scalar chiral theory recently proposed in arXiv:1512.06704 as a special double scaling limit of gamma-deformed N = 4 SYM theory. We show that the transfer matrix "building" the fishnet graphs emerges from the R-matrix of non-compact conformal SU(2 , 2) Heisenberg spin chain with spins belonging to principal series representations of the four-dimensional conformal group. We demonstrate explicitly a relationship between this integrable spin chain and the Quantum Spectral Curve (QSC) of N = 4 SYM. Using QSC and spin chain methods, we construct Baxter equation for Q-functions of the conformal spin chain needed for computation of the anomalous dimensions of operators of the type tr( ϕ 1 J ) where ϕ 1 is one of the two scalars of the theory. For J = 3 we derive from QSC a quantization condition that fixes the relevant solution of Baxter equation. The scaling dimensions of the operators only receive contributions from wheel-like graphs. We develop integrability techniques to compute the divergent part of these graphs and use it to present the weak coupling expansion of dimensions to very high orders. Then we apply our exact equations to calculate the anomalous dimensions with J = 3 to practically unlimited precision at any coupling. These equations also describe an infinite tower of local conformal operators all carrying the same charge J = 3. The method should be applicable for any J and, in principle, to any local operators of bi-scalar theory. We show that at strong coupling the scaling dimensions can be derived from semiclassical quantization of finite gap solutions describing an integrable system of noncompact SU(2 , 2) spins. This bears similarities with the classical strings arising in the strongly coupled limit of N = 4 SYM.

  3. Quantum Computing and Second Quantization

    DOE PAGES

    Makaruk, Hanna Ewa

    2017-02-10

    Quantum computers are by their nature many particle quantum systems. Both the many-particle arrangement and being quantum are necessary for the existence of the entangled states, which are responsible for the parallelism of the quantum computers. Second quantization is a very important approximate method of describing such systems. This lecture will present the general idea of second quantization and briefly discuss some of the most important formulations of second quantization.

  4. Quantum Computing and Second Quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makaruk, Hanna Ewa

    Quantum computers are by their nature many particle quantum systems. Both the many-particle arrangement and being quantum are necessary for the existence of the entangled states, which are responsible for the parallelism of the quantum computers. Second quantization is a very important approximate method of describing such systems. This lecture will present the general idea of second quantization and briefly discuss some of the most important formulations of second quantization.

  5. BSIFT: toward data-independent codebook for large scale image search.

    PubMed

    Zhou, Wengang; Li, Houqiang; Hong, Richang; Lu, Yijuan; Tian, Qi

    2015-03-01

    The Bag-of-Words (BoWs) model based on the Scale Invariant Feature Transform (SIFT) has been widely used in large-scale image retrieval applications. Feature quantization by vector quantization plays a crucial role in the BoW model, which generates visual words from the high-dimensional SIFT features so as to adapt to the inverted file structure for scalable retrieval. Traditional feature quantization approaches suffer from several issues, such as the necessity of visual codebook training, limited reliability, and update inefficiency. To avoid the above problems, in this paper, a novel feature quantization scheme is proposed to efficiently quantize each SIFT descriptor to a descriptive and discriminative bit-vector, which is called binary SIFT (BSIFT). Our quantizer is independent of image collections. In addition, by taking the first 32 bits of the BSIFT as a code word, the generated BSIFT naturally adapts to the classic inverted file structure for image indexing. Moreover, the quantization error is reduced by feature filtering, code word expansion, and query sensitive mask shielding. Without any explicit codebook for quantization, our approach can be readily applied to image search in some resource-limited scenarios. We evaluate the proposed algorithm for large scale image search on two public image data sets. Experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
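
    The binary-signature idea can be sketched generically: threshold each SIFT descriptor against one of its own statistics to obtain a bit-vector and use its first 32 bits as the inverted-file code word. The per-descriptor median threshold below is an assumption; the published BSIFT quantizer may differ in detail.

        import numpy as np

        def to_binary_signature(descriptors):
            thresholds = np.median(descriptors, axis=1, keepdims=True)
            return (descriptors > thresholds).astype(np.uint8)     # one 128-bit signature per descriptor

        def code_word(bits):
            # pack the first 32 bits of each signature into one unsigned integer index key
            return np.packbits(bits[:, :32], axis=1).view(np.uint32).ravel()

        rng = np.random.default_rng(5)
        sift = rng.integers(0, 256, size=(4, 128)).astype(np.float32)   # stand-in descriptors
        bsift = to_binary_signature(sift)
        print("code words:", code_word(bsift))
        print("Hamming distance between descriptors 0 and 1:",
              int(np.count_nonzero(bsift[0] != bsift[1])))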

  6. Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.

    PubMed

    Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann

    2017-01-01

    Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.

  7. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix, which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix is built using visual masking by luminance and contrast techniques and an error-pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
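
    The core operation, quantizing each DCT coefficient by the corresponding entry of a quantization matrix, can be sketched as below. The patent's contribution is how the matrix itself is optimized from visual masking and error pooling; here the matrix is just an input, and the helper names are illustrative.

        import numpy as np

        def dct_matrix(n=8):
            # Orthonormal DCT-II basis matrix (rows are basis vectors).
            j = np.arange(n)
            k = j.reshape(-1, 1)
            C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
            C[0, :] = np.sqrt(1.0 / n)
            return C

        def quantize_block(block, qmatrix):
            # 2-D DCT of an n-by-n block, then divide by the quantization matrix and round.
            C = dct_matrix(block.shape[0])
            coeffs = C @ block @ C.T
            return np.round(coeffs / qmatrix)

        def dequantize_block(indices, qmatrix):
            # Rescale the quantized indices and invert the 2-D DCT.
            C = dct_matrix(indices.shape[0])
            return C.T @ (indices * qmatrix) @ C

        rng = np.random.default_rng(0)
        block = rng.uniform(0, 255, size=(8, 8))
        q = 16.0 + 4.0 * (np.arange(8)[:, None] + np.arange(8)[None, :])  # coarser at high frequency
        approx = dequantize_block(quantize_block(block, q), q)
        print("max reconstruction error:", np.abs(approx - block).max())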

  8. 77 FR 74027 - Certain Integrated Circuit Packages Provided with Multiple Heat-Conducting Paths and Products...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-12

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-851] Certain Integrated Circuit Packages Provided with Multiple Heat-Conducting Paths and Products Containing Same; Commission Determination Not To... provided with multiple heat-conducting paths and products containing same by reason of infringement of...

  9. FIELD EVALUATION OF A METHOD FOR ESTIMATING GASEOUS FLUXES FROM AREA SOURCES USING OPEN-PATH FTIR

    EPA Science Inventory


    The paper gives preliminary results from a field evaluation of a new approach for quantifying gaseous fugitive emissions of area air pollution sources. The approach combines path-integrated concentration data acquired with any path-integrated optical remote sensing (PI-ORS) ...

  10. FIELD EVALUATION OF A METHOD FOR ESTIMATING GASEOUS FLUXES FROM AREA SOURCES USING OPEN-PATH FOURIER TRANSFORM INFRARED

    EPA Science Inventory

    The paper describes preliminary results from a field experiment designed to evaluate a new approach to quantifying gaseous fugitive emissions from area air pollution sources. The new approach combines path-integrated concentration data acquired with any path-integrated optical re...

  11. Development of Advanced Technologies for Complete Genomic and Proteomic Characterization of Quantized Human Tumor Cells

    DTIC Science & Technology

    2015-09-01

    glioblastoma. We have successfully established several patient-derived cell lines from glioblastoma tumors and further established a number of...and single-cell technologies. Although the focus of this research is glioblastoma, the proposed tools are generally applicable to all cancer-based...studies. Subject terms: human cohorts, glioblastoma, genomic, proteomic, single-cell technologies, hypothesis-driven, integrative systems approach

  12. Generic absence of strong singularities in loop quantum Bianchi-IX spacetimes

    NASA Astrophysics Data System (ADS)

    Saini, Sahil; Singh, Parampreet

    2018-03-01

    We study the generic resolution of strong singularities in loop quantized effective Bianchi-IX spacetime in two different quantizations—the connection operator based ‘A’ quantization and the extrinsic curvature based ‘K’ quantization. We show that in the effective spacetime description with arbitrary matter content, it is necessary to include inverse triad corrections to resolve all the strong singularities in the ‘A’ quantization. Whereas in the ‘K’ quantization these results can be obtained without including inverse triad corrections. Under these conditions, the energy density, expansion and shear scalars for both of the quantization prescriptions are bounded. Notably, both the quantizations can result in potentially curvature divergent events if matter content allows divergences in the partial derivatives of the energy density with respect to the triad variables at a finite energy density. Such events are found to be weak curvature singularities beyond which geodesics can be extended in the effective spacetime. Our results show that all potential strong curvature singularities of the classical theory are forbidden in Bianchi-IX spacetime in loop quantum cosmology and geodesic evolution never breaks down for such events.

  13. Spatial Updating Strategy Affects the Reference Frame in Path Integration.

    PubMed

    He, Qiliang; McNamara, Timothy P

    2018-06-01

    This study investigated how spatial updating strategies affected the selection of reference frames in path integration. Participants walked an outbound path consisting of three successive waypoints in a featureless environment and then pointed to the first waypoint. We manipulated the alignment of participants' final heading at the end of the outbound path with their initial heading to examine the adopted reference frame. We assumed that the initial heading defined the principal reference direction in an allocentric reference frame. In Experiment 1, participants were instructed to use a configural updating strategy and to monitor the shape of the outbound path while they walked it. Pointing performance was best when the final heading was aligned with the initial heading, indicating the use of an allocentric reference frame. In Experiment 2, participants were instructed to use a continuous updating strategy and to keep track of the location of the first waypoint while walking the outbound path. Pointing performance was equivalent regardless of the alignment between the final and the initial headings, indicating the use of an egocentric reference frame. These results confirmed that people could employ different spatial updating strategies in path integration (Wiener, Berthoz, & Wolbers Experimental Brain Research 208(1) 61-71, 2011), and suggested that these strategies could affect the selection of the reference frame for path integration.

  14. Complexity theory, time series analysis and Tsallis q-entropy principle part one: theoretical aspects

    NASA Astrophysics Data System (ADS)

    Pavlos, George P.

    2017-12-01

    In this study, we present the highlights of complexity theory (Part I) and significant experimental verifications (Part II) and we try to give a synoptic description of complexity theory both at the microscopic and at the macroscopic level of the physical reality. Also, we propose that the self-organization observed macroscopically is a phenomenon that reveals the strong unifying character of the complex dynamics which includes thermodynamical and dynamical characteristics in all levels of the physical reality. From this point of view, macroscopical deterministic and stochastic processes are closely related to the microscopical chaos and self-organization. The scientific work of scientists such as Wilson, Nicolis, Prigogine, Hooft, Nottale, El Naschie, Castro, Tsallis, Chang and others is used for the development of a unified physical comprehension of complex dynamics from the microscopic to the macroscopic level. Finally, we provide a comprehensive description of the novel concepts included in the complexity theory from microscopic to macroscopic level. Some of the modern concepts that can be used for a unified description of complex systems and for the understanding of modern complexity theory, as it is manifested at the macroscopic and the microscopic level, are the fractal geometry and fractal space-time, scale invariance and scale relativity, phase transition and self-organization, path integral amplitudes, renormalization group theory, stochastic and chaotic quantization and E-infinite theory, etc.

  15. An automated integration-free path-integral method based on Kleinert's variational perturbation theory

    NASA Astrophysics Data System (ADS)

    Wong, Kin-Yiu; Gao, Jiali

    2007-12-01

    Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to any realistic systems beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good accordance with experiments. We hope that our method could be used by non-path-integral experts or experimentalists as a "black box" for any given system.

  16. Pseudo-Kähler Quantization on Flag Manifolds

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    A unified approach to geometric, symbol and deformation quantizations on a generalized flag manifold endowed with an invariant pseudo-Kähler structure is proposed. In particular cases we arrive at Berezin's quantization via covariant and contravariant symbols.

  17. Quantization improves stabilization of dynamical systems with delayed feedback

    NASA Astrophysics Data System (ADS)

    Stepan, Gabor; Milton, John G.; Insperger, Tamas

    2017-11-01

    We show that an unstable scalar dynamical system with time-delayed feedback can be stabilized by quantizing the feedback. The discrete time model corresponds to a previously unrecognized case of the microchaotic map in which the fixed point is both locally and globally repelling. In the continuous-time model, stabilization by quantization is possible when the fixed point in the absence of feedback is an unstable node, and in the presence of feedback, it is an unstable focus (spiral). The results are illustrated with numerical simulation of the unstable Hayes equation. The solutions of the quantized Hayes equation take the form of oscillations in which the amplitude is a function of the size of the quantization step. If the quantization step is sufficiently small, the amplitude of the oscillations can be small enough to practically approximate the dynamics around a stable fixed point.
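
    A minimal numerical illustration of the quantization-step dependence mentioned above is sketched below: a locally unstable scalar map with quantized state feedback settles into bounded oscillations whose size scales with the quantization step. The parameter values are illustrative, and this is a generic micro-chaotic-type map, not the specific Hayes-equation model analyzed in the paper.

        import numpy as np

        def simulate(a=1.2, b=1.4, delta=0.01, x0=0.003, steps=4000):
            # x_{n+1} = a*x_n - b*q(x_n), with a > 1 (locally unstable) and
            # q(x) = delta*round(x/delta) the quantized state measurement.
            x = np.empty(steps)
            x[0] = x0
            for n in range(steps - 1):
                q = delta * np.round(x[n] / delta)
                x[n + 1] = a * x[n] - b * q
            return x

        for delta in (0.01, 0.001):
            x = simulate(delta=delta)
            print(f"delta = {delta}: max |x| over last 2000 steps = {np.abs(x[2000:]).max():.5f}")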

  18. On Correspondence of BRST-BFV, Dirac, and Refined Algebraic Quantizations of Constrained Systems

    NASA Astrophysics Data System (ADS)

    Shvedov, O. Yu.

    2002-11-01

    The correspondence between BRST-BFV, Dirac, and refined algebraic (group averaging, projection operator) approaches to quantizing constrained systems is analyzed. For the closed-algebra case, it is shown that the component of the BFV wave function corresponding to the maximal (minimal) value of the number of ghosts and antighosts in the Schrödinger representation may be viewed as a wave function in the refined algebraic (Dirac) quantization approach. The Giulini-Marolf group averaging formula for the inner product in the refined algebraic quantization approach is obtained from the Batalin-Marnelius prescription for the BRST-BFV inner product, which should be generally modified due to topological problems. The considered prescription for the correspondence of states is observed to be applicable to the open-algebra case. The refined algebraic quantization approach is then generalized to the case of nontrivial structure functions. A simple example is discussed. The correspondence of observables for different quantization methods is also investigated.

  19. Perceptual compression of magnitude-detected synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Werness, Susan A.

    1994-01-01

    A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.

  20. Quantization of geometric phase with integer and fractional topological characterization in a quantum Ising chain with long-range interaction.

    PubMed

    Sarkar, Sujit

    2018-04-12

    An attempt is made to study and understand the behavior of quantization of the geometric phase of a quantum Ising chain with long-range interaction. We show the existence of integer and fractional topological characterization for this model Hamiltonian, with different quantization conditions and different quantized values of the geometric phase. The quantum critical lines behave differently from the perspective of topological characterization. The results on duality and its relation to the topological quantization are presented here. The symmetry study for this model Hamiltonian is also presented. Our results indicate that the Zak phase is not the proper physical parameter to describe the topological characterization of a system with long-range interaction. We also present quite a few exact solutions with physical explanations. Finally, we present the relation between duality, symmetry and topological characterization. Our work provides a new perspective on topological quantization.

  1. INNOVATIVE APPROACH FOR MEASURING AMMONIA AND METHANE FLUXES FROM A HOG FARM USING OPEN-PATH FOURIER TRANSFORM INFRARED SPECTROSCOPY

    EPA Science Inventory

    The paper describes a new approach to quantify emissions from area air pollution sources. The approach combines path-integrated concentration data acquired with any path-integrated optical remote sensing (PI-ORS) technique and computed tomography (CT) technique. In this study, an...

  2. Evaluation of the path integral for flow through random porous media

    NASA Astrophysics Data System (ADS)

    Westbroek, Marise J. E.; Coche, Gil-Arnaud; King, Peter R.; Vvedensky, Dimitri D.

    2018-04-01

    We present a path integral formulation of Darcy's equation in one dimension with random permeability described by a correlated multivariate lognormal distribution. This path integral is evaluated with the Markov chain Monte Carlo method to obtain pressure distributions, which are shown to agree with the solutions of the corresponding stochastic differential equation for Dirichlet and Neumann boundary conditions. The extension of our approach to flow through random media in two and three dimensions is discussed.

  3. User's guide to Monte Carlo methods for evaluating path integrals

    NASA Astrophysics Data System (ADS)

    Westbroek, Marise J. E.; King, Peter R.; Vvedensky, Dimitri D.; Dürr, Stephan

    2018-04-01

    We give an introduction to the calculation of path integrals on a lattice, with the quantum harmonic oscillator as an example. In addition to providing an explicit computational setup and corresponding pseudocode, we pay particular attention to the existence of autocorrelations and the calculation of reliable errors. The over-relaxation technique is presented as a way to counter strong autocorrelations. The simulation methods can be extended to compute observables for path integrals in other settings.
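
    A bare-bones version of the kind of computation the guide describes, Metropolis sampling of the discretized Euclidean action for the harmonic oscillator, is sketched below (without the over-relaxation and autocorrelation analysis the authors emphasize; parameter values are illustrative).

        import numpy as np

        def metropolis_qho(n_slices=100, a=0.1, n_sweeps=10_000, n_therm=1_000,
                           step=0.5, seed=2):
            # Lattice action (m = omega = 1, periodic Euclidean time):
            #   S = sum_j [ (x_{j+1} - x_j)^2 / (2a) + a * x_j^2 / 2 ]
            # One Metropolis sweep updates every time slice once.
            rng = np.random.default_rng(seed)
            x = np.zeros(n_slices)
            measurements = []
            for sweep in range(n_sweeps):
                for j in range(n_slices):
                    xp, xm = x[(j + 1) % n_slices], x[(j - 1) % n_slices]
                    x_new = x[j] + step * (2.0 * rng.random() - 1.0)
                    dS = (((x_new - xp) ** 2 + (x_new - xm) ** 2
                           - (x[j] - xp) ** 2 - (x[j] - xm) ** 2) / (2.0 * a)
                          + 0.5 * a * (x_new ** 2 - x[j] ** 2))
                    if dS < 0.0 or rng.random() < np.exp(-dS):
                        x[j] = x_new
                if sweep >= n_therm:
                    measurements.append(np.mean(x ** 2))
            return np.mean(measurements)

        print(metropolis_qho())  # continuum ground-state value is <x^2> = 0.5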

  4. Noncommutative gerbes and deformation quantization

    NASA Astrophysics Data System (ADS)

    Aschieri, Paolo; Baković, Igor; Jurčo, Branislav; Schupp, Peter

    2010-11-01

    We define noncommutative gerbes using the language of star products. Quantized twisted Poisson structures are discussed as an explicit realization in the sense of deformation quantization. Our motivation is the noncommutative description of D-branes in the presence of topologically non-trivial background fields.

  5. Quantized discrete space oscillators

    NASA Technical Reports Server (NTRS)

    Uzes, C. A.; Kapuscik, Edward

    1993-01-01

    A quasi-canonical sequence of finite dimensional quantizations was found which has canonical quantization as its limit. In order to demonstrate its practical utility and its numerical convergence, this formalism is applied to the eigenvalue and 'eigenfunction' problem of several harmonic and anharmonic oscillators.

  6. Visibility of wavelet quantization noise

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

    The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(−λ), where r is the display visual resolution in pixels/degree and λ is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
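
    The level-to-frequency relation quoted above is easy to tabulate; the sketch below does so and plugs the result into a placeholder threshold curve that is parabolic in log spatial frequency, which is the general shape such models take. The parameter values (and the display resolution) are assumptions for illustration, not the fitted values from the study.

        import math

        def wavelet_spatial_frequency(r, level):
            # f = r * 2**(-level), with r the display resolution in pixels/degree.
            return r * 2.0 ** (-level)

        def threshold(f, a=0.5, k=0.5, f0=3.0):
            # Placeholder detection-threshold curve, parabolic in log10(frequency).
            # a, k, f0 are illustrative, not the paper's fitted parameters.
            return a * 10.0 ** (k * (math.log10(f) - math.log10(f0)) ** 2)

        r = 32.0  # assumed display visual resolution, pixels/degree
        for level in range(1, 6):
            f = wavelet_spatial_frequency(r, level)
            # A "perceptually lossless" quantization step would be about twice the threshold.
            print(f"level {level}: f = {f:5.2f} cyc/deg, threshold ~ {threshold(f):.3f}")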

  7. A recursive technique for adaptive vector quantization

    NASA Technical Reports Server (NTRS)

    Lindsay, Robert A.

    1989-01-01

    Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. The VQ performs well when compressing all types of imagery including Video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches for designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that simultaneously designs codebooks as the data is being encoded or quantized. This is done by computing the centroid as a recursive moving average where the centroids move after every vector is encoded. When computing the centroid of a fixed set of vectors, the resultant centroid is identical to the previous centroid calculation. This method of centroid calculation can be easily combined with VQ encoding techniques. The defined quantizer changes after every encoded vector by recursively updating the centroid of minimum distance, which is selected by the encoder. Since the quantizer is changing definition or states after every encoded vector, the decoder must now receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
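
    The recursive moving-average update described above can be sketched in a few lines: each input vector is encoded with its nearest code word, and that code word is then nudged toward the vector by the running-mean rule. Codebook initialization and the side-information signalling to the decoder are omitted; names and sizes are illustrative.

        import numpy as np

        def adaptive_vq_encode(vectors, codebook):
            # Encode each vector and update the winning centroid as a recursive
            # moving average, so the codebook adapts while the data is encoded.
            codebook = codebook.astype(float).copy()
            counts = np.ones(len(codebook))
            indices = []
            for v in vectors:
                i = int(np.argmin(((codebook - v) ** 2).sum(axis=1)))  # nearest code word
                indices.append(i)
                counts[i] += 1
                codebook[i] += (v - codebook[i]) / counts[i]  # running mean of assigned vectors
            return indices, codebook

        rng = np.random.default_rng(0)
        data = rng.normal(size=(1000, 4))
        initial = rng.normal(size=(16, 4))
        idx, adapted = adaptive_vq_encode(data, initial)
        print("first ten indices:", idx[:10])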

  8. Robust fault tolerant control based on sliding mode method for uncertain linear systems with quantization.

    PubMed

    Hao, Li-Ying; Yang, Guang-Hong

    2013-09-01

    This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By incorporating a novel matrix full-rank factorization technique with sliding surface design, the total failure of certain actuators can be coped with, under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of design parameters. Compared with existing results, the derived inequality condition yields stronger fault-tolerance ability and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances and parameter uncertainties, without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a structural-acoustic model of a rocket fairing. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Wireless sensor platform for harsh environments

    NASA Technical Reports Server (NTRS)

    Garverick, Steven L. (Inventor); Yu, Xinyu (Inventor); Toygur, Lemi (Inventor); He, Yunli (Inventor)

    2009-01-01

    Reliable and efficient sensing becomes increasingly difficult in harsher environments. A sensing module for high-temperature conditions utilizes a digital, rather than analog, implementation on a wireless platform to achieve good quality data transmission. The module comprises a sensor, integrated circuit, and antenna. The integrated circuit includes an amplifier, A/D converter, decimation filter, and digital transmitter. To operate, an analog signal is received by the sensor, amplified by the amplifier, converted into a digital signal by the A/D converter, filtered by the decimation filter to address the quantization error, and output in digital format by the digital transmitter and antenna.

  10. Vector navigation in desert ants, Cataglyphis fortis: celestial compass cues are essential for the proper use of distance information.

    PubMed

    Sommer, Stefan; Wehner, Rüdiger

    2005-10-01

    Foraging desert ants navigate primarily by path integration. They continually update homing direction and distance by employing a celestial compass and an odometer. Here we address the question of whether information about travel distance is correctly used in the absence of directional information. By using linear channels that were partly covered to exclude celestial compass cues, we were able to test the distance component of the path-integration process while suppressing the directional information. Our results suggest that the path integrator cannot process the distance information accumulated by the odometer while ants are deprived of celestial compass information. Hence, during path integration directional cues are a prerequisite for the proper use of travel-distance information by ants.

  11. Thermal field theory and generalized light front quantization

    NASA Astrophysics Data System (ADS)

    Weldon, H. Arthur

    2003-04-01

    The dependence of thermal field theory on the surface of quantization and on the velocity of the heat bath is investigated by working in general coordinates that are arbitrary linear combinations of the Minkowski coordinates. In the general coordinates the metric tensor ḡ_μν is nondiagonal. The Kubo-Martin-Schwinger condition requires periodicity in thermal correlation functions when the temporal variable changes by an amount −i/(T ḡ_00). Light-front quantization fails since ḡ_00 = 0; however, various related quantizations are possible.

  12. Path optimization method for the sign problem

    NASA Astrophysics Data System (ADS)

    Ohnishi, Akira; Mori, Yuto; Kashiwa, Kouji

    2018-03-01

    We propose a path optimization method (POM) to evade the sign problem in Monte-Carlo calculations for complex actions. Among many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is determined by the flow equations or stochastically sampled. When we have singular points of the action or multiple critical points near the original integral surface, however, we risk encountering the residual and global sign problems or the singular drift term problem. One of the ways to avoid the singular points is to optimize the integration path, which is designed not to hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + i f(t) (f ∈ ℝ) and by optimizing f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose the POM and discuss how we can avoid the sign problem in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.
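
    The idea of enhancing the average phase factor by deforming the integration path can be shown in a one-parameter toy even simpler than the one in the paper: for the Gaussian action S(z) = z²/2 + ihz, a constant imaginary shift z = x + ic plays the role of f(t), and scanning c shows the phase factor is maximized at c = −h, where the sign problem disappears. This is an illustrative stand-in, not the authors' parameterization or optimizer.

        import numpy as np

        def average_phase_factor(c, h=3.0):
            # Average phase factor <exp(i*theta)> on the shifted path z = x + i*c
            # for the toy action S(z) = z**2/2 + i*h*z (Jacobian is 1 for a constant shift).
            x = np.linspace(-10.0, 10.0, 4001)
            z = x + 1j * c
            w = np.exp(-z ** 2 / 2.0 - 1j * h * z)    # Boltzmann weight along the path
            return np.abs(w.sum()) / np.abs(w).sum()  # phase-quenched normalization

        h = 3.0
        cs = np.linspace(-6.0, 2.0, 81)
        factors = [average_phase_factor(c, h) for c in cs]
        best = cs[int(np.argmax(factors))]
        print(f"on the real axis: {average_phase_factor(0.0, h):.3f}")
        print(f"best constant shift c = {best:.2f} (exact optimum is c = -h = {-h})")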

  13. Cycle-Averaged Phase-Space States for the Harmonic and the Morse Oscillators, and the Corresponding Uncertainty Relations

    ERIC Educational Resources Information Center

    Nicolaides, Cleanthes A.; Constantoudis, Vasilios

    2009-01-01

    In Planck's model of the harmonic oscillator (HO) a century ago, both the energy and the phase space were quantized according to ε_n = nhν, n = 0, 1, 2, ..., and ∫∫ dp_x dx = h. By referring to just these two relations, we show how the adoption of "cycle-averaged phase-space states" (CAPSSs) leads to the…

  14. Tunneling Spectroscopy of Quantum Hall States in Bilayer Graphene

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Harzheim, Achim; Watanabe, Kenji; Taniguchi, Takashi; Kim, Philip

    In the quantum Hall (QH) regime, ballistic conducting paths along the physical edges of a sample appear, leading to quantized Hall conductance and vanishing longitudinal magnetoconductance. These QH edge states are often described as ballistic compressible strips separated by insulating incompressible strips, the spatial profiles of which can be crucial in understanding the stability and emergence of interaction driven QH states. In this work, we present tunneling transport between two QH edge states in bilayer graphene. Employing locally gated device structure, we guide and control the separation between the QH edge states in bilayer graphene. Using resonant Landau level tunneling as a spectroscopy tool, we measure the energy gap in bilayer graphene as a function of displacement field and probe the emergence and evolution of incompressible strips.

  15. Generalized radiation-field quantization method and the Petermann excess-noise factor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Y.-J.; Siegman, A.E.; E.L. Ginzton Laboratory, Stanford University, Stanford, California 94305

    2003-10-01

    We propose a generalized radiation-field quantization formalism, where quantization does not have to be referenced to a set of power-orthogonal eigenmodes as conventionally required. This formalism can be used to directly quantize the true system eigenmodes, which can be non-power-orthogonal due to the open nature of the system or the gain/loss medium involved in the system. We apply this generalized field quantization to the laser linewidth problem, in particular, lasers with non-power-orthogonal oscillation modes, and derive the excess-noise factor in a fully quantum-mechanical framework. We also show that, despite the excess-noise factor for oscillating modes, the total spatially averaged decay rate for the laser atoms remains unchanged.

  16. Simultaneous fault detection and control design for switched systems with two quantized signals.

    PubMed

    Li, Jian; Park, Ju H; Ye, Dan

    2017-01-01

    The problem of simultaneous fault detection and control design for switched systems with two quantized signals is presented in this paper. Dynamic quantizers are employed, respectively, before the output is passed to fault detector, and before the control input is transmitted to the switched system. Taking the quantized errors into account, the robust performance for this kind of system is given. Furthermore, sufficient conditions for the existence of fault detector/controller are presented in the framework of linear matrix inequalities, and fault detector/controller gains and the supremum of quantizer range are derived by a convex optimized method. Finally, two illustrative examples demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  17. BFV approach to geometric quantization

    NASA Astrophysics Data System (ADS)

    Fradkin, E. S.; Linetsky, V. Ya.

    1994-12-01

    A gauge-invariant approach to geometric quantization is developed. It yields a complete quantum description for dynamical systems with non-trivial geometry and topology of the phase space. The method is a global version of the gauge-invariant approach to quantization of second-class constraints developed by Batalin, Fradkin and Fradkina (BFF). Physical quantum states and quantum observables are respectively described by covariantly constant sections of the Fock bundle and the bundle of hermitian operators over the phase space with a flat connection defined by the nilpotent BFV-BRST operator. Perturbative calculation of the first non-trivial quantum correction to the Poisson brackets leads to the Chevalley cocycle known in deformation quantization. Consistency conditions lead to a topological quantization condition with a metaplectic anomaly.

  18. Deformation quantization of fermi fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galaviz, I.; Garcia-Compean, H.; Departamento de Fisica, Centro de Investigacion y de Estudios Avanzados del IPN, P.O. Box 14-740, 07000 Mexico, D.F.

    2008-04-15

    Deformation quantization for any Grassmann scalar free field is described via the Weyl-Wigner-Moyal formalism. The Stratonovich-Weyl quantizer, the Moyal *-product and the Wigner functional are obtained by extending the formalism proposed recently in [I. Galaviz, H. Garcia-Compean, M. Przanowski, F.J. Turrubiates, Weyl-Wigner-Moyal Formalism for Fermi Classical Systems, arXiv:hep-th/0612245] to fermionic systems with an infinite number of degrees of freedom. In particular, this formalism is applied to quantize the Dirac free field. It is observed that the use of suitable oscillator variables facilitates the procedure considerably. The Stratonovich-Weyl quantizer, the Moyal *-product, the Wigner functional, the normal ordering operator, and finally, the Dirac propagator have been found with the use of these variables.

  19. Polymer-Fourier quantization of the scalar field revisited

    NASA Astrophysics Data System (ADS)

    Garcia-Chung, Angel; Vergara, J. David

    2016-10-01

    The polymer quantization of the Fourier modes of the real scalar field is studied within the algebraic scheme. We replace the positive linear functional of the standard Poincaré invariant quantization by a singular one. This singular positive linear functional is constructed by mimicking the singular limit of the complex structure of the Poincaré invariant Fock quantization. The resulting symmetry group of such a polymer quantization is SDiff(ℝ⁴), the subgroup of Diff(ℝ⁴) formed by spatial volume-preserving diffeomorphisms. In consequence, this yields an entirely different irreducible representation of the canonical commutation relations, not unitarily equivalent to the standard Fock representation. We also compare the Poincaré invariant Fock vacuum with the polymer Fourier vacuum.

  20. Quantized Rabi oscillations and circular dichroism in quantum Hall systems

    NASA Astrophysics Data System (ADS)

    Tran, D. T.; Cooper, N. R.; Goldman, N.

    2018-06-01

    The dissipative response of a quantum system upon periodic driving can be exploited as a probe of its topological properties. Here we explore the implications of such phenomena in two-dimensional gases subjected to a uniform magnetic field. It is shown that a filled Landau level exhibits a quantized circular dichroism, which can be traced back to its underlying nontrivial topology. Based on selection rules, we find that this quantized effect can be suitably described in terms of Rabi oscillations, whose frequencies satisfy simple quantization laws. We discuss how quantized dissipative responses can be probed locally, both in the bulk and at the boundaries of the system. This work suggests alternative forms of topological probes based on circular dichroism.

  1. Measurement of J-integral in CAD/CAM dental ceramics and composite resin by digital image correlation.

    PubMed

    Jiang, Yanxia; Akkus, Anna; Roperto, Renato; Akkus, Ozan; Li, Bo; Lang, Lisa; Teich, Sorin

    2016-09-01

    Ceramic and composite resin blocks for CAD/CAM machining of dental restorations are becoming more common. The sample sizes affordable by these blocks are smaller than ideal for stress intensity factor (SIF) based tests. The J-integral measurement calls for full-field strain measurement, making it challenging to conduct. Accordingly, the J-integral values of dental restoration materials used in CAD/CAM restorations have not been reported to date. Digital image correlation (DIC) provides full-field strain maps, making it possible to calculate the J-integral value. The aim of this study was to measure the J-integral value for CAD/CAM restorative materials. Four types of materials (sintered IPS E-MAX CAD, non-sintered IPS E-MAX CAD, Vita Mark II and Paradigm MZ100) were used to prepare beam samples for three-point bending tests. J-integrals were calculated for different integral path sizes and locations with respect to the crack tip. The J-integral at path 1 for each material was 1.26 ± 0.31 × 10⁻⁴ MPa·m for MZ 100, 0.59 ± 0.28 × 10⁻⁴ MPa·m for sintered E-MAX, 0.19 ± 0.07 × 10⁻⁴ MPa·m for VM II, and 0.21 ± 0.05 × 10⁻⁴ MPa·m for non-sintered E-MAX. There were no significant differences between different integral path sizes, except for the non-sintered E-MAX group. J-integral paths of non-sintered E-MAX located within 42% of the height of the sample provided consistent values, whereas paths outside this range resulted in lower J-integral values. Moreover, no significant difference was found among different integral path locations. The critical SIF was calculated from the J-integral (K_J) along with geometry-derived SIF values (K_I). K_I values were comparable with K_J and geometry-based SIF values obtained from the literature. Therefore, DIC-derived J-integral is a reliable way to assess the fracture toughness of small-sized specimens for dental CAD/CAM restorative materials; however, caution must be applied to the selection of the J-integral path. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Instabilities caused by floating-point arithmetic quantization.

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1972-01-01

    It is shown that an otherwise stable digital control system can be made unstable by signal quantization when the controller operates on floating-point arithmetic. Sufficient conditions of instability are determined, and an example of loss of stability is treated when only one quantizer is operated.

  3. Dynamic State Estimation of Power Systems With Quantization Effects: A Recursive Filter Approach.

    PubMed

    Hu, Liang; Wang, Zidong; Liu, Xiaohui

    2016-08-01

    In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with such kind of introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
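
    For readers unfamiliar with the measurement model, a logarithmic quantizer maps a value to a geometric grid of levels, so the quantization error is multiplicative (bounded relative error). The sketch below uses one common form, nearest level in log scale; the paper's exact sector convention and parameters may differ.

        import math

        def log_quantize(v, u0=1.0, rho=0.8):
            # Map v to the nearest level of the geometric grid +/- u0 * rho**i
            # (nearest in log scale); rho in (0, 1) sets the quantization density.
            if v == 0.0:
                return 0.0
            i = round(math.log(abs(v) / u0) / math.log(rho))
            return math.copysign(u0 * rho ** i, v)

        for v in (0.013, 0.4, 1.7, -5.2):
            q = log_quantize(v)
            print(f"{v:7.3f} -> {q:8.4f}  (relative error {abs(q - v) / abs(v):.3f})")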

  4. Direct comparison of fractional and integer quantized Hall resistance

    NASA Astrophysics Data System (ADS)

    Ahlers, Franz J.; Götz, Martin; Pierz, Klaus

    2017-08-01

    We present precision measurements of the fractional quantized Hall effect, where the quantized resistance R[1/3] in the fractional quantum Hall state at filling factor 1/3 was compared with a quantized resistance R[2], represented by an integer quantum Hall state at filling factor 2. A cryogenic current comparator bridge capable of currents down to the nanoampere range was used to directly compare two resistance values of two GaAs-based devices located in two cryostats. A value of 1 − (5.3 ± 6.3) × 10⁻⁸ (95% confidence level) was obtained for the ratio R[1/3]/(6 R[2]). This constitutes the most precise comparison of integer resistance quantization (in terms of h/e²) in single-particle systems and of fractional quantization in fractionally charged quasi-particle systems. While not relevant for practical metrology, such a test of the validity of the underlying physics is of significance in the context of the upcoming revision of the SI.

  5. Canonical quantization of classical mechanics in curvilinear coordinates. Invariant quantization procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Błaszak, Maciej, E-mail: blaszakm@amu.edu.pl; Domański, Ziemowit, E-mail: ziemowit@amu.edu.pl

    In the paper, an invariant quantization procedure of classical mechanics on the phase space over flat configuration space is presented. Then, the passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is derived. An explicit form of position and momentum operators, as well as their appropriate ordering in arbitrary curvilinear coordinates, is demonstrated. Finally, the extension of the presented formalism onto the non-flat case and related ambiguities of the process of quantization are discussed. -- Highlights: •An invariant quantization procedure of classical mechanics on the phase space over flat configuration space is presented. •The passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is derived. •An explicit form of position and momentum operators and their appropriate ordering in curvilinear coordinates is shown. •The invariant form of Hamiltonian operators quadratic and cubic in momenta is derived. •The extension of the presented formalism onto the non-flat case and related ambiguities of the quantization process are discussed.

  6. Quantization noise in digital speech. M.S. Thesis- Houston Univ.

    NASA Technical Reports Server (NTRS)

    Schmidt, O. L.

    1972-01-01

    The amount of quantization noise generated in a digital-to-analog converter is dependent on the number of bits or quantization levels used to digitize the analog signal in the analog-to-digital converter. The minimum number of quantization levels and the minimum sample rate were derived for a digital voice channel. A sample rate of 6000 samples per second and lowpass filters with a 3 dB cutoff of 2400 Hz are required for 100 percent sentence intelligibility. Consonant sounds are the first speech components to be degraded by quantization noise. A compression amplifier can be used to increase the weighting of the consonant sound amplitudes in the analog-to-digital converter. An expansion network must be installed at the output of the digital-to-analog converter to restore the original weighting of the consonant sounds. This technique results in 100 percent sentence intelligibility for a sample rate of 5000 samples per second, eight quantization levels, and lowpass filters with a 3 dB cutoff of 2000 Hz.
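
    The compression amplifier before the A/D converter and the expansion network after the D/A converter together amount to companding. A minimal sketch using the familiar μ-law characteristic (an assumption here; the study's actual compression amplifier is not specified in this abstract) shows how low-amplitude consonant-like samples get a finer effective quantization step.

        import numpy as np

        MU = 255.0

        def compress(x):
            # mu-law compression of samples in [-1, 1] before uniform quantization.
            return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

        def expand(y):
            # Inverse mu-law applied after digital-to-analog conversion.
            return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

        def quantize(y, levels=8):
            # Uniform quantizer with the given number of levels on [-1, 1].
            step = 2.0 / levels
            return np.clip(np.round(y / step) * step, -1.0, 1.0)

        x = np.linspace(-1.0, 1.0, 11)
        x_hat = expand(quantize(compress(x), levels=8))
        print(np.round(x_hat - x, 3))  # errors grow with amplitude: small signals are preserved best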

  7. Coherent state quantization of quaternions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muraleetharan, B., E-mail: bbmuraleetharan@jfn.ac.lk, E-mail: santhar@gmail.com; Thirulogasanthar, K., E-mail: bbmuraleetharan@jfn.ac.lk, E-mail: santhar@gmail.com

    Parallel to the quantization of the complex plane, using the canonical coherent states of a right quaternionic Hilbert space, the quaternion field of quaternionic quantum mechanics is quantized. Associated upper symbols, lower symbols, and related quantities are analyzed. Quaternionic versions of the harmonic oscillator and the Weyl-Heisenberg algebra are also obtained.

  8. A review of path-independent integrals in elastic-plastic fracture mechanics

    NASA Technical Reports Server (NTRS)

    Kim, Kwang S.; Orange, Thomas W.

    1988-01-01

    The objective of this paper is to review the path-independent (P-I) integrals in elastic-plastic fracture mechanics which have been proposed in recent years to overcome the limitations imposed on the J-integral. The P-I integrals considered are the J-integral by Rice (1968), the thermoelastic P-I integrals by Wilson and Yu (1979) and Gurtin (1979), the J*-integral by Blackburn (1972), the J_θ-integral by Ainsworth et al. (1978), the J-integral by Kishimoto et al. (1980), and the ΔT_p and ΔT*_p integrals by Atluri et al. (1982). The theoretical foundation of the P-I integrals is examined with an emphasis on whether or not path independence is maintained in the presence of nonproportional loading and unloading in the plastic regime, thermal gradients, and material inhomogeneities. The similarities, differences, salient features, and limitations of the P-I integrals are discussed. Comments are also made with regard to the physical meaning, the possibility of experimental measurement, and computational aspects.

  9. A review of path-independent integrals in elastic-plastic fracture mechanics, task 4

    NASA Technical Reports Server (NTRS)

    Kim, K. S.

    1985-01-01

    The path-independent (P-I) integrals in elastic-plastic fracture mechanics which have been proposed in recent years to overcome the limitations imposed on the J integral are reviewed. The P-I integrals considered herein are the J integral by Rice, the thermoelastic P-I integrals by Wilson and Yu and by Gurtin, the J* integral by Blackburn, the J_θ integral by Ainsworth et al., the J integral by Kishimoto et al., and the ΔT_p and ΔT*_p integrals by Atluri et al. The theoretical foundation of these P-I integrals is examined with emphasis on whether or not path independence is maintained in the presence of nonproportional loading and unloading in the plastic regime, thermal gradients, and material inhomogeneities. The similarities, differences, salient features, and limitations of these P-I integrals are discussed. Comments are also made with regard to the physical meaning, the possibility of experimental measurement, and computational aspects.

  10. 77 FR 33486 - Certain Integrated Circuit Packages Provided With Multiple Heat-Conducting Paths and Products...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-06

    ... INTERNATIONAL TRADE COMMISSION [Docket No. 2899] Certain Integrated Circuit Packages Provided With... complaint entitled Certain Integrated Circuit Packages Provided With Multiple Heat-Conducting Paths and..., telephone (202) 205-2000. The public version of the complaint can be accessed on the Commission's electronic...

  11. BOOK REVIEW: Path Integrals in Field Theory: An Introduction

    NASA Astrophysics Data System (ADS)

    Ryder, Lewis

    2004-06-01

    In the 1960s Feynman was known to particle physicists as one of the people who solved the major problems of quantum electrodynamics, his contribution famously introducing what are now called Feynman diagrams. To other physicists he gained a reputation as the author of the Feynman Lectures on Physics; in addition some people were aware of his work on the path integral formulation of quantum theory, and a very few knew about his work on gravitation and Yang-Mills theories, which made use of path integral methods. Forty years later the scene is rather different. Many of the problems of high energy physics are solved; and the standard model incorporates Feynman's path integral method as a way of proving the renormalisability of the gauge (Yang-Mills) theories involved. Gravitation is proving a much harder nut to crack, but here also questions of renormalisability are couched in path-integral language. What is more, theoretical studies of condensed matter physics now also appeal to this technique for quantisation, so the path integral method is becoming part of the standard apparatus of theoretical physics. Chapters on it appear in a number of recent books, and a few books have appeared devoted to this topic alone; the book under review is a very recent one. Path integral techniques have the advantage of enormous conceptual appeal and the great disadvantage of mathematical complexity, this being partly the result of messy integrals but more fundamentally due to the notions of functional differentiation and integration which are involved in the method. All in all this subject is not such an easy ride. Mosel's book, described as an introduction, is aimed at graduate students and research workers in particle physics. It assumes a background knowledge of quantum mechanics, both non-relativistic and relativistic. After three chapters on the path integral formulation of non-relativistic quantum mechanics there are eight chapters on scalar and spinor field theory, followed by three on gauge field theories: quantum electrodynamics and Yang-Mills theories, Faddeev-Popov ghosts and so on. There is no treatment of the quantisation of gravity. Thus in about 200 pages the reader has the chance to learn in some detail about a most important area of modern physics. The subject is tough but the style is clear and pedagogic, results for the most part being derived explicitly. The choice of topics included is mainstream and sensible and one has a clear sense that the author knows where he is going and is a reliable guide. Path Integrals in Field Theory is clearly the work of a man with considerable teaching experience and is recommended as a readable and helpful account of a rather non-trivial subject.

  12. Educational Information Quantization for Improving Content Quality in Learning Management Systems

    ERIC Educational Resources Information Center

    Rybanov, Alexander Aleksandrovich

    2014-01-01

    The article offers the educational information quantization method for improving content quality in Learning Management Systems. The paper considers questions concerning analysis of quality of quantized presentation of educational information, based on quantitative text parameters: average frequencies of parts of speech, used in the text; formal…

  13. A Heisenberg Algebra Bundle of a Vector Field in Three-Space and its Weyl Quantization

    NASA Astrophysics Data System (ADS)

    Binz, Ernst; Pods, Sonja

    2006-01-01

    In these notes we associate a natural Heisenberg group bundle H_a with a singularity-free smooth vector field X = (id, a) on a submanifold M in a Euclidean three-space. This bundle yields naturally an infinite-dimensional Heisenberg group H_X^∞. A representation of the C*-group algebra of H_X^∞ is a quantization. It causes a natural Weyl-deformation quantization of X. The influence of the topological structure of M on this quantization is encoded in the Chern class of a canonical complex line bundle inside H_a.

  14. BFV quantization on hermitian symmetric spaces

    NASA Astrophysics Data System (ADS)

    Fradkin, E. S.; Linetsky, V. Ya.

    1995-02-01

    Gauge-invariant BFV approach to geometric quantization is applied to the case of hermitian symmetric spaces G/H. In particular, gauge invariant quantization on the Lobachevski plane and sphere is carried out. Due to the presence of symmetry, master equations for the first-class constraints, quantum observables and physical quantum states are exactly solvable. The BFV-BRST operator defines a flat G-connection in the Fock bundle over G/H. Physical quantum states are covariantly constant sections with respect to this connection and are shown to coincide with the generalized coherent states for the group G. Vacuum expectation values of the quantum observables commuting with the quantum first-class constraints reduce to the covariant symbols of Berezin. The gauge-invariant approach to quantization on symplectic manifolds synthesizes geometric, deformation and Berezin quantization approaches.

  15. BOOK REVIEW: Quantum Gravity (2nd edn)

    NASA Astrophysics Data System (ADS)

    Husain, Viqar

    2008-06-01

    There has been a flurry of books on quantum gravity in the past few years. The first edition of Kiefer's book appeared in 2004, about the same time as Carlo Rovelli's book with the same title. This was soon followed by Thomas Thiemann's 'Modern Canonical Quantum General Relativity'. Although the main focus of each of these books is non-perturbative and non-string approaches to the quantization of general relativity, they are quite orthogonal in temperament, style, subject matter and mathematical detail. Rovelli and Thiemann focus primarily on loop quantum gravity (LQG), whereas Kiefer attempts a broader introduction and review of the subject that includes chapters on string theory and decoherence. Kiefer's second edition attempts an even wider and somewhat ambitious sweep with 'new sections on asymptotic safety, dynamical triangulation, primordial black holes, the information-loss problem, loop quantum cosmology, and other topics'. The presentation of these current topics is necessarily brief given the size of the book, but effective in encapsulating the main ideas in some cases. For instance the few pages devoted to loop quantum cosmology describe how the mini-superspace reduction of the quantum Hamiltonian constraint of LQG becomes a difference equation, whereas the discussion of 'dynamical triangulations', an approach to defining a discretized Lorentzian path integral for quantum gravity, is less detailed. The first few chapters of the book provide, in a roughly historical sequence, the covariant and canonical metric variable approach to the subject developed in the 1960s and 70s. The problem(s) of time in quantum gravity are nicely summarized in the chapter on quantum geometrodynamics, followed by a detailed and effective introduction of the WKB approach and the semi-classical approximation. These topics form the traditional core of the subject. The next three chapters cover LQG, quantization of black holes, and quantum cosmology. Of these the chapter on LQG is the shortest at fourteen pages—a reflection perhaps of the fact that there are two books and a few long reviews of the subject available written by the main protagonists in the field. The chapters on black holes and cosmology provide a more or less standard introduction to black hole thermodynamics, Hawking and Unruh radiation, quantization of the Schwarzschild metric and mini-superspace collapse models, and the DeWitt, Hartle Hawking and Vilenkin wavefunctions. The chapter on string theory is an essay-like overview of its quantum gravitational aspects. It provides a nice introduction to selected ideas and a guide to the literature. Here a prescient student may be left wondering why there is no quantum cosmology in string theory, perhaps a deliberate omission to avoid the 'landscape' and its fauna. In summary, I think this book succeeds in its purpose of providing a broad introduction to quantum gravity, and nicely complements some of the other books on the subject.

  16. LDMOS Channel Thermometer Based on a Thermal Resistance Sensor for Balancing Temperature in Monolithic Power ICs.

    PubMed

    Lin, Tingyou; Ho, Yingchieh; Su, Chauchin

    2017-06-15

    This paper presents a method of thermal balancing for monolithic power integrated circuits (ICs). An on-chip temperature monitoring sensor that consists of a poly resistor strip in each of multiple parallel MOSFET banks is developed. A temperature-to-frequency converter (TFC) is proposed to quantize on-chip temperature. A pulse-width-modulation (PWM) methodology is developed to balance the channel temperature based on the quantization. The modulated PWM pulses control the hottest metal-oxide-semiconductor field-effect transistor (MOSFET) bank to reduce its power dissipation and heat generation. A test chip with eight parallel MOSFET banks is fabricated in a TSMC 0.25 μm HV BCD process, and the total area is 900 × 914 μm². The maximal temperature variation among the eight banks is reduced by the proposed thermal balancing system from 9.5 °C to 2.8 °C at 1.5 W dissipation. As a result, our proposed system improves the lifetime of a power MOSFET by 20%.
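
    The balancing policy described above (quantize each bank's temperature, then trim the PWM drive of the hottest bank) can be sketched as a simple control iteration. The function below is an illustrative stand-in, not the chip's actual controller; redistributing the trimmed duty to the coolest bank is an added assumption to keep the total drive constant.

        def rebalance(temps_c, duties, step=0.05, min_duty=0.02):
            # One balancing iteration: move a slice of PWM duty from the hottest
            # bank to the coolest one, so its power dissipation and heating drop.
            hot = max(range(len(temps_c)), key=lambda i: temps_c[i])
            cold = min(range(len(temps_c)), key=lambda i: temps_c[i])
            transfer = min(step, duties[hot] - min_duty)
            new_duties = list(duties)
            new_duties[hot] -= transfer
            new_duties[cold] += transfer
            return new_duties

        temps = [81.0, 79.5, 84.2, 80.1, 78.9, 82.4, 79.8, 80.6]  # hypothetical readings, degrees C
        duties = [1.0 / 8] * 8                                    # equal PWM shares to start
        print(rebalance(temps, duties))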

  17. LDMOS Channel Thermometer Based on a Thermal Resistance Sensor for Balancing Temperature in Monolithic Power ICs

    PubMed Central

    Lin, Tingyou; Ho, Yingchieh; Su, Chauchin

    2017-01-01

    This paper presents a method of thermal balancing for monolithic power integrated circuits (ICs). An on-chip temperature monitoring sensor that consists of a poly resistor strip in each of multiple parallel MOSFET banks is developed. A temperature-to-frequency converter (TFC) is proposed to quantize on-chip temperature. A pulse-width-modulation (PWM) methodology is developed to balance the channel temperature based on the quantization. The modulated PWM pulses control the hottest metal-oxide-semiconductor field-effect transistor (MOSFET) bank to reduce its power dissipation and heat generation. A test chip with eight parallel MOSFET banks is fabricated in a TSMC 0.25 μm HV BCD process, and the total area is 900 × 914 μm². The maximal temperature variation among the eight banks is reduced by the proposed thermal balancing system from 9.5 °C to 2.8 °C at 1.5 W dissipation. As a result, our proposed system improves the lifetime of a power MOSFET by 20%. PMID:28617346

  18. Importance sampling studies of helium using the Feynman-Kac path integral method

    NASA Astrophysics Data System (ADS)

    Datta, S.; Rejcek, J. M.

    2018-05-01

    In the Feynman-Kac path integral approach the eigenvalues of a quantum system can be computed using the Wiener measure, which is based on Brownian particle motion. In our previous work on such systems we have observed that the Wiener process numerically converges slowly for dimensions greater than two because almost all trajectories will escape to infinity. One can speed up this process by using a generalized Feynman-Kac (GFK) method, in which the new measure associated with the trial function is stationary, so that the convergence rate becomes much faster. We thus achieve an example of "importance sampling" and, in the present work, we apply it to the Feynman-Kac (FK) path integrals for the ground and first few excited-state energies of He to speed up the convergence rate. We calculate the path integrals using space averaging rather than the time averaging used in the past. The best previous calculations from variational computations report precisions of 10⁻¹⁶ Hartrees, whereas in most cases our path integral results obtained for the ground and first excited states of He are lower than these results by about 10⁻⁶ Hartrees or more.
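
    The Feynman-Kac representation being sped up here estimates ground-state energies from an average of exp(−∫V) over Brownian paths. A plain-sampling one-dimensional toy (harmonic oscillator rather than helium, and without the importance sampling or space averaging of the paper) is sketched below to show the basic estimator.

        import numpy as np

        def fk_ground_energy(V, n_paths=200_000, dt=0.01, t1=4.0, t2=6.0, seed=1):
            # Feynman-Kac estimate for H = -(1/2) d^2/dx^2 + V(x):
            #   A(t) = E[ exp(-int_0^t V(W_s) ds) ] over Brownian paths W started at 0,
            #   E0  ~= -(ln A(t2) - ln A(t1)) / (t2 - t1).
            rng = np.random.default_rng(seed)
            x = np.zeros(n_paths)          # current walker positions
            s = np.zeros(n_paths)          # accumulated integral of V along each path
            a1 = None
            for k in range(int(t2 / dt)):
                s += V(x) * dt
                x += np.sqrt(dt) * rng.standard_normal(n_paths)
                if k + 1 == int(t1 / dt):
                    a1 = np.exp(-s).mean()
            a2 = np.exp(-s).mean()
            return -(np.log(a2) - np.log(a1)) / (t2 - t1)

        print(fk_ground_energy(lambda x: 0.5 * x * x))  # harmonic oscillator: exact E0 = 0.5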

  19. BOOK REVIEW: Canonical Gravity and Applications: Cosmology, Black Holes, and Quantum Gravity

    NASA Astrophysics Data System (ADS)

    Husain, Viqar

    2012-03-01

    Quantum gravity from a non-perturbative 'quantization of geometry' perspective has been the focus of much research in the past two decades, due to the Ashtekar-Barbero Hamiltonian formulation of general relativity. This approach provides an SU(2) gauge field as the canonical configuration variable; the analogy with Yang-Mills theory at the kinematical level opened up some research space to reformulate the old Wheeler-DeWitt program into what is now known as loop quantum gravity (LQG). The author is known for his work in the LQG approach to cosmology, which was the first application of this formalism that provided the possibility of exploring physical questions. Therefore the flavour of the book is naturally informed by this history. The book is based on a set of graduate-level lectures designed to impart a working knowledge of the canonical approach to gravitation. It is more of a textbook than a treatise, unlike three other recent books in this area by Kiefer [1], Rovelli [2] and Thiemann [3]. The style and choice of topics of these authors are quite different; Kiefer's book provides a broad overview of the path integral and canonical quantization methods from a historical perspective, whereas Rovelli's book focuses on philosophical and formalistic aspects of the problems of time and observables, and gives a development of spin-foam ideas. Thiemann's is much more a mathematical physics book, focusing entirely on the theory of representing constraint operators on a Hilbert space and charting a mathematical trajectory toward a physical Hilbert space for quantum gravity. The significant difference from these books is that Bojowald covers mainly classical topics until the very last chapter, which contains the only discussion of quantization. In its coverage of classical gravity, the book has some content overlap with Poisson's book [4], and with Ryan and Shepley's older work on relativistic cosmology [5]; for instance the contents of chapter five of the book are also covered in detail, and with more worked examples, in the former book, and the entire focus of the latter is Bianchi models. After a brief introduction outlining the aim of the book, the second chapter provides the canonical theory of homogeneous isotropic cosmology with scalar matter; this covers the basics and linear perturbation theory, and is meant as a first taste of what is to come. The next chapter is a thorough introduction to the canonical formulation of general relativity in both the ADM and Ashtekar-Barbero variables. This chapter contains details useful for graduate students which are either scattered or missing in the literature. Applications of the canonical formalism are in the following chapter. These cover standard material and techniques for obtaining mini(midi)-superspace models, including the Bianchi and Gowdy cosmologies, and spherically symmetric reductions. There is also a brief discussion of two-dimensional dilaton gravity. The spherically symmetric reduction is presented in detail also in the connection-triad variables. The chapter on global and asymptotic properties gives introductions to geodesic and null congruences, trapped surfaces, a survey of singularity theorems, horizons and asymptotic properties. The chapter ends with a discussion of junction conditions and the Vaidya solution. As already mentioned, this material is covered in detail in Poisson's book. The final chapter on quantization describes and contrasts the Dirac and reduced phase space methods. It also gives an introduction to background independent quantization using the holonomy-flux operators, which forms the basis of the LQG program. The application of this method to cosmology and its effect on the Friedmann equation is covered next, followed by a brief introduction to the effective constraint method, which is another area developed by the author. I think this book is a useful addition to the literature for graduate students, and potentially also for researchers in other areas who wish to learn about the canonical approach to gravity. However, given the brief chapter on quantization, the book would go well with a review paper, or parts of the other three quantum gravity books cited above. References: [1] Kiefer C 2006 Quantum Gravity 2nd ed. (Oxford University Press) [2] Rovelli C 2007 Quantum Gravity (Cambridge University Press) [3] Thiemann T 2008 Modern Canonical Quantum Gravity (Cambridge University Press) [4] Poisson E 2004 A Relativist's Toolkit: The Mathematics of Black-Hole Mechanics (Cambridge University Press) [5] Ryan M P and Shepley L C 1975 Homogeneous Relativistic Cosmology (Princeton University Press)

  20. An Algebraic Approach to the Quantization of Constrained Systems: Finite Dimensional Examples.

    NASA Astrophysics Data System (ADS)

    Tate, Ranjeet Shekhar

    1992-01-01

    General relativity has two features in particular which make it difficult to apply existing schemes for the quantization of constrained systems to it. First, there is no background structure in the theory, which could be used, e.g., to regularize constraint operators, to identify a "time" or to define an inner product on physical states. Second, in the Ashtekar formulation of general relativity, which is a promising avenue to quantum gravity, the natural variables for quantization are not canonical; and, classically, there are algebraic identities between them. Existing schemes are usually not concerned with such identities. Thus, from the point of view of canonical quantum gravity, it has become imperative to find a framework for quantization which provides a general prescription to find the physical inner product, and is flexible enough to accommodate non-canonical variables. In this dissertation I present an algebraic formulation of the Dirac approach to the quantization of constrained systems. The Dirac quantization program is augmented by a general principle to find the inner product on physical states. Essentially, the Hermiticity conditions on physical operators determine this inner product. I also clarify the role in quantum theory of possible algebraic identities between the elementary variables. I use this approach to quantize various finite dimensional systems. Some of these models test the new aspects of the algebraic framework. Others bear qualitative similarities to general relativity, and may give some insight into the pitfalls lurking in quantum gravity. The previous quantizations of one such model had many surprising features. When this model is quantized using the algebraic program, there is no longer any unexpected behaviour. I also construct the complete quantum theory for a previously unsolved relativistic cosmology. All these models indicate that the algebraic formulation provides powerful new tools for quantization. In (spatially compact) general relativity, the Hamiltonian is constrained to vanish. I present various approaches one can take to obtain an interpretation of the quantum theory of such "dynamically constrained" systems. I apply some of these ideas to the Bianchi I cosmology, and analyze the issue of the initial singularity in quantum theory.

  1. Real-time Feynman path integral with Picard–Lefschetz theory and its applications to quantum tunneling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanizaki, Yuya, E-mail: yuya.tanizaki@riken.jp; Theoretical Research Division, Nishina Center, RIKEN, Wako 351-0198; Koike, Takayuki, E-mail: tkoike@ms.u-tokyo.ac.jp

    Picard–Lefschetz theory is applied to path integrals of quantum mechanics, in order to compute real-time dynamics directly. After discussing basic properties of real-time path integrals on Lefschetz thimbles, we demonstrate the computational method in a concrete way by solving three simple examples of quantum mechanics. It is applied to the quantum mechanics of a double-well potential, and quantum tunneling is discussed. We identify all of the complex saddle points of the classical action, and their properties are discussed in detail. However, a serious theoretical difficulty appears in rewriting the original path integral as a sum of path integrals on Lefschetz thimbles. We discuss the generality of that problem and mention its importance. Real-time tunneling processes are shown to be described by those complex saddle points, and thus a semi-classical description of real-time quantum tunneling becomes possible on solid ground if that problem can be solved. - Highlights: • Real-time path integral is studied based on Picard–Lefschetz theory. • Lucid demonstration is given through simple examples of quantum mechanics. • This technique is applied to quantum mechanics of the double-well potential. • Difficulty for practical applications is revealed, and we discuss its generality. • Quantum tunneling is shown to be closely related to complex classical solutions.
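    For readers unfamiliar with the construction referred to above, the toy sketch below traces a Lefschetz thimble for an ordinary one-dimensional integral ∫ exp(-S(z)) dz rather than a quantum-mechanical path integral: the thimble attached to a saddle point is swept out by the upward gradient flow dz/dt = conj(dS/dz), along which Im S stays constant and Re S grows, so the integrand stops oscillating. The action S(z) = σz²/2 + z⁴/4 and all parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy Lefschetz-thimble tracer for int exp(-S(z)) dz with holomorphic S.
# Along dz/dt = conj(dS/dz) one has dS/dt = |S'(z)|**2 >= 0, so Im S is
# conserved and Re S increases: the thimble is a stationary-phase contour.

sigma = np.exp(1j * np.pi / 4)        # illustrative complex quadratic coefficient

def dS(z):
    # derivative of S(z) = sigma * z**2 / 2 + z**4 / 4
    return sigma * z + z**3

def thimble_branch(eps=1e-2, t_max=4.0, dt=1e-3):
    """Trace one branch of the thimble through the saddle at z = 0 (Euler flow)."""
    z = eps * np.exp(-1j * np.angle(sigma) / 2)   # start along the ascent direction
    traj = [z]
    for _ in range(int(t_max / dt)):
        z = z + dt * np.conj(dS(z))
        traj.append(z)
    return np.array(traj)

if __name__ == "__main__":
    branch = thimble_branch()
    S = sigma * branch**2 / 2 + branch**4 / 4
    # Im S should stay numerically close to its initial value along the branch.
    print(float(np.max(np.abs(S.imag - S[0].imag))))
```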

  2. Degradation of GaAs/AlGaAs Quantized Hall Resistors With Alloyed AuGe/Ni Contacts.

    PubMed

    Lee, Kevin C

    1998-01-01

    Careful testing over a period of 6 years of a number of GaAs/AlGaAs quantized Hall resistors (QHR) made with alloyed AuGe/Ni contacts, both with and without passivating silicon nitride coatings, has resulted in the identification of important mechanisms responsible for degradation in the performance of the devices as resistance standards. Covering the contacts with a film, such as a low-temperature silicon nitride, that is impervious to humidity and other contaminants in the atmosphere prevents the contacts from degrading. The devices coated with silicon nitride used in this study, however, showed the effects of a conducting path in parallel with the 2-dimensional electron gas (2-DEG) at temperatures above 1.1 K which interferes with their use as resistance standards. Several possible causes of this parallel conduction are evaluated. On the basis of this work, two methods are proposed for protecting QHR devices with alloyed AuGe/Ni contacts from degradation: the heterostructure can be left unpassivated, but the alloyed contacts can be completely covered with a very thick (> 3 μm) coating of gold; or the GaAs cap layer can be carefully etched away after alloying the contacts and prior to depositing a passivating silicon nitride coating over the entire sample. Of the two, the latter is more challenging to effect, but preferable because both the contacts and the heterostructure are protected from corrosion and oxidation.

  3. Weinberg propagator of a free massive particle with an arbitrary spin from the BFV-BRST path integral

    NASA Astrophysics Data System (ADS)

    Zima, V. G.; Fedoruk, S. O.

    1999-11-01

    The transition amplitude is obtained for a free massive particle of arbitrary spin by calculating the path integral in the index-spinor formulation within the BFV-BRST approach. No renormalizations of the path integral measure were applied. The calculation has given the Weinberg propagator written in the index-free form by the use of an index spinor. The choice of boundary conditions on the index spinor determines the holomorphic or antiholomorphic representation for the canonical description of particle/antiparticle spin.

  4. Integrated Data and Control Level Fault Tolerance Techniques for Signal Processing Computer Design

    DTIC Science & Technology

    1990-09-01

    TOLERANCE TECHNIQUES FOR SIGNAL PROCESSING COMPUTER DESIGN G. Robert Redinbo I. INTRODUCTION High-speed signal processing is an important application of...techniques and mathematical approaches will be expanded later to the situation where hardware errors and roundoff and quantization noise affect all...detect errors equal in number to the degree of g(X), the maximum permitted by the Singleton bound [13]. Real cyclic codes, primarily applicable to

  5. Crossing Boundaries: Nativity, Ethnicity, and Mate Selection

    PubMed Central

    Qian, Zhenchao; Glick, Jennifer E.; Baston, Christie

    2016-01-01

    The influx of immigrants has increased diversity among ethnic minorities and indicates that they may take multiple integration paths in American society. Previous research on ethnic integration often focuses on panethnic differences, and few studies have explored ethnic diversity within a racial or panethnic context. Using 2000 U.S. census data for Puerto Rican, Mexican, Chinese, and Filipino origin individuals, we examine differences in marriage and cohabitation with whites, with other minorities, within a panethnic group, and within an ethnic group by nativity status. Ethnic endogamy is strong and, to a lesser extent, so is panethnic endogamy. Yet marital or cohabiting unions with whites remain an important path of integration but differ significantly by ethnicity, nativity, age at arrival, and educational attainment. Meanwhile, ethnic differences in marriage and cohabitation with other racial or ethnic minorities are strong. Our analysis supports the view that unions with whites remain a major path of integration, but other paths of integration have also become viable options for all ethnic groups. PMID:22350840

  6. Quantization of Electromagnetic Fields in Cavities

    NASA Technical Reports Server (NTRS)

    Kakazu, Kiyotaka; Oshiro, Kazunori

    1996-01-01

    A quantization procedure for the electromagnetic field in a rectangular cavity with perfect conductor walls is presented, where a decomposition formula of the field plays an essential role. All vector mode functions are obtained by using the decomposition. After expanding the field in terms of the vector mode functions, we get the quantized electromagnetic Hamiltonian.

  7. Quantization Distortion in Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
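    To make the generic block-transform model above concrete, the sketch below performs JPEG-style processing on a grayscale image: 8×8 blocks, a 2D DCT per block, uniform quantization of the transform coefficients, and an inverse transform. A single scalar step size is used instead of a JPEG quantization table, and scipy is assumed to be available; the snippet illustrates the scheme, not any particular codec.

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_quantize(img, block=8, step=16.0):
    """Transform, uniformly quantize, and reconstruct each block of a 2D image."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            b = img[i:i + block, j:j + block].astype(float)
            coeffs = dctn(b, norm="ortho")          # block transform
            q = np.round(coeffs / step) * step      # uniform quantization
            out[i:i + block, j:j + block] = idctn(q, norm="ortho")
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.uniform(0, 255, size=(64, 64))
    rec = block_quantize(img)
    print(np.mean((img - rec) ** 2))   # quantization distortion (MSE)
```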

  8. Quantized impedance dealing with the damping behavior of the one-dimensional oscillator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Jinghao; Zhang, Jing; Li, Yuan

    2015-11-15

    In this paper a quantized impedance is proposed to theoretically establish the relationship between the atomic eigenfrequency and the intrinsic frequency of the one-dimensional oscillator. The classical oscillator is modified by the idea that the electron transition is treated as a charge-discharge process of a suggested capacitor, with the capacitive energy equal to the energy-level difference of the jumping electron. The quantized capacitance of the impedance interacting with the jumping electron can bring the resonant frequency of the oscillator into agreement with the atomic eigenfrequency. The quantized resistance reflects that the damping coefficient of the oscillator is the mean collision frequency of the transition electron. In addition, the first- and third-order electric susceptibilities based on the oscillator are accordingly quantized. Our simulation of the hydrogen atom emission spectrum based on the proposed method agrees well with the experimental one. Our results exhibit that the one-dimensional oscillator with the quantized impedance may become useful in estimating the refractive index and one- or multi-photon absorption coefficients of some nonmagnetic media composed of hydrogen-like atoms.

  9. Low-rate image coding using vector quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makur, A.

    1990-01-01

    This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.

  10. Quantized impedance dealing with the damping behavior of the one-dimensional oscillator

    NASA Astrophysics Data System (ADS)

    Zhu, Jinghao; Zhang, Jing; Li, Yuan; Zhang, Yong; Fang, Zhengji; Zhao, Peide; Li, Erping

    2015-11-01

    In this paper a quantized impedance is proposed to theoretically establish the relationship between the atomic eigenfrequency and the intrinsic frequency of the one-dimensional oscillator. The classical oscillator is modified by the idea that the electron transition is treated as a charge-discharge process of a suggested capacitor, with the capacitive energy equal to the energy-level difference of the jumping electron. The quantized capacitance of the impedance interacting with the jumping electron can bring the resonant frequency of the oscillator into agreement with the atomic eigenfrequency. The quantized resistance reflects that the damping coefficient of the oscillator is the mean collision frequency of the transition electron. In addition, the first- and third-order electric susceptibilities based on the oscillator are accordingly quantized. Our simulation of the hydrogen atom emission spectrum based on the proposed method agrees well with the experimental one. Our results exhibit that the one-dimensional oscillator with the quantized impedance may become useful in estimating the refractive index and one- or multi-photon absorption coefficients of some nonmagnetic media composed of hydrogen-like atoms.

  11. Probabilistic distance-based quantizer design for distributed estimation

    NASA Astrophysics Data System (ADS)

    Kim, Yoon Hak

    2016-12-01

    We consider an iterative design of independently operating local quantizers at nodes that must cooperate without interaction to achieve application objectives for distributed estimation systems. As a new cost function we suggest a probabilistic distance between the posterior distribution and its quantized version, expressed as the Kullback-Leibler (KL) divergence. We first present an analysis showing that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing, on average, the logarithm of the quantized posterior distribution, which can be further simplified computationally in our iterative design. We propose an iterative design algorithm that seeks to maximize this simplified version of the quantized posterior distribution, and we argue that our algorithm converges to a global optimum due to the convexity of the cost function and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. We finally demonstrate through extensive experiments a clear advantage in estimation performance over typical designs and previously published design techniques.
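    For context, the sketch below shows the classical MSE-based generalized Lloyd (Lloyd-Max) iteration on scalar samples; the design above replaces the squared-error cost with the KL divergence between the posterior and its quantized version, which this simple sketch does not implement. The sample data and the number of levels are illustrative.

```python
import numpy as np

def lloyd_max(samples, n_levels=4, n_iter=50):
    """Classical generalized Lloyd iteration for a scalar quantizer (MSE cost)."""
    samples = np.sort(np.asarray(samples, dtype=float))
    # initialize the codebook from sample quantiles
    codebook = np.quantile(samples, (np.arange(n_levels) + 0.5) / n_levels)
    for _ in range(n_iter):
        # nearest-codeword partition: thresholds halfway between codewords
        edges = 0.5 * (codebook[:-1] + codebook[1:])
        idx = np.searchsorted(edges, samples)
        # centroid step: each codeword becomes the mean of its cell
        for k in range(n_levels):
            cell = samples[idx == k]
            if cell.size:
                codebook[k] = cell.mean()
    return codebook

if __name__ == "__main__":
    data = np.random.default_rng(0).normal(size=10000)
    print(lloyd_max(data))
```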

  12. Quantization and Superselection Sectors I:. Transformation Group C*-ALGEBRAS

    NASA Astrophysics Data System (ADS)

    Landsman, N. P.

    Quantization is defined as the act of assigning an appropriate C*-algebra { A} to a given configuration space Q, along with a prescription mapping self-adjoint elements of { A} into physically interpretable observables. This procedure is adopted to solve the problem of quantizing a particle moving on a homogeneous locally compact configuration space Q=G/H. Here { A} is chosen to be the transformation group C*-algebra corresponding to the canonical action of G on Q. The structure of these algebras and their representations are examined in some detail. Inequivalent quantizations are identified with inequivalent irreducible representations of the C*-algebra corresponding to the system, hence with its superselection sectors. Introducing the concept of a pre-Hamiltonian, we construct a large class of G-invariant time-evolutions on these algebras, and find the Hamiltonians implementing these time-evolutions in each irreducible representation of { A}. “Topological” terms in the Hamiltonian (or the corresponding action) turn out to be representation-dependent, and are automatically induced by the quantization procedure. Known “topological” charge quantization or periodicity conditions are then identically satisfied as a consequence of the representation theory of { A}.

  13. Lefschetz thimbles in fermionic effective models with repulsive vector-field

    NASA Astrophysics Data System (ADS)

    Mori, Yuto; Kashiwa, Kouji; Ohnishi, Akira

    2018-06-01

    We discuss two problems in complexified auxiliary fields in fermionic effective models, the auxiliary sign problem associated with the repulsive vector-field and the choice of the cut for the scalar field appearing from the logarithmic function. In the fermionic effective models with attractive scalar and repulsive vector-type interaction, the auxiliary scalar and vector fields appear in the path integral after the bosonization of fermion bilinears. When we make the path integral well-defined by the Wick rotation of the vector field, the oscillating Boltzmann weight appears in the partition function. This "auxiliary" sign problem can be solved by using the Lefschetz-thimble path-integral method, where the integration path is constructed in the complex plane. Another serious obstacle in the numerical construction of Lefschetz thimbles is caused by singular points and cuts induced by multivalued functions of the complexified scalar field in the momentum integration. We propose a new prescription which fixes gradient flow trajectories on the same Riemann sheet in the flow evolution by performing the momentum integration in the complex domain.

  14. Which coordinate system for modelling path integration?

    PubMed

    Vickerstaff, Robert J; Cheung, Allen

    2010-03-21

    Path integration is a navigation strategy widely observed in nature where an animal maintains a running estimate, called the home vector, of its location during an excursion. Evidence suggests it is both ancient and ubiquitous in nature, and it has been studied for over a century. In that time, canonical and neural network models have flourished, based on a wide range of assumptions, justifications and supporting data. Despite the importance of the phenomenon, consensus and unifying principles appear lacking. A fundamental issue is the neural representation of space needed for biological path integration. This paper presents a scheme to classify path integration systems on the basis of the way the home vector records and updates the spatial relationship between the animal and its home location. Four extended classes of coordinate systems are used to unify and review both canonical and neural network models of path integration, from the arthropod and mammalian literature. This scheme demonstrates analytical equivalence between models which may otherwise appear unrelated, and distinguishes between models which may superficially appear similar. A thorough analysis is carried out of the equational forms of important facets of path integration including updating, steering, searching and systematic errors, using each of the four coordinate systems. The type of available directional cue, namely allothetic or idiothetic, is also considered. It is shown that, on balance, the class of home vectors which includes the geocentric Cartesian coordinate system appears to be the most robust for biological systems. A key conclusion is that deducing computational structure from behavioural data alone will be difficult or impossible, at least in the absence of an analysis of random errors. Consequently it is likely that further theoretical insights into path integration will require an in-depth study of the effect of noise on the four classes of home vectors. Copyright 2009 Elsevier Ltd. All rights reserved.
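    As a minimal illustration of one of the four coordinate classes discussed above (the geocentric Cartesian home vector), the sketch below accumulates the vector from heading and speed samples and derives the homing direction. The update rule and parameter names are generic assumptions, not a reproduction of any specific model reviewed in the paper.

```python
import numpy as np

def update_home_vector(home_vec, speed, heading, dt):
    """One path-integration step: home_vec points from the nest to the agent."""
    step = speed * dt * np.array([np.cos(heading), np.sin(heading)])
    return home_vec + step

def homing_direction(home_vec):
    """Heading that steers the agent back toward the nest."""
    return np.arctan2(-home_vec[1], -home_vec[0])

if __name__ == "__main__":
    h = np.zeros(2)
    for heading in (0.0, np.pi / 2, np.pi / 2):   # an L-shaped excursion
        h = update_home_vector(h, speed=1.0, heading=heading, dt=1.0)
    print(h, homing_direction(h))
```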

  15. Leaky Waves in Metamaterials for Antenna Applications

    DTIC Science & Technology

    2011-07-01

    excitation problems, electromagnetic fields are often represented as Sommerfeld integrals [31,32]. A detailed discussion about Sommerfeld integral is...source removed. In the rest of this section, a de- tailed discussion about Sommerfeld Integral Path is presented. 4.1 Spectral Domain Approach 4.1.1... Sommerfeld integral path for evaluating fields accurately and efficiently, the radiation intensity and directivity of electric/magnetic dipoles over a grounded

  16. Relational symplectic groupoid quantization for constant poisson structures

    NASA Astrophysics Data System (ADS)

    Cattaneo, Alberto S.; Moshayedi, Nima; Wernli, Konstantin

    2017-09-01

    As a detailed application of the BV-BFV formalism for the quantization of field theories on manifolds with boundary, this note describes a quantization of the relational symplectic groupoid for a constant Poisson structure. The presence of mixed boundary conditions and the globalization of results are also addressed. In particular, the paper includes an extension to space-times with boundary of some formal geometry considerations in the BV-BFV formalism, and specifically introduces into the BV-BFV framework a "differential" version of the classical and quantum master equations. The quantization constructed in this paper induces Kontsevich's deformation quantization on the underlying Poisson manifold, i.e., the Moyal product, which is known in full detail. This allows focussing on the BV-BFV technology and testing it. For the inexperienced reader, this is also a practical and reasonably simple way to learn it.

  17. A hybrid LBG/lattice vector quantizer for high quality image coding

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, V.; Sayood, K.; Arikan, E. (Editor)

    1991-01-01

    It is well known that a vector quantizer is an efficient coder offering a good trade-off between quantization distortion and bit rate. The performance of a vector quantizer asymptotically approaches the optimum bound with increasing dimensionality. A vector quantized image suffers from the following types of degradations: (1) edge regions in the coded image contain staircase effects, (2) quasi-constant or slowly varying regions suffer from contouring effects, and (3) textured regions lose details and suffer from granular noise. All three of these degradations are due to the finite size of the code book, the distortion measures used in the design, and the finite training procedure involved in the construction of the code book. In this paper, we present an adaptive technique which attempts to ameliorate the edge distortion and contouring effects.

  18. Design and evaluation of sparse quantization index modulation watermarking schemes

    NASA Astrophysics Data System (ADS)

    Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter

    2008-08-01

    In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are the unauthorized copying and copyright issues, by which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of the deployment of error correction codes on the most promising configurations is examined. The utilization of BCH-codes (Bose, Ray-Chaudhuri, Hocquenghem) results in an improved robustness as long as the capacity of the error codes is not exceeded (cliff-effect).
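    The core embedding operation behind the schemes evaluated above can be sketched with a scalar quantization-index-modulation step: a coefficient is quantized onto one of two interleaved lattices (offset by half the step size) depending on the watermark bit, and detection picks the closer lattice. The step size below is illustrative; the paper's sparse coefficient selection, wavelet grouping schemes, and BCH coding are not shown.

```python
import numpy as np

def qim_embed(coeff, bit, delta=8.0):
    """Quantize a coefficient onto the lattice selected by the watermark bit."""
    offset = 0.0 if bit == 0 else delta / 2.0
    return np.round((coeff - offset) / delta) * delta + offset

def qim_detect(coeff, delta=8.0):
    """Return the bit whose lattice lies closest to the received coefficient."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1

if __name__ == "__main__":
    watermarked = qim_embed(37.2, bit=1)
    noisy = watermarked + 1.5            # mild attack / channel noise
    print(watermarked, qim_detect(noisy))
```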

  19. A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents

    PubMed Central

    Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha

    2017-01-01

    Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control—enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates. PMID:28446872

  20. Option pricing, stochastic volatility, singular dynamics and constrained path integrals

    NASA Astrophysics Data System (ADS)

    Contreras, Mauricio; Hojman, Sergio A.

    2014-01-01

    Stochastic volatility models have been widely studied and used in the financial world. The Heston model (Heston, 1993) [7] is one of the best known models to deal with this issue. These stochastic volatility models are characterized by the fact that they explicitly depend on a correlation parameter ρ which relates the two Brownian motions that drive the stochastic dynamics associated to the volatility and the underlying asset. Solutions to the Heston model in the context of option pricing, using a path integral approach, are found in Lemmens et al. (2008) [21] while in Baaquie (2007,1997) [12,13] propagators for different stochastic volatility models are constructed. In all previous cases, the propagator is not defined for extreme cases ρ=±1. It is therefore necessary to obtain a solution for these extreme cases and also to understand the origin of the divergence of the propagator. In this paper we study in detail a general class of stochastic volatility models for extreme values ρ=±1 and show that in these two cases, the associated classical dynamics corresponds to a system with second class constraints, which must be dealt with using Dirac’s method for constrained systems (Dirac, 1958,1967) [22,23] in order to properly obtain the propagator in the form of a Euclidean Hamiltonian path integral (Henneaux and Teitelboim, 1992) [25]. After integrating over momenta, one gets an Euclidean Lagrangian path integral without constraints, which in the case of the Heston model corresponds to a path integral of a repulsive radial harmonic oscillator. In all the cases studied, the price of the underlying asset is completely determined by one of the second class constraints in terms of volatility and plays no active role in the path integral.

  1. A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents.

    PubMed

    Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha

    2017-01-01

    Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control-enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates.

  2. Theory of the Quantized Hall Conductance in Periodic Systems: a Topological Analysis.

    NASA Astrophysics Data System (ADS)

    Czerwinski, Michael Joseph

    The integral quantization of the Hall conductance in two-dimensional periodic systems is investigated from a topological point of view. Attention is focused on the contributions from the electronic sub-bands which arise from perturbed Landau levels. After reviewing the theoretical work leading to the identification of the Hall conductance as a topological quantum number, both a determination and an interpretation of these quantized values for the sub-band conductances are made. It is shown that the Hall conductance of each sub-band can be regarded as the sum of two terms which will be referred to as classical and nonclassical. Although each of these contributions individually leads to a fractional conductance, the sum of these two contributions does indeed yield an integer. These integral conductances are found to be given by the solution of a simple Diophantine equation which depends on the periodic perturbation. A connection between the quantized value of the Hall conductance and the covering of real space by the zeroes of the sub-band wavefunctions allows for a determination of these conductances under more general potentials. A method is described for obtaining the conductance values from only those states bordering the Brillouin zone, and not the states in its interior. This method is demonstrated to give Hall conductances in agreement with those obtained from the Diophantine equation for the sinusoidal potential case explored earlier. Generalizing a simple gauge invariance argument from real space to k-space, a k-space 'vector potential' is introduced. This allows for an explicit identification of the Hall conductance with the phase winding number of the sub-band wavefunction around the Brillouin zone. The previously described division of the Hall conductance into classical and nonclassical contributions is in this way made more rigorous; based on periodicity considerations alone, these terms are identified as the winding numbers associated with (i) the basis states and (ii) the coefficients of these basis states, respectively. In this way a general Diophantine equation, independent of the periodic potential, is obtained. Finally, the use of the 'parallel transport' of state vectors in the determination of an overall phase convention for these states is described. This is seen to lead to a simple and straightforward method for determining the Hall conductance. This method is based on the states directly, without reference to the particular component wavefunctions of these states. Mention is made of the generality of calculations of this type, within the context of the geometric (or Berry) phases acquired by systems under an adiabatic modification of their environment.

  3. Path integral approach to closed form pricing formulas in the Heston framework.

    NASA Astrophysics Data System (ADS)

    Lemmens, Damiaan; Wouters, Michiel; Tempere, Jacques; Foulon, Sven

    2008-03-01

    We present a path integral approach for finding closed form formulas for option prices in the framework of the Heston model. The first model for determining option prices was the Black-Scholes model, which assumed that the logreturn followed a Wiener process with a given drift and constant volatility. To provide a realistic description of the market, the Black-Scholes results must be extended to include stochastic volatility. This is achieved by the Heston model, which assumes that the volatility follows a mean-reverting square root process. Current applications of the Heston model are hampered by the unavailability of fast numerical methods, due to a lack of closed-form formulae. Therefore the search for closed form solutions is an essential step before the qualitatively better stochastic volatility models can be used in practice. To attain this goal we outline a simplified path integral approach yielding straightforward results for vanilla Heston options with correlation. Extensions to barrier options and other path-dependent options are discussed, and the new derivation is compared to existing results obtained from alternative path-integral approaches (Dragulescu, Kleinert).
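    For orientation only, the sketch below simulates the Heston dynamics referenced above with a simple Euler-Maruyama scheme, dS = μS dt + √v S dW1 and dv = κ(θ - v) dt + ξ√v dW2 with corr(dW1, dW2) = ρ; it is a brute-force alternative to, not an implementation of, the closed-form path-integral formulas of the paper. All parameter values are illustrative.

```python
import numpy as np

def heston_paths(S0=100.0, v0=0.04, mu=0.05, kappa=2.0, theta=0.04,
                 xi=0.3, rho=-0.7, T=1.0, n_steps=250, n_paths=10000, seed=0):
    """Euler-Maruyama simulation of the Heston model (reflected variance)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        S *= np.exp((mu - 0.5 * v) * dt + np.sqrt(v * dt) * z1)
        v = np.abs(v + kappa * (theta - v) * dt + xi * np.sqrt(v * dt) * z2)
    return S, v

if __name__ == "__main__":
    S, _ = heston_paths()
    K = 100.0
    print(np.mean(np.maximum(S - K, 0.0)))   # mean undiscounted vanilla call payoff
```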

  4. Path integration in tactile perception of shapes.

    PubMed

    Moscatelli, Alessandro; Naceri, Abdeldjallil; Ernst, Marc O

    2014-11-01

    Whenever we move the hand across a surface, tactile signals provide information about the relative velocity between the skin and the surface. If the system were able to integrate the tactile velocity information over time, cutaneous touch may provide an estimate of the relative displacement between the hand and the surface. Here, we asked whether humans are able to form a reliable representation of the motion path from tactile cues only, integrating motion information over time. In order to address this issue, we conducted three experiments using tactile motion and asked participants (1) to estimate the length of a simulated triangle, (2) to reproduce the shape of a simulated triangular path, and (3) to estimate the angle between two line segments. Participants were able to accurately indicate the length of the path, whereas the perceived direction was affected by a direction bias (inward bias). The response pattern was thus qualitatively similar to those reported in classical path integration studies involving locomotion. However, we explain the directional biases as the result of a tactile motion aftereffect. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Segmentation of magnetic resonance images using fuzzy algorithms for learning vector quantization.

    PubMed

    Karayiannis, N B; Pai, P I

    1999-02-01

    This paper evaluates a segmentation technique for magnetic resonance (MR) images of the brain based on fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive network through an unsupervised learning process. Segmentation of MR images is formulated as an unsupervised vector quantization process, where the local values of different relaxation parameters form the feature vectors which are represented by a relatively small set of prototypes. The experiments evaluate a variety of FALVQ algorithms in terms of their ability to identify different tissues and discriminate between normal tissues and abnormalities.

  6. Splitting Times of Doubly Quantized Vortices in Dilute Bose-Einstein Condensates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huhtamaeki, J. A. M.; Pietilae, V.; Virtanen, S. M. M.

    2006-09-15

    Recently, the splitting of a topologically created doubly quantized vortex into two singly quantized vortices was experimentally investigated in dilute atomic cigar-shaped Bose-Einstein condensates [Y. Shin et al., Phys. Rev. Lett. 93, 160406 (2004)]. In particular, the dependency of the splitting time on the peak particle density was studied. We present results of theoretical simulations which closely mimic the experimental setup. We show that the combination of gravitational sag and time dependency of the trapping potential alone suffices to split the doubly quantized vortex in time scales which are in good agreement with the experiments.

  7. Response of two-band systems to a single-mode quantized field

    NASA Astrophysics Data System (ADS)

    Shi, Z. C.; Shen, H. Z.; Wang, W.; Yi, X. X.

    2016-03-01

    The response of topological insulators (TIs) to an external weakly classical field can be expressed in terms of Kubo formula, which predicts quantized Hall conductivity of the quantum Hall family. The response of TIs to a single-mode quantized field, however, remains unexplored. In this work, we take the quantum nature of the external field into account and define a Hall conductance to characterize the linear response of a two-band system to the quantized field. The theory is then applied to topological insulators. Comparisons with the traditional Hall conductance are presented and discussed.

  8. Quantized Iterative Learning Consensus Tracking of Digital Networks With Limited Information Communication.

    PubMed

    Xiong, Wenjun; Yu, Xinghuo; Chen, Yao; Gao, Jie

    2017-06-01

    This brief investigates the quantized iterative learning problem for digital networks with time-varying topologies. The information is first encoded as symbolic data and then transmitted. After the data are received, a decoder is used by the receiver to get an estimate of the sender's state. Iterative learning quantized communication is considered in the process of encoding and decoding. A sufficient condition is then presented for solving the consensus tracking problem in a finite interval using the quantized iterative learning controllers. Finally, simulation results are given to illustrate the usefulness of the developed criterion.

  9. Image Retrieval using Integrated Features of Binary Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Agarwal, Megha; Maheshwari, R. P.

    2011-12-01

    In this paper a new approach for image retrieval is proposed based on the binary wavelet transform. This new approach facilitates feature calculation by integrating histogram and correlogram features extracted from binary wavelet subbands. Experiments are performed to evaluate and compare the performance of the proposed method with the published literature. It is verified that the average precision and average recall of the proposed method (69.19%, 41.78%) are significantly improved compared to the optimal quantized wavelet correlogram (OQWC) [6] (64.3%, 38.00%) and the Gabor wavelet correlogram (GWC) [10] (64.1%, 40.6%). All the experiments are performed on the Corel 1000 natural image database [20].

  10. 4D Sommerfeld quantization of the complex extended charge

    NASA Astrophysics Data System (ADS)

    Bulyzhenkov, Igor E.

    2017-12-01

    Gravitational fields and accelerations cannot change the quantized magnetic flux in closed line contours due to the flat 3D section of curved 4D space-time-matter. The relativistic Bohr-Sommerfeld quantization of the imaginary charge reveals an electric analog of the Compton length, which can introduce quantitatively the fine structure constant and the Planck length.

  11. Accelerated sampling by infinite swapping of path integral molecular dynamics with surface hopping

    NASA Astrophysics Data System (ADS)

    Lu, Jianfeng; Zhou, Zhennan

    2018-02-01

    To accelerate the thermal equilibrium sampling of multi-level quantum systems, the infinite swapping limit of a recently proposed multi-level ring polymer representation is investigated. In the infinite swapping limit, the ring polymer evolves according to an averaged Hamiltonian with respect to all possible surface index configurations of the ring polymer and thus connects the surface hopping approach to the mean-field path-integral molecular dynamics. A multiscale integrator for the infinite swapping limit is also proposed to enable efficient sampling based on the limiting dynamics. Numerical results demonstrate the huge improvement of sampling efficiency of the infinite swapping compared with the direct simulation of path-integral molecular dynamics with surface hopping.

  12. Reducing weight precision of convolutional neural networks towards large-scale on-chip image recognition

    NASA Astrophysics Data System (ADS)

    Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang

    2015-05-01

    In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, i.e., Convolutional Neural Networks, for image recognition tasks. Low bit resolution is an important factor in bringing the deep learning neural network into hardware implementation, which directly determines the cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm called supervised iterative quantization to reduce the bit resolution of learned network weights. In the training stage, the supervised iterative quantization is conducted via two steps on the server: apply k-means based adaptive quantization on the learned network weights and retrain the network based on the quantized weights. These two steps are alternated until the convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded to the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt a uniform quantization for the inputs and internal network responses (called feature maps) to maintain low on-chip expenses. The Convolutional Neural Network with reduced weight and input/response precision is demonstrated in recognizing two types of images: one is hand-written digit images and the other is real-life images in office scenarios. Both results show that the new network is able to achieve the performance of the neural network with full bit resolution, even though in the new network the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
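    The server-side quantization step described above can be sketched as a single k-means pass over the learned weights, with each weight replaced by its nearest centroid; the alternating retraining step and the uniform quantization of inputs and feature maps are not shown. scikit-learn is assumed to be available, and the bit width is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_quantize(weights, bits=4, seed=0):
    """Cluster weights into 2**bits centroids and snap each weight to its centroid."""
    w = np.asarray(weights, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=2**bits, n_init=10, random_state=seed).fit(w)
    return km.cluster_centers_[km.labels_].reshape(np.shape(weights))

if __name__ == "__main__":
    layer = np.random.default_rng(0).normal(size=(32, 32))
    q = kmeans_quantize(layer, bits=4)
    print(np.unique(q).size, np.mean((layer - q) ** 2))   # 16 levels, quantization MSE
```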

  13. The coordinate coherent states approach revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao, Yan-Gang, E-mail: miaoyg@nankai.edu.cn; Zhang, Shao-Jun, E-mail: sjzhang@mail.nankai.edu.cn

    2013-02-15

    We revisit the coordinate coherent states approach through two different quantization procedures in the quantum field theory on the noncommutative Minkowski plane. The first procedure, which is based on the normal commutation relation between the annihilation and creation operators, deduces that a point mass can be described by a Gaussian function instead of the usual Dirac delta function. However, we argue against this specific quantization by adopting the canonical one (based on the canonical commutation relation between a field and its conjugate momentum) and show that a point mass should still be described by the Dirac delta function, which implies that the concept of point particles is still valid when we deal with the noncommutativity by following the coordinate coherent states approach. In order to investigate the dependence on quantization procedures, we apply the two quantization procedures to the Unruh effect and Hawking radiation and find that they give rise to significantly different results. Under the first quantization procedure, the Unruh temperature and Unruh spectrum are not deformed by noncommutativity, but the Hawking temperature is deformed by noncommutativity while the radiation spectrum is intact. However, under the second quantization procedure, the Unruh temperature and Hawking temperature are intact but both spectra are modified by an effective greybody (deformed) factor. - Highlights: • Suggest a canonical quantization in the coordinate coherent states approach. • Prove the validity of the concept of point particles. • Apply the canonical quantization to the Unruh effect and Hawking radiation. • Find no deformations in the Unruh temperature and Hawking temperature. • Provide the modified spectra of the Unruh effect and Hawking radiation.

  14. Hierarchically clustered adaptive quantization CMAC and its learning convergence.

    PubMed

    Teddy, S D; Lai, E M K; Quek, C

    2007-11-01

    The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network. They are the following: (1) a constant output resolution associated with the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized. Therefore, the efficient utilization of the CMAC memory is a crucial issue. One approach is to quantize the input space nonuniformly. For existing nonuniformly quantized CMAC systems, there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space to identify significant input segments and subsequently allocates more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by the proof of its learning convergence. The performance of the proposed network is subsequently benchmarked against the original CMAC network, as well as two other existing CMAC variants, on two real-life applications, namely, automated control of car maneuver and modeling of the human blood glucose dynamics. The experimental results have demonstrated that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output to achieve better or comparable performance with smaller memory usage. Index Terms: Cerebellar model articulation controller (CMAC), hierarchical clustering, hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC), learning convergence, nonuniform quantization.

  15. Quantum particles in general spacetimes: A tangent bundle formalism

    NASA Astrophysics Data System (ADS)

    Wohlfarth, Mattias N. R.

    2018-06-01

    Using tangent bundle geometry we construct an equivalent reformulation of classical field theory on flat spacetimes which simultaneously encodes the perspectives of multiple observers. Its generalization to curved spacetimes realizes a new type of nonminimal coupling of the fields and is shown to admit a canonical quantization procedure. For the resulting quantum theory we demonstrate the emergence of a particle interpretation, fully consistent with general relativistic geometry. The path dependency of parallel transport forces each observer to carry their own quantum state; we find that the communication of the corresponding quantum information may generate extra particles on curved spacetimes. A speculative link between quantum information and spacetime curvature is discussed which might lead to novel explanations for quantum decoherence and vanishing interference in double-slit or interaction-free measurement scenarios, in the mere presence of additional observers.

  16. Size quantization in high-temperature superconducting cuprates and a link to Einstein's diffusion law

    NASA Astrophysics Data System (ADS)

    Roeser, H. P.; Bohr, A.; Haslam, D. T.; López, J. S.; Stepper, M.; Nikoghosyan, A. S.

    2012-07-01

    Optimum doping of high-temperature superconductors (HTSC) defines a superconducting unit volume for each HTSC. For a single-mode HTSC, e.g., a cuprate with one CuO2 plane, the volume is given by V_sc = c·x², where c is the unit cell height and x the doping distance. The experimental resistivity at T_c is connected to the structure by ρ(exp) ≈ c·h/(2e²). Combining this result with the classical definition of resistivity leads to an equation similar to Einstein's diffusion law, x²/(2τ) = h/(2M_eff) = D, where τ is the relaxation time, M_eff = 2m_e, and D is the diffusion constant. It has also been shown that the mean free path d = x. The Einstein-Smoluchowski diffusion relation D = μ·k_B·T_c provides a connection to T_c.

  17. Entropy-aware projected Landweber reconstruction for quantized block compressive sensing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui

    2017-01-01

    A quantized block compressive sensing (QBCS) framework, which incorporates the universal measurement, quantization/inverse quantization, entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS, with entropy-aware projected Landweber (QBCS-EPL), which leverages the full-image sparse transform without Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. Through analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm can obtain better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
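    A bare-bones version of the reconstruction loop named above is sketched below: a projected Landweber (iterative shrinkage) iteration with a fixed soft threshold standing in for the paper's entropy-aware, wavelet-domain thresholding model, and a random Gaussian matrix standing in for the block-based measurements. All parameters are illustrative.

```python
import numpy as np

def soft(x, lam):
    """Soft-thresholding operator used as the projection step."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def projected_landweber(y, Phi, lam=0.05, n_iter=200):
    """Recover a sparse x from y = Phi @ x via Landweber updates plus shrinkage."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2     # step size ensuring convergence
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = soft(x + step * Phi.T @ (y - Phi @ x), lam)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((64, 256)) / np.sqrt(64)
    x_true = np.zeros(256)
    x_true[rng.choice(256, 8, replace=False)] = 1.0
    x_hat = projected_landweber(Phi @ x_true, Phi)
    print(np.linalg.norm(x_hat - x_true))        # reconstruction error
```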

  18. Quantized kernel least mean square algorithm.

    PubMed

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, as well as lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
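
    A minimal sketch of the QKLMS update described above follows: each new input either merges into its closest existing center (when it lies within the quantization radius) or spawns a new center. The Gaussian kernel width, step size, and quantization radius are illustrative choices, not values from the paper.

      import numpy as np

      # Minimal sketch of the QKLMS idea described above: an input within the
      # quantization radius of an existing center updates that center's coefficient;
      # otherwise a new center is added.  Kernel width, step size, and radius are
      # illustrative choices, not values from the paper.

      def gauss_kernel(a, b, sigma=0.5):
          return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

      class QKLMS:
          def __init__(self, eta=0.5, eps_q=0.3):
              self.eta, self.eps_q = eta, eps_q
              self.centers, self.alphas = [], []

          def predict(self, x):
              return sum(a * gauss_kernel(c, x) for c, a in zip(self.centers, self.alphas))

          def update(self, x, d):
              e = d - self.predict(x)
              dists = [np.linalg.norm(x - c) for c in self.centers]
              j = int(np.argmin(dists)) if dists else -1
              if j >= 0 and dists[j] <= self.eps_q:
                  self.alphas[j] += self.eta * e        # quantize: merge into closest center
              else:
                  self.centers.append(x.copy())         # novel region: grow the network
                  self.alphas.append(self.eta * e)
              return e

      # usage: online estimation of a static nonlinearity from noisy samples
      rng = np.random.default_rng(0)
      f = QKLMS()
      for _ in range(2000):
          x = rng.uniform(-3, 3, size=1)
          f.update(x, np.sin(x[0]) + 0.05 * rng.normal())
      print(len(f.centers), f.predict(np.array([1.0])), np.sin(1.0))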

  19. Rate and power efficient image compressed sensing and transmission

    NASA Astrophysics Data System (ADS)

    Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan

    2016-01-01

    This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, for which the first problem can be solved with relaxation and the second problem can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which have shown up to 10 dB improvement over the method without the rate and power optimization in medium and low signal-to-noise ratio cases.
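
    For the first stage, the flavor of the Lagrangian solution can be conveyed by the textbook closed form for allocating bits across independent sub-bands with high-rate distortion D_i proportional to var_i·2^(-2b_i); the sketch below uses that standard result with hypothetical sub-band variances and is not the paper's exact formulation.

      import numpy as np

      # Textbook Lagrangian bit allocation for independent sources with high-rate
      # distortion D_i ~ var_i * 2**(-2 b_i); shown only to convey the KKT flavor,
      # with hypothetical sub-band variances (not the paper's exact formulation).

      def allocate_bits(variances, total_bits):
          v = np.asarray(variances, dtype=float)
          geo_mean = np.exp(np.mean(np.log(v)))
          b = total_bits / len(v) + 0.5 * np.log2(v / geo_mean)
          b = np.clip(b, 0.0, None)              # crude KKT fix-up: no negative rates
          return b * total_bits / b.sum()        # approximate renormalization after clipping

      subband_var = [10.0, 4.0, 1.0, 0.25]       # hypothetical wavelet sub-band variances
      print(allocate_bits(subband_var, total_bits=16))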

  20. Double Ramification Cycles and Quantum Integrable Systems

    NASA Astrophysics Data System (ADS)

    Buryak, Alexandr; Rossi, Paolo

    2016-03-01

    In this paper, we define a quantization of the Double Ramification Hierarchies of Buryak (Commun Math Phys 336:1085-1107, 2015) and Buryak and Rossi (Commun Math Phys, 2014), using intersection numbers of the double ramification cycle, the full Chern class of the Hodge bundle and psi-classes with a given cohomological field theory. We provide effective recursion formulae which determine the full quantum hierarchy starting from just one Hamiltonian, the one associated with the first descendant of the unit of the cohomological field theory only. We study various examples which provide, in very explicit form, new (1+1)-dimensional integrable quantum field theories whose classical limits are well-known integrable hierarchies such as KdV, Intermediate Long Wave, extended Toda, etc. Finally, we prove polynomiality in the ramification multiplicities of the integral of any tautological class over the double ramification cycle.

  1. Normal and lateral Casimir forces between deformed plates

    NASA Astrophysics Data System (ADS)

    Emig, Thorsten; Hanke, Andreas; Golestanian, Ramin; Kardar, Mehran

    2003-02-01

    The Casimir force between macroscopic bodies depends strongly on their shape and orientation. To study this geometry dependence in the case of two deformed metal plates, we use a path-integral quantization of the electromagnetic field which properly treats the many-body nature of the interaction, going beyond the commonly used pairwise summation (PWS) of van der Waals forces. For arbitrary deformations we provide an analytical result for the deformation induced change in the Casimir energy, which is exact to second order in the deformation amplitude. For the specific case of sinusoidally corrugated plates, we calculate both the normal and the lateral Casimir forces. The deformation induced change in the Casimir interaction of a flat and a corrugated plate shows an interesting crossover as a function of the ratio of the mean plate distance H to the corrugation length λ: For λ≪H we find a slower decay ~H⁻⁴, compared to the H⁻⁵ behavior predicted by PWS, which we show to be valid only for λ≫H. The amplitude of the lateral force between two corrugated plates which are out of registry is shown to have a maximum at an optimal wavelength of λ≈2.5 H. With increasing H/λ≳0.3 the PWS approach becomes a progressively worse description of the lateral force due to many-body effects. These results may be of relevance for the design and operation of novel microelectromechanical systems (MEMS) and other nanoscale devices.

  2. Path integrals and large deviations in stochastic hybrid systems.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2014-04-01

    We construct a path-integral representation of solutions to a stochastic hybrid system, consisting of one or more continuous variables evolving according to a piecewise-deterministic dynamics. The differential equations for the continuous variables are coupled to a set of discrete variables that satisfy a continuous-time Markov process, which means that the differential equations are only valid between jumps in the discrete variables. Examples of stochastic hybrid systems arise in biophysical models of stochastic ion channels, motor-driven intracellular transport, gene networks, and stochastic neural networks. We use the path-integral representation to derive a large deviation action principle for a stochastic hybrid system. Minimizing the associated action functional with respect to the set of all trajectories emanating from a metastable state (assuming that such a minimization scheme exists) then determines the most probable paths of escape. Moreover, evaluating the action functional along a most probable path generates the so-called quasipotential used in the calculation of mean first passage times. We illustrate the theory by considering the optimal paths of escape from a metastable state in a bistable neural network.

  3. Integrated Flight Path Planning System and Flight Control System for Unmanned Helicopters

    PubMed Central

    Jan, Shau Shiun; Lin, Yu Hsiang

    2011-01-01

    This paper focuses on the design of an integrated navigation and guidance system for unmanned helicopters. The integrated navigation system comprises two systems: the Flight Path Planning System (FPPS) and the Flight Control System (FCS). The FPPS finds the shortest flight path by the A-Star (A*) algorithm in an adaptive manner for different flight conditions, and the FPPS can add a forbidden zone to stop the unmanned helicopter from crossing over into dangerous areas. In this paper, the FPPS computation time is reduced by the multi-resolution scheme, and the flight path quality is improved by the path smoothing methods. Meanwhile, the FCS includes the fuzzy inference systems (FISs) based on the fuzzy logic. By using expert knowledge and experience to train the FIS, the controller can operate the unmanned helicopter without dynamic models. The integrated system of the FPPS and the FCS is aimed at providing navigation and guidance to the mission destination and it is implemented by coupling the flight simulation software, X-Plane, and the computing software, MATLAB. Simulations are performed and shown in real time three-dimensional animations. Finally, the integrated system is demonstrated to work successfully in controlling the unmanned helicopter to operate in various terrains of a digital elevation model (DEM). PMID:22164029

  4. Integrated flight path planning system and flight control system for unmanned helicopters.

    PubMed

    Jan, Shau Shiun; Lin, Yu Hsiang

    2011-01-01

    This paper focuses on the design of an integrated navigation and guidance system for unmanned helicopters. The integrated navigation system comprises two systems: the Flight Path Planning System (FPPS) and the Flight Control System (FCS). The FPPS finds the shortest flight path by the A-Star (A*) algorithm in an adaptive manner for different flight conditions, and the FPPS can add a forbidden zone to stop the unmanned helicopter from crossing over into dangerous areas. In this paper, the FPPS computation time is reduced by the multi-resolution scheme, and the flight path quality is improved by the path smoothing methods. Meanwhile, the FCS includes the fuzzy inference systems (FISs) based on the fuzzy logic. By using expert knowledge and experience to train the FIS, the controller can operate the unmanned helicopter without dynamic models. The integrated system of the FPPS and the FCS is aimed at providing navigation and guidance to the mission destination and it is implemented by coupling the flight simulation software, X-Plane, and the computing software, MATLAB. Simulations are performed and shown in real time three-dimensional animations. Finally, the integrated system is demonstrated to work successfully in controlling the unmanned helicopter to operate in various terrains of a digital elevation model (DEM).
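
    A bare-bones version of the A* grid search used by the FPPS in the two records above is sketched here; the grid, unit step costs, Manhattan heuristic, and the treatment of forbidden zones as blocked cells are all illustrative simplifications.

      import heapq

      # Bare-bones grid A*: 0 = free cell, 1 = forbidden zone.  Unit step costs and
      # a Manhattan heuristic are illustrative simplifications.

      def astar(grid, start, goal):
          rows, cols = len(grid), len(grid[0])
          h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
          open_heap = [(h(start), 0, start, None)]
          came_from, g_best = {}, {start: 0}
          while open_heap:
              _, g, cur, parent = heapq.heappop(open_heap)
              if cur in came_from:
                  continue                        # already expanded with a better cost
              came_from[cur] = parent
              if cur == goal:                     # reconstruct the path by walking back
                  path = [cur]
                  while came_from[path[-1]] is not None:
                      path.append(came_from[path[-1]])
                  return path[::-1]
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nxt = (cur[0] + dr, cur[1] + dc)
                  if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                      ng = g + 1
                      if ng < g_best.get(nxt, float("inf")):
                          g_best[nxt] = ng
                          heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, cur))
          return None

      grid = [[0, 0, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 0, 0]]
      print(astar(grid, (0, 0), (2, 0)))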

  5. Covariant path integrals on hyperbolic surfaces

    NASA Astrophysics Data System (ADS)

    Schaefer, Joe

    1997-11-01

    DeWitt's covariant formulation of path integration [B. De Witt, "Dynamical theory in curved spaces. I. A review of the classical and quantum action principles," Rev. Mod. Phys. 29, 377-397 (1957)] has two practical advantages over the traditional methods of "lattice approximations": there is no ordering problem, and classical symmetries are manifestly preserved at the quantum level. Applying the spectral theorem for unbounded self-adjoint operators, we provide a rigorous proof of the convergence of certain path integrals on Riemann surfaces of constant curvature -1. The Pauli-DeWitt curvature correction term arises, as in DeWitt's work. Introducing a Fuchsian group Γ of the first kind, and a continuous, bounded, Γ-automorphic potential V, we obtain a Feynman-Kac formula for the automorphic Schrödinger equation on the Riemann surface Γ\H. We analyze the Wick rotation and prove the strong convergence of the so-called Feynman maps [K. D. Elworthy, Path Integration on Manifolds, Mathematical Aspects of Superspace, edited by Seifert, Clarke, and Rosenblum (Reidel, Boston, 1983), pp. 47-90] on a dense set of states. Finally, we give a new proof of some results in C. Grosche and F. Steiner, "The path integral on the Poincare upper half plane and for Liouville quantum mechanics," Phys. Lett. A 123, 319-328 (1987).

  6. Vector quantization

    NASA Technical Reports Server (NTRS)

    Gray, Robert M.

    1989-01-01

    During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments made on the state-of-the-art and current research efforts.
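
    A minimal example of the codebook design behind vector quantization is sketched below using Lloyd/k-means style iterations in the spirit of the generalized Lloyd (LBG) algorithm; the training blocks and codebook size are made up for the example.

      import numpy as np

      # Codebook design by Lloyd/k-means style iterations, in the spirit of the
      # generalized Lloyd (LBG) algorithm; training blocks and codebook size are
      # made up for the example.

      def train_codebook(data, n_codewords, n_iter=50, seed=0):
          rng = np.random.default_rng(seed)
          codebook = data[rng.choice(len(data), n_codewords, replace=False)].copy()
          for _ in range(n_iter):
              d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
              labels = d2.argmin(axis=1)                 # nearest-codeword assignment
              for k in range(n_codewords):
                  if np.any(labels == k):                # keep old codeword if cell is empty
                      codebook[k] = data[labels == k].mean(axis=0)
          return codebook

      def quantize(vectors, codebook):
          d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
          return d2.argmin(axis=1)                       # only indices need be transmitted

      rng = np.random.default_rng(1)
      blocks = rng.normal(size=(2000, 4))                # e.g. 2x2 image blocks
      cb = train_codebook(blocks, n_codewords=16)
      idx = quantize(blocks[:5], cb)
      print(idx, cb[idx[0]])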

  7. Robust vector quantization for noisy channels

    NASA Technical Reports Server (NTRS)

    Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.

    1988-01-01

    The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that about 4.5 dB gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.

  8. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
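
    The core mechanism, dividing each DCT coefficient of a block by the corresponding entry of a quantization matrix and rounding, can be sketched as follows; the flat example matrix is a placeholder rather than the perceptually optimized matrix of the invention.

      import numpy as np
      from scipy.fftpack import dct, idct

      # Each DCT coefficient of an 8x8 block is divided by the matching entry of a
      # quantization matrix and rounded.  The flat matrix Q below is a placeholder,
      # not the perceptually optimized (masking/error-pooling) matrix of the patent.

      def dct2(block):
          return dct(dct(block.T, norm='ortho').T, norm='ortho')

      def idct2(block):
          return idct(idct(block.T, norm='ortho').T, norm='ortho')

      def quantize_block(block, qmatrix):
          return np.round(dct2(block) / qmatrix)

      def dequantize_block(coeffs, qmatrix):
          return idct2(coeffs * qmatrix)

      rng = np.random.default_rng(0)
      block = rng.integers(0, 256, size=(8, 8)).astype(float)
      Q = np.full((8, 8), 16.0)                 # hypothetical uniform quantization matrix
      rec = dequantize_block(quantize_block(block, Q), Q)
      print(np.max(np.abs(rec - block)))        # each coefficient error is at most Q/2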

  9. Immirzi parameter without Immirzi ambiguity: Conformal loop quantization of scalar-tensor gravity

    NASA Astrophysics Data System (ADS)

    Veraguth, Olivier J.; Wang, Charles H.-T.

    2017-10-01

    Conformal loop quantum gravity provides an approach to loop quantization through an underlying conformal structure, i.e., a conformally equivalent class of metrics. The property that general relativity itself has no conformal invariance is reinstated with a constrained scalar field setting the physical scale. Conformally equivalent metrics have recently been shown to be amenable to loop quantization, including matter coupling. It has been suggested that conformal geometry may provide an extended symmetry to allow a reformulated Immirzi parameter necessary for loop quantization to behave like an arbitrary group parameter that requires no further fixing, as its present standard form does. Here, we find that this can be naturally realized via conformal frame transformations in scalar-tensor gravity. Such a theory generally incorporates a dynamical scalar gravitational field and reduces to general relativity when the scalar field becomes a pure gauge. In particular, we introduce a conformal Einstein frame in which loop quantization is implemented. We then discuss how different Immirzi parameters under this description may be related by conformal frame transformations and yet share the same quantization, having, for example, the same area gaps, modulated by the scalar gravitational field.

  10. Tribology of the lubricant quantized sliding state.

    PubMed

    Castelli, Ivano Eligio; Capozza, Rosario; Vanossi, Andrea; Santoro, Giuseppe E; Manini, Nicola; Tosatti, Erio

    2009-11-07

    In the framework of Langevin dynamics, we demonstrate clear evidence of the peculiar quantized sliding state, previously found in a simple one-dimensional boundary lubricated model [A. Vanossi et al., Phys. Rev. Lett. 97, 056101 (2006)], for a substantially less idealized two-dimensional description of a confined multilayer solid lubricant under shear. This dynamical state, marked by a nontrivial "quantized" ratio of the averaged lubricant center-of-mass velocity to the externally imposed sliding speed, is recovered, and shown to be robust against the effects of thermal fluctuations, quenched disorder in the confining substrates, and over a wide range of loading forces. The lubricant softness, setting the width of the propagating solitonic structures, is found to play a major role in promoting in-registry commensurate regions beneficial to this quantized sliding. By evaluating the force instantaneously exerted on the top plate, we find that this quantized sliding represents a dynamical "pinned" state, characterized by significantly low values of the kinetic friction. While the quantized sliding occurs due to solitons being driven gently, the transition to ordinary unpinned sliding regimes can involve lubricant melting due to large shear-induced Joule heating, for example at large speed.

  11. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
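
    The quantize-with-dither step can be illustrated as below: float pixels are mapped to scaled integers with a uniform dither subtracted before rounding and added back on restoration. The noise-spacing parameter is hand-picked here, whereas fpack derives it from the measured image noise, and the Rice coding stage is omitted.

      import numpy as np

      # Float pixels mapped to scaled integers with subtractive dithering; the
      # integers would then be Rice-compressed.  The noise spacing `scale` is
      # hand-picked here, whereas fpack derives it from the measured image noise.

      def quantize_pixels(img, scale, seed=0):
          rng = np.random.default_rng(seed)
          dither = rng.uniform(0.0, 1.0, size=img.shape)
          ints = np.round(img / scale - dither).astype(np.int32)
          return ints, dither

      def restore_pixels(ints, dither, scale):
          return (ints + dither) * scale

      rng = np.random.default_rng(1)
      image = 1000.0 + 5.0 * rng.normal(size=(64, 64))   # synthetic CCD-like frame
      q, d = quantize_pixels(image, scale=1.0)
      rec = restore_pixels(q, d, scale=1.0)
      print(np.std(rec - image))                         # roughly scale / sqrt(12)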

  12. Review of computer simulations of isotope effects on biochemical reactions: From the Bigeleisen equation to Feynman's path integral.

    PubMed

    Wong, Kin-Yiu; Xu, Yuqing; Xu, Liang

    2015-11-01

    Enzymatic reactions are integral components in many biological functions and malfunctions. The iconic structure of each reaction path for elucidating the reaction mechanism in detail is the molecular structure of the rate-limiting transition state (RLTS). But the RLTS is very difficult for experimentalists to capture or visualize. In spite of the lack of an explicit molecular structure of the RLTS in experiment, we can still trace out the unique "fingerprints" of the RLTS by measuring the isotope effects on the reaction rate. This set of "fingerprints" is considered the most direct probe of the RLTS. By contrast, in computer simulations the molecular structures of a number of TSs can often be precisely visualized on the computer screen; however, theoreticians are not sure which TS is the actual rate-limiting one. As a result, this is an excellent stage setting for a perfect "marriage" between experiment and theory for determining the structure of the RLTS, along with the reaction mechanism, i.e., experimentalists are responsible for "fingerprinting", whereas theoreticians are responsible for providing candidates that match the "fingerprints". In this Review, the origin of isotope effects on a chemical reaction is discussed from the perspectives of the classical and quantum worlds, respectively (e.g., the origins of the inverse kinetic isotope effects and of all the equilibrium isotope effects are purely quantum). The conventional Bigeleisen equation for isotope effect calculations, as well as its refined version in the framework of Feynman's path integral and Kleinert's variational perturbation (KP) theory for systematically incorporating anharmonicity and (non-parabolic) quantum tunneling, are also presented. In addition, the outstanding interplay between theory and experiment for successfully deducing the RLTS structures and the reaction mechanisms is demonstrated by applications to biochemical reactions, namely models of bacterial squalene-to-hopene polycyclization and RNA 2'-O-transphosphorylation. For all these applications, we used our recently developed path-integral method based on the KP theory, called the automated integration-free path-integral (AIF-PI) method, to perform ab initio path-integral calculations of isotope effects. As opposed to conventional path-integral molecular dynamics (PIMD) and Monte Carlo (PIMC) simulations, values calculated from our AIF-PI path-integral method can be as precise as (not as accurate as) the numerical precision of the computing machine. Lastly, comments are made on the general challenges in theoretical modeling of candidates matching the experimental "fingerprints" of RLTS. This article is part of a Special Issue entitled: Enzyme Transition States from Theory and Experiment. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Quantized Majorana conductance

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Liu, Chun-Xiao; Gazibegovic, Sasa; Xu, Di; Logan, John A.; Wang, Guanzhong; van Loo, Nick; Bommer, Jouri D. S.; de Moor, Michiel W. A.; Car, Diana; Op Het Veld, Roy L. M.; van Veldhoven, Petrus J.; Koelling, Sebastian; Verheijen, Marcel A.; Pendharkar, Mihir; Pennachio, Daniel J.; Shojaei, Borzoyeh; Lee, Joon Sue; Palmstrøm, Chris J.; Bakkers, Erik P. A. M.; Sarma, S. Das; Kouwenhoven, Leo P.

    2018-04-01

    Majorana zero-modes—a type of localized quasiparticle—hold great promise for topological quantum computing. Tunnelling spectroscopy in electrical transport is the primary tool for identifying the presence of Majorana zero-modes, for instance as a zero-bias peak in differential conductance. The height of the Majorana zero-bias peak is predicted to be quantized at the universal conductance value of 2e2/h at zero temperature (where e is the charge of an electron and h is the Planck constant), as a direct consequence of the famous Majorana symmetry in which a particle is its own antiparticle. The Majorana symmetry protects the quantization against disorder, interactions and variations in the tunnel coupling. Previous experiments, however, have mostly shown zero-bias peaks much smaller than 2e2/h, with a recent observation of a peak height close to 2e2/h. Here we report a quantized conductance plateau at 2e2/h in the zero-bias conductance measured in indium antimonide semiconductor nanowires covered with an aluminium superconducting shell. The height of our zero-bias peak remains constant despite changing parameters such as the magnetic field and tunnel coupling, indicating that it is a quantized conductance plateau. We distinguish this quantized Majorana peak from possible non-Majorana origins by investigating its robustness to electric and magnetic fields as well as its temperature dependence. The observation of a quantized conductance plateau strongly supports the existence of Majorana zero-modes in the system, consequently paving the way for future braiding experiments that could lead to topological quantum computing.

  14. Quantized Majorana conductance.

    PubMed

    Zhang, Hao; Liu, Chun-Xiao; Gazibegovic, Sasa; Xu, Di; Logan, John A; Wang, Guanzhong; van Loo, Nick; Bommer, Jouri D S; de Moor, Michiel W A; Car, Diana; Op Het Veld, Roy L M; van Veldhoven, Petrus J; Koelling, Sebastian; Verheijen, Marcel A; Pendharkar, Mihir; Pennachio, Daniel J; Shojaei, Borzoyeh; Lee, Joon Sue; Palmstrøm, Chris J; Bakkers, Erik P A M; Sarma, S Das; Kouwenhoven, Leo P

    2018-04-05

    Majorana zero-modes, a type of localized quasiparticle, hold great promise for topological quantum computing. Tunnelling spectroscopy in electrical transport is the primary tool for identifying the presence of Majorana zero-modes, for instance as a zero-bias peak in differential conductance. The height of the Majorana zero-bias peak is predicted to be quantized at the universal conductance value of 2e2/h at zero temperature (where e is the charge of an electron and h is the Planck constant), as a direct consequence of the famous Majorana symmetry in which a particle is its own antiparticle. The Majorana symmetry protects the quantization against disorder, interactions and variations in the tunnel coupling. Previous experiments, however, have mostly shown zero-bias peaks much smaller than 2e2/h, with a recent observation of a peak height close to 2e2/h. Here we report a quantized conductance plateau at 2e2/h in the zero-bias conductance measured in indium antimonide semiconductor nanowires covered with an aluminium superconducting shell. The height of our zero-bias peak remains constant despite changing parameters such as the magnetic field and tunnel coupling, indicating that it is a quantized conductance plateau. We distinguish this quantized Majorana peak from possible non-Majorana origins by investigating its robustness to electric and magnetic fields as well as its temperature dependence. The observation of a quantized conductance plateau strongly supports the existence of Majorana zero-modes in the system, consequently paving the way for future braiding experiments that could lead to topological quantum computing.

  15. Controlling charge quantization with quantum fluctuations.

    PubMed

    Jezouin, S; Iftikhar, Z; Anthore, A; Parmentier, F D; Gennser, U; Cavanna, A; Ouerghi, A; Levkivskyi, I P; Idrisov, E; Sukhorukov, E V; Glazman, L I; Pierre, F

    2016-08-04

    In 1909, Millikan showed that the charge of electrically isolated systems is quantized in units of the elementary electron charge e. Today, the persistence of charge quantization in small, weakly connected conductors allows for circuits in which single electrons are manipulated, with applications in, for example, metrology, detectors and thermometry. However, as the connection strength is increased, the discreteness of charge is progressively reduced by quantum fluctuations. Here we report the full quantum control and characterization of charge quantization. By using semiconductor-based tunable elemental conduction channels to connect a micrometre-scale metallic island to a circuit, we explore the complete evolution of charge quantization while scanning the entire range of connection strengths, from a very weak (tunnel) to a perfect (ballistic) contact. We observe, when approaching the ballistic limit, that charge quantization is destroyed by quantum fluctuations, and scales as the square root of the residual probability for an electron to be reflected across the quantum channel; this scaling also applies beyond the different regimes of connection strength currently accessible to theory. At increased temperatures, the thermal fluctuations result in an exponential suppression of charge quantization and in a universal square-root scaling, valid for all connection strengths, in agreement with expectations. Besides being pertinent for the improvement of single-electron circuits and their applications, and for the metal-semiconductor hybrids relevant to topological quantum computing, knowledge of the quantum laws of electricity will be essential for the quantum engineering of future nanoelectronic devices.

  16. Multispectral scanner system parameter study and analysis software system description, volume 2

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A. (Principal Investigator); Mobasseri, B. G.; Wiersma, D. J.; Wiswell, E. R.; Mcgillem, C. D.; Anuta, P. E.

    1978-01-01

    The author has identified the following significant results. The integration of the available methods provided the analyst with the unified scanner analysis package (USAP), the flexibility and versatility of which were superior to many previous integrated techniques. The USAP consisted of three main subsystems: (1) a spatial path, (2) a spectral path, and (3) a set of analytic classification accuracy estimators which evaluated the system performance. The spatial path consisted of satellite and/or aircraft data, data correlation analyzer, scanner IFOV, and random noise model. The output of the spatial path was fed into the analytic classification and accuracy predictor. The spectral path consisted of laboratory and/or field spectral data, EXOSYS data retrieval, optimum spectral function calculation, data transformation, and statistics calculation. The output of the spectral path was fed into the stratified posterior performance estimator.

  17. Neural basis of the cognitive map: path integration does not require hippocampus or entorhinal cortex.

    PubMed

    Shrager, Yael; Kirwan, C Brock; Squire, Larry R

    2008-08-19

    The hippocampus and entorhinal cortex have been linked to both memory functions and to spatial cognition, but it has been unclear how these ideas relate to each other. An important part of spatial cognition is the ability to keep track of a reference location using self-motion cues (sometimes referred to as path integration), and it has been suggested that the hippocampus or entorhinal cortex is essential for this ability. Patients with hippocampal lesions or larger lesions that also included entorhinal cortex were led on paths while blindfolded (up to 15 m in length) and were asked to actively maintain the path in mind. Patients pointed to and estimated their distance from the start location as accurately as controls. A rotation condition confirmed that performance was based on self-motion cues. When demands on long-term memory were increased, patients were impaired. Thus, in humans, the hippocampus and entorhinal cortex are not essential for path integration.

  18. Evaluation of NASA speech encoder

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Techniques developed by NASA for spaceflight instrumentation were used in the design of a quantizer for speech decoding. Computer simulation of the actions of the quantizer was tested with synthesized and real speech signals. Results were evaluated by a phonetician. Topics discussed include the relationship between the number of quantizer levels and the required sampling rate; reconstruction of signals; digital filtering; and speech recording, sampling, storage, and processing results.

  19. Addendum to "Free energies from integral equation theories: enforcing path independence".

    PubMed

    Kast, Stefan M

    2006-01-01

    The variational formalism developed for the analysis of the path dependence of free energies from integral equation theories [S. M. Kast, Phys. Rev. E 67, 041203 (2003)] is extended in order to allow for the three-dimensional treatment of arbitrarily shaped solutes.

  20. A theory for the radiation of magnetohydrodynamic surface waves and body waves into the solar corona

    NASA Technical Reports Server (NTRS)

    Davila, Joseph M.

    1988-01-01

    The Green's function for the slab coronal hole is obtained explicitly. The Fourier integral representation for the radiated field inside and outside the coronal hole waveguide is obtained. The radiated field outside the coronal hole is calculated using the method of steepest descents. It is shown that the radiated field can be written as the sum of two contributions: (1) a contribution from the integral along the steepest descent path and (2) a contribution from all the poles of the integrand between the path of the original integral and the steepest descent path. The free oscillations of the waveguide can be associated with the pole contributions in the steepest descent representation for the Green's function. These pole contributions are essentially generalized surface waves with a maximum amplitude near the interface which separates the plasma inside the coronal hole from the surrounding background corona. The path contribution to the integral is essentially the power radiated in body waves.

  1. Self-organizing path integration using a linked continuous attractor and competitive network: path integration of head direction.

    PubMed

    Stringer, Simon M; Rolls, Edmund T

    2006-12-01

    A key issue is how networks in the brain learn to perform path integration, that is, update a represented position using a velocity signal. Using head direction cells as an example, we show that a competitive network could self-organize to learn to respond to combinations of head direction and angular head rotation velocity. These combination cells can then be used to drive a continuous attractor network to the next head direction based on the incoming rotation signal. An associative synaptic modification rule with a short-term memory trace enables preceding combination cell activity during training to be associated with the next position in the continuous attractor network. The network accounts for the presence of neurons found in the brain that respond to combinations of head direction and angular head rotation velocity. Analogous networks in the hippocampal system could self-organize to perform path integration of place and spatial view representations.
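
    A schematic sketch of the ingredients described above, combination cells gated by rotation direction and a Hebbian rule with a short-term memory trace, is given below; the network sizes, rates, and one-hot coding are simplifications and not the authors' exact model.

      import numpy as np

      # Schematic only: "combination" cells code head direction x rotation sign, and
      # a Hebbian rule with a short-term memory trace associates their activity with
      # the *next* head-direction state, so rotation signals can drive the attractor.

      n_hd, lr, trace_decay = 36, 0.05, 0.6
      W = np.zeros((n_hd, 2 * n_hd))            # attractor cells <- combination cells

      def combo_activity(hd_index, rotation_sign):
          a = np.zeros(2 * n_hd)                # one-hot head direction, gated by rotation
          a[(0 if rotation_sign > 0 else n_hd) + hd_index] = 1.0
          return a

      trace, hd = np.zeros(2 * n_hd), 0
      for step in range(2000):                  # training: constant clockwise rotation
          pre = combo_activity(hd, +1)
          trace = trace_decay * trace + (1 - trace_decay) * pre
          hd = (hd + 1) % n_hd
          post = np.zeros(n_hd)
          post[hd] = 1.0
          W += lr * np.outer(post, trace)       # associate recent combo activity with next state

      # the learned weights should map combination cell (hd=5, +rotation) to hd=6
      print(np.argmax(W[:, 5]))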

  2. Quantum games of opinion formation based on the Marinatto-Weber quantum game scheme

    NASA Astrophysics Data System (ADS)

    Deng, Xinyang; Deng, Yong; Liu, Qi; Shi, Lei; Wang, Zhen

    2016-06-01

    Quantization has become a new way to investigate classical game theory since quantum strategies and quantum games were proposed. In existing studies, many typical game models, such as the prisoner's dilemma, battle of the sexes, and the Hawk-Dove game, have been extensively explored by using the quantization approach. Following a similar method, here several game models of opinion formation are quantized on the basis of the Marinatto-Weber quantum game scheme, a frequently used scheme for converting classical games to quantum versions. Our results show that the quantization can fascinatingly change the properties of some classical opinion formation game models so as to generate win-win outcomes.

  3. On Fock-space representations of quantized enveloping algebras related to noncommutative differential geometry

    NASA Astrophysics Data System (ADS)

    Jurčo, B.; Schlieker, M.

    1995-07-01

    In this paper explicitly natural (from the geometrical point of view) Fock-space representations (contragradient Verma modules) of the quantized enveloping algebras are constructed. In order to do so, one starts from the Gauss decomposition of the quantum group and introduces the differential operators on the corresponding q-deformed flag manifold (assumed as a left comodule for the quantum group) by a projection to it of the right action of the quantized enveloping algebra on the quantum group. Finally, the representatives of the elements of the quantized enveloping algebra corresponding to the left-invariant vector fields on the quantum group are expressed as first-order differential operators on the q-deformed flag manifold.

  4. Exploratory research session on the quantization of the gravitational field. At the Institute for Theoretical Physics, Copenhagen, Denmark, June-July 1957

    NASA Astrophysics Data System (ADS)

    DeWitt, Bryce S.

    2017-06-01

    During the period June-July 1957 six physicists met at the Institute for Theoretical Physics of the University of Copenhagen in Denmark to work together on problems connected with the quantization of the gravitational field. A large part of the discussion was devoted to exposition of the individual work of the various participants, but a number of new results were also obtained. The topics investigated by these physicists are outlined in this report and may be grouped under the following main headings: The theory of measurement. Topographical problems in general relativity. Feynman quantization. Canonical quantization. Approximation methods. Special problems.

  5. PathJam: a new service for integrating biological pathway information.

    PubMed

    Glez-Peña, Daniel; Reboiro-Jato, Miguel; Domínguez, Rubén; Gómez-López, Gonzalo; Pisano, David G; Fdez-Riverola, Florentino

    2010-10-28

    Biological pathways are crucial to much of the scientific research today, including the study of specific biological processes related to human diseases. PathJam is a new comprehensive and freely accessible web-server application integrating scattered human pathway annotation from several public sources. The tool has been designed (i) to be intuitive for wet-lab users, providing statistical enrichment analysis of pathway annotations, and (ii) to support the development of new integrative pathway applications. PathJam’s unique features and advantages include interactive graphs linking pathways and genes of interest, downloadable results in fully compatible formats, GSEA-compatible output files and a standardized RESTful API.

  6. Integrative Families and Systems Treatment: A Middle Path toward Integrating Common and Specific Factors in Evidence-Based Family Therapy

    ERIC Educational Resources Information Center

    Fraser, J. Scott; Solovey, Andrew D.; Grove, David; Lee, Mo Yee; Greene, Gilbert J.

    2012-01-01

    A moderate common factors approach is proposed as a synthesis or middle path to integrate common and specific factors in evidence-based approaches to high-risk youth and families. The debate in family therapy between common and specific factors camps is reviewed and followed by suggestions from the literature for synthesis and creative flexibility…

  7. Magnetic resonance image compression using scalar-vector quantization

    NASA Astrophysics Data System (ADS)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.

  8. Topologies on quantum topoi induced by quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakayama, Kunji

    2013-07-15

    In the present paper, we consider effects of quantization in a topos approach to quantum theory. A quantum system is assumed to be coded in a quantum topos, by which we mean the topos of presheaves on the context category of commutative subalgebras of a von Neumann algebra of bounded operators on a Hilbert space. A classical system is modeled by a Lie algebra of classical observables. It is shown that a quantization map from the classical observables to self-adjoint operators on the Hilbert space naturally induces geometric morphisms from presheaf topoi related to the classical system to the quantum topos. By means of the geometric morphisms, we give Lawvere-Tierney topologies on the quantum topos (and their equivalent Grothendieck topologies on the context category). We show that, among them, there exists a canonical one which we call a quantization topology. We furthermore give an explicit expression of a sheafification functor associated with the quantization topology.

  9. Bulk-edge correspondence in topological transport and pumping

    NASA Astrophysics Data System (ADS)

    Imura, Ken-Ichiro; Yoshimura, Yukinori; Fukui, Takahiro; Hatsugai, Yasuhiro

    2018-03-01

    The bulk-edge correspondence (BEC) refers to a one-to-one relation between the bulk and edge properties ubiquitous in topologically nontrivial systems. Depending on the setup, BEC manifests in different forms and governs the spectral and transport properties of topological insulators and semimetals. Although the topological pump is theoretically old, BEC in the pump has been established only recently [1], motivated by state-of-the-art experiments using cold atoms [2, 3]. The center of mass (CM) of a system with boundaries shows a sequence of quantized jumps in the adiabatic limit associated with the edge states. Although the bulk is adiabatic, the edge is inevitably non-adiabatic in the experimental setup or in any numerical simulation. Still, the pumped charge is quantized and carried by the bulk. Its quantization is guaranteed by a compensation between the bulk and edges. We show that in the presence of disorder the pumped charge continues to be quantized despite the appearance of non-quantized jumps.

  10. Fast large-scale object retrieval with binary quantization

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

    The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates to search locally in a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which allows it to adapt to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and the ID of the box in which the SIFT feature is located, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
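
    The indexing idea can be illustrated with the sketch below, in which a descriptor is binarized by thresholding random projections (an LSH-style stand-in for the paper's binary quantization) and the resulting bit-vector keys an inverted file of (image, box) postings.

      import numpy as np
      from collections import defaultdict

      # LSH-style stand-in for the paper's binary quantization: a descriptor is
      # binarized by thresholding random projections, and the 16-bit code keys an
      # inverted file of (image_id, box_id) postings.

      rng = np.random.default_rng(0)
      DIM, BITS = 128, 16
      proj = rng.normal(size=(BITS, DIM))

      def binarize(descriptor):
          bits = (proj @ descriptor > 0).astype(int)
          return int("".join(map(str, bits)), 2)        # bit-vector as an integer key

      inverted_file = defaultdict(list)                 # code -> list of (image_id, box_id)

      def index_box(image_id, box_id, descriptors):
          for d in descriptors:
              inverted_file[binarize(d)].append((image_id, box_id))

      def query(descriptors):
          votes = defaultdict(int)
          for d in descriptors:
              for posting in inverted_file[binarize(d)]:
                  votes[posting] += 1
          return sorted(votes.items(), key=lambda kv: -kv[1])

      # toy usage: index random "SIFT-like" descriptors for one box, query with noise
      feats = rng.normal(size=(20, DIM))
      index_box(image_id=7, box_id=3, descriptors=feats)
      print(query(feats + 0.05 * rng.normal(size=feats.shape))[:1])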

  11. Global synchronization of complex dynamical networks through digital communication with limited data rate.

    PubMed

    Wang, Yan-Wu; Bian, Tao; Xiao, Jiang-Wen; Wen, Changyun

    2015-10-01

    This paper studies the global synchronization of complex dynamical networks (CDNs) under digital communication with limited bandwidth. To realize the digital communication, the so-called uniform-quantizer-sets are introduced to quantize the states of nodes, which are then encoded and decoded by newly designed encoders and decoders. To meet the bandwidth constraint, a scaling function is utilized to guarantee that the quantizers have bounded inputs and thus achieve bounded real-time quantization levels. Moreover, a new type of vector norm is introduced to simplify the expression of the bandwidth limit. Through mathematical induction, a sufficient condition is derived to ensure global synchronization of the CDNs. The lower bound on the sum of the real-time quantization levels is analyzed for different cases. An optimization method is employed to relax the requirements on the network topology and to determine the minimum of such a lower bound for each case, respectively. Simulation examples are also presented to illustrate the established results.
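
    A toy version of the quantized communication step is sketched below: a fixed number of quantization levels is reused at every transmission because the input is normalized by a decaying scaling function. The scaling law and level count are assumptions for the illustration, not the paper's design.

      import numpy as np

      # Toy encoder/decoder pair: a fixed number of quantization levels (a fixed bit
      # budget per transmission) keeps covering the shrinking synchronization error
      # because the input is normalized by a decaying scaling function.  The scaling
      # law and level count are assumptions for the sketch.

      def make_codec(n_levels, scale_fn):
          half = n_levels // 2
          def encode(x, t):
              return int(np.clip(np.round(x / scale_fn(t) * half), -half, half))
          def decode(symbol, t):
              return symbol * scale_fn(t) / half
          return encode, decode

      scale = lambda t: 4.0 * 0.97 ** t          # assumed decaying error envelope
      enc, dec = make_codec(n_levels=8, scale_fn=scale)

      err = 3.5
      for t in range(5):
          sym = enc(err, t)                      # integer symbol sent over the channel
          print(t, sym, round(dec(sym, t), 3))
          err *= 0.9                             # pretend the coupling shrinks the error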

  12. Path integral Monte Carlo and the electron gas

    NASA Astrophysics Data System (ADS)

    Brown, Ethan W.

    Path integral Monte Carlo is a proven method for accurately simulating quantum mechanical systems at finite temperature. By stochastically sampling Feynman's path integral representation of the quantum many-body density matrix, path integral Monte Carlo includes non-perturbative effects like thermal fluctuations and particle correlations in a natural way. Over the past 30 years, path integral Monte Carlo has been successfully employed to study the low density electron gas, high-pressure hydrogen, and superfluid helium. For systems where the role of Fermi statistics is important, however, traditional path integral Monte Carlo simulations have an exponentially decreasing efficiency with decreased temperature and increased system size. In this thesis, we work towards improving this efficiency, both through approximate and exact methods, as specifically applied to the homogeneous electron gas. We begin with a brief overview of the current state of atomic simulations at finite temperature before we delve into a pedagogical review of the path integral Monte Carlo method. We then spend some time discussing the one major issue preventing exact simulation of Fermi systems, the sign problem. Afterwards, we introduce a way to circumvent the sign problem in PIMC simulations through a fixed-node constraint. We then apply this method to the homogeneous electron gas at a large swath of densities and temperatures in order to map out the warm-dense matter regime. The electron gas can be a representative model for a host of real systems, from simple metals to stellar interiors. However, its most common use is as input into density functional theory. To this end, we aim to build an accurate representation of the electron gas from the ground state to the classical limit and examine its use in finite-temperature density functional formulations. The latter half of this thesis focuses on possible routes beyond the fixed-node approximation. As a first step, we utilize the variational principle inherent in the path integral Monte Carlo method to optimize the nodal surface. By using an ansatz resembling a free particle density matrix, we make a unique connection between a nodal effective mass and the traditional effective mass of many-body quantum theory. We then propose and test several alternate nodal ansatzes and apply them to single atomic systems. Finally, we propose a method to tackle the sign problem head on, by leveraging the relatively simple structure of permutation space. Using this method, we find we can perform exact simulations of the electron gas and 3He that were previously impossible.
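
    For readers unfamiliar with the method, a pedagogical sketch of primitive-approximation path integral Monte Carlo for a single one-dimensional harmonic oscillator (a distinguishable particle, so no sign problem arises) is given below, estimating the thermal expectation of x² in units where hbar = m = omega = kB = 1.

      import numpy as np

      # Primitive-approximation PIMC for a 1D harmonic oscillator (distinguishable
      # particle, so no sign problem), estimating <x^2>; units hbar = m = omega = kB = 1.

      beta, n_beads, n_sweeps = 2.0, 32, 20000
      tau = beta / n_beads
      rng = np.random.default_rng(0)
      path = rng.normal(size=n_beads)                    # closed imaginary-time path

      def link(x_a, x_b):
          return (x_b - x_a) ** 2 / (2.0 * tau)          # kinetic (spring) part of the action

      def site(x):
          return tau * 0.5 * x ** 2                      # potential part, V(x) = x^2 / 2

      samples = []
      for sweep in range(n_sweeps):
          for j in range(n_beads):                       # single-bead Metropolis moves
              left, right = path[(j - 1) % n_beads], path[(j + 1) % n_beads]
              old, new = path[j], path[j] + 0.5 * rng.normal()
              dS = (link(left, new) + link(new, right) + site(new)
                    - link(left, old) - link(old, right) - site(old))
              if dS < 0 or rng.random() < np.exp(-dS):
                  path[j] = new
          if sweep > 2000:                               # discard equilibration sweeps
              samples.append(np.mean(path ** 2))

      # exact thermal expectation for comparison: <x^2> = coth(beta/2) / 2
      print(np.mean(samples), 0.5 / np.tanh(beta / 2.0))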

  13. Haralick texture features from apparent diffusion coefficient (ADC) MRI images depend on imaging and pre-processing parameters.

    PubMed

    Brynolfsson, Patrik; Nilsson, David; Torheim, Turid; Asklund, Thomas; Karlsson, Camilla Thellenberg; Trygg, Johan; Nyholm, Tufve; Garpebring, Anders

    2017-06-22

    In recent years, texture analysis of medical images has become increasingly popular in studies investigating diagnosis, classification and treatment response assessment of cancerous disease. Despite numerous applications in oncology and medical imaging in general, there is no consensus regarding texture analysis workflow, or reporting of parameter settings crucial for replication of results. The aim of this study was to assess how sensitive Haralick texture features of apparent diffusion coefficient (ADC) MR images are to changes in five parameters related to image acquisition and pre-processing: noise, resolution, how the ADC map is constructed, the choice of quantization method, and the number of gray levels in the quantized image. We found that noise, resolution, choice of quantization method and the number of gray levels in the quantized images had a significant influence on most texture features, and that the effect size varied between different features. Different methods for constructing the ADC maps did not have an impact on any texture feature. Based on our results, we recommend using images with similar resolutions and noise levels, using one quantization method, and the same number of gray levels in all quantized images, to make meaningful comparisons of texture feature results between different subjects.
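
    The pre-processing step whose settings the study varies, re-quantizing the image to N gray levels before computing co-occurrence-based Haralick features, is illustrated below with two common quantization variants; the synthetic image and bin counts are only for demonstration.

      import numpy as np

      # Re-quantizing an image to N gray levels before computing GLCM-based Haralick
      # features; both variants below are common choices, and the bin count is
      # exactly the kind of parameter the study recommends reporting.

      def quantize_equal_width(img, n_levels):
          lo, hi = img.min(), img.max()
          return np.clip(((img - lo) / (hi - lo) * n_levels).astype(int), 0, n_levels - 1)

      def quantize_equal_frequency(img, n_levels):
          ranks = np.argsort(np.argsort(img.ravel())).reshape(img.shape)
          return ranks * n_levels // img.size

      rng = np.random.default_rng(0)
      adc_like = rng.gamma(shape=2.0, scale=400.0, size=(32, 32))   # synthetic ADC-like map
      for n in (8, 32, 128):
          q = quantize_equal_width(adc_like, n)
          print(n, q.min(), q.max(), len(np.unique(q)))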

  14. From black holes to white holes: a quantum gravitational, symmetric bounce

    NASA Astrophysics Data System (ADS)

    Olmedo, Javier; Saini, Sahil; Singh, Parampreet

    2017-11-01

    Recently, a consistent non-perturbative quantization of the Schwarzschild interior resulting in a bounce from black hole to white hole geometry has been obtained by loop quantizing the Kantowski-Sachs vacuum spacetime. As in other spacetimes where the singularity is dominated by the Weyl part of the spacetime curvature, the structure of the singularity is highly anisotropic in the Kantowski-Sachs vacuum spacetime. As a result, the bounce turns out to be in general asymmetric, creating a large mass difference between the parent black hole and the child white hole. In this manuscript, we investigate under what circumstances a symmetric bounce scenario can be constructed in the above quantization. Using the setting of Dirac observables and geometric clocks, we obtain a symmetric bounce condition which can be satisfied by a slight modification in the construction of loops over which holonomies are considered in the quantization procedure. These modifications can be viewed as quantization ambiguities, and are demonstrated in three different flavors, all of which lead to a non-singular black to white hole transition with identical masses. Our results show that quantization ambiguities can mitigate or even qualitatively change some key features of the physics of singularity resolution. Further, these results are potentially helpful in motivating and constructing symmetric black to white hole transition scenarios.

  15. Rate-distortion analysis of dead-zone plus uniform threshold scalar quantization and its application--part II: two-pass VBR coding for H.264/AVC.

    PubMed

    Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming

    2013-01-01

    In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of the dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for generalized Gaussian distribution. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method is of high stability, low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB of JM 16.0's method, with an average rate control error of 1.95% and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.
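
    A sketch of a dead-zone plus uniform-threshold scalar quantizer with uniform reconstruction is given below; the rounding offset and the Laplacian test data are illustrative stand-ins for the generalized-Gaussian modeling in the paper.

      import numpy as np

      # Dead-zone plus uniform-threshold scalar quantizer with uniform reconstruction;
      # the rounding offset and Laplacian test data are illustrative stand-ins for
      # the paper's generalized-Gaussian modeling.

      def dzutq_quantize(coeffs, step, offset=1.0 / 6.0):
          # offset < 1/2 widens the dead zone around zero relative to a uniform quantizer
          return np.sign(coeffs) * np.floor(np.abs(coeffs) / step + offset)

      def dzutq_dequantize(levels, step):
          return levels * step

      rng = np.random.default_rng(0)
      x = rng.laplace(scale=4.0, size=100000)    # stand-in for transform residuals
      for q in (2.0, 4.0, 8.0):
          lv = dzutq_quantize(x, q)
          mse = np.mean((dzutq_dequantize(lv, q) - x) ** 2)
          nonzero = np.mean(lv != 0)             # crude activity proxy, not an entropy estimate
          print(q, round(mse, 3), round(nonzero, 3))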

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granados, Carlos; Weiss, Christian

    The nucleon's peripheral transverse charge and magnetization densities are computed in chiral effective field theory. The densities are represented in first-quantized form, as overlap integrals of chiral light-front wave functions describing the transition of the nucleon to soft pion-nucleon intermediate states. The orbital motion of the pion causes a large left-right asymmetry in a transversely polarized nucleon. As a result, the effect attests to the relativistic nature of chiral dynamics [pion momenta k = O(M_π)] and could be observed in form factor measurements at low momentum transfer.

  17. Universal Relation among the Many-Body Chern Number, Rotation Symmetry, and Filling

    NASA Astrophysics Data System (ADS)

    Matsugatani, Akishi; Ishiguro, Yuri; Shiozaki, Ken; Watanabe, Haruki

    2018-03-01

    Understanding the interplay between the topological nature and the symmetry property of interacting systems has been a central matter of condensed matter physics in recent years. In this Letter, we establish nonperturbative constraints on the quantized Hall conductance of many-body systems with arbitrary interactions. Our results allow one to readily determine the many-body Chern number modulo a certain integer without performing any integrations, solely based on the rotation eigenvalues and the average particle density of the many-body ground state.

  18. The Â-genus as a Projective Volume form on the Derived Loop Space

    NASA Astrophysics Data System (ADS)

    Grady, Ryan

    2018-06-01

    In the present work, we extend our previous work with Gwilliam by realizing Â(X) as the projective volume form associated to the BV operator in our quantization of a one-dimensional sigma model. We also discuss the associated integration/expectation map. We work in the formalism of L∞ spaces, objects of which are computationally convenient presentations for derived stacks. Both smooth and complex geometry embed into L∞ spaces and we specialize our results in both of these cases.

  19. Operators and higher genus mirror curves

    NASA Astrophysics Data System (ADS)

    Codesido, Santiago; Gu, Jie; Mariño, Marcos

    2017-02-01

    We perform further tests of the correspondence between spectral theory and topological strings, focusing on mirror curves of genus greater than one with nontrivial mass parameters. In particular, we analyze the geometry relevant to the SU(3) relativistic Toda lattice, and the resolved C^3/Z_6 orbifold. Furthermore, we give evidence that the correspondence holds for arbitrary values of the mass parameters, where the quantization problem leads to resonant states. We also explore the relation between this correspondence and cluster integrable systems.

  20. Iterative blip-summed path integral for quantum dynamics in strongly dissipative environments

    NASA Astrophysics Data System (ADS)

    Makri, Nancy

    2017-04-01

    The iterative decomposition of the blip-summed path integral [N. Makri, J. Chem. Phys. 141, 134117 (2014)] is described. The starting point is the expression of the reduced density matrix for a quantum system interacting with a harmonic dissipative bath in the form of a forward-backward path sum, where the effects of the bath enter through the Feynman-Vernon influence functional. The path sum is evaluated iteratively in time by propagating an array that stores blip configurations within the memory interval. Convergence with respect to the number of blips and the memory length yields numerically exact results which are free of statistical error. In situations of strongly dissipative, sluggish baths, the algorithm leads to a dramatic reduction of computational effort in comparison with iterative path integral methods that do not implement the blip decomposition. This gain in efficiency arises from (i) the rapid convergence of the blip series and (ii) circumventing the explicit enumeration of between-blip path segments, whose number grows exponentially with the memory length. Application to an asymmetric dissipative two-level system illustrates the rapid convergence of the algorithm even when the bath memory is extremely long.

  1. Estimates on Functional Integrals of Quantum Mechanics and Non-relativistic Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Bley, Gonzalo A.; Thomas, Lawrence E.

    2017-01-01

    We provide a unified method for obtaining upper bounds for certain functional integrals appearing in quantum mechanics and non-relativistic quantum field theory, functionals of the form E[exp(A_T)], the (effective) action A_T being a function of particle trajectories up to time T. The estimates in turn yield rigorous lower bounds for ground state energies, via the Feynman-Kac formula. The upper bounds are obtained by writing the action for these functional integrals in terms of stochastic integrals. The method is illustrated in familiar quantum mechanical settings: for the hydrogen atom, for a Schrödinger operator with 1/|x|^2 potential with small coupling, and, with a modest adaptation of the method, for the harmonic oscillator. We then present our principal applications of the method, in the settings of non-relativistic quantum field theories for particles moving in a quantized Bose field, including the optical polaron and Nelson models.

  2. CMOS-compatible 2-bit optical spectral quantization scheme using a silicon-nanocrystal-based horizontal slot waveguide

    PubMed Central

    Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Wu, Qiang; Sang, Xinzhu; Farrell, Gerald; Yu, Chongxiu; Li, Feng; Tam, Hwa Yaw; Wai, P. K. A.

    2014-01-01

    All-optical analog-to-digital converters based on the third-order nonlinear effects in silicon waveguide are a promising candidate to overcome the limitation of electronic devices and are suitable for photonic integration. In this paper, a 2-bit optical spectral quantization scheme for on-chip all-optical analog-to-digital conversion is proposed. The proposed scheme is realized by filtering the broadened and split spectrum induced by the self-phase modulation effect in a silicon horizontal slot waveguide filled with silicon-nanocrystal. Nonlinear coefficient as high as 8708 W−1/m is obtained because of the tight mode confinement of the horizontal slot waveguide and the high nonlinear refractive index of the silicon-nanocrystal, which provides the enhanced nonlinear interaction and accordingly low power threshold. The results show that a required input peak power level less than 0.4 W can be achieved, along with the 1.98-bit effective-number-of-bit and Gray code output. The proposed scheme can find important applications in on-chip all-optical digital signal processing systems. PMID:25417847

  3. CMOS-compatible 2-bit optical spectral quantization scheme using a silicon-nanocrystal-based horizontal slot waveguide.

    PubMed

    Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Wu, Qiang; Sang, Xinzhu; Farrell, Gerald; Yu, Chongxiu; Li, Feng; Tam, Hwa Yaw; Wai, P K A

    2014-11-24

    All-optical analog-to-digital converters based on the third-order nonlinear effects in silicon waveguides are a promising candidate to overcome the limitations of electronic devices and are suitable for photonic integration. In this paper, a 2-bit optical spectral quantization scheme for on-chip all-optical analog-to-digital conversion is proposed. The proposed scheme is realized by filtering the broadened and split spectrum induced by the self-phase modulation effect in a silicon horizontal slot waveguide filled with silicon-nanocrystal. A nonlinear coefficient as high as 8708 W(-1)/m is obtained because of the tight mode confinement of the horizontal slot waveguide and the high nonlinear refractive index of the silicon-nanocrystal, which provides enhanced nonlinear interaction and accordingly a low power threshold. The results show that a required input peak power of less than 0.4 W can be achieved, along with a 1.98-bit effective number of bits and Gray-code output. The proposed scheme can find important applications in on-chip all-optical digital signal processing systems.

  4. Quantized visual awareness.

    PubMed

    Escobar, W A

    2013-01-01

    The proposed model holds that, at its most fundamental level, visual awareness is quantized. That is to say that visual awareness arises as individual bits of awareness through the action of neural circuits with hundreds to thousands of neurons in at least the human striate cortex. Circuits with specific topologies will reproducibly result in visual awareness that correspond to basic aspects of vision like color, motion, and depth. These quanta of awareness (qualia) are produced by the feedforward sweep that occurs through the geniculocortical pathway but are not integrated into a conscious experience until recurrent processing from centers like V4 or V5 select the appropriate qualia being produced in V1 to create a percept. The model proposed here has the potential to shift the focus of the search for visual awareness to the level of microcircuits and these likely exist across the kingdom Animalia. Thus establishing qualia as the fundamental nature of visual awareness will not only provide a deeper understanding of awareness, but also allow for a more quantitative understanding of the evolution of visual awareness throughout the animal kingdom.

  5. Path-integral method for the source apportionment of photochemical pollutants

    NASA Astrophysics Data System (ADS)

    Dunker, A. M.

    2015-06-01

    A new, path-integral method is presented for apportioning the concentrations of pollutants predicted by a photochemical model to emissions from different sources. A novel feature of the method is that it can apportion the difference in a species concentration between two simulations. For example, the anthropogenic ozone increment, which is the difference between a simulation with all emissions present and another simulation with only the background (e.g., biogenic) emissions included, can be allocated to the anthropogenic emission sources. The method is based on an existing, exact mathematical equation. This equation is applied to relate the concentration difference between simulations to line or path integrals of first-order sensitivity coefficients. The sensitivities describe the effects of changing the emissions and are accurately calculated by the decoupled direct method. The path represents a continuous variation of emissions between the two simulations, and each path can be viewed as a separate emission-control strategy. The method does not require auxiliary assumptions, e.g., whether ozone formation is limited by the availability of volatile organic compounds (VOCs) or nitrogen oxides (NOx), and can be used for all the species predicted by the model. A simplified configuration of the Comprehensive Air Quality Model with Extensions (CAMx) is used to evaluate the accuracy of different numerical integration procedures and the dependence of the source contributions on the path. A Gauss-Legendre formula using three or four points along the path gives good accuracy for apportioning the anthropogenic increments of ozone, nitrogen dioxide, formaldehyde, and nitric acid. Source contributions to these increments were obtained for paths representing proportional control of all anthropogenic emissions together, control of NOx emissions before VOC emissions, and control of VOC emissions before NOx emissions. There are similarities in the source contributions from the three paths but also differences due to the different chemical regimes resulting from the emission-control strategies.
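
    To make the quadrature step concrete, the following Python sketch evaluates the path integral of first-order sensitivities with a 4-point Gauss-Legendre rule along a proportional-control path, using a toy two-source concentration function in place of the photochemical model. The function, emission vectors, and finite-difference sensitivities are illustrative assumptions (the paper uses CAMx and the decoupled direct method).

```python
import numpy as np

def concentration(E):
    """Toy nonlinear response of a pollutant to two emission sources."""
    return 10.0 * E[0] * E[1] / (1.0 + E[0] + 2.0 * E[1])

def sensitivities(E, h=1e-6):
    """First-order sensitivities dC/dE_j (finite differences stand in for DDM)."""
    base = concentration(E)
    return np.array([(concentration(E + h * np.eye(2)[j]) - base) / h
                     for j in range(2)])

E_background = np.array([0.1, 0.1])   # background-only emissions (assumed)
E_full = np.array([1.0, 0.8])         # all emissions present (assumed)
dE = E_full - E_background            # straight-line (proportional-control) path

# 4-point Gauss-Legendre rule mapped from [-1, 1] to the path parameter s in [0, 1].
nodes, weights = np.polynomial.legendre.leggauss(4)
s_points, s_weights = 0.5 * (nodes + 1.0), 0.5 * weights

contrib = np.zeros(2)
for s, w in zip(s_points, s_weights):
    contrib += w * dE * sensitivities(E_background + s * dE)

print("per-source contributions    :", contrib)
print("sum of contributions        :", contrib.sum())
print("true anthropogenic increment:",
      concentration(E_full) - concentration(E_background))
```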

  6. Path-integral method for the source apportionment of photochemical pollutants

    NASA Astrophysics Data System (ADS)

    Dunker, A. M.

    2014-12-01

    A new, path-integral method is presented for apportioning the concentrations of pollutants predicted by a photochemical model to emissions from different sources. A novel feature of the method is that it can apportion the difference in a species concentration between two simulations. For example, the anthropogenic ozone increment, which is the difference between a simulation with all emissions present and another simulation with only the background (e.g., biogenic) emissions included, can be allocated to the anthropogenic emission sources. The method is based on an existing, exact mathematical equation. This equation is applied to relate the concentration difference between simulations to line or path integrals of first-order sensitivity coefficients. The sensitivities describe the effects of changing the emissions and are accurately calculated by the decoupled direct method. The path represents a continuous variation of emissions between the two simulations, and each path can be viewed as a separate emission-control strategy. The method does not require auxiliary assumptions, e.g., whether ozone formation is limited by the availability of volatile organic compounds (VOC's) or nitrogen oxides (NOx), and can be used for all the species predicted by the model. A simplified configuration of the Comprehensive Air Quality Model with Extensions is used to evaluate the accuracy of different numerical integration procedures and the dependence of the source contributions on the path. A Gauss-Legendre formula using 3 or 4 points along the path gives good accuracy for apportioning the anthropogenic increments of ozone, nitrogen dioxide, formaldehyde, and nitric acid. Source contributions to these increments were obtained for paths representing proportional control of all anthropogenic emissions together, control of NOx emissions before VOC emissions, and control of VOC emissions before NOx emissions. There are similarities in the source contributions from the three paths but also differences due to the different chemical regimes resulting from the emission-control strategies.

  7. SIMULATION STUDY FOR GASEOUS FLUXES FROM AN AREA SOURCE USING COMPUTED TOMOGRAPHY AND OPTICAL REMOTE SENSING

    EPA Science Inventory

    The paper presents a new approach to quantifying emissions from fugitive gaseous air pollution sources. Computed tomography (CT) and path-integrated optical remote sensing (PI-ORS) concentration data are combined in a new field beam geometry. Path-integrated concentrations are ...

  8. Teaching Basic Quantum Mechanics in Secondary School Using Concepts of Feynman Path Integrals Method

    ERIC Educational Resources Information Center

    Fanaro, Maria de los Angeles; Otero, Maria Rita; Arlego, Marcelo

    2012-01-01

    This paper discusses the teaching of basic quantum mechanics in high school. Rather than following the usual formalism, our approach is based on Feynman's path integral method. Our presentation makes use of simulation software and avoids sophisticated mathematical formalism. (Contains 3 figures.)

  9. Piloting Systems Reset Path Integration Systems during Position Estimation

    ERIC Educational Resources Information Center

    Zhang, Lei; Mou, Weimin

    2017-01-01

    During locomotion, individuals can determine their positions with either idiothetic cues from movement (path integration systems) or visual landmarks (piloting systems). This project investigated how these 2 systems interact in determining humans' positions. In 2 experiments, participants studied the locations of 5 target objects and 1 single…

  10. Path-integral invariants in abelian Chern-Simons theory

    NASA Astrophysics Data System (ADS)

    Guadagnini, E.; Thuillier, F.

    2014-05-01

    We consider the U(1) Chern-Simons gauge theory defined in a general closed oriented 3-manifold M; the functional integration is used to compute the normalized partition function and the expectation values of the link holonomies. The non-perturbative path-integral is defined in the space of the gauge orbits of the connections which belong to the various inequivalent U(1) principal bundles over M; the different sectors of configuration space are labelled by the elements of the first homology group of M and are characterized by appropriate background connections. The gauge orbits of flat connections, whose classification is also based on the homology group, control the non-perturbative contributions to the mean values. The functional integration is carried out in any 3-manifold M, and the corresponding path-integral invariants turn out to be strictly related with the abelian Reshetikhin-Turaev surgery invariants.

  11. An analogue of Weyl’s law for quantized irreducible generalized flag manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matassa, Marco, E-mail: marco.matassa@gmail.com, E-mail: mmatassa@math.uio.no

    2015-09-15

    We prove an analogue of Weyl’s law for quantized irreducible generalized flag manifolds. This is formulated in terms of a zeta function which, similarly to the classical setting, satisfies the following two properties: as a functional on the quantized algebra it is proportional to the Haar state and its first singularity coincides with the classical dimension. The relevant formulas are given for the more general case of compact quantum groups.

  12. Quantization error of CCD cameras and their influence on phase calculation in fringe pattern analysis.

    PubMed

    Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R

    2003-09-10

    We present an investigation into the phase errors that occur in fringe pattern analysis that are caused by quantization effects. When acquisition devices with a limited value of camera bit depth are used, there are a limited number of quantization levels available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique. However, the principles can be applied equally well for other phase measuring techniques to yield a phase error distribution that is caused by the camera bit depth.
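
    A minimal sketch of how camera bit depth propagates into the recovered phase, assuming a 1-D fringe, uniform quantization, and a crude band-pass filter around the carrier; the fringe parameters and filter below are assumptions, not those of the paper.

```python
import numpy as np

N = 1024
x = np.arange(N)
f0 = 0.05                                    # carrier frequency (cycles/pixel)
phi_true = 0.8 * np.sin(2 * np.pi * x / N)   # smooth test phase
fringe = 0.5 + 0.4 * np.cos(2 * np.pi * f0 * x + phi_true)   # ideal fringe in [0, 1]

def recovered_phase(signal):
    # Fourier fringe analysis: isolate the +f0 carrier lobe, return unwrapped phase.
    F = np.fft.fft(signal)
    f = np.fft.fftfreq(N)
    band = (f > f0 / 2) & (f < 3 * f0 / 2)   # crude band-pass around the carrier
    analytic = np.fft.ifft(np.where(band, F, 0.0))
    return np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * x

reference = recovered_phase(fringe)
for bits in (4, 6, 8, 10, 12):
    levels = 2 ** bits - 1
    quantized = np.round(fringe * levels) / levels   # uniform quantization to 'bits'
    err = recovered_phase(quantized) - reference     # quantization-induced phase error
    err -= err.mean()                                # ignore a common phase offset
    print(f"{bits:2d}-bit camera: rms phase error = {err.std():.2e} rad")
```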

  13. Performance of customized DCT quantization tables on scientific data

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh; Livny, Miron

    1994-01-01

    We show that it is desirable to use data-specific or customized quantization tables for scaling the spatial frequency coefficients obtained using the Discrete Cosine Transform (DCT). DCT is widely used for image and video compression (MP89, PM93), but applications typically use default quantization matrices. Using actual scientific data gathered from diverse sources such as spacecraft and electron microscopes, we show that the default compression/quality tradeoffs can be significantly improved upon by using customized tables. We also show that significant improvements are possible for the standard test images Lena and Baboon. This work is part of an effort to develop a practical scheme for optimizing quantization matrices for any given image or video stream, under any given quality or compression constraints.
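
    The sketch below shows the basic operation being optimized: quantizing the 2-D DCT coefficients of an 8x8 block with a quantization table and measuring the reconstruction MSE, here comparing the standard JPEG luminance table against an arbitrary flat "custom" table. The test block and the custom table are illustrative assumptions, not the optimized tables of the paper.

```python
import numpy as np

# Orthonormal 8x8 DCT-II matrix built explicitly, so no external DCT routine is needed.
N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] /= np.sqrt(2.0)

# Standard JPEG luminance quantization table (default) vs a flat toy "custom" table.
Q_default = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)
Q_custom = np.full((N, N), 24.0)              # assumed flat table for comparison

rng = np.random.default_rng(0)
block = rng.normal(128, 30, size=(N, N))      # stand-in for an 8x8 image block

def mse_after_quantization(block, Q):
    coeffs = C @ (block - 128.0) @ C.T        # forward 2-D DCT
    dequant = np.round(coeffs / Q) * Q        # quantize / dequantize with table Q
    recon = C.T @ dequant @ C + 128.0         # inverse 2-D DCT
    return np.mean((recon - block) ** 2)

print("MSE with default table:", mse_after_quantization(block, Q_default))
print("MSE with custom table :", mse_after_quantization(block, Q_custom))
```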

  14. Gravitational surface Hamiltonian and entropy quantization

    NASA Astrophysics Data System (ADS)

    Bakshi, Ashish; Majhi, Bibhas Ranjan; Samanta, Saurav

    2017-02-01

    The surface Hamiltonian corresponding to the surface part of a gravitational action has xp structure where p is conjugate momentum of x. Moreover, it leads to TS on the horizon of a black hole. Here T and S are temperature and entropy of the horizon. Imposing the hermiticity condition we quantize this Hamiltonian. This leads to an equidistant spectrum of its eigenvalues. Using this we show that the entropy of the horizon is quantized. This analysis holds for any order of Lanczos-Lovelock gravity. For general relativity, the area spectrum is consistent with Bekenstein's observation. This provides a more robust confirmation of this earlier result as the calculation is based on the direct quantization of the Hamiltonian in the sense of usual quantum mechanics.

  15. Single-ion adsorption and switching in carbon nanotubes

    DOE PAGES

    Bushmaker, Adam W.; Oklejas, Vanessa; Walker, Don; ...

    2016-01-25

    Single-ion detection has, for many years, been the domain of large devices such as the Geiger counter, and studies on interactions of ionized gases with materials have been limited to large systems. To date, there have been no reports on single gaseous ion interaction with microelectronic devices, and single neutral atom detection techniques have shown only small, barely detectable responses. Here we report the observation of single gaseous ion adsorption on individual carbon nanotubes (CNTs), which, because of the severely restricted one-dimensional current path, experience discrete, quantized resistance increases of over two orders of magnitude. Only positive ions cause changes, by the mechanism of ion potential-induced carrier depletion, which is supported by density functional and Landauer transport theory. Lastly, our observations reveal a new single-ion/CNT heterostructure with novel electronic properties, and demonstrate that as electronics are ultimately scaled towards the one-dimensional limit, atomic-scale effects become increasingly important.

  16. QUANTIZING TUBE

    DOEpatents

    Jensen, A.S.; Gray, G.W.

    1958-07-01

    Beam deflection tubes are described for use in switching or pulse amplitude analysis. The salient features of the invention reside in the target arrangement, whereby outputs are obtained from a plurality of collector electrodes, each corresponding with a non-overlapping range of amplitudes of the input signal. The tube is provided with means for deflecting the electron beam along a line in accordance with the amplitude of an input signal. The target structure consists of a first dynode positioned in the path of the beam with slots spaced along the deflection line, and a second dynode positioned behind the first dynode. When the beam strikes the solid portions along the length of the first dynode, the excited electrons are multiplied and collected in separate collector electrodes spaced along the beam line. Similarly, the electrons excited when the beam strikes the second dynode are multiplied and collected in separate electrodes spaced along the length of the second dynode.

  17. Adiabatic Berry phase in an atom-molecule conversion system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu Libin; Center for Applied Physics and Technology, Peking University, Beijing 100084; Liu Jie, E-mail: liu_jie@iapcm.ac.c

    2010-11-15

    We investigate the Berry phase of adiabatic quantum evolution in the atom-molecule conversion system that is governed by a nonlinear Schroedinger equation. We find that the Berry phase consists of two parts: the usual Berry connection term and a novel term from the nonlinearity brought forth by the atom-molecule coupling. The total geometric phase can still be viewed as the flux of the magnetic field of a monopole through the surface enclosed by a closed path in parameter space. The charge of the monopole, however, is found to be one third of the elementary charge of the usual quantized monopole. We also derive the classical Hannay angle of a geometric nature associated with the adiabatic evolution. It exactly equals the negative of the Berry phase, indicating a novel connection between the Berry phase and the Hannay angle, in contrast to the usual derivative form.

  18. A new navigational mechanism mediated by ant ocelli.

    PubMed

    Schwarz, Sebastian; Wystrach, Antoine; Cheng, Ken

    2011-12-23

    Many animals rely on path integration for navigation and desert ants are the champions. On leaving the nest, ants continuously integrate their distance and direction of travel so that they always know their current distance and direction from the nest and can take a direct path to home. Distance information originates from a step-counter and directional information is based on a celestial compass. So far, it has been assumed that the directional information obtained from ocelli contribute to a single global path integrator, together with directional information from the dorsal rim area (DRA) of the compound eyes and distance information from the step-counter. Here, we show that ocelli mediate a distinct compass from that mediated by the compound eyes. After travelling a two-leg outbound route, untreated foragers headed towards the nest direction, showing that both legs of the route had been integrated. In contrast, foragers with covered compound eyes but uncovered ocelli steered in the direction opposite to the last leg of the outbound route. Our findings suggest that, unlike the DRA, ocelli cannot by themselves mediate path integration. Instead, ocelli mediate a distinct directional system, which buffers the most recent leg of a journey.

  19. Visual influence on path integration in darkness indicates a multimodal representation of large-scale space

    PubMed Central

    Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil

    2011-01-01

    Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934

  20. Tunable quantum interference in a 3D integrated circuit.

    PubMed

    Chaboyer, Zachary; Meany, Thomas; Helt, L G; Withford, Michael J; Steel, M J

    2015-04-27

    Integrated photonics promises solutions to questions of stability, complexity, and size in quantum optics. Advances in tunable and non-planar integrated platforms, such as laser-inscribed photonics, continue to bring the realisation of quantum advantages in computation and metrology ever closer, perhaps most easily seen in multi-path interferometry. Here we demonstrate control of two-photon interference in a chip-scale 3D multi-path interferometer, showing a reduced periodicity and enhanced visibility compared to single photon measurements. Observed non-classical visibilities are widely tunable, and explained well by theoretical predictions based on classical measurements. With these predictions we extract Fisher information approaching a theoretical maximum. Our results open a path to quantum enhanced phase measurements.

  1. Path integral measure, constraints and ghosts for massive gravitons with a cosmological constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metaxas, Dimitrios

    2009-12-15

    For massive gravity in a de Sitter background one encounters problems of stability when the curvature is larger than the graviton mass. I analyze this situation from the path integral point of view and show that it is related to the conformal factor problem of Euclidean quantum (massless) gravity. When a constraint for massive gravity is incorporated and the proper treatment of the path integral measure is taken into account, one finds that, for particular choices of the DeWitt metric on the space of metrics (in fact, the same choices as in the massless case), one obtains the opposite bound on the graviton mass.

  2. Note: A portable Raman analyzer for microfluidic chips based on a dichroic beam splitter for integration of imaging and signal collection light paths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geng, Yijia; Xu, Shuping; Xu, Weiqing, E-mail: xuwq@jlu.edu.cn

    An integrated and portable Raman analyzer featuring an inverted probe fixed on a motor-driven adjustable optical module was designed for combination with a microfluidic system. It possesses a micro-imaging function. The inverted configuration is advantageous for locating and focusing on microfluidic channels. Unlike commercial micro-imaging Raman spectrometers that use a manually switchable light path, this analyzer adopts a dichroic beam splitter for both the imaging and signal-collection light paths, which avoids movable parts and improves the integration and stability of the optics. Combined with the surface-enhanced Raman scattering technique, this portable Raman micro-analyzer is promising as a powerful tool for microfluidic analytics.

  3. Blip decomposition of the path integral: exponential acceleration of real-time calculations on quantum dissipative systems.

    PubMed

    Makri, Nancy

    2014-10-07

    The real-time path integral representation of the reduced density matrix for a discrete system in contact with a dissipative medium is rewritten in terms of the number of blips, i.e., elementary time intervals over which the forward and backward paths are not identical. For a given set of blips, it is shown that the path sum with respect to the coordinates of all remaining time points is isomorphic to that for the wavefunction of a system subject to an external driving term and thus can be summed by an inexpensive iterative procedure. This exact decomposition reduces the number of terms by a factor that increases exponentially with propagation time. Further, under conditions (moderately high temperature and/or dissipation strength) that lead primarily to incoherent dynamics, the "fully incoherent limit" zero-blip term of the series provides a reasonable approximation to the dynamics, and the blip series converges rapidly to the exact result. Retention of only the blips required for satisfactory convergence leads to speedup of full-memory path integral calculations by many orders of magnitude.

  4. Image Coding Based on Address Vector Quantization.

    NASA Astrophysics Data System (ADS)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation of the addresses is considered and Address Vector Quantization is developed for color-image and monochrome-image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but the bit rate is about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique," is presented. In addition to chapters 2 through 6, which report on new work, this dissertation includes one chapter (chapter 1) and part of chapter 2 which review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in conclusion.
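
    As background for the codebook-design step described above, here is a minimal numpy sketch of vector quantization with a codebook trained by the generalized Lloyd (k-means) iteration; the training vectors, codebook size, and vector dimension are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
vectors = rng.normal(size=(5000, 16))    # e.g. 4x4 image blocks, flattened (assumed data)
K = 32                                   # codebook size -> log2(32) = 5 bits per vector

def encode(vecs, codebook):
    """Nearest-codeword index for each input vector (the encoding step)."""
    d2 = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# Generalized Lloyd iteration: assign to the nearest codeword, then recompute centroids.
codebook = vectors[rng.choice(len(vectors), K, replace=False)].copy()
for _ in range(50):
    labels = encode(vectors, codebook)
    for k in range(K):
        members = vectors[labels == k]
        if len(members):
            codebook[k] = members.mean(axis=0)

# Decoding is a table lookup of the transmitted index into the codebook.
reconstruction = codebook[encode(vectors, codebook)]
print("MSE per vector :", np.mean((vectors - reconstruction) ** 2))
print("bits per vector:", int(np.log2(K)))
```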

  5. Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina

    2016-09-01

    The main problem for information reconciliation in continuous variable Quantum Key Distribution (QKD) at low Signal to Noise Ratio (SNR) is the quantization and assignment of labels to the samples of the Gaussian Random Variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization problem of the Gaussian samples in the very low SNR regime from an information theoretic point of view. We look at the problem of two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the bit strings that result from this quantization. The quantization threshold for the Most Significant Bit (MSB) should be chosen based on the maximization of the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced, in the sense that their entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond 2 bits, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significant-bit level are rather high, demanding very powerful error-correcting codes. While our calculations and simulations show that the mutual information between the LSB at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits. Hence, it is only by looking at the bits jointly that we are able to achieve a mutual information of 0.2217 bits, which is 75.8% of the maximum achievable. The implication is that only by coding both MSB and LSB jointly can we hope to get close to this 75.8% limit. Hence, non-binary codes are essential to achieve acceptable performance.
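
    A minimal simulation of the quantization question discussed above: correlated Gaussians at -3 dB SNR are mapped to a sign bit and a magnitude bit, and the empirical mutual information between Alice's and Bob's labels is estimated. The magnitude threshold and the sign/magnitude labeling are assumptions for illustration, not the optimized threshold or the MSB/LSB convention of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
snr = 10 ** (-3 / 10)                                   # -3 dB
x = rng.normal(size=n)                                  # Alice's Gaussian samples
y = x + rng.normal(scale=1 / np.sqrt(snr), size=n)      # Bob's noisy observations

def two_bit(z, t):
    sign_bit = (z > 0).astype(int)                      # balanced (sign) bit
    magnitude_bit = (np.abs(z) > t * z.std()).astype(int)
    return 2 * sign_bit + magnitude_bit                 # 2-bit symbols 0..3

def mutual_information(a, b, na, nb):
    # Plug-in estimate from the empirical joint distribution of the labels.
    joint = np.bincount(a * nb + b, minlength=na * nb).reshape(na, nb).astype(float)
    joint /= joint.sum()
    pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

t = 0.6745                                              # median-of-|z| threshold (assumed)
qa, qb = two_bit(x, t), two_bit(y, t)
print("I(sign_A ; sign_B)           =", mutual_information(qa // 2, qb // 2, 2, 2), "bits")
print("I(magnitude_A ; magnitude_B) =", mutual_information(qa % 2, qb % 2, 2, 2), "bits")
print("I(2-bit label A ; B)         =", mutual_information(qa, qb, 4, 4), "bits")
```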

  6. A Note on the Stochastic Nature of Feynman Quantum Paths

    NASA Astrophysics Data System (ADS)

    Botelho, Luiz C. L.

    2016-11-01

    We propose a Fresnel stochastic white noise framework to analyze the stochastic nature of the Feynman paths entering the Feynman path integral expression for the Feynman propagator of a particle moving quantum mechanically under a time-independent potential.

  7. Development of Advanced Technologies for Complete Genomic and Proteomic Characterization of Quantized Human Tumor Cells

    DTIC Science & Technology

    2014-07-01

    establishment of Glioblastoma (GBM) cell lines from GBM patients' tumor samples and quantized cell populations of each of the parental GBM cell lines, we... GBM patients are now well established and form the basis of the molecular characterization of the tumor development and signatures presented by these... analysis of these quantized cell subpopulations and have begun to assemble the protein signatures of GBM tumors underpinned by the comprehensive

  8. Differential calculus on quantized simple lie groups

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav

    1991-07-01

    Differential calculi, generalizations of Woronowicz's four-dimensional calculus on SU q (2), are introduced for quantized classical simple Lie groups in a constructive way. For this purpose, the approach of Faddeev and his collaborators to quantum groups was used. An equivalence of Woronowicz's enveloping algebra generated by the dual space to the left-invariant differential forms and the corresponding quantized universal enveloping algebra, is obtained for our differential calculi. Real forms for q ∈ ℝ are also discussed.

  9. Light-hole quantization in the optical response of ultra-wide GaAs/Al(x)Ga(1-x)As quantum wells.

    PubMed

    Solovyev, V V; Bunakov, V A; Schmult, S; Kukushkin, I V

    2013-01-16

    Temperature-dependent reflectivity and photoluminescence spectra are studied for undoped ultra-wide 150 and 250 nm GaAs quantum wells. It is shown that spectral features previously attributed to a size quantization of the exciton motion in the z-direction coincide well with energies of quantized levels for light holes. Furthermore, optical spectra reveal very similar properties at temperatures above the exciton dissociation point.

  10. Deformation quantizations with separation of variables on a Kähler manifold

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    1996-10-01

    We give a simple geometric description of all formal differentiable deformation quantizations on a Kähler manifold M such that for each open subset U⊂ M ⋆-multiplication from the left by a holomorphic function and from the right by an antiholomorphic function on U coincides with the pointwise multiplication by these functions. We show that these quantizations are in 1-1 correspondence with the formal deformations of the original Kähler metrics on M.

  11. Extension of loop quantum gravity to f(R) theories.

    PubMed

    Zhang, Xiangdong; Ma, Yongge

    2011-04-29

    The four-dimensional metric f(R) theories of gravity are cast into connection-dynamical formalism with real su(2) connections as configuration variables. Through this formalism, the classical metric f(R) theories are quantized by extending the loop quantization scheme of general relativity. Our results imply that the nonperturbative quantization procedure of loop quantum gravity is valid not only for general relativity but also for a rather general class of four-dimensional metric theories of gravity.

  12. PLANE-INTEGRATED OPEN-PATH FOURIER TRANSFORM INFRARED SPECTROMETRY METHODOLOGY FOR ANAEROBIC SWINE LAGOON EMISSION MEASUREMENTS

    EPA Science Inventory

    Emissions of ammonia and methane from an anaerobic lagoon at a swine animal feeding operation were evaluated five times over a period of two years. The plane-integrated (PI) open-path Fourier transform infrared spectrometry (OP-FTIR) methodology was used to transect the plume at ...

  13. Path integral Monte Carlo ground state approach: formalism, implementation, and applications

    NASA Astrophysics Data System (ADS)

    Yan, Yangqian; Blume, D.

    2017-11-01

    Monte Carlo techniques have played an important role in understanding strongly correlated systems across many areas of physics, covering a wide range of energy and length scales. Among the many Monte Carlo methods applicable to quantum mechanical systems, the path integral Monte Carlo approach with its variants has been employed widely. Since semi-classical or classical approaches will not be discussed in this review, path-integral-based approaches can for our purposes be divided into two categories: approaches applicable to quantum mechanical systems at zero temperature and approaches applicable to quantum mechanical systems at finite temperature. While these two approaches are related to each other, the underlying formulation and aspects of the algorithm differ. This paper reviews the path integral Monte Carlo ground state (PIGS) approach, which solves the time-independent Schrödinger equation. Specifically, the PIGS approach allows for the determination of expectation values with respect to eigenstates of the few- or many-body Schrödinger equation, provided the system Hamiltonian is known. The theoretical framework behind the PIGS algorithm, implementation details, and sample applications for fermionic systems are presented.
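
    For orientation, the sketch below implements the simplest ground-state path-integral idea behind PIGS for a 1-D harmonic oscillator (hbar = m = omega = 1): an open imaginary-time path with a constant trial function at the ends, the primitive action, and single-bead Metropolis moves, with ground-state observables read off the middle bead. Bead number, projection time, and move size are assumed values; this is a toy, not the full algorithm reviewed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 64                   # number of imaginary-time links (M + 1 beads, open path)
beta = 8.0               # total projection time onto the ground state
tau = beta / M

def potential(q):
    return 0.5 * q ** 2  # harmonic oscillator, hbar = m = omega = 1

path = rng.normal(size=M + 1)
middle = M // 2

def local_action(i, q):
    """Spring terms of the links touching bead i plus its potential weight."""
    s = 0.0
    if i > 0:
        s += (q - path[i - 1]) ** 2 / (2 * tau)
    if i < M:
        s += (path[i + 1] - q) ** 2 / (2 * tau)
    w = tau if 0 < i < M else tau / 2       # endpoint beads carry half potential weight
    return s + w * potential(q)

samples = []
for sweep in range(10000):
    for i in range(M + 1):                  # single-bead Metropolis moves
        old, new = path[i], path[i] + rng.normal(scale=0.5)
        if rng.random() < np.exp(local_action(i, old) - local_action(i, new)):
            path[i] = new
    if sweep >= 1000:                       # discard equilibration sweeps
        samples.append(path[middle])

samples = np.array(samples)
print("<x^2> at the middle bead:", samples.var(), "(exact ground state: 0.5)")
print("energy from the virial  :", 2 * np.mean(potential(samples)), "(exact: 0.5)")
```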

  14. Hidden symmetries for ellipsoid-solitonic deformations of Kerr-Sen black holes and quantum anomalies

    NASA Astrophysics Data System (ADS)

    Vacaru, Sergiu I.

    2013-02-01

    We prove the existence of hidden symmetries in the general relativity theory defined by exact solutions with generic off-diagonal metrics, nonholonomic (non-integrable) constraints, and deformations of the frame and linear connection structure. A special role in the characterization of such spacetimes is played by the corresponding nonholonomic generalizations of Stackel-Killing and Killing-Yano tensors. New classes of black hole solutions are constructed, and we study hidden symmetries for ellipsoidal and/or solitonic deformations of "prime" Kerr-Sen black holes into "target" off-diagonal metrics. In general, the classical conserved quantities (integrable and not integrable) do not transfer to the quantized systems and produce quantum gravitational anomalies. We prove that such anomalies can be eliminated via corresponding nonholonomic deformations of fundamental geometric objects (connections and corresponding Riemannian and Ricci tensors) and by frame transforms.

  15. Foundations of Quantum Mechanics: Derivation of a dissipative Schrödinger equation from first principles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonçalves, L.A.; Olavo, L.S.F., E-mail: olavolsf@gmail.com

    Dissipation in Quantum Mechanics took some time to become a robust field of investigation after the birth of the field. The main issue hindering developments in the field is that the quantization process was always tightly connected to the Hamiltonian formulation of Classical Mechanics. In this paper we present a quantization process that does not depend upon the Hamiltonian formulation of Classical Mechanics (although it still departs from Classical Mechanics) and thus overcome the problem of finding, from first principles, a completely general Schrödinger equation encompassing dissipation. This generalized process of quantization is shown to be nothing but an extension of a more restricted version that is shown to produce the Schrödinger equation for Hamiltonian systems from first principles (even for Hamiltonian velocity-dependent potentials). - Highlights: • A quantization process independent of the Hamiltonian formulation of Classical Mechanics is proposed. • This quantization method is applied to dissipative or absorptive systems. • A dissipative Schrödinger equation is derived from first principles.

  16. Can one ADM quantize relativistic bosonic strings and membranes?

    NASA Astrophysics Data System (ADS)

    Moncrief, Vincent

    2006-04-01

    The standard methods for quantizing relativistic strings diverge significantly from the Dirac-Wheeler-DeWitt program for quantization of generally covariant systems, and one wonders whether the latter could be successfully implemented as an alternative to the former. As a first step in this direction, we consider the possibility of quantizing strings (and also relativistic membranes) via a partially gauge-fixed ADM (Arnowitt, Deser and Misner) formulation of the reduced field equations for these systems. By exploiting some (Euclidean signature) Hamilton-Jacobi techniques that Mike Ryan and I had developed previously for the quantization of Bianchi IX cosmological models, I show how to construct Diff(S^1)-invariant (or Diff(Σ)-invariant in the case of membranes) ground state wave functionals for the cases of co-dimension one strings and membranes embedded in Minkowski spacetime. I also show that the reduced Hamiltonian density operators for these systems weakly commute when applied to physical (i.e. Diff(S^1)- or Diff(Σ)-invariant) states. While many open questions remain, these preliminary results seem to encourage further research along the same lines.

  17. On the classical and quantum integrability of systems of resonant oscillators

    NASA Astrophysics Data System (ADS)

    Marino, Massimo

    2017-01-01

    We study in this paper systems of harmonic oscillators with resonant frequencies. For these systems we present general procedures for the construction of sets of functionally independent constants of motion, which can be used for the definition of generalized action-angle variables, in accordance with the general description of degenerate integrable systems which was presented by Nekhoroshev in a seminal paper in 1972. We then apply to these classical integrable systems the procedure of quantization which was proposed to the author by Nekhoroshev during his last years of activity at Milan University. This procedure is based on the construction of linear operators by means of the symmetrization of the classical constants of motion mentioned above. For 3 oscillators with resonance 1:1:2, by using a computer program we have discovered an exceptional integrable system, which cannot be obtained with the standard methods based on the obvious symmetries of the Hamiltonian function. In this exceptional case, quantum integrability can be realized only by means of a modification of the symmetrization procedure.

  18. Coupled binary embedding for large-scale image retrieval.

    PubMed

    Zheng, Liang; Wang, Shengjin; Tian, Qi

    2014-08-01

    Visual matching is a crucial step in image retrieval based on the bag-of-words (BoW) model. In the baseline method, two keypoints are considered as a matching pair if their SIFT descriptors are quantized to the same visual word. However, the SIFT visual word has two limitations. First, it loses most of its discriminative power during quantization. Second, SIFT only describes the local texture feature. Both drawbacks impair the discriminative power of the BoW model and lead to false positive matches. To tackle this problem, this paper proposes to embed multiple binary features at the indexing level. To model correlation between features, a multi-IDF scheme is introduced, through which different binary features are coupled into the inverted file. We show that matching verification methods based on binary features, such as Hamming embedding, can be effectively incorporated in our framework. As an extension, we explore the fusion of a binary color feature into image retrieval. The joint integration of the SIFT visual word and binary features greatly enhances the precision of visual matching, reducing the impact of false positive matches. Our method is evaluated through extensive experiments on four benchmark datasets (Ukbench, Holidays, DupImage, and MIR Flickr 1M). We show that our method significantly improves the baseline approach. In addition, large-scale experiments indicate that the proposed method requires acceptable memory usage and query time compared with other approaches. Further, when the global color feature is integrated, our method yields performance competitive with the state of the art.
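
    A minimal sketch of the binary-feature verification idea referenced above (Hamming-embedding style): descriptors are projected, binarized against learned medians, and a candidate match is kept only if the signatures lie within a Hamming-distance threshold. The signature length, random projection, and threshold are assumptions, not the paper's coupled multi-feature index.

```python
import numpy as np

rng = np.random.default_rng(5)
dim, bits = 128, 64
projection = rng.normal(size=(bits, dim))            # fixed random projection

# Per-dimension medians learned from a sample of descriptors (per visual word in practice).
train = rng.normal(size=(1000, dim))
medians = np.median(train @ projection.T, axis=0)

def binary_signature(descriptor):
    return (projection @ descriptor > medians).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

query = rng.normal(size=dim)
similar = query + 0.3 * rng.normal(size=dim)          # genuinely matching descriptor
unrelated = rng.normal(size=dim)                      # false positive from coarse quantization

threshold = 20                                        # accept a match if distance <= threshold
for name, desc in (("similar", similar), ("unrelated", unrelated)):
    d = hamming(binary_signature(query), binary_signature(desc))
    verdict = "match kept" if d <= threshold else "match rejected"
    print(f"{name:9s}: Hamming distance {d:2d} -> {verdict}")
```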

  19. Tackling higher derivative ghosts with the Euclidean path integral

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontanini, Michele; Department of Physics, Syracuse University, Syracuse, New York 13244; Trodden, Mark

    2011-05-15

    An alternative to the effective field theory approach to treat ghosts in higher derivative theories is to attempt to integrate them out via the Euclidean path integral formalism. It has been suggested that this method could provide a consistent framework within which we might tolerate the ghost degrees of freedom that plague, among other theories, the higher derivative gravity models that have been proposed to explain cosmic acceleration. We consider the extension of this idea to treating a class of terms with order six derivatives, and find that for a general term the Euclidean path integral approach works in the most trivial background, Minkowski. Moreover we see that even in de Sitter background, despite some difficulties, it is possible to define a probability distribution for tensorial perturbations of the metric.

  20. Application of heterogeneous pulse coupled neural network in image quantization

    NASA Astrophysics Data System (ADS)

    Huang, Yi; Ma, Yide; Li, Shouliang; Zhan, Kun

    2016-11-01

    On the basis of the different strengths of synaptic connections between actual neurons, this paper proposes a heterogeneous pulse coupled neural network (HPCNN) algorithm to perform quantization on images. HPCNNs are developed from traditional pulse coupled neural network (PCNN) models, which have different parameters corresponding to different image regions. This allows pixels of different gray levels to be classified broadly into two categories: background regional and object regional. Moreover, an HPCNN also satisfies human visual characteristics. The parameters of the HPCNN model are calculated automatically according to these categories, and quantized results will be optimal and more suitable for humans to observe. At the same time, the experimental results of natural images from the standard image library show the validity and efficiency of our proposed quantization method.

  1. Tampered Region Localization of Digital Color Images Based on JPEG Compression Noise

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Dong, Jing; Tan, Tieniu

    With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm which can locate tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered region and the unchanged region have different responses to JPEG compression: the tampered region has stronger high-frequency quantization noise than the unchanged region. We employ PCA to separate quantization noise at different spatial frequencies, i.e., low-, medium-, and high-frequency quantization noise, and extract the high-frequency quantization noise for tampered region localization. Post-processing is involved to obtain the final localization result. The experimental results prove the effectiveness of our proposed method.

  2. Vector quantizer designs for joint compression and terrain categorization of multispectral imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Lyons, Daniel F.

    1994-01-01

    Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.

  3. Nonperturbative light-front Hamiltonian methods

    NASA Astrophysics Data System (ADS)

    Hiller, J. R.

    2016-09-01

    We examine the current state-of-the-art in nonperturbative calculations done with Hamiltonians constructed in light-front quantization of various field theories. The language of light-front quantization is introduced, and important (numerical) techniques, such as Pauli-Villars regularization, discrete light-cone quantization, basis light-front quantization, the light-front coupled-cluster method, the renormalization group procedure for effective particles, sector-dependent renormalization, and the Lanczos diagonalization method, are surveyed. Specific applications are discussed for quenched scalar Yukawa theory, ϕ4 theory, ordinary Yukawa theory, supersymmetric Yang-Mills theory, quantum electrodynamics, and quantum chromodynamics. The content should serve as an introduction to these methods for anyone interested in doing such calculations and as a rallying point for those who wish to solve quantum chromodynamics in terms of wave functions rather than random samplings of Euclidean field configurations.

  4. A new post-phase rotation based dynamic receive beamforming architecture for smartphone-based wireless ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Park, Minsuk; Kang, Jeeun; Lee, Gunho; Kim, Min; Song, Tai-Kyong

    2016-04-01

    Recently, portable US imaging systems using smart devices have been highlighted as a way to enhance the portability of diagnosis. In particular, such a system combination can enhance the user experience during the whole US diagnostic procedure by employing the advanced wireless communication technology integrated in a smart device, e.g., WiFi, Bluetooth, etc. In this paper, an effective post-phase-rotation-based dynamic receive beamforming (PRBF-POST) method is presented for a wireless US imaging device integrating a US probe system and a commercial smart device. Conventionally, the frame rate of the PRBF (PRBF-CON) method suffers from the large amount of calculation required for the bifurcated processing paths of the in-phase and quadrature signal components as the number of channels increases. In contrast, the proposed PRBF-POST method can preserve the frame rate regardless of the number of channels by first aggregating the baseband IQ data along the channels whose phase quantization levels are identical, ahead of the phase rotation and summation procedures on a smart device. To evaluate the performance of the proposed PRBF-POST method, the point-spread functions of the PRBF-CON and PRBF-POST methods were compared with each other. Also, the frame rate of each PRBF method was measured 20 times to calculate the average frame rate and its standard deviation. As a result, the PRBF-CON and PRBF-POST methods showed identical beamforming performance in the Field-II simulation (correlation coefficient = 1). Also, the proposed PRBF-POST method maintained a consistent frame rate for varying numbers of channels (i.e., 44.25, 44.32, and 44.35 fps for 16, 64, and 128 channels, respectively), while the PRBF-CON method showed a decrease in frame rate as the number of channels increased (39.73, 13.19, and 3.8 fps). These results indicate that the proposed PRBF-POST method can be more advantageous for implementing a wireless US imaging system than the PRBF-CON method.

  5. Path integral pricing of Wasabi option in the Black-Scholes model

    NASA Astrophysics Data System (ADS)

    Cassagnes, Aurelien; Chen, Yu; Ohashi, Hirotada

    2014-11-01

    In this paper, using path integral techniques, we derive a formula for a propagator arising in the study of occupation time derivatives. Using this result we derive a fair price for the case of the cumulative Parisian option. After confirming the validity of the derived result using Monte Carlo simulation, a new type of heavily path dependent derivative product is investigated. We derive an approximation for our so-called Wasabi option fair price and check the accuracy of our result with a Monte Carlo simulation.
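
    A minimal Monte Carlo check of the kind mentioned above for a cumulative Parisian call under Black-Scholes: the call pays only if the total occupation time above a barrier exceeds a required amount. All parameter values, the time grid, and the payoff convention are assumptions for illustration; the "Wasabi" contract itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(11)
S0, K, B = 100.0, 100.0, 105.0       # spot, strike, barrier (assumed)
r, sigma, T = 0.03, 0.25, 1.0        # rate, volatility, maturity in years (assumed)
D = 0.25                             # required occupation time above the barrier (years)
n_steps, n_paths = 252, 20_000
dt = T / n_steps

# Geometric Brownian motion paths under the risk-neutral measure.
z = rng.standard_normal((n_paths, n_steps))
increments = (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.cumsum(increments, axis=1))

occupation = (S > B).sum(axis=1) * dt            # discretized time spent above the barrier
payoff = np.where(occupation >= D, np.maximum(S[:, -1] - K, 0.0), 0.0)
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"cumulative Parisian call price ~ {price:.3f} +/- {stderr:.3f}")
```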

  6. IntPath--an integrated pathway gene relationship database for model organisms and important pathogens.

    PubMed

    Zhou, Hufeng; Jin, Jingjing; Zhang, Haojun; Yi, Bo; Wozniak, Michal; Wong, Limsoon

    2012-01-01

    Pathway data are important for understanding the relationship between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases--e.g., KEGG, WikiPathways, and BioCyc--are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomplete coverage of data across different databases. In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc.). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms--S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus--are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure no deletion and no introduced noise in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by the average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to remove errors and noise from the source data (e.g., gene ID errors in WikiPathways and relationship errors in KEGG). We turn complicated and incompatible XML data formats and inconsistent gene and gene relationship representations from different source databases into normalized and unified pathway-gene and pathway-gene pair relationships neatly recorded in simple tab-delimited text format and MySQL tables, which facilitates convenient automatic computation and large-scale referencing in many related studies. IntPath data can be downloaded in text format or as a MySQL dump. IntPath data can also be retrieved and analyzed conveniently through a web service by local programs or through a web interface by mouse clicks. Several useful analysis tools are also provided in IntPath. We have overcome in IntPath the issues of compatibility, consistency, and comprehensiveness that often hamper effective use of pathway databases. We have included four organisms in the current release of IntPath. Our methodology and programs described in this work can be easily applied to other organisms, and we will include more model organisms and important pathogens in future releases of IntPath. IntPath maintains regular updates and is freely available at http://compbio.ddns.comp.nus.edu.sg:8080/IntPath.

  7. Quantized phase coding and connected region labeling for absolute phase retrieval.

    PubMed

    Chen, Xiangcheng; Wang, Yuwei; Wang, Yajun; Ma, Mengchao; Zeng, Chunnian

    2016-12-12

    This paper proposes an absolute phase retrieval method for complex object measurement based on quantized phase coding and connected region labeling. A specific code sequence is embedded into the quantized phase of three coded fringes. Connected regions of different codes are labeled and assigned 3-digit codes combining the current period and its neighbors. Wrapped phase spanning more than 36 periods can be restored with reference to the code sequence. Experimental results verify the capability of the proposed method to measure multiple isolated objects.

  8. The wavelet/scalar quantization compression standard for digital fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
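
    A minimal sketch of the wavelet/scalar-quantization idea: one level of a 2-D Haar transform followed by uniform scalar quantization of each subband. The actual WSQ standard uses different filters, a deeper subband decomposition, and entropy coding; the test image and bin widths below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.normal(128, 40, size=(256, 256))         # stand-in for a fingerprint image

def haar2d(x):
    """One level of an orthonormal 2-D Haar transform (rows, then columns)."""
    a = (x[0::2, :] + x[1::2, :]) / np.sqrt(2); d = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2); lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2); hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1])); d = np.empty_like(a)
    a[:, 0::2] = (ll + lh) / np.sqrt(2); a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d[:, 0::2] = (hl + hh) / np.sqrt(2); d[:, 1::2] = (hl - hh) / np.sqrt(2)
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2, :] = (a + d) / np.sqrt(2); x[1::2, :] = (a - d) / np.sqrt(2)
    return x

subbands = haar2d(image)
bin_widths = (4.0, 16.0, 16.0, 32.0)                 # coarser bins for detail subbands (assumed)
quantized = [np.round(s / q) for s, q in zip(subbands, bin_widths)]
recon = ihaar2d(*[qz * q for qz, q in zip(quantized, bin_widths)])
print("rms reconstruction error:", np.sqrt(np.mean((recon - image) ** 2)))
```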

  9. Table look-up estimation of signal and noise parameters from quantized observables

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Rodemich, E. R.

    1986-01-01

    A table look-up algorithm for estimating underlying signal and noise parameters from quantized observables is examined. A general mathematical model is developed, and a look-up table designed specifically for estimating parameters from four-bit quantized data is described. Estimator performance is evaluated both analytically and by means of numerical simulation, and an example is provided to illustrate the use of the look-up table for estimating signal-to-noise ratios commonly encountered in Voyager-type data.
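
    A minimal sketch of the table look-up idea, assuming a simple model of a constant signal level in Gaussian noise observed through a 4-bit uniform quantizer: a table of quantized-output statistics is precomputed over a parameter grid and inverted by nearest-entry search. The quantizer range, grid, and statistics used are illustrative assumptions, not the Voyager-specific design.

```python
import numpy as np

rng = np.random.default_rng(4)
edges = np.linspace(-4.0, 4.0, 17)            # 4-bit uniform quantizer over [-4, 4]
centers = 0.5 * (edges[:-1] + edges[1:])

def quantize(y):
    idx = np.clip(np.digitize(y, edges) - 1, 0, 15)
    return centers[idx]

# Build the look-up table by simulation over a grid of (signal level, noise sigma).
levels = np.linspace(0.0, 2.0, 21)
sigmas = np.linspace(0.2, 2.0, 19)
table = np.zeros((len(levels), len(sigmas), 2))
for i, a in enumerate(levels):
    for j, s in enumerate(sigmas):
        q = quantize(a + s * rng.standard_normal(20000))
        table[i, j] = q.mean(), q.var()

# Estimate the parameters of a new quantized record by nearest table entry.
true_level, true_sigma = 0.9, 0.7
observed = quantize(true_level + true_sigma * rng.standard_normal(20000))
stats = np.array([observed.mean(), observed.var()])
i, j = np.unravel_index(np.argmin(((table - stats) ** 2).sum(axis=2)), table.shape[:2])
print("estimated signal level:", levels[i], " noise sigma:", sigmas[j])
print("estimated SNR (dB)    :", 10 * np.log10((levels[i] / sigmas[j]) ** 2))
```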

  10. Digital television system design study

    NASA Technical Reports Server (NTRS)

    Huth, G. K.

    1976-01-01

    The use of digital techniques for transmission of pictorial data is discussed for multi-frame images (television). Video signals are processed in a manner which includes quantization and coding such that they are separable from the noise introduced into the channel. The performance of digital television systems is determined by the nature of the processing techniques (i.e., whether the video signal itself or, instead, something related to the video signal is quantized and coded) and to the quantization and coding schemes employed.

  11. Rotating effects on the Landau quantization for an atom with a magnetic quadrupole moment

    NASA Astrophysics Data System (ADS)

    Fonseca, I. C.; Bakke, K.

    2016-01-01

    Based on the single particle approximation [Dmitriev et al., Phys. Rev. C 50, 2358 (1994) and C.-C. Chen, Phys. Rev. A 51, 2611 (1995)], the Landau quantization associated with an atom with a magnetic quadrupole moment is introduced, and then rotating effects on this analogue of the Landau quantization are investigated. It is shown that rotating effects can modify the cyclotron frequency and break the degeneracy of the analogue of the Landau levels.

  12. Investigation of Coding Techniques for Memory and Delay Efficient Interleaving in Slow Rayleigh Fading

    DTIC Science & Technology

    1991-11-01

    Hard-decision quantization assigned two quantization values: one for demodulation values larger than zero and another for demodulation values smaller than zero (for maximum-likelihood decisions). Logic 0 was assigned for a positive demodulation value and logic 1 for a negative one.

  13. Rotating effects on the Landau quantization for an atom with a magnetic quadrupole moment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca, I. C.; Bakke, K., E-mail: kbakke@fisica.ufpb.br

    2016-01-07

    Based on the single particle approximation [Dmitriev et al., Phys. Rev. C 50, 2358 (1994) and C.-C. Chen, Phys. Rev. A 51, 2611 (1995)], the Landau quantization associated with an atom with a magnetic quadrupole moment is introduced, and then, rotating effects on this analogue of the Landau quantization are investigated. It is shown that rotating effects can modify the cyclotron frequency and break the degeneracy of the analogue of the Landau levels.

  14. Gold nanoparticles produced in situ mediate bioelectricity and hydrogen production in a microbial fuel cell by quantized capacitance charging.

    PubMed

    Kalathil, Shafeer; Lee, Jintae; Cho, Moo Hwan

    2013-02-01

    Oppan quantized style: By adding a gold precursor at its cathode, a microbial fuel cell (MFC) is demonstrated to form gold nanoparticles that can be used to simultaneously produce bioelectricity and hydrogen. By exploiting the quantized capacitance charging effect, the gold nanoparticles mediate the production of hydrogen without requiring an external power supply, while the MFC produces a stable power density. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Reformulation of the covering and quantizer problems as ground states of interacting particles.

    PubMed

    Torquato, S

    2010-11-01

    It is known that the sphere-packing problem and the number-variance problem (closely related to an optimization problem in number theory) can be posed as energy minimizations associated with an infinite number of point particles in d-dimensional Euclidean space R(d) interacting via certain repulsive pair potentials. We reformulate the covering and quantizer problems as the determination of the ground states of interacting particles in R(d) that generally involve single-body, two-body, three-body, and higher-body interactions. This is done by linking the covering and quantizer problems to certain optimization problems involving the "void" nearest-neighbor functions that arise in the theory of random media and statistical mechanics. These reformulations, which again exemplify the deep interplay between geometry and physics, allow one now to employ theoretical and numerical optimization techniques to analyze and solve these energy minimization problems. The covering and quantizer problems have relevance in numerous applications, including wireless communication network layouts, the search of high-dimensional data parameter spaces, stereotactic radiation therapy, data compression, digital communications, meshing of space for numerical analysis, and coding and cryptography, among other examples. In the first three space dimensions, the best known solutions of the sphere-packing and number-variance problems (or their "dual" solutions) are directly related to those of the covering and quantizer problems, but such relationships may or may not exist for d≥4 , depending on the peculiarities of the dimensions involved. Our reformulation sheds light on the reasons for these similarities and differences. We also show that disordered saturated sphere packings provide relatively thin (economical) coverings and may yield thinner coverings than the best known lattice coverings in sufficiently large dimensions. In the case of the quantizer problem, we derive improved upper bounds on the quantizer error using sphere-packing solutions, which are generally substantially sharper than an existing upper bound in low to moderately large dimensions. We also demonstrate that disordered saturated sphere packings yield relatively good quantizers. Finally, we remark on possible applications of our results for the detection of gravitational waves.

  16. Reformulation of the covering and quantizer problems as ground states of interacting particles

    NASA Astrophysics Data System (ADS)

    Torquato, S.

    2010-11-01

    It is known that the sphere-packing problem and the number-variance problem (closely related to an optimization problem in number theory) can be posed as energy minimizations associated with an infinite number of point particles in d -dimensional Euclidean space Rd interacting via certain repulsive pair potentials. We reformulate the covering and quantizer problems as the determination of the ground states of interacting particles in Rd that generally involve single-body, two-body, three-body, and higher-body interactions. This is done by linking the covering and quantizer problems to certain optimization problems involving the “void” nearest-neighbor functions that arise in the theory of random media and statistical mechanics. These reformulations, which again exemplify the deep interplay between geometry and physics, allow one now to employ theoretical and numerical optimization techniques to analyze and solve these energy minimization problems. The covering and quantizer problems have relevance in numerous applications, including wireless communication network layouts, the search of high-dimensional data parameter spaces, stereotactic radiation therapy, data compression, digital communications, meshing of space for numerical analysis, and coding and cryptography, among other examples. In the first three space dimensions, the best known solutions of the sphere-packing and number-variance problems (or their “dual” solutions) are directly related to those of the covering and quantizer problems, but such relationships may or may not exist for d≥4 , depending on the peculiarities of the dimensions involved. Our reformulation sheds light on the reasons for these similarities and differences. We also show that disordered saturated sphere packings provide relatively thin (economical) coverings and may yield thinner coverings than the best known lattice coverings in sufficiently large dimensions. In the case of the quantizer problem, we derive improved upper bounds on the quantizer error using sphere-packing solutions, which are generally substantially sharper than an existing upper bound in low to moderately large dimensions. We also demonstrate that disordered saturated sphere packings yield relatively good quantizers. Finally, we remark on possible applications of our results for the detection of gravitational waves.

  17. Black holes and beyond

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathur, Samir D., E-mail: mathur.16@osu.edu

    The black hole information paradox forces us into a strange situation: we must find a way to break the semiclassical approximation in a domain where no quantum gravity effects would normally be expected. Traditional quantizations of gravity do not exhibit any such breakdown, and this forces us into a difficult corner: either we must give up quantum mechanics or we must accept the existence of troublesome 'remnants'. In string theory, however, the fundamental quanta are extended objects, and it turns out that the bound states of such objects acquire a size that grows with the number of quanta in the bound state. The interior of the black hole gets completely altered to a 'fuzzball' structure, and information is able to escape in radiation from the hole. The semiclassical approximation can break at macroscopic scales due to the large entropy of the hole: the measure in the path integral competes with the classical action, instead of giving a subleading correction. Putting this picture of black hole microstates together with ideas about entangled states leads to a natural set of conjectures on many long-standing questions in gravity: the significance of Rindler and de Sitter entropies, the notion of black hole complementarity, and the fate of an observer falling into a black hole. Highlights: The information paradox is a serious problem. To solve it we need to find 'hair' on black holes. In string theory we find 'hair' by the fuzzball construction. Fuzzballs help to resolve many other issues in gravity.

  18. Nuclear quantum effects and kinetic isotope effects in enzyme reactions.

    PubMed

    Vardi-Kilshtain, Alexandra; Nitoker, Neta; Major, Dan Thomas

    2015-09-15

    Enzymes are extraordinarily effective catalysts evolved to perform well-defined and highly specific chemical transformations. Studying the nature of rate enhancements and the mechanistic strategies in enzymes is very important, both from a basic scientific point of view and in order to improve the rational design of biomimetics. The kinetic isotope effect (KIE) is a very important tool in the study of chemical reactions and has been used extensively in the field of enzymology. Theoretically, the prediction of KIEs in condensed phase environments such as enzymes is challenging due to the need to include nuclear quantum effects (NQEs). Herein we describe recent progress in our group in the development of multi-scale simulation methods for the calculation of NQEs and accurate computation of KIEs. We also describe their application to several enzyme systems. In particular we describe the use of combined quantum mechanics/molecular mechanics (QM/MM) methods in classical and quantum simulations. The development of various novel path-integral methods is reviewed. These methods are tailored to enzyme systems, where only a few degrees of freedom involved in the chemistry need to be quantized. The application of the hybrid QM/MM quantum-classical simulation approach to three case studies is presented. The first case involves the proton transfer in alanine racemase. The second case presented involves orotidine 5'-monophosphate decarboxylase, where multidimensional free energy simulations together with kinetic isotope effects are combined in the study of the reaction mechanism. Finally, we discuss the proton transfer in nitroalkane oxidase, where the enzyme employs tunneling as a catalytic fine-tuning tool. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Green function of the double-fractional Fokker-Planck equation: path integral and stochastic differential equations.

    PubMed

    Kleinert, H; Zatloukal, V

    2013-11-01

    The statistics of rare events, the so-called black-swan events, is governed by non-Gaussian distributions with heavy power-like tails. We calculate the Green functions of the associated Fokker-Planck equations and solve the related stochastic differential equations. We also discuss the subject in the framework of path integration.

  20. Low-coherence interferometric sensor system utilizing an integrated optics configuration

    NASA Astrophysics Data System (ADS)

    Plissi, M. V.; Rogers, A. J.; Brassington, D. J.; Wilson, M. G. F.

    1995-08-01

    The implementation of a twin Mach-Zehnder reference interferometer in an integrated optics substrate is described. From measurements of the fringe visibilities, an identification of the fringe order is attempted as a way to provide an absolute sensor for any parameter capable of modifying the difference in path length between two interfering optical paths.

  1. Explaining Technology Integration in K-12 Classrooms: A Multilevel Path Analysis Model

    ERIC Educational Resources Information Center

    Liu, Feng; Ritzhaupt, Albert D.; Dawson, Kara; Barron, Ann E.

    2017-01-01

    The purpose of this research was to design and test a model of classroom technology integration in the context of K-12 schools. The proposed multilevel path analysis model includes teacher, contextual, and school related variables on a teacher's use of technology and confidence and comfort using technology as mediators of classroom technology…

  2. Path integral learning of multidimensional movement trajectories

    NASA Astrophysics Data System (ADS)

    André, João; Santos, Cristina; Costa, Lino

    2013-10-01

    This paper explores the use of Path Integral Methods, particularly several variants of the recent Path Integral Policy Improvement (PI2) algorithm in multidimensional movement parametrized policy learning. We rely on Dynamic Movement Primitives (DMPs) to codify discrete and rhythmic trajectories, and apply the PI2-CMA and PIBB methods in the learning of optimal policy parameters, according to different cost functions that inherently encode movement objectives. Additionally we merge both of these variants and propose the PIBB-CMA algorithm, comparing all of them with the vanilla version of PI2. From the obtained results we conclude that PIBB-CMA surpasses all other methods in terms of convergence speed and iterative final cost, which leads to an increased interest in its application to more complex robotic problems.
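
    The common core of the PI2/PIBB family is a cost-weighted average over sampled policy-parameter perturbations. The sketch below implements a PIBB-style update on a generic black-box cost; the DMP rollouts used in the paper are replaced by a stand-in cost function, and the eliteness parameter h, the exploration noise, and the iteration counts are illustrative assumptions.

    ```python
    import numpy as np

    def pibb(cost, theta0, sigma=0.5, n_samples=20, n_iters=100, h=10.0, seed=0):
        """PIBB-style black-box policy improvement: perturb the parameters,
        evaluate rollout costs, and average the perturbations with softmax
        weights that favour low cost."""
        rng = np.random.default_rng(seed)
        theta = np.array(theta0, dtype=float)
        for _ in range(n_iters):
            eps = sigma * rng.standard_normal((n_samples, theta.size))
            costs = np.array([cost(theta + e) for e in eps])
            span = max(costs.max() - costs.min(), 1e-12)
            w = np.exp(-h * (costs - costs.min()) / span)
            w /= w.sum()
            theta = theta + w @ eps          # cost-weighted parameter update
        return theta

    # toy usage: minimize a quadratic "rollout cost"
    print(pibb(lambda th: np.sum((th - np.array([1.0, -2.0, 0.5])) ** 2),
               theta0=np.zeros(3)))
    ```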

  3. Path-integral and Ornstein-Zernike study of quantum fluid structures on the crystallization line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sesé, Luis M., E-mail: msese@ccia.uned.es

    2016-03-07

    Liquid neon, liquid para-hydrogen, and the quantum hard-sphere fluid are studied with path integral Monte Carlo simulations and the Ornstein-Zernike pair equation on their respective crystallization lines. The results cover the whole sets of structures in the r-space and the k-space and, for completeness, the internal energies, pressures and isothermal compressibilities. Comparison with experiment is made wherever possible, and the possibilities of establishing k-space criteria for quantum crystallization based on the path-integral centroids are discussed. In this regard, the results show that the centroid structure factor contains two significant parameters related to its main peak features (amplitude and shape) that can be useful to characterize freezing.

  4. Spin Path Integrals and Generations

    NASA Astrophysics Data System (ADS)

    Brannen, Carl

    2010-11-01

    The spin of a free electron is stable but its position is not. Recent quantum information research by G. Svetlichny, J. Tolar, and G. Chadzitaskos has shown that the Feynman position path integral can be mathematically defined as a product of incompatible states; that is, as a product of mutually unbiased bases (MUBs). Since the more common use of MUBs is in finite dimensional Hilbert spaces, this raises the question “what happens when spin path integrals are computed over products of MUBs?” Such an assumption makes spin no longer stable. We show that the usual spin-1/2 is obtained in the long-time limit in three orthogonal solutions that we associate with the three elementary particle generations. We give applications to the masses of the elementary leptons.

  5. Data assimilation using a GPU accelerated path integral Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Quinn, John C.; Abarbanel, Henry D. I.

    2011-09-01

    The answers to data assimilation questions can be expressed as path integrals over all possible state and parameter histories. We show how these path integrals can be evaluated numerically using a Markov Chain Monte Carlo method designed to run in parallel on a graphics processing unit (GPU). We demonstrate the application of the method on an example that uses a Hodgkin-Huxley neuron model, with a transmembrane voltage time series of a simulated neuron as the input. By taking advantage of GPU computing, we gain a parallel speedup factor of up to about 300, compared to an equivalent serial computation on a CPU, with performance increasing as the length of the observation time used for data assimilation increases.

  6. A joint watermarking/encryption algorithm for verifying medical image integrity and authenticity in both encrypted and spatial domains.

    PubMed

    Bouslimi, D; Coatrieux, G; Roux, Ch

    2011-01-01

    In this paper, we propose a new joint watermarking/encryption algorithm for the purpose of verifying the reliability of medical images in both encrypted and spatial domains. It combines a substitutive watermarking algorithm, quantization index modulation (QIM), with a block cipher algorithm, the Advanced Encryption Standard (AES), in CBC mode of operation. The proposed solution gives access to the outcomes of the image integrity and origin checks even though the image is stored encrypted. Experimental results achieved on 8-bit encoded ultrasound images illustrate the overall performance of the proposed scheme. By making use of the AES block cipher in CBC mode, the proposed solution is compliant with or transparent to the DICOM standard.
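
    Of the two building blocks, only the QIM watermarking step is sketched below (one bit per pixel, using a dithered pair of scalar quantizers with step delta); the AES-CBC encryption layer and the way the two are interleaved in the paper are not reproduced, and the step size is illustrative.

    ```python
    import numpy as np

    def qim_embed(pixels, bits, delta=8):
        """Embed one bit per pixel by rounding to one of two interleaved
        lattices: multiples of delta for bit 0, multiples of delta shifted
        by delta/2 for bit 1."""
        pixels = np.asarray(pixels, dtype=float)
        dither = np.where(np.asarray(bits) == 0, 0.0, delta / 2.0)
        return np.round((pixels - dither) / delta) * delta + dither

    def qim_extract(marked, delta=8):
        """Recover each bit by checking which lattice the sample is closer to."""
        marked = np.asarray(marked, dtype=float)
        d0 = np.abs(marked - np.round(marked / delta) * delta)
        d1 = np.abs(marked - (np.round((marked - delta / 2) / delta) * delta + delta / 2))
        return (d1 < d0).astype(int)

    # toy usage: embed and recover a 4-bit watermark in 4 pixel values
    pix = np.array([123, 40, 77, 200])
    wm = np.array([1, 0, 1, 0])
    marked = qim_embed(pix, wm)
    assert np.array_equal(qim_extract(marked), wm)
    ```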

  7. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.

  8. Dielectric response properties of parabolically-confined nanostructures in a quantizing magnetic field

    NASA Astrophysics Data System (ADS)

    Sabeeh, Kashif

    This thesis presents theoretical studies of dielectric response properties of parabolically-confined nanostructures in a magnetic field. We have determined the retarded Schrödinger Green's function for an electron in such a parabolically confined system in the presence of a time dependent electric field and an ambient magnetic field. Following an operator equation of motion approach developed by Schwinger, we calculate the result in closed form in terms of elementary functions in direct-time representation. From the retarded Schrödinger Green's function we construct the closed-form thermodynamic Green's function for a parabolically confined quantum dot in a magnetic field to determine its plasmon spectrum. Due to confinement and Landau quantization this system is fully quantized, with an infinite number of collective modes. The RPA integral equation for the inverse dielectric function is solved using Fredholm theory in the nondegenerate and quantum limit to determine the frequencies with which the plasmons participate in response to excitation by an external potential. We exhibit results for the variation of plasmon frequency as a function of magnetic field strength and of confinement frequency. A calculation of the van der Waals interaction energy between two harmonically confined quantum dots is discussed in terms of the dipole-dipole correlation function. The results are presented as a function of confinement strength and distance between the dots. We also rederive a result of Fertig & Halperin [32] for the tunneling-scattering of an electron through a saddle potential, which is also known as a quantum point contact (QPC), in the presence of a magnetic field. Using the retarded Green's function we confirm the result for the transmission coefficient and analyze it.

  9. The role of spatial memory and frames of reference in the precision of angular path integration.

    PubMed

    Arthur, Joeanna C; Philbeck, John W; Kleene, Nicholas J; Chichka, David

    2012-09-01

    Angular path integration refers to the ability to maintain an estimate of self-location after a rotational displacement by integrating internally-generated (idiothetic) self-motion signals over time. Previous work has found that non-sensory inputs, namely spatial memory, can play a powerful role in angular path integration (Arthur et al., 2007, 2009). Here we investigated the conditions under which spatial memory facilitates angular path integration. We hypothesized that the benefit of spatial memory is particularly likely in spatial updating tasks in which one's self-location estimate is referenced to external space. To test this idea, we administered passive, non-visual body rotations (ranging 40°-140°) about the yaw axis and asked participants to use verbal reports or open-loop manual pointing to indicate the magnitude of the rotation. Prior to some trials, previews of the surrounding environment were given. We found that when participants adopted an egocentric frame of reference, the previously-observed benefit of previews on within-subject response precision was not manifested, regardless of whether remembered spatial frameworks were derived from vision or spatial language. We conclude that the powerful effect of spatial memory is dependent on one's frame of reference during self-motion updating. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. Accelerated path integral methods for atomistic simulations at ultra-low temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uhl, Felix, E-mail: felix.uhl@rub.de; Marx, Dominik; Ceriotti, Michele

    2016-08-07

    Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5+. We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state.

  11. Path integration guided with a quality map for shape reconstruction in the fringe reflection technique

    NASA Astrophysics Data System (ADS)

    Jing, Xiaoli; Cheng, Haobo; Wen, Yongfu

    2018-04-01

    A new local integration algorithm called quality map path integration (QMPI) is reported for shape reconstruction in the fringe reflection technique. A quality map is proposed to evaluate the quality of the gradient data locally and functions as a guideline for the integration path. The presented method can be employed in wavefront estimation from its slopes over a generally shaped surface, with slope noise equivalent to that in practical measurements. Moreover, QMPI is much better at handling slope data with local noise, which may be caused by the irregular shape of the surface under test. The performance of QMPI is discussed through simulations and experiment. It is shown that QMPI not only improves the accuracy of local integration, but can also be easily implemented with no iteration, in contrast to Southwell zonal reconstruction (SZR). From an engineering point of view, the proposed method may also provide an efficient and stable approach for different shapes with high-precision demands.
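
    A minimal Python sketch of a quality-guided integration of this flavor is given below: pixels are visited in order of decreasing quality so that noisy regions are integrated last, with a one-step trapezoidal rule along the chosen path. This is a generic quality-guided scheme written only to make the idea concrete, not the authors' exact QMPI algorithm; unit grid spacing and uniform quality in the usage example are assumptions.

    ```python
    import heapq
    import numpy as np

    def quality_guided_integration(gx, gy, quality):
        """Reconstruct a surface z from its slopes (gx = dz/dx, gy = dz/dy),
        visiting pixels in order of decreasing quality so that low-quality
        (noisy) regions are integrated last."""
        h, w = quality.shape
        z = np.zeros((h, w))
        done = np.zeros((h, w), dtype=bool)
        start = np.unravel_index(np.argmax(quality), quality.shape)
        done[start] = True
        heap = []

        def push_neighbors(i, j):
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and not done[ni, nj]:
                    heapq.heappush(heap, (-quality[ni, nj], ni, nj, i, j))

        push_neighbors(*start)
        while heap:
            _, i, j, pi, pj = heapq.heappop(heap)
            if done[i, j]:
                continue
            # one-step trapezoidal integration from the already-solved parent
            if i == pi:                       # horizontal step (x is the column index)
                step = 0.5 * (gx[pi, pj] + gx[i, j]) * (j - pj)
            else:                             # vertical step
                step = 0.5 * (gy[pi, pj] + gy[i, j]) * (i - pi)
            z[i, j] = z[pi, pj] + step
            done[i, j] = True
            push_neighbors(i, j)
        return z

    # toy usage: recover a paraboloid from exact slopes and uniform quality
    y, x = np.mgrid[0:32, 0:32].astype(float)
    z_rec = quality_guided_integration(0.02 * x, 0.02 * y, np.ones_like(x))
    ```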

  12. A taxonomy of integral reaction path analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grcar, Joseph F.; Day, Marcus S.; Bell, John B.

    2004-12-23

    W. C. Gardiner observed that achieving understanding through combustion modeling is limited by the ability to recognize the implications of what has been computed and to draw conclusions about the elementary steps underlying the reaction mechanism. This difficulty can be overcome in part by making better use of reaction path analysis in the context of multidimensional flame simulations. Following a survey of current practice, an integral reaction flux is formulated in terms of conserved scalars that can be calculated in a fully automated way. Conditional analyses are then introduced, and a taxonomy for bidirectional path analysis is explored. Many examples illustrate the resulting path analysis and uncover some new results about nonpremixed methane-air laminar jets.

  13. Functional integration of vertical flight path and speed control using energy principles

    NASA Technical Reports Server (NTRS)

    Lambregts, A. A.

    1984-01-01

    A generalized automatic flight control system was developed which integrates all longitudinal flight path and speed control functions previously provided by a pitch autopilot and autothrottle. In this design, a net thrust command is computed based on total energy demand arising from both flight path and speed targets. The elevator command is computed based on the energy distribution error between flight path and speed. The engine control is configured to produce the commanded net thrust. The design incorporates control strategies and hierarchy to deal systematically and effectively with all aircraft operational requirements, control nonlinearities, and performance limits. Consistent decoupled maneuver control is achieved for all modes and flight conditions without outer loop gain schedules, control law submodes, or control function duplication.
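
    The allocation logic can be summarized in two signals: the total specific-energy rate error (flight-path-angle error plus acceleration error over g) drives thrust, while the energy-distribution error (their difference) drives the elevator. The sketch below is a highly simplified illustration of that idea; the gains, signal names, and normalization are assumptions, not the actual control law described in the report.

    ```python
    def tecs_commands(gamma_err, accel_err, g=9.81, k_thrust=0.5, k_elev=0.3):
        """Energy-based longitudinal control allocation (simplified sketch):
        thrust corrects the total specific-energy rate error, while the
        elevator redistributes energy between flight path and speed.

        gamma_err : flight-path-angle error (rad), target minus actual
        accel_err : longitudinal acceleration error (m/s^2)
        Returns (delta_thrust, delta_elevator) as normalized commands.
        """
        energy_rate_err = gamma_err + accel_err / g       # total energy demand
        distribution_err = gamma_err - accel_err / g      # energy balance demand
        return k_thrust * energy_rate_err, k_elev * distribution_err

    # toy usage: below the glide path (need to climb) while slightly fast
    print(tecs_commands(gamma_err=0.02, accel_err=-0.3))
    ```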

  14. Error diffusion concept for multi-level quantization

    NASA Astrophysics Data System (ADS)

    Broja, Manfred; Michalowski, Kristina; Bryngdahl, Olof

    1990-11-01

    The error diffusion binarization procedure is adapted to multi-level quantization. The threshold parameters then available have a noticeable influence on the process. Characteristic features of the technique are shown together with experimental results.
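
    A multi-level error-diffusion pass works like binary error diffusion except that each pixel is rounded to the nearest of N output levels before the residual error is spread to unprocessed neighbours. The sketch below uses the classic Floyd-Steinberg weights as a stand-in for whatever diffusion kernel and threshold parameters the paper studies; the number of levels and the ramp image are illustrative.

    ```python
    import numpy as np

    def error_diffuse(image, levels=4):
        """Error diffusion generalized to a multi-level quantizer: round each
        pixel to the nearest allowed gray value and diffuse the residual
        error with Floyd-Steinberg weights."""
        img = np.asarray(image, dtype=float).copy()
        h, w = img.shape
        allowed = np.linspace(0, 255, levels)
        out = np.zeros_like(img)
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                new = allowed[np.argmin(np.abs(allowed - old))]
                out[y, x] = new
                err = old - new
                if x + 1 < w:
                    img[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        img[y + 1, x - 1] += err * 3 / 16
                    img[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        img[y + 1, x + 1] += err * 1 / 16
        return out

    # toy usage: quantize a smooth gray ramp to 4 levels
    ramp = np.tile(np.linspace(0, 255, 64), (16, 1))
    print(np.unique(error_diffuse(ramp, levels=4)))
    ```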

  15. Natural inflation from polymer quantization

    NASA Astrophysics Data System (ADS)

    Ali, Masooma; Seahra, Sanjeev S.

    2017-11-01

    We study the polymer quantization of a homogeneous massive scalar field in the early Universe using a prescription inequivalent to those previously appearing in the literature. Specifically, we assume a Hilbert space for which the scalar field momentum is well defined but its amplitude is not. This is closer in spirit to the quantization scheme of loop quantum gravity, in which no unique configuration operator exists. We show that in the semiclassical approximation, the main effect of this polymer quantization scheme is to compactify the phase space of chaotic inflation in the field amplitude direction. This gives rise to an effective scalar potential closely resembling that of hybrid natural inflation. Unlike polymer schemes in which the scalar field amplitude is well defined, the semiclassical dynamics involves a past cosmological singularity; i.e., this approach does not mitigate the big bang.

  16. Optimal sampling and quantization of synthetic aperture radar signals

    NASA Technical Reports Server (NTRS)

    Wu, C.

    1978-01-01

    Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. It includes a description of a derived theoretical relationship between the pixel signal to noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimal allocation of a fixed data bit-volume (for specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.

  17. Effect of temperature degeneracy and Landau quantization on drift solitary waves and double layers

    NASA Astrophysics Data System (ADS)

    Shan, Shaukat Ali; Haque, Q.

    2018-01-01

    The linear and nonlinear drift ion acoustic waves have been investigated in an inhomogeneous, magnetized, dense degenerate, and quantized magnetic field plasma. The linear drift ion acoustic wave propagation along with the nonlinear structures like double layers and solitary waves has been found to be strongly dependent on the drift speed, the magnetic field quantization parameter η, and the temperature degeneracy. The graphical illustrations show that the frequency of linear waves and the amplitude of the solitary waves increase with the increase in temperature degeneracy and the Landau quantization effect, while the amplitude of the double layers decreases with the increase in η and T. The relevance of the present study is pointed out in the plasma environment of fast ignition inertial confinement fusion, the white dwarf stars, and short pulsed petawatt laser technology.

  18. Time-Symmetric Quantization in Spacetimes with Event Horizons

    NASA Astrophysics Data System (ADS)

    Kobakhidze, Archil; Rodd, Nicholas

    2013-08-01

    The standard quantization formalism in spacetimes with event horizons implies a non-unitary evolution of quantum states, as initial pure states may evolve into thermal states. This phenomenon is behind the famous black hole information loss paradox which provoked long-standing debates on the compatibility of quantum mechanics and gravity. In this paper we demonstrate that within an alternative time-symmetric quantization formalism thermal radiation is absent and states evolve unitarily in spacetimes with event horizons. We also discuss the theoretical consistency of the proposed formalism. We explicitly demonstrate that the theory preserves the microcausality condition and suggest a "reinterpretation postulate" to resolve other apparent pathologies associated with negative energy states. Accordingly as there is a consistent alternative, we argue that choosing to use time-asymmetric quantization is a necessary condition for the black hole information loss paradox.

  19. Adaptive robust fault tolerant control design for a class of nonlinear uncertain MIMO systems with quantization.

    PubMed

    Ao, Wei; Song, Yongdong; Wen, Changyun

    2017-05-01

    In this paper, we investigate the adaptive control problem for a class of nonlinear uncertain MIMO systems with actuator faults and quantization effects. Under some mild conditions, an adaptive robust fault-tolerant control is developed to compensate for the effects of uncertainties, actuator failures, and errors caused by quantization, and a range of the parameters for these quantizers is established. Furthermore, a Lyapunov-like approach is adopted to demonstrate that the ultimately uniformly bounded output tracking error is guaranteed by the controller, and the signals of the closed-loop system are ensured to be bounded, even in the presence of at most m-q actuators being stuck or suffering outage. Finally, numerical simulations are provided to verify and illustrate the effectiveness of the proposed adaptive schemes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  20. On a canonical quantization of 3D Anti de Sitter pure gravity

    NASA Astrophysics Data System (ADS)

    Kim, Jihun; Porrati, Massimo

    2015-10-01

    We perform a canonical quantization of pure gravity on AdS3 using as a technical tool its equivalence at the classical level with a Chern-Simons theory with gauge group SL(2,R) × SL(2,R). We first quantize the theory canonically on an asymptotically AdS space, which is topologically the real line times a Riemann surface with one connected boundary. Using the "constrain first" approach we reduce canonical quantization to quantization of orbits of the Virasoro group and Kähler quantization of Teichmüller space. After explicitly computing the Kähler form for the torus with one boundary component and after extending that result to higher genus, we recover known results, such as that wave functions of SL(2,R) Chern-Simons theory are conformal blocks. We find new restrictions on the Hilbert space of pure gravity by imposing invariance under large diffeomorphisms and normalizability of the wave function. The Hilbert space of pure gravity is shown to be the target space of Conformal Field Theories with continuous spectrum and a lower bound on operator dimensions. A projection defined by topology changing amplitudes in Euclidean gravity is proposed. It defines an invariant subspace that allows for a dual interpretation in terms of a Liouville CFT. Problems and features of the CFT dual are assessed and a new definition of the Hilbert space, exempt from those problems, is proposed in the case of highly curved AdS3.

  1. Cortical Hubs Form a Module for Multisensory Integration on Top of the Hierarchy of Cortical Networks

    PubMed Central

    Zamora-López, Gorka; Zhou, Changsong; Kurths, Jürgen

    2009-01-01

    Sensory stimuli entering the nervous system follow particular paths of processing, typically separated (segregated) from the paths of other modal information. However, sensory perception, awareness and cognition emerge from the combination of information (integration). The corticocortical networks of cats and macaque monkeys display three prominent characteristics: (i) modular organisation (facilitating the segregation), (ii) abundant alternative processing paths and (iii) the presence of highly connected hubs. Here, we study in detail the organisation and potential function of the cortical hubs by graph analysis and information theoretical methods. We find that the cortical hubs form a spatially delocalised, but topologically central module with the capacity to integrate multisensory information in a collaborative manner. With this, we resolve the underlying anatomical substrate that supports the simultaneous capacity of the cortex to segregate and to integrate multisensory information. PMID:20428515

  2. MULTI-POLLUTANT CONCENTRATION MEASUREMENTS AROUND A CONCENTRATED SWINE PRODUCTION FACILITY USING OPEN-PATH FTIR SPECTROMETRY

    EPA Science Inventory

    Open-path Fourier transform infrared (OP/FTIR) spectrometry was used to measure the concentrations of ammonia, methane, and other atmospheric gasses around an integrated industrial swine production facility in eastern North Carolina. Several single-path measurements were made ove...

  3. Book Review:

    NASA Astrophysics Data System (ADS)

    Das, Ashok

    2007-01-01

    It is not usual for someone to write a book on someone else's Ph.D. thesis, but then Feynman was not a usual physicist. He was without doubt one of the most original physicists of the twentieth century, who has strongly influenced the developments in quantum field theory through his many ingenious contributions. The path integral approach to quantum theories is one such contribution, which pervades almost all areas of physics. What is astonishing is that he developed this idea as a graduate student for his Ph.D. thesis, which has been printed, for the first time, in the present book along with two other related articles. The early developments in quantum theory, by Heisenberg and Schrödinger, were based on the Hamiltonian formulation, where one starts with the Hamiltonian description of a classical system and then promotes the classical observables to noncommuting quantum operators. However, Dirac had already stressed in an article in 1932 (this article is also reproduced in the present book) that the Lagrangian is more fundamental than the Hamiltonian, at least from the point of view of relativistic invariance, and he wondered how the Lagrangian may enter into the quantum description. He had developed this idea through his 'transformation matrix' theory and had even hinted at how the action of the classical theory may enter such a description. However, although the brief paper by Dirac contained the basic essential ideas, it did not fully develop the idea of a Lagrangian description in detail in the functional language. Feynman, on the other hand, was interested in the electromagnetic interactions of the electron from a completely different point of view rooted in a theory involving action-at-a-distance. His theory (along with John Wheeler) did not have a Hamiltonian description and, in order to quantize such a theory, he needed an alternative formulation of quantum mechanics. When the article by Dirac was brought to his attention, he immediately realized what he was looking for and developed fully what is known today as the path integral approach to quantum theories. Although his main motivation was the study of theories involving the concept of action-at-a-distance, as he emphasizes in his thesis, his formulation of quantum theories applies to any theory in general. The thesis develops quite systematically and in detail all the concepts of functionals necessary for this formulation. The motivation and the physical insights are described in the brilliant 'Feynman' style. It is incredible that even at that young age, the signs of his legendary teaching style were evident in his presentation of the material in the thesis. The path integral approach is now something that every graduate student in theoretical physics is supposed to know. There are several books on the subject, even one by Feynman himself (and Hibbs). Nonetheless, the thesis provides a very good background for the way these ideas came about. The two companion articles, although available in print, also give a complete picture of the development of this line of thinking. The helpful introductory remarks by the editor also put things in the proper historical perspective. This book would be very helpful to anyone interested in the development of modern ideas in physics.

  4. Maslov indices, Poisson brackets, and singular differential forms

    NASA Astrophysics Data System (ADS)

    Esterlis, I.; Haggard, H. M.; Hedeman, A.; Littlejohn, R. G.

    2014-06-01

    Maslov indices are integers that appear in semiclassical wave functions and quantization conditions. They are often notoriously difficult to compute. We present methods of computing the Maslov index that rely only on typically elementary Poisson brackets and simple linear algebra. We also present a singular differential form, whose integral along a curve gives the Maslov index of that curve. The form is closed but not exact, and transforms by an exact differential under canonical transformations. We illustrate the method with the 6j-symbol, which is important in angular-momentum theory and in quantum gravity.

  5. Research on conceptual/innovative design for the life cycle

    NASA Technical Reports Server (NTRS)

    Cagan, Jonathan; Agogino, Alice M.

    1990-01-01

    The goal of this research is developing and integrating qualitative and quantitative methods for life cycle design. The definition of the problem includes formal computer-based methods limited to final detailing stages of design; CAD data bases do not capture design intent or design history; and life cycle issues were ignored during early stages of design. Viewgraphs outline research in conceptual design; the SYMON (SYmbolic MONotonicity analyzer) algorithm; multistart vector quantization optimization algorithm; intelligent manufacturing: IDES - Influence Diagram Architecture; and 1st PRINCE (FIRST PRINciple Computational Evaluator).

  6. Image segmentation using fuzzy LVQ clustering networks

    NASA Technical Reports Server (NTRS)

    Tsao, Eric Chen-Kuo; Bezdek, James C.; Pal, Nikhil R.

    1992-01-01

    In this note we formulate image segmentation as a clustering problem. Feature vectors extracted from a raw image are clustered into subregions, thereby segmenting the image. A fuzzy generalization of a Kohonen learning vector quantization (LVQ) which integrates the Fuzzy c-Means (FCM) model with the learning rate and updating strategies of the LVQ is used for this task. This network, which segments images in an unsupervised manner, is thus related to the FCM optimization problem. Numerical examples on photographic and magnetic resonance images are given to illustrate this approach to image segmentation.
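
    Since the network is built around the fuzzy c-means (FCM) objective, a compact way to see the segmentation mechanics is the plain FCM alternation between membership updates and prototype updates, sketched below on scalar gray-level features. The LVQ-style learning-rate and updating schedule of the actual network is not reproduced, and the cluster count, fuzzifier, and toy data are illustrative.

    ```python
    import numpy as np

    def fuzzy_c_means(X, c=3, m=2.0, n_iters=100, seed=0):
        """Fuzzy c-means clustering of feature vectors X (n_samples x n_features):
        alternate between fuzzy membership updates and prototype updates."""
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        U = rng.random((n, c))
        U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships
        for _ in range(n_iters):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        return U, centers

    # toy usage: cluster noisy gray levels (one feature per "pixel") into 3 classes
    gray = np.concatenate([np.full(50, 30.0), np.full(50, 128.0), np.full(50, 220.0)])
    noise = np.random.default_rng(1).normal(0, 5, gray.size)
    U, centers = fuzzy_c_means((gray + noise)[:, None])
    labels = U.argmax(axis=1)                          # crisp segmentation labels
    ```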

  7. Real-time path planning and autonomous control for helicopter autorotation

    NASA Astrophysics Data System (ADS)

    Yomchinda, Thanan

    Autorotation is a descending maneuver that can be used to recover helicopters in the event of total loss of engine power; however, it is an extremely difficult and complex maneuver. The objective of this work is to develop a real-time system which provides full autonomous control for autorotation landing of helicopters. The work includes the development of an autorotation path planning method and integration of the path planner with a primary flight control system. The trajectory is divided into three parts: entry, descent, and flare. Three different optimization algorithms are used to generate trajectories for each of these segments. The primary flight control is designed using a linear dynamic inversion control scheme, and a path following control law is developed to track the autorotation trajectories. Details of the path planning algorithm, trajectory following control law, and autonomous autorotation system implementation are presented. The integrated system is demonstrated in real-time high fidelity simulations. Results indicate the feasibility of real-time operation of the algorithms and the ability of the integrated system to provide safe autorotation landings. Preliminary simulations of autonomous autorotation on a small UAV are presented, which will lead to a final hardware demonstration of the algorithms.

  8. Integrated optimization of unmanned aerial vehicle task allocation and path planning under steady wind.

    PubMed

    Luo, He; Liang, Zhengzheng; Zhu, Moning; Hu, Xiaoxuan; Wang, Guoqiang

    2018-01-01

    Wind has a significant effect on the control of fixed-wing unmanned aerial vehicles (UAVs), resulting in changes in their ground speed and direction, which has an important influence on the results of integrated optimization of UAV task allocation and path planning. The objective of this integrated optimization problem changes from minimizing flight distance to minimizing flight time. In this study, the Euclidean distance between any two targets is expanded to the Dubins path length, considering the minimum turning radius of fixed-wing UAVs. According to the vector relationship between wind speed, UAV airspeed, and UAV ground speed, a method is proposed to calculate the flight time of UAV between targets. On this basis, a variable-speed Dubins path vehicle routing problem (VS-DP-VRP) model is established with the purpose of minimizing the time required for UAVs to visit all the targets and return to the starting point. By designing a crossover operator and mutation operator, the genetic algorithm is used to solve the model, the results of which show that an effective UAV task allocation and path planning solution under steady wind can be provided.
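
    The flight-time calculation rests on the vector relationship mentioned above: the aircraft must crab so that the cross-track component of its air velocity cancels the cross-track wind, and the remaining along-track components add to give the ground speed. The sketch below shows that computation for a straight segment flown at constant airspeed; curved Dubins arcs would require integrating the same relation along the changing track direction, and all numbers are illustrative.

    ```python
    import numpy as np

    def ground_speed(track_unit, airspeed, wind):
        """Ground speed achievable along a desired track direction in a steady
        wind: the cross-track wind must be cancelled by the airspeed vector,
        and the rest adds along the track."""
        along = np.dot(wind, track_unit)
        cross = wind - along * track_unit
        cross_mag2 = np.dot(cross, cross)
        if cross_mag2 >= airspeed ** 2:
            raise ValueError("wind too strong to hold this track")
        return along + np.sqrt(airspeed ** 2 - cross_mag2)

    def flight_time(path_length, track_unit, airspeed, wind):
        """Time to fly a straight path segment of given length along a fixed track."""
        return path_length / ground_speed(track_unit, airspeed, wind)

    # toy usage: a 10 km leg due east at 20 m/s airspeed in a 5 m/s southerly wind
    east = np.array([1.0, 0.0])
    wind = np.array([0.0, 5.0])
    print(flight_time(10_000.0, east, 20.0, wind))   # slower than the no-wind 500 s
    ```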

  9. Integrated optimization of unmanned aerial vehicle task allocation and path planning under steady wind

    PubMed Central

    Liang, Zhengzheng; Zhu, Moning; Hu, Xiaoxuan; Wang, Guoqiang

    2018-01-01

    Wind has a significant effect on the control of fixed-wing unmanned aerial vehicles (UAVs), resulting in changes in their ground speed and direction, which has an important influence on the results of integrated optimization of UAV task allocation and path planning. The objective of this integrated optimization problem changes from minimizing flight distance to minimizing flight time. In this study, the Euclidean distance between any two targets is expanded to the Dubins path length, considering the minimum turning radius of fixed-wing UAVs. According to the vector relationship between wind speed, UAV airspeed, and UAV ground speed, a method is proposed to calculate the flight time of UAV between targets. On this basis, a variable-speed Dubins path vehicle routing problem (VS-DP-VRP) model is established with the purpose of minimizing the time required for UAVs to visit all the targets and return to the starting point. By designing a crossover operator and mutation operator, the genetic algorithm is used to solve the model, the results of which show that an effective UAV task allocation and path planning solution under steady wind can be provided. PMID:29561888

  10. Coarse-grained representation of the quasi adiabatic propagator path integral for the treatment of non-Markovian long-time bath memory

    NASA Astrophysics Data System (ADS)

    Richter, Martin; Fingerhut, Benjamin P.

    2017-06-01

    The description of non-Markovian effects imposed by low frequency bath modes poses a persistent challenge for path integral based approaches like the iterative quasi-adiabatic propagator path integral (iQUAPI) method. We present a novel approximate method, termed mask assisted coarse graining of influence coefficients (MACGIC)-iQUAPI, that offers appealing computational savings due to a substantial reduction of the path segments considered for propagation. The method relies on an efficient path segment merging procedure via an intermediate coarse grained representation of Feynman-Vernon influence coefficients that exploits physical properties of system decoherence. The MACGIC-iQUAPI method allows us to access the regime of biologically significant long-time bath memory on the order of a hundred propagation time steps while retaining convergence to iQUAPI results. Numerical performance is demonstrated for a set of benchmark problems that cover bath assisted long range electron transfer, the transition from coherent to incoherent dynamics in a prototypical molecular dimer, and excitation energy transfer in a 24-state model of the Fenna-Matthews-Olson trimer complex, where in all cases excellent agreement with numerically exact reference data is obtained.


  11. The Holographic Electron Density Theorem, de-quantization, re-quantization, and nuclear charge space extrapolations of the Universal Molecule Model

    NASA Astrophysics Data System (ADS)

    Mezey, Paul G.

    2017-11-01

    Two strongly related theorems on non-degenerate ground state electron densities serve as the basis of "Molecular Informatics". The Hohenberg-Kohn theorem is a statement on global molecular information, ensuring that the complete electron density contains the complete molecular information. However, the Holographic Electron Density Theorem states more: the local information present in each and every positive volume density fragment is already complete: the information in the fragment is equivalent to the complete molecular information. In other words, the complete molecular information provided by the Hohenberg-Kohn Theorem is already provided, in full, by any positive volume, otherwise arbitrarily small electron density fragment. In this contribution some of the consequences of the Holographic Electron Density Theorem are discussed within the framework of the "Nuclear Charge Space" and the Universal Molecule Model. In the "Nuclear Charge Space" the nuclear charges are regarded as continuous variables, and in the more general Universal Molecule Model some other quantized parameters are also allowed to become "de-quantized" and then "re-quantized", leading to interrelations among real molecules through abstract molecules. Here the specific role of the Holographic Electron Density Theorem is discussed within the above context.

  12. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544

  13. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.

  14. Spectrally efficient digitized radio-over-fiber system with k-means clustering-based multidimensional quantization.

    PubMed

    Zhang, Lu; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Popov, Sergei; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia

    2018-04-01

    We propose a spectrally efficient digitized radio-over-fiber (D-RoF) system by grouping highly correlated neighboring samples of the analog signals into multidimensional vectors, where the k-means clustering algorithm is adopted for adaptive quantization. A 30 Gbit/s D-RoF system is experimentally demonstrated to validate the proposed scheme, reporting a carrier aggregation of up to 40 × 100 MHz orthogonal frequency division multiplexing (OFDM) channels with a quadrature amplitude modulation (QAM) order of 4 and an aggregation of 10 × 100 MHz OFDM channels with a QAM order of 16384. Equivalent common public radio interface rates from 37 to 150 Gbit/s are supported. Besides, an error vector magnitude (EVM) of 8% is achieved with 4 quantization bits, and the EVM can be further reduced to 1% by increasing the number of quantization bits to 7. Compared with conventional pulse coding modulation-based D-RoF systems, the proposed D-RoF system improves the signal-to-noise ratio by up to ~9 dB and greatly reduces the EVM, given the same number of quantization bits.
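
    In essence, consecutive waveform samples are grouped into short vectors and a k-means codebook is trained so that each vector is transmitted as a codeword index. The sketch below illustrates that idea with a plain NumPy k-means on a synthetic waveform; the vector dimension, bit depth, and test signal are illustrative and do not reproduce the experimental D-RoF setup.

    ```python
    import numpy as np

    def kmeans(vectors, k, n_iters=50, seed=0):
        """Plain k-means, used here to train a vector-quantizer codebook."""
        rng = np.random.default_rng(seed)
        centroids = vectors[rng.choice(len(vectors), k, replace=False)]
        for _ in range(n_iters):
            idx = np.argmin(np.linalg.norm(vectors[:, None] - centroids[None], axis=2), axis=1)
            for j in range(k):
                members = vectors[idx == j]
                if len(members):
                    centroids[j] = members.mean(axis=0)
        return centroids

    def quantize_waveform(samples, dim=4, bits=4):
        """Group consecutive samples into dim-dimensional vectors and map each
        vector to the index of its nearest codebook entry."""
        vecs = samples[: len(samples) // dim * dim].reshape(-1, dim)
        codebook = kmeans(vecs, 2 ** bits)
        idx = np.argmin(np.linalg.norm(vecs[:, None] - codebook[None], axis=2), axis=1)
        return idx, codebook

    # toy usage: a two-tone waveform stands in for the sampled radio signal
    t = np.arange(4096)
    signal = np.sin(0.02 * t) + 0.3 * np.sin(0.11 * t)
    indices, codebook = quantize_waveform(signal, dim=4, bits=4)
    reconstructed = codebook[indices].ravel()      # receiver-side reconstruction
    ```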

  15. Prediction-guided quantization for video tone mapping

    NASA Astrophysics Data System (ADS)

    Le Dauphin, Agnès; Boitard, Ronan; Thoreau, Dominique; Olivier, Yannick; Francois, Edouard; LeLéannec, Fabrice

    2014-09-01

    Tone Mapping Operators (TMOs) compress High Dynamic Range (HDR) content to address Low Dynamic Range (LDR) displays. However, before reaching the end-user, this tone mapped content is usually compressed for broadcasting or storage purposes. Any TMO includes a quantization step to convert floating point values to integer ones. In this work, we propose to adapt this quantization, in the loop of an encoder, to reduce the entropy of the tone mapped video content. Our technique provides an appropriate quantization for each mode of both the Intra and Inter-prediction that is performed in the loop of a block-based encoder. The mode that minimizes a rate-distortion criterion uses its associated quantization to provide integer values for the rest of the encoding process. The method has been implemented in HEVC and was tested over two different scenarios: the compression of tone mapped LDR video content (using the HM10.0) and the compression of perceptually encoded HDR content (HM14.0). Results show an average bit-rate reduction under the same PSNR for all the sequences and TMO considered of 20.3% and 27.3% for tone mapped content and 2.4% and 2.7% for HDR content.

  16. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
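
    The estimation step can be pictured as a constrained optimization: start from the decoded coefficients, take gradient steps that make the reconstructed image fit a prior, and project the coefficients back into their known quantization cells. The sketch below does this for a whole-image DCT with a simple quadratic smoothness prior standing in for the paper's Huber Markov random field; the single-transform setup, step sizes, and iteration counts are assumptions, not the paper's block-DCT/subband implementation.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def smoothness_gradient(img):
        """Gradient of a quadratic smoothness prior (sum of squared differences
        between horizontal and vertical neighbours); a Huber MRF would clip
        large differences instead."""
        g = np.zeros_like(img)
        g[:, :-1] += img[:, :-1] - img[:, 1:]
        g[:, 1:] += img[:, 1:] - img[:, :-1]
        g[:-1, :] += img[:-1, :] - img[1:, :]
        g[1:, :] += img[1:, :] - img[:-1, :]
        return g

    def constrained_decompress(q_indices, step=16.0, n_iters=100, lr=0.1):
        """Pick, within each coefficient's quantization cell, the value whose
        reconstruction best fits the smoothness prior (gradient projection)."""
        lo, hi = (q_indices - 0.5) * step, (q_indices + 0.5) * step
        coeffs = q_indices * step                     # start at the cell centroids
        for _ in range(n_iters):
            img = idctn(coeffs, norm="ortho")
            grad_coeffs = dctn(smoothness_gradient(img), norm="ortho")
            coeffs = np.clip(coeffs - lr * grad_coeffs, lo, hi)   # project to cells
        return idctn(coeffs, norm="ortho")

    # toy usage: coarsely quantize the DCT of a smooth image, then restore it
    x, y = np.meshgrid(np.arange(64), np.arange(64))
    original = 64 + 0.5 * x + 0.3 * y
    q = np.round(dctn(original, norm="ortho") / 16.0)
    restored = constrained_decompress(q, step=16.0)
    ```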

  17. Landau quantization effects on hole-acoustic instability in semiconductor plasmas

    NASA Astrophysics Data System (ADS)

    Sumera, P.; Rasheed, A.; Jamil, M.; Siddique, M.; Areeb, F.

    2017-12-01

    The growth rate of hole acoustic waves (HAWs) excited in a magnetized semiconductor quantum plasma pumped by an electron beam has been investigated. The instability analysis of the waves incorporates quantum effects, including the exchange and correlation potential, the Bohm potential, Fermi-degenerate pressure, and the magnetic quantization of semiconductor plasma species. The effects of various plasma parameters, which include the relative concentration of plasma particles, beam electron temperature, beam speed, plasma temperature (temperature of electrons/holes), and the Landau electron orbital magnetic quantization parameter η, on the growth rate of HAWs have been discussed. The numerical study of our model of acoustic waves has been applied, as an example, to the GaAs semiconductor exposed to an electron beam in a magnetic field environment. An increment in either the concentration of the semiconductor electrons or the speed of beam electrons, in the presence of magnetic quantization of fermion orbital motion, markedly enhances the growth rate of the HAWs. Although the growth rate of the waves decreases with a rise in the thermal temperature of the plasma species, at a particular temperature a higher instability is observed due to the contribution of the magnetic quantization of fermions.

  18. Ab initio molecular dynamics with nuclear quantum effects at classical cost: Ring polymer contraction for density functional theory.

    PubMed

    Marsalek, Ondrej; Markland, Thomas E

    2016-02-07

    Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.
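
    A schematic sketch of contraction to the centroid as described above: a cheap reference potential is evaluated on every bead, while the expensive potential is evaluated once per step at the contracted (centroid) configuration and the correction is distributed back to all beads. The toy potentials and function names are placeholders, not the authors' DFT/DFTB setup.

    ```python
    import numpy as np

    def cheap_reference(q):        # stands in for the tight-binding reference system
        return 0.5 * q ** 2, -q    # (energy, force) of a toy harmonic potential

    def expensive_full(q):         # stands in for the full density functional evaluation
        return 0.5 * q ** 2 + 0.1 * q ** 4, -(q + 0.4 * q ** 3)

    def contracted_forces(beads):
        """Ring polymer contraction to a single bead (the centroid)."""
        centroid = beads.mean(axis=0)
        _, f_ref_beads = cheap_reference(beads)        # cheap force on every bead
        _, f_ref_cent = cheap_reference(centroid)
        _, f_full_cent = expensive_full(centroid)      # one expensive call per step
        # correction evaluated at the centroid is applied uniformly to all beads
        return f_ref_beads + (f_full_cent - f_ref_cent)

    beads = np.random.default_rng(2).normal(0.0, 0.1, size=(32, 3))  # 32 beads, 3 DoF
    print(contracted_forces(beads).shape)
    ```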

  19. Assessment of Hydrogen Sulfide Minimum Detection Limits of an Open Path Tunable Diode Laser

    EPA Science Inventory

    During June 2007, U.S. EPA conducted a feasibility study to determine whether the EPA OTM 10 measurement approach, also known as radial plume mapping (RPM), was feasible. A Boreal open-path tunable diode laser (OP-TDL) to collect path-integrated hydrogen sulfide measurements alon...

  20. Creativity, Spirituality, and Transcendence: Paths to Integrity and Wisdom in the Mature Self. Publications in Creativity Research.

    ERIC Educational Resources Information Center

    Miller, Melvin E., Ed.; Cook-Greuter, Susanne R., Ed.

    This book contains 11 papers on creativity, spirituality, and transcendence as paths to integrity and wisdom in the mature self. The book begins with the paper "Introduction--Creativity in Adulthood: Personal Maturity and Openness to Extraordinary Sources of Inspiration" (Susanne R. Cook-Greuter, Melvin E. Miller). The next four papers,…

  1. Derivation of the Schrodinger Equation from the Hamilton-Jacobi Equation in Feynman's Path Integral Formulation of Quantum Mechanics

    ERIC Educational Resources Information Center

    Field, J. H.

    2011-01-01

    It is shown how the time-dependent Schrodinger equation may be simply derived from the dynamical postulate of Feynman's path integral formulation of quantum mechanics and the Hamilton-Jacobi equation of classical mechanics. Schrodinger's own published derivations of quantum wave equations, the first of which was also based on the Hamilton-Jacobi…

  2. Finding the way with a noisy brain.

    PubMed

    Cheung, Allen; Vickerstaff, Robert

    2010-11-11

    Successful navigation is fundamental to the survival of nearly every animal on earth, and achieved by nervous systems of vastly different sizes and characteristics. Yet surprisingly little is known of the detailed neural circuitry from any species which can accurately represent space for navigation. Path integration is one of the oldest and most ubiquitous navigation strategies in the animal kingdom. Despite a plethora of computational models, from equational to neural network form, there is currently no consensus, even in principle, of how this important phenomenon occurs neurally. Recently, all path integration models were examined according to a novel, unifying classification system. Here we combine this theoretical framework with recent insights from directed walk theory, and develop an intuitive yet mathematically rigorous proof that only one class of neural representation of space can tolerate noise during path integration. This result suggests many existing models of path integration are not biologically plausible due to their intolerance to noise. This surprising result imposes significant computational limitations on the neurobiological spatial representation of all successfully navigating animals, irrespective of species. Indeed, noise-tolerance may be an important functional constraint on the evolution of neuroarchitectural plans in the animal kingdom.

  3. A path integral methodology for obtaining thermodynamic properties of nonadiabatic systems using Gaussian mixture distributions

    NASA Astrophysics Data System (ADS)

    Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel

    2018-05-01

    We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
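
    The factorization into an analytically known normalization times a Monte Carlo estimator sampled from the harmonic distribution can be caricatured in one dimension as below. This toy is classical and one-dimensional, so it only illustrates why the choice of the sampling distribution controls the stochastic error; it is not the nonadiabatic estimator of the paper.

    ```python
    import numpy as np

    beta, omega = 1.0, 1.0
    rng = np.random.default_rng(3)

    # Harmonic part: its configurational partition function is known analytically
    z_harmonic = np.sqrt(2 * np.pi / (beta * omega ** 2))

    def coupling(q):                # anharmonic "coupling" potential
        return 0.25 * q ** 4

    # Sample from the harmonic Boltzmann distribution, average the coupling weight
    q = rng.normal(0.0, 1.0 / np.sqrt(beta * omega ** 2), size=200_000)
    weights = np.exp(-beta * coupling(q))
    z_estimate = z_harmonic * weights.mean()
    stderr = z_harmonic * weights.std() / np.sqrt(q.size)
    print(f"Z ≈ {z_estimate:.4f} ± {stderr:.4f}")
    ```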

  4. Correspondence between quantization schemes for two-player nonzero-sum games and CNOT complexity

    NASA Astrophysics Data System (ADS)

    Vijayakrishnan, V.; Balakrishnan, S.

    2018-05-01

    The well-known quantization schemes for two-player nonzero-sum games are the Eisert-Wilkens-Lewenstein scheme and the Marinatto-Weber scheme. In this work, we establish the connection between the two schemes from the perspective of quantum circuits. Further, we provide the correspondence between any game quantization scheme and the CNOT complexity, where the CNOT complexity is defined up to local unitary operations. While CNOT complexity is known to be useful in the analysis of universal quantum circuits, in this work we demonstrate its applicability in quantum game theory.

  5. Equivalence of Einstein and Jordan frames in quantized anisotropic cosmological models

    NASA Astrophysics Data System (ADS)

    Pandey, Sachin; Pal, Sridip; Banerjee, Narayan

    2018-06-01

    The present work shows that the mathematical equivalence of the Jordan frame and its conformally transformed version, the Einstein frame, so far as Brans-Dicke theory is concerned, survives quantization of cosmological models arising as solutions of the Brans-Dicke theory. We work with the Wheeler-DeWitt quantization scheme and take up quite a few anisotropic cosmological models as examples. We effectively show that the transformation from the Jordan to the Einstein frame is a canonical one and hence the two frames furnish equivalent descriptions of the same physical scenario.

  6. Covariant spinor representation of iosp(d,2/2) and quantization of the spinning relativistic particle

    NASA Astrophysics Data System (ADS)

    Jarvis, P. D.; Corney, S. P.; Tsohantjis, I.

    1999-12-01

    A covariant spinor representation of iosp(d,2/2) is constructed for the quantization of the spinning relativistic particle. It is found that, with appropriately defined wavefunctions, this representation can be identified with the state space arising from the canonical extended BFV-BRST quantization of the spinning particle with admissible gauge fixing conditions after a contraction procedure. For this model, the cohomological determination of physical states can thus be obtained purely from the representation theory of the iosp(d,2/2) algebra.

  7. Luminance-model-based DCT quantization for color image compression

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1992-01-01

    A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).
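
    A hedged sketch of turning a frequency-dependent visibility-threshold model into a DCT quantization matrix: each step size is set to twice the threshold so the worst-case quantization error stays just below visibility. The log-parabolic threshold model and display geometry below are generic assumptions, not the specific luminance model of the paper.

    ```python
    import numpy as np

    N = 8                       # DCT block size
    pixels_per_degree = 32.0    # assumed display/viewing geometry
    f = np.arange(N) * pixels_per_degree / (2 * N)          # cycles/degree per DCT row/column
    fij = np.sqrt(f[:, None] ** 2 + f[None, :] ** 2)        # radial frequency of basis (i, j)

    # Generic threshold model: minimum at f_peak, rising log-parabolically away from it
    T_min, f_peak, width = 0.5, 4.0, 1.5
    log_T = np.log10(T_min) + ((np.log10(np.maximum(fij, 1e-3)) - np.log10(f_peak)) / width) ** 2
    thresholds = 10 ** log_T

    # Quantization step = twice the threshold: worst-case error stays just below visibility
    Q = np.clip(np.round(2 * thresholds), 1, 255).astype(int)
    Q[0, 0] = Q[0, 1]   # the DC term is normally handled separately in practice
    print(Q)
    ```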

  8. Pseudoclassical Foldy-Wouthuysen transformation and canonical quantization of (D=2n)-dimensional relativistic particle with spin in an external electromagnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigoryan, G.V.; Grigoryan, R.P.

    1995-09-01

    The canonical quantization of a (D=2n)-dimensional Dirac particle with spin in an arbitrary external electromagnetic field is performed in a gauge that makes it possible to describe simultaneously particles and antiparticles (both massive and massless) already at the classical level. A pseudoclassical Foldy-Wouthuysen transformation is used to find the canonical (Newton-Wigner) coordinates. The connection between this quantization scheme and Blount's picture describing the behavior of a Dirac particle in an external electromagnetic field is discussed.

  9. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  10. Quantum Mechanics, Path Integrals and Option Pricing:. Reducing the Complexity of Finance

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Corianò, Claudio; Srikant, Marakani

    2003-04-01

    Quantum Finance represents the application of the techniques of quantum theory (quantum mechanics and quantum field theory) to theoretical and applied finance. After a brief overview of the connection between these fields, we illustrate some of the methods of lattice simulations of path integrals for the pricing of options. The ideas are sketched out for simple models, such as the Black-Scholes model, where analytical and numerical results are compared. Application of the method to nonlinear systems is also briefly overviewed. More general models, for exotic or path-dependent options, are discussed.
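
    In the spirit of the lattice/path-sampling methods mentioned above, the sketch below prices a European call by sampling discretized log-price paths and compares the result with the closed-form Black-Scholes value; all parameters are arbitrary illustrative choices.

    ```python
    import numpy as np
    from math import log, sqrt, exp, erf

    S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.2, 1.0
    steps, n_paths = 64, 200_000
    rng = np.random.default_rng(4)

    # Sample discretized log-price paths (each path is one "trajectory" in the path integral)
    dt = T / steps
    dW = rng.standard_normal((n_paths, steps)) * sqrt(dt)
    logS = log(S0) + np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * dW, axis=1)
    payoff = np.maximum(np.exp(logS[:, -1]) - K, 0.0)
    mc_price = exp(-r * T) * payoff.mean()

    # Closed-form Black-Scholes value for comparison
    N = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    bs_price = S0 * N(d1) - K * exp(-r * T) * N(d2)
    print(f"Monte Carlo: {mc_price:.3f}   Black-Scholes: {bs_price:.3f}")
    ```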

  11. Path integration of the time-dependent forced oscillator with a two-time quadratic action

    NASA Astrophysics Data System (ADS)

    Zhang, Tian Rong; Cheng, Bin Kang

    1986-03-01

    Using the prodistribution theory proposed by DeWitt-Morette [C. DeWitt-Morette, Commun. Math. Phys. 28, 47 (1972); C. DeWitt-Morette, A. Maheshwari, and B. Nelson, Phys. Rep. 50, 257 (1979)], the path integration of a time-dependent forced harmonic oscillator with a two-time quadratic action has been given in terms of the solutions of some integrodifferential equations. We then evaluate explicitly both the classical path and the propagator for the specific kernel introduced by Feynman in the polaron problem. Our results include the previous known results as special cases.

  12. Magnetic expansion of Nekrasov theory: The SU(2) pure gauge theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He Wei; Miao Yangang

    It is recently claimed by Nekrasov and Shatashvili that the N=2 gauge theories in the Ω background with ε₁ = ℏ, ε₂ = 0 are related to the quantization of certain algebraic integrable systems. We study the special case of SU(2) pure gauge theory; the corresponding integrable model is the A₁ Toda model, which reduces to the sine-Gordon quantum mechanics problem. The quantum effects can be expressed as the WKB series written analytically in terms of hypergeometric functions. We obtain the magnetic and dyonic expansions of the Nekrasov theory by studying the property of hypergeometric functions in the magnetic and dyonic regions on the moduli space. We also discuss the relation between the electric-magnetic duality of the gauge theory and the action-action duality of the integrable system.

  13. JND measurements of the speech formants parameters and its implication in the LPC pole quantization

    NASA Astrophysics Data System (ADS)

    Orgad, Yaakov

    1988-08-01

    The inherent sensitivity of auditory perception is explicitly used with the objective of designing an efficient speech encoder. Speech can be modelled by a filter representing the vocal tract shape that is driven by an excitation signal representing glottal air flow. This work concentrates on the filter encoding problem, assuming that excitation signal encoding is optimal. Linear predictive coding (LPC) techniques were used to model a short speech segment by an all-pole filter; each pole was directly related to the speech formants. Measurements were made of the auditory just noticeable difference (JND) corresponding to the natural speech formants, with the LPC filter poles as the best candidates to represent the speech spectral envelope. The JND is the maximum precision required in speech quantization; it was defined on the basis of the shift of one pole parameter of a single frame of a speech segment necessary to induce subjective perception of the distortion with 0.75 probability. The average JND in LPC filter poles in natural speech was found to increase with increasing pole bandwidth and, to a lesser extent, frequency. The JND measurements showed a large spread of the residuals around the average values, indicating that inter-formant coupling and perhaps other, not yet fully understood, factors were not taken into account at this stage of the research. A future treatment should consider these factors. The average JNDs obtained in this work were used to design pole quantization tables for speech coding and provided a better bit rate than the standard reflection-coefficient quantizer; a 30-bits-per-frame pole quantizer yielded speech quality similar to that obtained with a standard 41-bits-per-frame reflection-coefficient quantizer. Owing to the complexity of the numerical root extraction system, the practical implementation of the pole quantization approach remains to be proved.

  14. Approaching the Planck scale from a generally relativistic point of view: A philosophical appraisal of loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Wuthrich, Christian

    My dissertation studies the foundations of loop quantum gravity (LQG), a candidate for a quantum theory of gravity based on classical general relativity. At the outset, I discuss two---and I claim separate---questions: first, do we need a quantum theory of gravity at all; and second, if we do, does it follow that gravity should or even must be quantized? My evaluation of different arguments either way suggests that while no argument can be considered conclusive, there are strong indications that gravity should be quantized. LQG attempts a canonical quantization of general relativity and thereby provokes a foundational interest as it must take a stance on many technical issues tightly linked to the interpretation of general relativity. Most importantly, it codifies general relativity's main innovation, the so-called background independence, in a formalism suitable for quantization. This codification pulls asunder what has been joined together in general relativity: space and time. It is thus a central issue whether or not general relativity's four-dimensional structure can be retrieved in the alternative formalism and how it fares through the quantization process. I argue that the rightful four-dimensional spacetime structure can only be partially retrieved at the classical level. What happens at the quantum level is an entirely open issue. Known examples of classically singular behaviour which gets regularized by quantization evoke an admittedly pious hope that the singularities which notoriously plague the classical theory may be washed away by quantization. This work scrutinizes pronouncements claiming that the initial singularity of classical cosmological models vanishes in quantum cosmology based on LQG and concludes that these claims must be severely qualified. In particular, I explicate why casting the quantum cosmological models in terms of a deterministic temporal evolution fails to capture the concepts at work adequately. Finally, a scheme is developed of how the re-emergence of the smooth spacetime from the underlying discrete quantum structure could be understood.

  15. Integration across Time Determines Path Deviation Discrimination for Moving Objects

    PubMed Central

    Whitaker, David; Levi, Dennis M.; Kennedy, Graeme J.

    2008-01-01

    Background Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects–a common task which can allow us either to intercept moving objects, or else avoid them if they pose a threat. Methodology/Principal Findings Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, extent of motion path and the object size. Furthermore, we compared performance for moving objects with the ability of observers to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a ‘scale invariant’ model in which any two stimuli which possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different. Irrespective of the path length, object size or velocity of motion, path deviation thresholds depended simply upon the duration of the motion path in seconds. Conclusions/Significance Human vision has long been known to integrate information across space in order to solve spatial tasks such as judgment of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects. PMID:18414653

  16. An Alternative to the Gauge Theoretic Setting

    NASA Astrophysics Data System (ADS)

    Schroer, Bert

    2011-10-01

    The standard formulation of quantum gauge theories results from the Lagrangian (functional integral) quantization of classical gauge theories. A more intrinsic quantum theoretical access in the spirit of Wigner's representation theory shows that there is a fundamental clash between the pointlike localization of zero mass (vector, tensor) potentials and the Hilbert space (positivity, unitarity) structure of QT. The quantization approach has no other way than to stay with pointlike localization and sacrifice the Hilbert space, whereas the approach built on the intrinsic quantum concept of modular localization keeps the Hilbert space and trades the conflict-creating pointlike generation for the tightest consistent localization: semi-infinite spacelike string localization. Whereas these potentials in the presence of interactions stay quite close to the associated pointlike field strengths, the interacting matter fields to which they are coupled bear the brunt of the nonlocal aspect in that they are string-generated in a way which cannot be undone by any differentiation. The new stringlike approach to gauge theory also revives the idea of a Schwinger-Higgs screening mechanism as a deeper and less metaphoric description of the Higgs spontaneous symmetry breaking and its accompanying tale about "God's particle" and its mass generation for all the other particles.

  17. Exponentially more precise quantum simulation of fermions in the configuration interaction representation

    NASA Astrophysics Data System (ADS)

    Babbush, Ryan; Berry, Dominic W.; Sanders, Yuval R.; Kivlichan, Ian D.; Scherer, Artur; Wei, Annie Y.; Love, Peter J.; Aspuru-Guzik, Alán

    2018-01-01

    We present a quantum algorithm for the simulation of molecular systems that is asymptotically more efficient than all previous algorithms in the literature in terms of the main problem parameters. As in Babbush et al (2016 New Journal of Physics 18, 033032), we employ a recently developed technique for simulating Hamiltonian evolution using a truncated Taylor series to obtain logarithmic scaling with the inverse of the desired precision. The algorithm of this paper involves simulation under an oracle for the sparse, first-quantized representation of the molecular Hamiltonian known as the configuration interaction (CI) matrix. We construct and query the CI matrix oracle to allow for on-the-fly computation of molecular integrals in a way that is exponentially more efficient than classical numerical methods. Whereas second-quantized representations of the wavefunction require Õ(N) qubits, where N is the number of single-particle spin-orbitals, the CI matrix representation requires Õ(η) qubits, where η ≪ N is the number of electrons in the molecule of interest. We show that the gate count of our algorithm scales at most as Õ(η²N³t).

  18. Open Group Transformations Within the Sp(2)-Formalism

    NASA Astrophysics Data System (ADS)

    Batalin, Igor; Marnelius, Robert

    Previously we have shown that open groups whose generators are in arbitrary involutions may be quantized within a ghost extended framework in terms of the nilpotent BFV-BRST charge operator. Here we show that they may also be quantized within an Sp(2)-frame in which there are two odd anticommuting operators called Sp(2)-charges. Previous results for finite open group transformations are generalized to the Sp(2)-formalism. We show that in order to define open group transformations on the whole ghost extended space we need Sp(2)-charges in the nonminimal sector which contains dynamical Lagrange multipliers. We give an Sp(2)-version of the quantum master equation with extended Sp(2)-charges and a master charge of a more involved form, which is proposed to represent the integrability conditions of defining operators of connection operators and which therefore should encode the generalized quantum Maurer-Cartan equations for arbitrary open groups. General solutions of this master equation are given in explicit form. A further extended Sp(2)-formalism is proposed in which the group parameters are quadrupled to a supersymmetric set and from which all results may be derived.

  19. Silicon Metal-oxide-semiconductor Quantum Dots for Single-electron Pumping

    PubMed Central

    Rossi, Alessandro; Tanttu, Tuomo; Hudson, Fay E.; Sun, Yuxin; Möttönen, Mikko; Dzurak, Andrew S.

    2015-01-01

    As mass-produced silicon transistors have reached the nano-scale, their behavior and performances are increasingly affected, and often deteriorated, by quantum mechanical effects such as tunneling through single dopants, scattering via interface defects, and discrete trap charge states. However, progress in silicon technology has shown that these phenomena can be harnessed and exploited for a new class of quantum-based electronics. Among others, multi-layer-gated silicon metal-oxide-semiconductor (MOS) technology can be used to control single charge or spin confined in electrostatically-defined quantum dots (QD). These QD-based devices are an excellent platform for quantum computing applications and, recently, it has been demonstrated that they can also be used as single-electron pumps, which are accurate sources of quantized current for metrological purposes. Here, we discuss in detail the fabrication protocol for silicon MOS QDs which is relevant to both quantum computing and quantum metrology applications. Moreover, we describe characterization methods to test the integrity of the devices after fabrication. Finally, we give a brief description of the measurement set-up used for charge pumping experiments and show representative results of electric current quantization. PMID:26067215

  20. On the Perturbative Equivalence Between the Hamiltonian and Lagrangian Quantizations

    NASA Astrophysics Data System (ADS)

    Batalin, I. A.; Tyutin, I. V.

    The Hamiltonian (BFV) and Lagrangian (BV) quantization schemes are proved to be perturbatively equivalent to each other. It is shown in particular that the quantum master equation being treated perturbatively possesses a local formal solution.

  1. Fill-in binary loop pulse-torque quantizer

    NASA Technical Reports Server (NTRS)

    Lory, C. B.

    1975-01-01

    The fill-in binary (FIB) loop provides constant heating of the torque generator, an advantage of binary current switching. At the same time, it avoids the mode-related dead zone and data delay of the binary loop, an advantage of ternary quantization.

  2. IntPath--an integrated pathway gene relationship database for model organisms and important pathogens

    PubMed Central

    2012-01-01

    Background Pathway data are important for understanding the relationship between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases--e.g., KEGG, WikiPathways, and BioCyc--are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomprehensive data from different databases. Results In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc.). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms--S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus--are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure no deletion and no introduced noise in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to remove errors and noise from the source data (e.g., the gene ID errors in WikiPathways and relationship errors in KEGG). We turn complicated and incompatible XML data formats and inconsistent gene and gene relationship representations from different source databases into normalized and unified pathway-gene and pathway-gene pair relationships neatly recorded in simple tab-delimited text format and MySQL tables, which facilitates convenient automatic computation and large-scale referencing in many related studies. IntPath data can be downloaded in text format or as a MySQL dump. IntPath data can also be retrieved and analyzed conveniently through web services by local programs or through the web interface by mouse clicks. Several useful analysis tools are also provided in IntPath. Conclusions We have overcome in IntPath the issues of compatibility, consistency, and comprehensiveness that often hamper effective use of pathway databases. We have included four organisms in the current release of IntPath. Our methodology and programs described in this work can be easily applied to other organisms; and we will include more model organisms and important pathogens in future releases of IntPath. IntPath maintains regular updates and is freely available at http://compbio.ddns.comp.nus.edu.sg:8080/IntPath. PMID:23282057

  3. Theory of quantized systems: formal basis for DEVS/HLA distributed simulation environment

    NASA Astrophysics Data System (ADS)

    Zeigler, Bernard P.; Lee, J. S.

    1998-08-01

    In the context of a DARPA ASTT project, we are developing an HLA-compliant distributed simulation environment based on the DEVS formalism. This environment will provide a user- friendly, high-level tool-set for developing interoperable discrete and continuous simulation models. One application is the study of contract-based predictive filtering. This paper presents a new approach to predictive filtering based on a process called 'quantization' to reduce state update transmission. Quantization, which generates state updates only at quantum level crossings, abstracts a sender model into a DEVS representation. This affords an alternative, efficient approach to embedding continuous models within distributed discrete event simulations. Applications of quantization to message traffic reduction are discussed. The theory has been validated by DEVSJAVA simulations of test cases. It will be subject to further test in actual distributed simulations using the DEVS/HLA modeling and simulation environment.
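
    The quantization idea, generating a state update only when the state crosses a quantum level, can be sketched as follows; the trajectory and quantum size are illustrative.

    ```python
    import numpy as np

    def quantized_updates(times, states, quantum=0.5):
        """Yield (time, level) messages only when the state crosses a quantum boundary."""
        last_level = np.floor(states[0] / quantum)
        yield times[0], last_level * quantum
        for t, x in zip(times[1:], states[1:]):
            level = np.floor(x / quantum)
            if level != last_level:              # boundary crossing -> send an update
                last_level = level
                yield t, level * quantum

    t = np.linspace(0, 10, 1001)
    x = np.sin(t) * 2.0                          # continuous trajectory of the sender model
    messages = list(quantized_updates(t, x, quantum=0.5))
    print(f"{len(messages)} updates instead of {len(t)} time steps")
    ```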

  4. Locally adaptive vector quantization: Data compression with feature preservation

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Sayano, M.

    1992-01-01

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression and is fully adaptable to any data source and does not require a priori knowledge of the source statistics. Therefore, LAVQ is a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
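
    A generic one-pass adaptive vector quantizer in this spirit is sketched below: a new codeword is created for novel inputs and existing codewords drift toward recent data otherwise. The update rule, threshold, and codebook policy are illustrative and are not the specific LAVQ algorithm of the report.

    ```python
    import numpy as np

    def adaptive_vq(vectors, threshold=1.0, lr=0.1, max_codewords=256):
        codebook, indices = [], []
        for v in vectors:
            if codebook:
                d = [np.linalg.norm(v - c) for c in codebook]
                j = int(np.argmin(d))
            if not codebook or (d[j] > threshold and len(codebook) < max_codewords):
                codebook.append(v.copy())                 # start a new codeword for novel content
                j = len(codebook) - 1
            else:
                codebook[j] += lr * (v - codebook[j])     # drift codeword toward recent data
            indices.append(j)
        return np.array(codebook), indices

    rng = np.random.default_rng(5)
    data = rng.normal(0, 1, (1000, 4))
    cb, idx = adaptive_vq(data)
    print(len(cb), "codewords for", len(data), "vectors")
    ```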

  5. Landau quantization of Dirac fermions in graphene and its multilayers

    NASA Astrophysics Data System (ADS)

    Yin, Long-Jing; Bai, Ke-Ke; Wang, Wen-Xiao; Li, Si-Yu; Zhang, Yu; He, Lin

    2017-08-01

    When electrons are confined in a two-dimensional (2D) system, typical quantum-mechanical phenomena such as Landau quantization can be detected. Graphene systems, including the single atomic layer and few-layer stacked crystals, are ideal 2D materials for studying a variety of quantum-mechanical problems. In this article, we review the experimental progress in the unusual Landau quantized behaviors of Dirac fermions in monolayer and multilayer graphene by using scanning tunneling microscopy (STM) and scanning tunneling spectroscopy (STS). Through STS measurement of the strong magnetic fields, distinct Landau-level spectra and rich level-splitting phenomena are observed in different graphene layers. These unique properties provide an effective method for identifying the number of layers, as well as the stacking orders, and investigating the fundamentally physical phenomena of graphene. Moreover, in the presence of a strain and charged defects, the Landau quantization of graphene can be significantly modified, leading to unusual spectroscopic and electronic properties.

  6. Necessary conditions for the optimality of variable rate residual vector quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance of RVQ reported here results from the joint optimization of variable-rate encoding and RVQ direct-sum code books. In this paper, necessary conditions for the optimality of variable-rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
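
    The entropy-constrained selection rule at the heart of such designs maps each input to the codeword minimizing distortion plus λ times its code length. The codebook, probabilities, and λ below are toy values; the full EC-RVQ design iterates this step with codebook and entropy-coder updates across stages.

    ```python
    import numpy as np

    def ec_encode(x, codebook, probs, lam=0.5):
        """Entropy-constrained nearest neighbor: J = ||x - c||^2 + lam * (-log2 p(c))."""
        dist = np.sum((codebook - x) ** 2, axis=1)
        rate = -np.log2(probs)                    # ideal code length of each codeword
        return int(np.argmin(dist + lam * rate))

    codebook = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0]])
    probs = np.array([0.7, 0.2, 0.1])             # more probable codewords are cheaper to send
    print(ec_encode(np.array([0.9, 0.9]), codebook, probs))
    ```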

  7. More on quantum groups from the quantization point of view

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav

    1994-12-01

    Star products on the classical double group of a simple Lie group and on corresponding symplectic groupoids are given so that the quantum double and the "quantized tangent bundle" are obtained in the deformation description. "Complex" quantum groups and bicovariant quantum Lie algebras are discussed from this point of view. Further we discuss the quantization of the Poisson structure on the symmetric algebra S(g) leading to the quantized enveloping algebra U_h(g) as an example of biquantization in the sense of Turaev. The description of U_h(g) in terms of the generators of the bicovariant differential calculus on F(G_q) is very convenient for this purpose. Finally we interpret in the deformation framework some well-known properties of compact quantum groups as simple consequences of the corresponding properties of classical compact Lie groups. An analogue of the classical Kirillov universal character formula is given for the unitary irreducible representations in the compact case.

  8. Quantization of gauge fields, graph polynomials and graph homology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreimer, Dirk, E-mail: kreimer@physik.hu-berlin.de; Sars, Matthias; Suijlekom, Walter D. van

    2013-09-15

    We review quantization of gauge fields using algebraic properties of 3-regular graphs. We derive the Feynman integrand at n loops for a non-abelian gauge theory quantized in a covariant gauge from scalar integrands for connected 3-regular graphs, obtained from the two Symanzik polynomials. The transition to the full gauge theory amplitude is obtained by the use of a third, new, graph polynomial, the corolla polynomial. This implies effectively a covariant quantization without ghosts, where all the relevant signs of the ghost sector are incorporated in a double complex furnished by the corolla polynomial (we call it cycle homology) and by graph homology. Highlights: • We derive gauge theory Feynman rules from scalar field theory with 3-valent vertices. • We clarify the role of graph homology and cycle homology. • We use parametric renormalization and the new corolla polynomial.

  9. Augmenting Phase Space Quantization to Introduce Additional Physical Effects

    NASA Astrophysics Data System (ADS)

    Robbins, Matthew P. G.

    Quantum mechanics can be done using classical phase space functions and a star product. The state of the system is described by a quasi-probability distribution. A classical system can be quantized in phase space in different ways with different quasi-probability distributions and star products. A transition differential operator relates different phase space quantizations. The objective of this thesis is to introduce additional physical effects into the process of quantization by using the transition operator. As prototypical examples, we first look at the coarse-graining of the Wigner function and the damped simple harmonic oscillator. By generalizing the transition operator and star product to also be functions of the position and momentum, we show that additional physical features beyond damping and coarse-graining can be introduced into a quantum system, including the generalized uncertainty principle of quantum gravity phenomenology, driving forces, and decoherence.
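
    For reference, standard forms of the phase-space ingredients mentioned above are the Moyal star product and the Wigner quasi-probability distribution, with a transition operator T relating two equivalent star products; the specific generalizations studied in the thesis are not reproduced here.

    ```latex
    (f \star g)(q,p) = f(q,p)\,
      \exp\!\Big[\tfrac{i\hbar}{2}\big(
        \overleftarrow{\partial_q}\overrightarrow{\partial_p}
        -\overleftarrow{\partial_p}\overrightarrow{\partial_q}\big)\Big]\, g(q,p),
    \qquad
    W(q,p) = \frac{1}{2\pi\hbar}\int dy\;
      e^{-ipy/\hbar}\,\psi^{*}\!\big(q-\tfrac{y}{2}\big)\,\psi\!\big(q+\tfrac{y}{2}\big),
    \qquad
    f \star' g = T^{-1}\big((Tf)\star(Tg)\big).
    ```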

  10. Event-triggered H∞ state estimation for semi-Markov jumping discrete-time neural networks with quantization.

    PubMed

    Rakkiyappan, R; Maheswari, K; Velmurugan, G; Park, Ju H

    2018-05-17

    This paper investigates the H∞ state estimation problem for a class of semi-Markovian jumping discrete-time neural network models with an event-triggered scheme and quantization. First, a new event-triggered communication scheme is introduced to determine whether or not the current sampled sensor data should be broadcast and transmitted to the quantizer, which can save the limited communication resources. Second, a novel communication framework is employed by the logarithmic quantizer that quantizes and reduces the data transmission rate in the network, which apparently improves the communication efficiency of networks. Third, a stabilization criterion is derived based on a sufficient condition which guarantees a prescribed H∞ performance level in the estimation error system in terms of linear matrix inequalities. Finally, numerical simulations are given to illustrate the correctness of the proposed scheme.
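
    A minimal sketch of a logarithmic quantizer of the kind referred to above, with levels ±ρ^i·u₀ and the standard sector bound |q(u) − u| ≤ δ|u|, δ = (1 − ρ)/(1 + ρ); the parameters are illustrative, and practical designs additionally use a dead zone and finitely many levels.

    ```python
    import numpy as np

    def log_quantizer(u, u0=1.0, rho=0.8):
        """Logarithmic quantizer with levels ±rho**i * u0.
        Satisfies |q(u) - u| <= delta * |u| with delta = (1 - rho) / (1 + rho)."""
        if u == 0.0:
            return 0.0
        mag = abs(u)
        i = np.floor(np.log(2 * mag / ((1 + rho) * u0)) / np.log(rho)) + 1
        return float(np.sign(u) * u0 * rho ** i)

    delta = (1 - 0.8) / (1 + 0.8)
    for u in [0.05, 0.4, 1.0, 3.7, -2.2]:
        q = log_quantizer(u)
        assert abs(q - u) <= delta * abs(u) + 1e-12   # sector-bound check
        print(f"u={u:+.2f}  q(u)={q:+.4f}")
    ```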

  11. Distributed Adaptive Containment Control for a Class of Nonlinear Multiagent Systems With Input Quantization.

    PubMed

    Wang, Chenliang; Wen, Changyun; Hu, Qinglei; Wang, Wei; Zhang, Xiuyu

    2018-06-01

    This paper is devoted to distributed adaptive containment control for a class of nonlinear multiagent systems with input quantization. By employing a matrix factorization and a novel matrix normalization technique, some assumptions involving control gain matrices in existing results are relaxed. By fusing the techniques of sliding mode control and backstepping control, a two-step design method is proposed to construct controllers and, with the aid of neural networks, all system nonlinearities are allowed to be unknown. Moreover, a linear time-varying model and a similarity transformation are introduced to circumvent the obstacle brought by quantization, and the controllers need no information about the quantizer parameters. The proposed scheme is able to ensure the boundedness of all closed-loop signals and steer the containment errors into an arbitrarily small residual set. The simulation results illustrate the effectiveness of the scheme.

  12. Model predictive control of non-linear systems over networks with data quantization and packet loss.

    PubMed

    Yu, Jimin; Nan, Liangsheng; Tang, Xiaoming; Wang, Ping

    2015-11-01

    This paper studies the approach of model predictive control (MPC) for non-linear systems in a networked environment where both data quantization and packet loss may occur. The non-linear controlled plant in the networked control system (NCS) is represented by a Takagi-Sugeno (T-S) model. The sensed data and control signal are quantized in both links and described as sector-bound uncertainties by applying the sector bound approach. The quantized data are then transmitted over the communication network and may suffer from packet losses, which are modeled as a Bernoulli process. A fuzzy predictive controller which guarantees the stability of the closed-loop system is obtained by solving a set of linear matrix inequalities (LMIs). A numerical example is given to illustrate the effectiveness of the proposed method.

  13. Thermal distributions of first, second and third quantization

    NASA Astrophysics Data System (ADS)

    McGuigan, Michael

    1989-05-01

    We treat first quantized string theory as two-dimensional gravity plus matter. This allows us to compute the two-dimensional density of one-string states by the method of Darwin and Fowler. One can then use second quantized methods to form a grand microcanonical ensemble in which one can compute the density of multistring states of arbitrary momentum and mass. It is argued that modelling an elementary particle as a (d-1)-dimensional object whose internal degrees of freedom are described by a massless d-dimensional gas yields a density of internal states given by σ_d(m) ∼ m^{-a} exp[(bm)^{2(d-1)/d}]. This indicates that these objects cannot be in thermal equilibrium at any temperature unless d ⩽ 2, that is, for a string or a particle. Finally, we discuss the application of the above ideas to four-dimensional gravity and introduce an ensemble of multiuniverse states parameterized by second quantized canonical momenta and particle number.

  14. Fine structure constant and quantized optical transparency of plasmonic nanoarrays.

    PubMed

    Kravets, V G; Schedin, F; Grigorenko, A N

    2012-01-24

    Optics is renowned for displaying quantum phenomena. Indeed, studies of emission and absorption lines, the photoelectric effect and blackbody radiation helped to build the foundations of quantum mechanics. Nevertheless, it came as a surprise that the visible transparency of suspended graphene is determined solely by the fine structure constant, as this kind of universality had been previously reserved only for quantized resistance and flux quanta in superconductors. Here we describe a plasmonic system in which relative optical transparency is determined solely by the fine structure constant. The system consists of a regular array of gold nanoparticles fabricated on a thin metallic sublayer. We show that its relative transparency can be quantized in the near-infrared, which we attribute to the quantized contact resistance between the nanoparticles and the metallic sublayer. Our results open new possibilities in the exploration of universal dynamic conductance in plasmonic nanooptics.

  15. Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.

    PubMed

    Li, Yeqing; Liu, Wei; Huang, Junzhou

    2018-06-01

    Recently with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and performing similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.
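
    One way to see why sub-selection cuts the training cost is sketched below: a simple PCA-style sign quantizer is learned from a small random subset of the descriptors and then applied to the full set. This generic illustration is an assumption-laden stand-in and is not the sub-selective matrix-manipulation algorithm of the paper.

    ```python
    import numpy as np

    def train_binary_encoder(X_sub, n_bits=16):
        """Learn a simple PCA-style binary encoder from a sub-selected sample."""
        mean = X_sub.mean(axis=0)
        _, _, Vt = np.linalg.svd(X_sub - mean, full_matrices=False)
        W = Vt[:n_bits].T                                 # top principal directions
        return mean, W

    def encode(X, mean, W):
        return ((X - mean) @ W > 0).astype(np.uint8)      # sign quantization -> binary codes

    rng = np.random.default_rng(6)
    X = rng.normal(size=(50_000, 128))                    # stand-in image descriptors
    subset = X[rng.choice(len(X), 2_000, replace=False)]  # sub-selection cuts training cost
    mean, W = train_binary_encoder(subset)
    codes = encode(X, mean, W)
    print(codes.shape, codes.dtype)
    ```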

  16. Path-integral methods for analyzing the effects of fluctuations in stochastic hybrid neural networks.

    PubMed

    Bressloff, Paul C

    2015-01-01

    We consider applications of path-integral methods to the analysis of a stochastic hybrid model representing a network of synaptically coupled spiking neuronal populations. The state of each local population is described in terms of two stochastic variables, a continuous synaptic variable and a discrete activity variable. The synaptic variables evolve according to piecewise-deterministic dynamics describing, at the population level, synapses driven by spiking activity. The dynamical equations for the synaptic currents are only valid between jumps in spiking activity, and the latter are described by a jump Markov process whose transition rates depend on the synaptic variables. We assume a separation of time scales between fast spiking dynamics with time constant [Formula: see text] and slower synaptic dynamics with time constant τ. This naturally introduces a small positive parameter [Formula: see text], which can be used to develop various asymptotic expansions of the corresponding path-integral representation of the stochastic dynamics. First, we derive a variational principle for maximum-likelihood paths of escape from a metastable state (large deviations in the small noise limit [Formula: see text]). We then show how the path integral provides an efficient method for obtaining a diffusion approximation of the hybrid system for small ϵ. The resulting Langevin equation can be used to analyze the effects of fluctuations within the basin of attraction of a metastable state, that is, ignoring the effects of large deviations. We illustrate this by using the Langevin approximation to analyze the effects of intrinsic noise on pattern formation in a spatially structured hybrid network. In particular, we show how noise enlarges the parameter regime over which patterns occur, in an analogous fashion to PDEs. Finally, we carry out a [Formula: see text]-loop expansion of the path integral, and use this to derive corrections to voltage-based mean-field equations, analogous to the modified activity-based equations generated from a neural master equation.

  17. Mixed Linear/Square-Root Encoded Single-Slope Ramp Provides Low-Noise ADC with High Linearity for Focal Plane Arrays

    NASA Technical Reports Server (NTRS)

    Wrigley, Chris J.; Hancock, Bruce R.; Newton, Kenneth W.; Cunningham, Thomas J.

    2013-01-01

    Single-slope analog-to-digital converters (ADCs) are particularly useful for onchip digitization in focal plane arrays (FPAs) because of their inherent monotonicity, relative simplicity, and efficiency for column-parallel applications, but they are comparatively slow. Squareroot encoding can allow the number of code values to be reduced without loss of signal-to-noise ratio (SNR) by keeping the quantization noise just below the signal shot noise. This encoding can be implemented directly by using a quadratic ramp. The reduction in the number of code values can substantially increase the quantization speed. However, in an FPA, the fixed pattern noise (FPN) limits the use of small quantization steps at low signal levels. If the zero-point is adjusted so that the lowest column is onscale, the other columns, including those at the center of the distribution, will be pushed up the ramp where the quantization noise is higher. Additionally, the finite frequency response of the ramp buffer amplifier and the comparator distort the shape of the ramp, so that the effective ramp value at the time the comparator trips differs from the intended value, resulting in errors. Allowing increased settling time decreases the quantization speed, while increasing the bandwidth increases the noise. The FPN problem is solved by breaking the ramp into two portions, with some fraction of the available code values allocated to a linear ramp and the remainder to a quadratic ramp. To avoid large transients, both the value and the slope of the linear and quadratic portions should be equal where they join. The span of the linear portion must cover the minimum offset, but not necessarily the maximum, since the fraction of the pixels above the upper limit will still be correctly quantized, albeit with increased quantization noise. The required linear span, maximum signal and ratio of quantization noise to shot noise at high signal, along with the continuity requirement, determines the number of code values that must be allocated to each portion. The distortion problem is solved by using a lookup table to convert captured code values back to signal levels. The values in this table will be similar to the intended ramp value, but with a correction for the finite bandwidth effects. Continuous-time comparators are used, and their bandwidth is set below the step rate, which smoothes the ramp and reduces the noise. No settling time is needed, as would be the case for clocked comparators, but the low bandwidth enhances the distortion of the non-linear portion. This is corrected by use of a return lookup table, which differs from the one used to generate the ramp. The return lookup table is obtained by calibrating against a stepped precision DC reference. This results in a residual non-linearity well below the quantization noise. This method can also compensate for differential non-linearity (DNL) in the DAC used to generate the ramp. The use of a ramp with a combination of linear and quadratic portions for a single-slope ADC is novel. The number of steps is minimized by keeping the step size just below the photon shot noise. This in turn maximizes the speed of the conversion. High resolution is maintained by keeping small quantization steps at low signals, and noise is minimized by allowing the lowest analog bandwidth, all without increasing the quantization noise. A calibrated return lookup table allows the system to maintain excellent linearity.
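
    The construction of a ramp with a linear span followed by a quadratic span, matched in value and slope at the join, can be sketched as below; the code budget, linear span, and full-scale value are illustrative numbers, not the flight design.

    ```python
    import numpy as np

    def mixed_ramp(n_codes=1024, n_linear=256, v_linear=0.1, v_max=1.0):
        """Ramp value for each code: linear up to n_linear, quadratic afterwards,
        with value and slope matched at the join to avoid a transient."""
        n = np.arange(n_codes, dtype=float)
        slope = v_linear / n_linear                       # per-code step of the linear span
        ramp = slope * n                                  # linear portion
        m = n - n_linear
        # choose the quadratic coefficient so the last code reaches v_max
        a = (v_max - v_linear - slope * (n_codes - 1 - n_linear)) / (n_codes - 1 - n_linear) ** 2
        quad = v_linear + slope * m + a * m ** 2
        ramp[n >= n_linear] = quad[n >= n_linear]
        return ramp

    ramp = mixed_ramp()
    step = np.diff(ramp)
    print(f"step just before join: {step[254]:.6f}, just after: {step[256]:.6f}")  # nearly equal
    ```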

  18. MPI CyberMotion Simulator: implementation of a novel motion simulator to investigate multisensory path integration in three dimensions.

    PubMed

    Barnett-Cowan, Michael; Meilinger, Tobias; Vidal, Manuel; Teufel, Harald; Bülthoff, Heinrich H

    2012-05-10

    Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point (1). Humans can do path integration based exclusively on visual (2-3), auditory (4), or inertial cues (5). However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate (6-7). In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones (5). Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see (3) for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator (8-9) with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. 16 observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited lifetime star field), vestibular-kinaesthetic (passive self motion with eyes closed), or visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s(2) peak acceleration). The angle of the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes. In the frontal plane observers were more likely to overestimate angle size while there was no such bias in the sagittal plane. Finally, observers responded slower when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating and overestimating the angle one has moved through in the horizontal and vertical planes respectively, suggests that the neural representation of self-motion through space is non-symmetrical which may relate to the fact that humans experience movement mostly within the horizontal plane.

  19. Quantization and Quantum-Like Phenomena: A Number Amplitude Approach

    NASA Astrophysics Data System (ADS)

    Robinson, T. R.; Haven, E.

    2015-12-01

    Historically, quantization has meant turning the dynamical variables of classical mechanics that are represented by numbers into their corresponding operators. Thus the relationships between classical variables determine the relationships between the corresponding quantum mechanical operators. Here, we take a radically different approach to this conventional quantization procedure. Our approach does not rely on any relations based on classical Hamiltonian or Lagrangian mechanics nor on any canonical quantization relations, nor even on any preconceptions of particle trajectories in space and time. Instead we examine the symmetry properties of certain Hermitian operators with respect to phase changes. This introduces harmonic operators that can be identified with a variety of cyclic systems, from clocks to quantum fields. These operators are shown to have the characteristics of creation and annihilation operators that constitute the primitive fields of quantum field theory. Such an approach not only allows us to recover the Hamiltonian equations of classical mechanics and the Schrödinger wave equation from the fundamental quantization relations, but also, by freeing the quantum formalism from any physical connotation, makes it more directly applicable to non-physical, so-called quantum-like systems. Over the past decade or so, there has been a rapid growth of interest in such applications. These include, the use of the Schrödinger equation in finance, second quantization and the number operator in social interactions, population dynamics and financial trading, and quantum probability models in cognitive processes and decision-making. In this paper we try to look beyond physical analogies to provide a foundational underpinning of such applications.

  20. Generalized noise terms for the quantized fluctuational electrodynamics

    NASA Astrophysics Data System (ADS)

    Partanen, Mikko; Häyrynen, Teppo; Tulkki, Jukka; Oksanen, Jani

    2017-03-01

    The quantization of optical fields in vacuum has been known for decades, but extending the field quantization to lossy and dispersive media in nonequilibrium conditions has proven to be complicated due to the position-dependent electric and magnetic responses of the media. In fact, consistent position-dependent quantum models for the photon number in resonant structures have only been formulated very recently and only for dielectric media. Here we present a general position-dependent quantized fluctuational electrodynamics (QFED) formalism that extends the consistent field quantization to describe the photon number also in the presence of magnetic field-matter interactions. It is shown that the magnetic fluctuations provide an additional degree of freedom in media where the magnetic coupling to the field is prominent. Therefore, the field quantization requires an additional independent noise operator that is commuting with the conventional bosonic noise operator describing the polarization current fluctuations in dielectric media. In addition to allowing the detailed description of field fluctuations, our methods provide practical tools for modeling optical energy transfer and the formation of thermal balance in general dielectric and magnetic nanodevices. We use QFED to investigate the magnetic properties of microcavity systems to demonstrate an example geometry in which it is possible to probe fields arising from the electric and magnetic source terms. We show that, as a consequence of the magnetic Purcell effect, the tuning of the position of an emitter layer placed inside a vacuum cavity can make the emissivity of a magnetic emitter to exceed the emissivity of a corresponding electric emitter.
