Canonical methods in classical and quantum gravity: An invitation to canonical LQG
NASA Astrophysics Data System (ADS)
Reyes, Juan D.
2018-04-01
Loop Quantum Gravity (LQG) is a candidate quantum theory of gravity still under construction. LQG was originally conceived as a background-independent canonical quantization of Einstein's theory of general relativity. This contribution provides some physical motivations and an overview of some mathematical tools employed in canonical Loop Quantum Gravity. First, classical Hamiltonian methods are reviewed from a geometric perspective. Canonical Dirac quantization of general gauge systems is sketched next. The Hamiltonian formulation of gravity in geometric ADM and connection-triad variables is then presented, to finally lay down the canonical loop quantization program. The presentation is geared toward advanced undergraduate or graduate students in physics and non-specialists curious about LQG.
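As a concrete anchor for the ADM and connection-triad formulations mentioned above, the standard canonical pairs and their Poisson brackets (not spelled out in this abstract, but standard in the LQG literature) are:

```latex
% ADM variables: spatial metric q_{ab} and conjugate momentum p^{cd}
\{ q_{ab}(x),\, p^{cd}(y) \} = \delta^{c}_{(a}\,\delta^{d}_{b)}\,\delta^{3}(x,y)

% Connection-triad (Ashtekar-Barbero) variables: su(2) connection A_a^i
% and densitized triad E^b_j, with Barbero-Immirzi parameter \gamma
\{ A_a^i(x),\, E^b_j(y) \} = 8\pi G\gamma\,\delta_a^b\,\delta^i_j\,\delta^{3}(x,y)
```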
Basic Brackets of a 2D Model for the Hodge Theory Without its Canonical Conjugate Momenta
NASA Astrophysics Data System (ADS)
Kumar, R.; Gupta, S.; Malik, R. P.
2016-06-01
We deduce the canonical brackets for a two (1+1)-dimensional (2D) free Abelian 1-form gauge theory by exploiting the continuous symmetries of a Becchi-Rouet-Stora-Tyutin (BRST) invariant Lagrangian density that respects, in totality, six continuous symmetries. These symmetries make this model a field-theoretic example of Hodge theory. Taken together, they enforce the existence of exactly the same canonical brackets amongst the creation and annihilation operators as are found within the standard canonical quantization scheme. These creation and annihilation operators appear in the normal-mode expansion of the basic fields of the theory. In other words, we provide an alternative to the canonical method of quantization for our present model of Hodge theory, in which the continuous internal symmetries play a decisive role. We conjecture that our method of quantization is valid for a class of field theories that are tractable physical examples of Hodge theory, in any arbitrary dimension of spacetime.
Quantized discrete space oscillators
NASA Technical Reports Server (NTRS)
Uzes, C. A.; Kapuscik, Edward
1993-01-01
A quasi-canonical sequence of finite dimensional quantizations was found which has canonical quantization as its limit. In order to demonstrate its practical utility and its numerical convergence, this formalism is applied to the eigenvalue and 'eigenfunction' problem of several harmonic and anharmonic oscillators.
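The numerical convergence claimed here can be illustrated with a simple finite-dimensional scheme (a generic sketch, not the authors' own formalism): diagonalizing a discretized harmonic-oscillator Hamiltonian and watching its low-lying spectrum approach the canonical values n + 1/2.

```python
import numpy as np

# Finite-dimensional approximation of the harmonic-oscillator Hamiltonian
# H = -(1/2) d^2/dx^2 + (1/2) x^2 on a uniform grid (hbar = m = omega = 1).
# As the grid is refined, the low-lying eigenvalues converge to the
# canonical values n + 1/2; grid parameters below are arbitrary choices.

def oscillator_eigenvalues(n_points=400, half_width=10.0, n_levels=4):
    x, h = np.linspace(-half_width, half_width, n_points, retstep=True)
    # Central-difference kinetic term plus diagonal potential.
    kinetic = (np.diag(np.full(n_points, 1.0 / h**2))
               - np.diag(np.full(n_points - 1, 0.5 / h**2), 1)
               - np.diag(np.full(n_points - 1, 0.5 / h**2), -1))
    potential = np.diag(0.5 * x**2)
    eigenvalues = np.linalg.eigvalsh(kinetic + potential)
    return eigenvalues[:n_levels]

print(oscillator_eigenvalues())  # approaches [0.5, 1.5, 2.5, 3.5] as the grid is refined
```

Refining the grid (larger `n_points`) shrinks the O(h^2) discretization error, which is the sense of convergence to the canonical limit.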
Deformation of second and third quantization
NASA Astrophysics Data System (ADS)
Faizal, Mir
2015-03-01
In this paper, we will deform the second and third quantized theories by deforming the canonical commutation relations in such a way that they become consistent with the generalized uncertainty principle. Thus, we will first deform the second quantized commutator and obtain a deformed version of the Wheeler-DeWitt equation. Then we will further deform the third quantized theory by deforming the third quantized canonical commutation relation. This way we will obtain a deformed version of the third quantized theory for the multiverse.
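A common form of such a deformation (a standard generalized-uncertainty-principle ansatz, assumed here for illustration; the paper's precise deformation may differ) is:

```latex
[\hat{x}, \hat{p}] = i\hbar\,\bigl(1 + \beta \hat{p}^{2}\bigr)
\quad\Longrightarrow\quad
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\bigl(1 + \beta\,(\Delta p)^{2}\bigr),
```

which implies a minimal measurable length of order ħ√β.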
On two mathematical problems of canonical quantization. IV
NASA Astrophysics Data System (ADS)
Kirillov, A. I.
1992-11-01
A method is presented for reconstructing a measure from its logarithmic derivative. The method completes that of solving the stochastic differential equation via Dirichlet forms proposed by S. Albeverio and M. Röckner. The result is a mathematical apparatus for stochastic quantization. The apparatus is applied to prove the existence of the Feynman-Kac measure of the sine-Gordon and λφ^{2n}/(1 + K^2 φ^{2n}) models. A synthesis of both mathematical problems of canonical quantization is obtained in the form of a second-order martingale problem for vacuum noise. It is shown that in stochastic mechanics the martingale problem is an analog of Newton's second law and enables one to find Nelson's stochastic trajectories without determining the wave functions.
BRST quantization of cosmological perturbations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armendariz-Picon, Cristian; Şengör, Gizem
2016-11-08
BRST quantization is an elegant and powerful method to quantize theories with local symmetries. In this article we study the Hamiltonian BRST quantization of cosmological perturbations in a universe dominated by a scalar field, along with the closely related quantization method of Dirac. We describe how both formalisms apply to perturbations in a time-dependent background, and how expectation values of gauge-invariant operators can be calculated in the in-in formalism. Our analysis focuses mostly on the free theory. By appropriate canonical transformations we simplify and diagonalize the free Hamiltonian. BRST quantization in derivative gauges allows us to dramatically simplify the structure of the propagators, whereas Dirac quantization, which amounts to quantization in synchronous gauge, dispenses with the need to introduce ghosts and preserves the locality of the gauge-fixed action.
Canonical field anticommutators in the extended gauged Rarita-Schwinger theory
NASA Astrophysics Data System (ADS)
Adler, Stephen L.; Henneaux, Marc; Pais, Pablo
2017-10-01
We reexamine canonical quantization of the gauged Rarita-Schwinger theory using the extended theory, incorporating a dimension-1/2 auxiliary spin-1/2 field Λ, in which there is an exact off-shell gauge invariance. In the Λ = 0 gauge, which reduces to the original unextended theory, our results agree with those found by Johnson and Sudarshan, and later verified by Velo and Zwanziger, which give a canonical Rarita-Schwinger field Dirac bracket that is singular for small gauge fields. In the gauge covariant radiation gauge, the Dirac bracket of the Rarita-Schwinger fields is nonsingular, but does not correspond to a positive semidefinite anticommutator, and the Dirac bracket of the auxiliary fields has a singularity of the same form as found in the unextended theory. These results indicate that gauged Rarita-Schwinger theory is somewhat pathological, and cannot be canonically quantized within a conventional positive semidefinite metric Hilbert space. We leave open the questions of whether consistent quantizations can be achieved by using an indefinite metric Hilbert space, by path integral methods, or by appropriate couplings to conventional dimension-3/2 spin-1/2 fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grigoryan, G.V.; Grigoryan, R.P.
1995-09-01
The canonical quantization of a (D=2n)-dimensional Dirac particle with spin in an arbitrary external electromagnetic field is performed in a gauge that makes it possible to describe simultaneously particles and antiparticles (both massive and massless) already at the classical level. A pseudoclassical Foldy-Wouthuysen transformation is used to find the canonical (Newton-Wigner) coordinates. The connection between this quantization scheme and Blount's picture describing the behavior of a Dirac particle in an external electromagnetic field is discussed.
Quantization of Non-Lagrangian Systems
NASA Astrophysics Data System (ADS)
Kochan, Denis
A novel method for the quantization of non-Lagrangian (open) systems is proposed. It is argued that the essential object, which provides both classical and quantum evolution, is a certain canonical two-form defined in extended velocity space. In this setting classical dynamics is recovered from a stringy-type variational principle, which employs umbilical surfaces instead of histories of the system. Quantization is then accomplished in accordance with the introduced variational principle. The path integral for the transition probability amplitude (propagator) is rearranged into a surface functional integral. In the standard case of closed (Lagrangian) systems the presented method reduces to the standard Feynman approach. The inverse problem of the calculus of variations, the problem of quantization ambiguity, and quantum mechanics in the presence of friction are analyzed in detail.
The coordinate coherent states approach revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miao, Yan-Gang, E-mail: miaoyg@nankai.edu.cn; Zhang, Shao-Jun, E-mail: sjzhang@mail.nankai.edu.cn
2013-02-15
We revisit the coordinate coherent states approach through two different quantization procedures in quantum field theory on the noncommutative Minkowski plane. The first procedure, based on the normal commutation relation between annihilation and creation operators, deduces that a point mass can be described by a Gaussian function instead of the usual Dirac delta function. However, we question this specific quantization by adopting the canonical one (based on the canonical commutation relation between a field and its conjugate momentum) and show that a point mass should still be described by the Dirac delta function, which implies that the concept of point particles remains valid when we deal with noncommutativity via the coordinate coherent states approach. In order to investigate the dependence on quantization procedures, we apply the two procedures to the Unruh effect and Hawking radiation and find that they give rise to significantly different results. Under the first quantization procedure, the Unruh temperature and Unruh spectrum are not deformed by noncommutativity, but the Hawking temperature is deformed while the radiation spectrum is intact. Under the second quantization procedure, by contrast, the Unruh and Hawking temperatures are intact but both spectra are modified by an effective greybody (deformed) factor.
Highlights:
- Suggest a canonical quantization in the coordinate coherent states approach.
- Prove the validity of the concept of point particles.
- Apply the canonical quantization to the Unruh effect and Hawking radiation.
- Find no deformations in the Unruh temperature and Hawking temperature.
- Provide the modified spectra of the Unruh effect and Hawking radiation.
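For reference, the undeformed (commutative-space) temperatures against which the noncommutative corrections are measured are the standard Unruh and Hawking values:

```latex
T_U = \frac{\hbar\, a}{2\pi c\, k_B}, \qquad
T_H = \frac{\hbar\, c^{3}}{8\pi G M\, k_B},
```

for an observer with proper acceleration a and a Schwarzschild black hole of mass M, respectively.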
On a canonical quantization of 3D Anti de Sitter pure gravity
NASA Astrophysics Data System (ADS)
Kim, Jihun; Porrati, Massimo
2015-10-01
We perform a canonical quantization of pure gravity on AdS3 using as a technical tool its equivalence at the classical level with a Chern-Simons theory with gauge group SL(2,R) × SL(2,R). We first quantize the theory canonically on an asymptotically AdS space, which is topologically the real line times a Riemann surface with one connected boundary. Using the "constrain first" approach we reduce canonical quantization to quantization of orbits of the Virasoro group and Kähler quantization of Teichmüller space. After explicitly computing the Kähler form for the torus with one boundary component and extending that result to higher genus, we recover known results, such as that wave functions of SL(2,R) Chern-Simons theory are conformal blocks. We find new restrictions on the Hilbert space of pure gravity by imposing invariance under large diffeomorphisms and normalizability of the wave function. The Hilbert space of pure gravity is shown to be the target space of conformal field theories with continuous spectrum and a lower bound on operator dimensions. A projection defined by topology-changing amplitudes in Euclidean gravity is proposed. It defines an invariant subspace that allows for a dual interpretation in terms of a Liouville CFT. Problems and features of the CFT dual are assessed, and a new definition of the Hilbert space, exempt from those problems, is proposed in the case of highly-curved AdS3.
NASA Astrophysics Data System (ADS)
DeWitt, Bryce S.
2017-06-01
During the period June-July 1957 six physicists met at the Institute for Theoretical Physics of the University of Copenhagen in Denmark to work together on problems connected with the quantization of the gravitational field. A large part of the discussion was devoted to exposition of the individual work of the various participants, but a number of new results were also obtained. The topics investigated by these physicists are outlined in this report and may be grouped under the following main headings: The theory of measurement. Topographical problems in general relativity. Feynman quantization. Canonical quantization. Approximation methods. Special problems.
Paul Weiss and the genesis of canonical quantization
NASA Astrophysics Data System (ADS)
Rickles, Dean; Blum, Alexander
2015-12-01
This paper describes the life and work of a figure who, we argue, was of primary importance during the early years of field quantization and (albeit more indirectly) quantum gravity. A student of Dirac and Born, Weiss was interned in Canada during the Second World War as an enemy alien, and after his release never seemed to regain a good foothold in physics, identifying thereafter as a mathematician. He developed a general method of quantizing (linear and non-linear) field theories based on the parameters labelling an arbitrary hypersurface. This method (the 'parameter formalism', often attributed to Dirac), though later discarded, was employed, and viewed at the time as an extremely important tool, by the leading figures associated with canonical quantum gravity: Dirac, Pirani and Schild, Bergmann, DeWitt, and others. We argue that he deserves wider recognition for this and other innovations.
Covariant scalar representation of iosp(d,2/2) and quantization of the scalar relativistic particle
NASA Astrophysics Data System (ADS)
Jarvis, P. D.; Tsohantjis, I.
1996-03-01
A covariant scalar representation of iosp(d,2/2) is constructed and analysed in comparison with existing BFV-BRST methods for the quantization of the scalar relativistic particle. It is found that, with appropriately defined wavefunctions, this induced iosp(d,2/2) representation can be identified with the state space arising from the canonical BFV-BRST quantization of the modular-invariant, unoriented scalar particle (or antiparticle) with admissible gauge-fixing conditions. For this model, the cohomological determination of physical states can thus be obtained purely from the representation theory of the iosp(d,2/2) algebra.
Coherent state quantization of quaternions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muraleetharan, B., E-mail: bbmuraleetharan@jfn.ac.lk; Thirulogasanthar, K., E-mail: santhar@gmail.com
Parallel to the quantization of the complex plane, the quaternion field of quaternionic quantum mechanics is quantized using the canonical coherent states of a right quaternionic Hilbert space. Associated upper symbols, lower symbols, and related quantities are analyzed. Quaternionic versions of the harmonic oscillator and the Weyl-Heisenberg algebra are also obtained.
Thermal distributions of first, second and third quantization
NASA Astrophysics Data System (ADS)
McGuigan, Michael
1989-05-01
We treat first-quantized string theory as two-dimensional gravity plus matter. This allows us to compute the two-dimensional density of one-string states by the method of Darwin and Fowler. One can then use second-quantized methods to form a grand microcanonical ensemble in which one can compute the density of multistring states of arbitrary momentum and mass. It is argued that modelling an elementary particle as a (d-1)-dimensional object whose internal degrees of freedom are described by a massless d-dimensional gas yields a density of internal states given by σ_d(m) ∼ m^{-a} exp((bm)^{2(d-1)/d}). This indicates that these objects cannot be in thermal equilibrium at any temperature unless d ⩽ 2, that is, for a string or a particle. Finally, we discuss the application of the above ideas to four-dimensional gravity and introduce an ensemble of multiuniverse states parameterized by second-quantized canonical momenta and particle number.
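The equilibrium claim can be checked directly: the canonical partition function built from this density of internal states converges only when the growth exponent does not exceed one.

```latex
Z(T) \;\sim\; \int^{\infty}\! dm\;\, m^{-a}\,
\exp\!\Big[(bm)^{\frac{2(d-1)}{d}} \;-\; \frac{m}{T}\Big]
```

For d > 2 the exponent 2(d-1)/d exceeds 1 and the integral diverges at every temperature; for d = 2 (a string) it equals 1, giving a Hagedorn temperature 1/b below which Z converges; for d = 1 (a particle) the growth is subexponential and Z converges at all temperatures.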
NASA Astrophysics Data System (ADS)
Melas, Evangelos
2011-07-01
The 3+1 (canonical) decomposition of all geometries admitting two-dimensional space-like surfaces is exhibited as a generalization of a previous work. A proposal, consisting of a specific re-normalization Assumption and an accompanying Requirement, which had been put forward in the 2+1 case, is now generalized to 3+1 dimensions. This enables the canonical quantization of these geometries through a generalization of Kuchař's quantization scheme in the case of infinitely many degrees of freedom. The resulting Wheeler-DeWitt equation is based on a re-normalized manifold parameterized by three smooth scalar functionals. The entire space of solutions to this equation is given analytically, a result new to the present case. This is made possible by exploiting the freedom left by the imposition of the Requirement and contained in the third functional.
On the quantization of the massless Bateman system
NASA Astrophysics Data System (ADS)
Takahashi, K.
2018-03-01
The so-called Bateman system for the damped harmonic oscillator is reduced to a genuine dual dissipation system (DDS) by setting the mass to zero. We explore herein the conditions under which the canonical quantization of the DDS can be consistently performed. The roles of the observable and auxiliary coordinates are discriminated. The results show that a complete and orthogonal Fock space of states can be constructed on the stable vacuum if an anti-Hermitian representation of the canonical Hamiltonian is adopted. The amplitude of the one-particle wavefunction is consistent with the classical solution. The fields can be quantized as bosonic or fermionic. For bosonic systems, the quantum fluctuation of the field is directly associated with the dissipation rate.
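For orientation, the Bateman dual Lagrangian commonly quoted in this context (supplied here for illustration, since the abstract does not display it) is

```latex
L = m\,\dot{x}\dot{y} \;+\; \frac{\gamma}{2}\bigl(x\dot{y} - \dot{x}y\bigr) \;-\; k\,xy ,
```

whose Euler-Lagrange equations are the damped oscillator m ẍ + γẋ + kx = 0 and its time-reversed dual m ÿ - γẏ + ky = 0; the massless limit m → 0 is the dual dissipation system discussed above.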
NASA Astrophysics Data System (ADS)
Husain, Viqar
2012-03-01
Research on quantum gravity from a non-perturbative 'quantization of geometry' perspective has been the focus of much activity in the past two decades, owing to the Ashtekar-Barbero Hamiltonian formulation of general relativity. This approach provides an SU(2) gauge field as the canonical configuration variable; the analogy with Yang-Mills theory at the kinematical level opened up some research space to reformulate the old Wheeler-DeWitt program into what is now known as loop quantum gravity (LQG). The author is known for his work in the LQG approach to cosmology, which was the first application of this formalism that provided the possibility of exploring physical questions. Therefore the flavour of the book is naturally informed by this history. The book is based on a set of graduate-level lectures designed to impart a working knowledge of the canonical approach to gravitation. It is more of a textbook than a treatise, unlike three other recent books in this area by Kiefer [1], Rovelli [2] and Thiemann [3]. The style and choice of topics of these authors are quite different: Kiefer's book provides a broad overview of the path integral and canonical quantization methods from a historical perspective, whereas Rovelli's book focuses on philosophical and formalistic aspects of the problems of time and observables, and gives a development of spin-foam ideas. Thiemann's is much more a mathematical physics book, focusing entirely on the theory of representing constraint operators on a Hilbert space and charting a mathematical trajectory toward a physical Hilbert space for quantum gravity. The significant difference from these books is that Bojowald covers mainly classical topics until the very last chapter, which contains the only discussion of quantization.
In its coverage of classical gravity, the book has some content overlap with Poisson's book [4], and with Ryan and Shepley's older work on relativistic cosmology [5]; for instance the contents of chapter five of the book are also covered in detail, and with more worked examples, in the former book, and the entire focus of the latter is Bianchi models. After a brief introduction outlining the aim of the book, the second chapter provides the canonical theory of homogeneous isotropic cosmology with scalar matter; this covers the basics and linear perturbation theory, and is meant as a first taste of what is to come. The next chapter is a thorough introduction of the canonical formulation of general relativity in both the ADM and Ashtekar-Barbero variables. This chapter contains details useful for graduate students which are either scattered or missing in the literature. Applications of the canonical formalism are in the following chapter. These cover standard material and techniques for obtaining mini(midi)-superspace models, including the Bianchi and Gowdy cosmologies, and spherically symmetric reductions. There is also a brief discussion of the two-dimensional dilaton gravity. The spherically symmetric reduction is presented in detail also in the connection-triad variables. The chapter on global and asymptotic properties gives introductions to geodesic and null congruences, trapped surfaces, a survey of singularity theorems, horizons and asymptotic properties. The chapter ends with a discussion of junction conditions and the Vaidya solution. As already mentioned, this material is covered in detail in Poisson's book. The final chapter on quantization describes and contrasts the Dirac and reduced phase space methods. It also gives an introduction to background independent quantization using the holonomy-flux operators, which forms the basis of the LQG program. 
The application of this method to cosmology and its effect on the Friedmann equation is covered next, followed by a brief introduction to the effective constraint method, which is another area developed by the author. I think this book is a useful addition to the literature for graduate students, and potentially also for researchers in other areas who wish to learn about the canonical approach to gravity. However, given the brief chapter on quantization, the book would go well with a review paper, or parts of the other three quantum gravity books cited above. References [1] Kiefer C 2006 Quantum Gravity 2nd ed. (Oxford University Press) [2] Rovelli C 2007 Quantum Gravity (Cambridge University Press) [3] Thiemann T 2008 Modern Canonical Quantum Gravity (Cambridge University Press) [4] Poisson E 2004 A Relativist's Toolkit: The Mathematics of Black-Hole Mechanics (Cambridge University Press) [5] Ryan M P and Shepley L C 1975 Homogeneous Relativistic Cosmology (Princeton University Press)
Consistency of certain constitutive relations with quantum electromagnetism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horsley, S. A. R.
2011-12-15
Recent work by Philbin [New J. Phys. 12, 123008 (2010)] has provided a Lagrangian theory that establishes a general method for the canonical quantization of the electromagnetic field in any dispersive, lossy, linear dielectric. Working from this theory, we extend the Lagrangian description to reciprocal and nonreciprocal magnetoelectric (bianisotropic) media, showing that some versions of the constitutive relations are inconsistent with a real Lagrangian, and hence with quantization. This amounts to a restriction on the magnitude of the magnetoelectric coupling. Moreover, from the point of view of quantization, moving media are shown to be fundamentally different from stationary magnetoelectrics, despite the formal similarity in the constitutive relations.
Quantization of simple parametrized systems
NASA Astrophysics Data System (ADS)
Ruffini, G.
2005-11-01
I study the canonical formulation and quantization of some simple parametrized systems, including the non-relativistic parametrized particle and the relativistic parametrized particle. Using Dirac's formalism I construct for each case the classical reduced phase space and study the dependence on the gauge fixing used. Two separate features of these systems can make this construction difficult: the actions are not invariant at the boundaries, and the constraints may have disconnected solution spaces. The relativistic particle is affected by both, while the non-relativistic particle displays only the first. Analyzing the role of canonical transformations in the reduced phase space, I show that a change of gauge fixing is equivalent to a canonical transformation. In the relativistic case, quantization of one branch of the constraint at a time is applied, and I analyze the electromagnetic backgrounds in which it is possible to quantize both branches simultaneously and still obtain a covariant unitary quantum theory. To preserve unitarity and space-time covariance, second quantization is needed unless there is no electric field. I motivate a definition of the inner product in all these cases and derive the Klein-Gordon inner product for the relativistic case. I construct phase space path integral representations for amplitudes for the BFV and the Faddeev path integrals, from which the path integrals in coordinate space (Faddeev-Popov and geometric path integrals) are derived.
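The Klein-Gordon inner product referred to here has the standard free-field form on a constant-time slice (backgrounds with gauge fields modify the time derivative covariantly):

```latex
(\phi, \psi) \;=\; i \int d^{3}x \,\Bigl( \phi^{*}\,\partial_{t}\psi \;-\; (\partial_{t}\phi^{*})\,\psi \Bigr)
```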
Unique Fock quantization of scalar cosmological perturbations
NASA Astrophysics Data System (ADS)
Fernández-Méndez, Mikel; Mena Marugán, Guillermo A.; Olmedo, Javier; Velhinho, José M.
2012-05-01
We investigate the ambiguities in the Fock quantization of the scalar perturbations of a Friedmann-Lemaître-Robertson-Walker model with a massive scalar field as matter content. We consider the case of compact spatial sections (thus avoiding infrared divergences), with the topology of a three-sphere. After expanding the perturbations in series of eigenfunctions of the Laplace-Beltrami operator, the Hamiltonian of the system is written up to quadratic order in them. We fix the gauge of the local degrees of freedom in two different ways, reaching in both cases the same qualitative results. A canonical transformation, which includes the scaling of the matter-field perturbations by the scale factor of the geometry, is performed in order to arrive at a convenient formulation of the system. We then study the quantization of these perturbations in the classical background determined by the homogeneous variables. Based on previous work, we introduce a Fock representation for the perturbations in which: (a) the complex structure is invariant under the isometries of the spatial sections and (b) the field dynamics is implemented as a unitary operator. These two properties select not only a unique unitary equivalence class of representations, but also a preferred field description, picking up a canonical pair of field variables among all those that can be obtained by means of a time-dependent scaling of the matter field (completed into a linear canonical transformation). Finally, we present an equivalent quantization constructed in terms of gauge-invariant quantities. We prove that this quantization can be attained by a mode-by-mode time-dependent linear canonical transformation which admits a unitary implementation, so that it is also uniquely determined.
Berezin-Toeplitz quantization and naturally defined star products for Kähler manifolds
NASA Astrophysics Data System (ADS)
Schlichenmaier, Martin
2018-04-01
For compact quantizable Kähler manifolds the Berezin-Toeplitz quantization schemes, both operator and deformation quantization (star product), are reviewed. The treatment includes Berezin's covariant symbols and the Berezin transform. The general compact quantizable case was settled by Bordemann-Meinrenken-Schlichenmaier, Schlichenmaier, and Karabegov-Schlichenmaier. For star products on Kähler manifolds, separation of variables, or equivalently star products of (anti-)Wick type, is a crucial property. As canonically defined star products, the Berezin-Toeplitz, Berezin, and geometric quantization star products are treated. It turns out that all three are equivalent as star products, though mutually distinct.
An Algebraic Approach to the Quantization of Constrained Systems: Finite Dimensional Examples.
NASA Astrophysics Data System (ADS)
Tate, Ranjeet Shekhar
1992-01-01
General relativity has two features in particular which make it difficult to apply existing schemes for the quantization of constrained systems. First, there is no background structure in the theory which could be used, e.g., to regularize constraint operators, to identify a "time", or to define an inner product on physical states. Second, in the Ashtekar formulation of general relativity, which is a promising avenue to quantum gravity, the natural variables for quantization are not canonical, and classically there are algebraic identities among them. Existing schemes are usually not concerned with such identities. Thus, from the point of view of canonical quantum gravity, it has become imperative to find a framework for quantization which provides a general prescription for finding the physical inner product, and is flexible enough to accommodate non-canonical variables. In this dissertation I present an algebraic formulation of the Dirac approach to the quantization of constrained systems. The Dirac quantization program is augmented by a general principle for finding the inner product on physical states: essentially, the Hermiticity conditions on physical operators determine this inner product. I also clarify the role in quantum theory of possible algebraic identities between the elementary variables. I use this approach to quantize various finite-dimensional systems. Some of these models test the new aspects of the algebraic framework. Others bear qualitative similarities to general relativity, and may give some insight into the pitfalls lurking in quantum gravity. The previous quantizations of one such model had many surprising features; when this model is quantized using the algebraic program, there is no longer any unexpected behaviour. I also construct the complete quantum theory for a previously unsolved relativistic cosmology. All these models indicate that the algebraic formulation provides powerful new tools for quantization.
In (spatially compact) general relativity, the Hamiltonian is constrained to vanish. I present various approaches one can take to obtain an interpretation of the quantum theory of such "dynamically constrained" systems. I apply some of these ideas to the Bianchi I cosmology, and analyze the issue of the initial singularity in quantum theory.
Fractional-calculus diffusion equation
2010-01-01
Background: This work is a sequel to the quantization of nonconservative systems using fractional calculus and the quantization of a system with Brownian motion, which aims to incorporate dissipation effects into the quantum-mechanical description of microscale systems. Results: The canonical quantization of a system represented classically by one-dimensional Fick's law and the diffusion equation is carried out according to the Dirac method. A suitable Lagrangian and Hamiltonian describing the diffusive system are constructed, and the Hamiltonian is transformed into a Schrödinger equation, which is solved. As an application, the developed mathematical method is used to analyze diffusion and osmosis, a biological instance of the diffusion process. Conclusions: The plot of the probability function clearly exhibits the dissipative and drift forces, and hence the osmosis, in full agreement with the macroscale, classical-version description of osmosis. PMID:20492677
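Classically, the diffusive system being quantized is just Fick's second law, ∂u/∂t = D ∂²u/∂x². A minimal explicit finite-difference sketch of that classical equation (an illustration only; the grid and diffusivity values are arbitrary choices, not from the paper):

```python
# Explicit finite-difference solver for the 1D diffusion equation
# du/dt = D d^2u/dx^2 (Fick's second law), with periodic boundaries.
# The scheme is stable only for r = D*dt/dx^2 <= 1/2.

def diffuse(u, diffusivity, dx, dt, steps):
    r = diffusivity * dt / dx**2
    assert r <= 0.5, "explicit scheme is unstable for D*dt/dx^2 > 1/2"
    u = list(u)
    n = len(u)
    for _ in range(steps):
        u = [u[i] + r * (u[(i + 1) % n] - 2 * u[i] + u[(i - 1) % n])
             for i in range(n)]
    return u

# A narrow initial pulse spreads out while total mass is conserved,
# the classical signature of the dissipative dynamics discussed above.
n, dx, dt = 100, 0.1, 0.002
u0 = [0.0] * n
u0[n // 2] = 1.0 / dx          # unit-mass spike
u1 = diffuse(u0, diffusivity=1.0, dx=dx, dt=dt, steps=500)
```

With periodic boundaries the update conserves the discrete integral of u exactly, while the peak height decays as the pulse spreads.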
A Heisenberg Algebra Bundle of a Vector Field in Three-Space and its Weyl Quantization
NASA Astrophysics Data System (ADS)
Binz, Ernst; Pods, Sonja
2006-01-01
In these notes we associate a natural Heisenberg group bundle H_a with a singularity-free smooth vector field X = (id, a) on a submanifold M of Euclidean three-space. This bundle naturally yields an infinite-dimensional Heisenberg group H_X^∞. A representation of the C*-group algebra of H_X^∞ is a quantization. It induces a natural Weyl deformation quantization of X. The influence of the topological structure of M on this quantization is encoded in the Chern class of a canonical complex line bundle inside H_a.
Quantization of Simple Parametrized Systems
NASA Astrophysics Data System (ADS)
Ruffini, Giulio
1995-01-01
I study the canonical formulation and quantization of some simple parametrized systems using Dirac's formalism and the Becchi-Rouet-Stora-Tyutin (BRST) extended phase space method. These systems include the parametrized particle and minisuperspace. Using Dirac's formalism, I first analyze for each case the construction of the classical reduced phase space. Two separate features of these systems may make this construction difficult: (a) because of the boundary conditions used, the actions are not gauge invariant at the boundaries; (b) the constraints may have a disconnected solution space. The relativistic particle and minisuperspace exhibit both features, while the non-relativistic particle displays only the first. I first show that a change of gauge fixing is equivalent to a canonical transformation in the reduced phase space, thus resolving the problems associated with feature (a). Then I consider the quantization of these systems using several approaches: Dirac's method, Dirac-Fock quantization, and the BRST formalism. For the relativistic particle and minisuperspace I first consider the quantization of one branch of the constraint at a time, and then discuss the backgrounds in which it is possible to quantize both branches simultaneously. I motivate and define the inner product, obtaining, for example, the Klein-Gordon inner product in the relativistic case. I then show how to construct phase-space path integral representations for amplitudes in these approaches (the Batalin-Fradkin-Vilkovisky (BFV) and Faddeev path integrals), from which one can derive the path integrals in coordinate space (the Faddeev-Popov and geometric path integrals). In particular, I establish the connection between the Hilbert space representation and the range of the lapse in the path integrals.
I also examine the class of paths that contribute to the path integrals and how they affect space-time covariance, concluding that it is consistent to take paths that move forward in time only when there is no electric field. The key elements in this analysis are the space-like paths and the behavior of the action under the non-trivial (Z_2) element of the reparametrization group.
Canonical quantization of general relativity in discrete space-times.
Gambini, Rodolfo; Pullin, Jorge
2003-01-17
It has long been recognized that lattice gauge theory formulations, when applied to general relativity, conflict with the invariance of the theory under diffeomorphisms. We analyze discrete lattice general relativity and develop a canonical formalism that allows one to treat constrained theories in Lorentzian signature space-times. The presence of the lattice introduces a "dynamical gauge" fixing that makes the quantization of the theories conceptually clear, albeit computationally involved. The problem of a consistent algebra of constraints is automatically solved in our approach. The approach works successfully in other field theories as well, including topological theories. A simple cosmological application exhibits quantum elimination of the singularity at the big bang.
FAST TRACK COMMUNICATION: Quantization over boson operator spaces
NASA Astrophysics Data System (ADS)
Prosen, Tomaž; Seligman, Thomas H.
2010-10-01
The framework of third quantization—canonical quantization in the Liouville space—is developed for open many-body bosonic systems. We show how to diagonalize the quantum Liouvillean for an arbitrary quadratic n-boson Hamiltonian with arbitrary linear Lindblad couplings to the baths and, as an example, explicitly work out a general case of a single boson.
A heat kernel proof of the index theorem for deformation quantization
NASA Astrophysics Data System (ADS)
Karabegov, Alexander
2017-11-01
We give a heat kernel proof of the algebraic index theorem for deformation quantization with separation of variables on a pseudo-Kähler manifold. We use normalizations of the canonical trace density of a star product and of the characteristic classes involved in the index formula for which this formula contains no extra constant factors.
Polymer-Fourier quantization of the scalar field revisited
NASA Astrophysics Data System (ADS)
Garcia-Chung, Angel; Vergara, J. David
2016-10-01
The polymer quantization of the Fourier modes of the real scalar field is studied within the algebraic scheme. We replace the positive linear functional of the standard Poincaré-invariant quantization by a singular one, constructed to mimic the singular limit of the complex structure of the Poincaré-invariant Fock quantization. The resulting symmetry group of this polymer quantization is SDiff(ℝ⁴), the subgroup of Diff(ℝ⁴) formed by spatial volume-preserving diffeomorphisms. Consequently, this yields an entirely different irreducible representation of the canonical commutation relations, not unitarily equivalent to the standard Fock representation. We also compare the Poincaré-invariant Fock vacuum with the polymer Fourier vacuum.
NASA Astrophysics Data System (ADS)
Nielsen, N. K.; Quaade, U. J.
1995-07-01
The physical phase space of the relativistic top, as defined by Hansson and Regge, is expressed in terms of canonical coordinates of the Poincaré group manifold. The system is described in the Hamiltonian formalism by the mass-shell condition and constraints that reduce the number of spin degrees of freedom. The constraints are second class and are modified into a set of first class constraints by adding combinations of gauge-fixing functions. The Batalin-Fradkin-Vilkovisky method is then applied to quantize the system in the path integral formalism in Hamiltonian form. It is finally shown that different gauge choices produce different equivalent forms of the constraints.
The canonical quantization of chaotic maps on the torus
NASA Astrophysics Data System (ADS)
Rubin, Ron Shai
In this thesis, a quantization method for classical maps on the torus is presented. The quantum algebra of observables is defined as the quantization of measurable functions on the torus with generators exp(2πix) and exp(2πip). The Hilbert space used remains the infinite-dimensional L²(ℝ, dx). The dynamics is given by a unitary quantum propagator such that as ℏ → 0, the classical dynamics is recovered. We construct such a quantization for the Kronecker map, the cat map, the baker's map, the kick map, and the Harper map. For the cat map, we find for the propagator on the plane the same integral kernel conjectured in (HB) using semiclassical methods. We also define a quantum 'integral over phase space' as a trace over the quantum algebra. Using this definition, we proceed to define quantum ergodicity and mixing for maps on the torus. We prove that the quantum cat map and Kronecker map are both ergodic, but only the cat map is mixing, true to its classical origins. For Planck's constant satisfying the integrality condition h = 1/N, with N ∈ ℤ⁺, we construct an explicit isomorphism between L²(ℝ, dx) and the Hilbert space of sections of an N-dimensional vector bundle over a θ-torus T² of boundary conditions. The basis functions are distributions in L²(ℝ, dx), given by an infinite comb of Dirac δ-functions. In Bargmann space these distributions take the form of Jacobi ϑ-functions. Transformations from the position to the momentum representation can be implemented via a finite N-dimensional discrete Fourier transform. With the θ-torus, we provide a connection between the finite-dimensional quantum maps given in the physics literature and the canonical quantization presented here, also found in the language of pseudo-differential operators elsewhere in the mathematics literature. Specifically, at a fixed point of the dynamics on the θ-torus, we recover a finite-dimensional matrix propagator.
We present this connection explicitly for several examples.
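The finite discrete Fourier transform connecting position and momentum representations at h = 1/N is easy to exhibit numerically. A minimal sketch (the normalization and sign conventions below are illustrative choices, not necessarily those of the thesis):

```python
import numpy as np

N = 8                                   # integrality condition h = 1/N
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(-2j * np.pi * j * k / N) / np.sqrt(N)   # N-dimensional DFT matrix

psi_x = np.zeros(N, dtype=complex)
psi_x[3] = 1.0                          # a position eigenstate on the lattice
psi_p = F @ psi_x                       # its momentum representation
```

The matrix F is unitary, and a position eigenstate spreads uniformly over all N momentum values, as expected for conjugate variables.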
Equivalence of Einstein and Jordan frames in quantized anisotropic cosmological models
NASA Astrophysics Data System (ADS)
Pandey, Sachin; Pal, Sridip; Banerjee, Narayan
2018-06-01
The present work shows that the mathematical equivalence of the Jordan frame and its conformally transformed version, the Einstein frame, as far as Brans-Dicke theory is concerned, survives quantization of cosmological models arising as solutions of the Brans-Dicke theory. We work with the Wheeler-DeWitt quantization scheme and take up several anisotropic cosmological models as examples. We show explicitly that the transformation from the Jordan to the Einstein frame is a canonical one, and hence the two frames furnish equivalent descriptions of the same physical scenario.
Gauge fixing and BFV quantization
NASA Astrophysics Data System (ADS)
Rogers, Alice
2000-01-01
Non-singularity conditions are established for the Batalin-Fradkin-Vilkovisky (BFV) gauge-fixing fermion which are sufficient for it to lead to the correct path integral for a theory with constraints canonically quantized in the BFV approach. The conditions ensure that the anticommutator of this fermion with the BRST charge regularizes the path integral by regularizing the trace over non-physical states in each ghost sector. The results are applied to the quantization of a system which has a Gribov problem, using a non-standard form of the gauge-fixing fermion.
NASA Astrophysics Data System (ADS)
Jarvis, P. D.; Corney, S. P.; Tsohantjis, I.
1999-12-01
A covariant spinor representation of iosp(d,2/2) is constructed for the quantization of the spinning relativistic particle. It is found that, with appropriately defined wavefunctions, this representation can be identified with the state space arising from the canonical extended BFV-BRST quantization of the spinning particle with admissible gauge fixing conditions after a contraction procedure. For this model, the cohomological determination of physical states can thus be obtained purely from the representation theory of the iosp(d,2/2) algebra.
Gong, Yunchao; Lazebnik, Svetlana; Gordo, Albert; Perronnin, Florent
2013-12-01
This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
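The alternating minimization at the heart of ITQ can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the authors' released code; the function name `itq` and the input convention (rows of `X` are zero-centered, PCA- or CCA-projected descriptors) are choices made here.

```python
import numpy as np

def itq(X, n_iter=50, seed=0):
    """Iterative Quantization sketch: learn a rotation R so that the
    projected data X @ R lies close to the vertices {-1, +1}^c of a
    zero-centered binary hypercube.  X: (n, c) zero-centered data."""
    c = X.shape[1]
    rng = np.random.default_rng(seed)
    R, _ = np.linalg.qr(rng.standard_normal((c, c)))  # random rotation init
    for _ in range(n_iter):
        B = np.sign(X @ R)                  # fix R, update binary codes
        B[B == 0] = 1.0
        # Fix B, update R: orthogonal Procrustes problem min_R ||B - X R||_F
        U, _, Vt = np.linalg.svd(X.T @ B)
        R = U @ Vt
    codes = np.sign(X @ R)
    codes[codes == 0] = 1.0
    return codes, R

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))
X -= X.mean(axis=0)                         # zero-center, as ITQ assumes
codes, R = itq(X)
```

Codes for a new query are obtained by applying the same rotation R and taking signs, after which retrieval reduces to Hamming-distance comparison.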
Quantized Algebras of Functions on Homogeneous Spaces with Poisson Stabilizers
NASA Astrophysics Data System (ADS)
Neshveyev, Sergey; Tuset, Lars
2012-05-01
Let G be a simply connected semisimple compact Lie group with standard Poisson structure, K a closed Poisson-Lie subgroup, 0 < q < 1. We study a quantization C(G_q/K_q) of the algebra of continuous functions on G/K. Using results of Soibelman and Dijkhuizen-Stokman we classify the irreducible representations of C(G_q/K_q) and obtain a composition series for C(G_q/K_q). We describe closures of the symplectic leaves of G/K, refining the well-known description in the case of flag manifolds in terms of the Bruhat order. We then show that the same rules describe the topology on the spectrum of C(G_q/K_q). Next we show that the family of C*-algebras C(G_q/K_q), 0 < q ≤ 1, has a canonical structure of a continuous field of C*-algebras and provides a strict deformation quantization of the Poisson algebra C[G/K]. Finally, extending a result of Nagy, we show that C(G_q/K_q) is canonically KK-equivalent to C(G/K).
New vertices and canonical quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Sergei
2010-07-15
We present two results on the recently proposed new spin foam models. First, we show how a (slightly modified) restriction on representations in the Engle-Pereira-Rovelli-Livine model leads to the appearance of the Ashtekar-Barbero connection, thus bringing this model even closer to loop quantum gravity. Second, we however argue that the quantization procedure used to derive the new models is inconsistent since it relies on the symplectic structure of the unconstrained BF theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Błaszak, Maciej, E-mail: blaszakm@amu.edu.pl; Domański, Ziemowit, E-mail: ziemowit@amu.edu.pl
This paper presents an invariant quantization procedure for classical mechanics on the phase space over a flat configuration space. The passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is then derived. An explicit form of the position and momentum operators, as well as their appropriate ordering in arbitrary curvilinear coordinates, is demonstrated. Finally, the extension of the presented formalism to the non-flat case and the related ambiguities of the quantization process are discussed. Highlights: •An invariant quantization procedure of classical mechanics on the phase space over flat configuration space is presented. •The passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is derived. •An explicit form of the position and momentum operators and their appropriate ordering in curvilinear coordinates is shown. •The invariant form of Hamiltonian operators quadratic and cubic in momenta is derived. •The extension of the presented formalism to the non-flat case and related ambiguities of the quantization process are discussed.
Maslov indices, Poisson brackets, and singular differential forms
NASA Astrophysics Data System (ADS)
Esterlis, I.; Haggard, H. M.; Hedeman, A.; Littlejohn, R. G.
2014-06-01
Maslov indices are integers that appear in semiclassical wave functions and quantization conditions. They are often notoriously difficult to compute. We present methods of computing the Maslov index that rely only on typically elementary Poisson brackets and simple linear algebra. We also present a singular differential form, whose integral along a curve gives the Maslov index of that curve. The form is closed but not exact, and transforms by an exact differential under canonical transformations. We illustrate the method with the 6j-symbol, which is important in angular-momentum theory and in quantum gravity.
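For orientation, the role of the Maslov index in the simplest semiclassical quantization condition, the Bohr-Sommerfeld rule ∮p dq = 2πℏ(n + μ/4), can be checked numerically for the 1D harmonic oscillator, where μ = 2 (one contribution per turning point) reproduces E_n = ℏω(n + 1/2). This toy computation is an illustration of what the index does, not the authors' Poisson-bracket method:

```python
import numpy as np

hbar, m, w = 1.0, 1.0, 1.0
mu = 2  # Maslov index: one contribution from each of the two turning points

def loop_action(E, npts=200001):
    """Numerically evaluate the loop integral of p dq at energy E."""
    a = np.sqrt(2 * E / (m * w**2))                 # classical turning point
    q = np.linspace(-a, a, npts)
    p = np.sqrt(np.maximum(2 * m * (E - 0.5 * m * w**2 * q**2), 0.0))
    return 2 * np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(q))  # trapezoid rule

def semiclassical_energy(n):
    """Solve loop_action(E) = 2 pi hbar (n + mu/4) for E by bisection."""
    target = 2 * np.pi * hbar * (n + mu / 4)
    lo, hi = 1e-9, 10.0 * (n + 1)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if loop_action(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)
```

For the oscillator the semiclassical condition happens to be exact, so `semiclassical_energy(n)` returns ℏω(n + 1/2); omitting μ would shift every level by half a quantum.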
NASA Astrophysics Data System (ADS)
Batalin, I. A.; Bering, K.; Damgaard, P. H.
1998-03-01
We present a superfield formulation of the quantization program for theories with first-class constraints. An exact operator formulation is given, and we show how to set up a phase-space path integral entirely in terms of superfields. BRST transformations and canonical transformations enter on equal footing, and they allow us to establish a superspace analog of the BFV theorem. We also present a formal derivation of the Lagrangian superfield analogue of the field-antifield formalism by an integration over half of the phase-space variables.
How quantizable matter gravitates: A practitioner's guide
NASA Astrophysics Data System (ADS)
Schuller, Frederic P.; Witte, Christof
2014-05-01
We present the practical step-by-step procedure for constructing canonical gravitational dynamics and kinematics directly from any previously specified quantizable classical matter dynamics, and then illustrate the application of this recipe by way of two completely worked case studies. Following the same procedure, any phenomenological proposal for fundamental matter dynamics must be supplemented with a suitable gravity theory providing the coefficients and kinematical interpretation of the matter theory, before any of the two theories can be meaningfully compared to experimental data.
Quantization and Superselection Sectors I:. Transformation Group C*-ALGEBRAS
NASA Astrophysics Data System (ADS)
Landsman, N. P.
Quantization is defined as the act of assigning an appropriate C*-algebra A to a given configuration space Q, along with a prescription mapping self-adjoint elements of A into physically interpretable observables. This procedure is adopted to solve the problem of quantizing a particle moving on a homogeneous locally compact configuration space Q = G/H. Here A is chosen to be the transformation group C*-algebra corresponding to the canonical action of G on Q. The structure of these algebras and their representations are examined in some detail. Inequivalent quantizations are identified with inequivalent irreducible representations of the C*-algebra corresponding to the system, hence with its superselection sectors. Introducing the concept of a pre-Hamiltonian, we construct a large class of G-invariant time-evolutions on these algebras, and find the Hamiltonians implementing these time-evolutions in each irreducible representation of A. “Topological” terms in the Hamiltonian (or the corresponding action) turn out to be representation-dependent, and are automatically induced by the quantization procedure. Known “topological” charge quantization or periodicity conditions are then identically satisfied as a consequence of the representation theory of A.
Topologies on quantum topoi induced by quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakayama, Kunji
2013-07-15
In the present paper, we consider effects of quantization in a topos approach of quantum theory. A quantum system is assumed to be coded in a quantum topos, by which we mean the topos of presheaves on the context category of commutative subalgebras of a von Neumann algebra of bounded operators on a Hilbert space. A classical system is modeled by a Lie algebra of classical observables. It is shown that a quantization map from the classical observables to self-adjoint operators on the Hilbert space naturally induces geometric morphisms from presheaf topoi related to the classical system to the quantum topos. By means of the geometric morphisms, we give Lawvere-Tierney topologies on the quantum topos (and their equivalent Grothendieck topologies on the context category). We show that, among them, there exists a canonical one which we call a quantization topology. We furthermore give an explicit expression of a sheafification functor associated with the quantization topology.
Connecting dissipation and noncommutativity: A Bateman system case study
NASA Astrophysics Data System (ADS)
Pal, Sayan Kumar; Nandi, Partha; Chakraborty, Biswajit
2018-06-01
We present an approach to the problem of quantization of the damped harmonic oscillator. To start with, we adopt the standard method of doubling the degrees of freedom of the system (Bateman form) and then, by introducing some new parameters, we obtain a generalized coupled set of equations from the Bateman form. Using the corresponding time-independent Lagrangian, quantum effects on a pair of Bateman oscillators embedded in an ambient noncommutative space (Moyal plane) are analyzed using both path integral and canonical quantization schemes within the framework of the Hilbert-Schmidt operator formulation. Our method is distinct from those existing in the literature, where the ambient space was taken to be commutative. Our quantization shows that we end up again with a Bateman system, except that the damping factor undergoes renormalization. Strikingly, the corresponding expression shows that the renormalized damping factor can be nonzero even if the "bare" one is zero to begin with. In other words, noncommutativity can act as a source of dissipation. Conversely, the noncommutative parameter θ, now taken to be a free parameter, can be fine-tuned to obtain a vanishing renormalized damping factor. This indicates in some sense a "duality" between dissipation and noncommutativity. Our results match the existing results in the commutative limit.
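Classically, Bateman's doubling pairs the damped oscillator m x'' + γ x' + k x = 0 with a time-reversed partner m y'' − γ y' + k y = 0, so that the combined system admits a time-independent Lagrangian. A minimal numerical sketch of this classical doubling (an illustration with arbitrarily chosen parameters, not the noncommutative construction of the paper):

```python
import numpy as np

m, gamma, k = 1.0, 0.2, 1.0  # illustrative parameter choices

def rhs(s):
    """s = (x, vx, y, vy): damped oscillator x and its amplified mirror y."""
    x, vx, y, vy = s
    return np.array([vx, -(gamma * vx + k * x) / m,
                     vy, (gamma * vy - k * y) / m])

def rk4(s, dt, steps):
    """Classical fourth-order Runge-Kutta integration of the pair."""
    traj = [s.copy()]
    for _ in range(steps):
        k1 = rhs(s); k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2); k4 = rhs(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(s.copy())
    return np.array(traj)

traj = rk4(np.array([1.0, 0.0, 1.0, 0.0]), 0.01, 4000)  # integrate to t = 40
```

The x amplitude decays like exp(−γt/2m) while y grows at the same rate: the energy lost by the damped oscillator is absorbed by its mirror, which is what makes a Lagrangian description possible.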
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plyushchay, Mikhail S., E-mail: mikhail.plyushchay@usach.cl
A canonical quantization scheme applied to a classical supersymmetric system with supercharges quadratic in the momenta gives rise to a quantum anomaly problem, described by a specific term quadratic in Planck's constant. We reveal a close relationship between the anomaly and the Schwarzian derivative, and specify a quantization prescription which generates the anomaly-free supersymmetric quantum system with second-order supercharges. We also discuss the phenomenon of coupling-constant metamorphosis, which associates quantum systems with first-order supersymmetry to systems with second-order supercharges.
Quantum mechanics of a constrained particle on an ellipsoid: Bein formalism and Geometric momentum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panahi, H., E-mail: t-panahi@guilan.ac.ir; Jahangiri, L., E-mail: laleh.jahangiry@yahoo.com
2016-09-15
In this work we apply the Dirac method in order to obtain the classical relations for a particle on an ellipsoid. We also determine the quantum mechanical form of these relations by using Dirac quantization. Then, considering the canonical commutation relations between the position and momentum operators in terms of curved coordinates, we propose suitable representations of the momentum operator that satisfy the obtained commutators between position and momentum in Euclidean space. We find that our representations of the momentum operator coincide with the geometric one.
The topological particle and Morse theory
NASA Astrophysics Data System (ADS)
Rogers, Alice
2000-09-01
Canonical BRST quantization of the topological particle defined by a Morse function h is described. Stochastic calculus, using Brownian paths which implement the WKB method in a new way providing rigorous tunnelling results even in curved space, is used to give an explicit and simple expression for the matrix elements of the evolution operator for the BRST Hamiltonian. These matrix elements lead to a representation of the manifold cohomology in terms of critical points of h along lines developed by Witten (Witten E 1982 J. Diff. Geom. 17 661-92).
Baryon number violation and novel canonical anti-commutation relations
NASA Astrophysics Data System (ADS)
Fujikawa, Kazuo; Tureanu, Anca
2018-02-01
The possible neutron-antineutron oscillation is described by an effective quadratic Lagrangian analogous to the BCS theory. It is shown that the conventional equal-time anti-commutation relations of the neutron variable n(t, x⃗) are modified by the baryon number violating terms. This is established by the Bjorken-Johnson-Low prescription and also by canonical quantization combined with the equations of motion. This novel canonical behavior can give rise to an important physical effect, which is illustrated by analyzing a Lagrangian that violates the baryon number but gives rise to degenerate effective Majorana fermions and thus no neutron-antineutron oscillation. Technically, this model is neatly treated using a relativistic analogue of the Bogoliubov transformation.
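The final point, degenerate Majorana fermions implying no oscillation, can be previewed with the standard two-level effective Hamiltonian for neutron-antineutron mixing (a textbook quantum-mechanical picture, not the field-theoretic Bogoliubov treatment of the paper). In the (n, n̄) basis, H = [[m, δ], [δ, m]] has Majorana eigenstates with masses m ∓ δ, and the oscillation probability is sin²(δt); degenerate masses (δ = 0) give no oscillation.

```python
import numpy as np

def nnbar_probability(m, delta, t):
    """P(n -> nbar) at time t (hbar = 1) for H = [[m, delta], [delta, m]]."""
    H = np.array([[m, delta], [delta, m]])
    masses, V = np.linalg.eigh(H)            # Majorana masses m - delta, m + delta
    n = np.array([1.0, 0.0])                 # start as a pure neutron
    amp = V @ (np.exp(-1j * masses * t) * (V.T @ n))
    return float(np.abs(amp[1]) ** 2)        # analytically sin^2(delta * t)
```

Setting `delta = 0` (the degenerate Majorana case) returns zero for all times, matching the statement in the abstract.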
Swings and roundabouts: optical Poincaré spheres for polarization and Gaussian beams
NASA Astrophysics Data System (ADS)
Dennis, M. R.; Alonso, M. A.
2017-02-01
The connection between Poincaré spheres for polarization and Gaussian beams is explored, focusing on the interpretation of elliptic polarization in terms of the isotropic two-dimensional harmonic oscillator in Hamiltonian mechanics, its canonical quantization and semiclassical interpretation. This leads to the interpretation of structured Gaussian modes, the Hermite-Gaussian, Laguerre-Gaussian and generalized Hermite-Laguerre-Gaussian modes as eigenfunctions of operators corresponding to the classical constants of motion of the two-dimensional oscillator, which acquire an extra significance as families of classical ellipses upon semiclassical quantization. This article is part of the themed issue 'Optical orbital angular momentum'.
NASA Astrophysics Data System (ADS)
Seligman, Thomas H.; Prosen, Tomaž
2010-12-01
The basic ideas of second quantization and Fock space are extended to density operator states, used in treatments of open many-body systems. This can be done for fermions and bosons. While the former only requires the use of a non-orthogonal basis, the latter requires the introduction of a dual set of spaces. In both cases an operator algebra closely resembling the canonical one is developed and used to define the dual sets of bases. We here concentrated on the bosonic case where the unboundedness of the operators requires the definitions of dual spaces to support the pair of bases. Some applications, mainly to non-equilibrium steady states, will be mentioned.
The quantization of the chiral Schwinger model based on the BFT-BFV formalism
NASA Astrophysics Data System (ADS)
Kim, Won T.; Kim, Yong-Wan; Park, Mu-In; Park, Young-Jai; Yoon, Sean J.
1997-03-01
We apply the newly improved Batalin-Fradkin-Tyutin (BFT) Hamiltonian method to the chiral Schwinger model in the case of the regularization ambiguity a > 1. We show that one can systematically construct the first class constraints by the BFT Hamiltonian method, and also show that the well-known Dirac brackets of the original phase space variables are exactly the Poisson brackets of the corresponding modified fields in the extended phase space. Furthermore, we show that the first class Hamiltonian is simply obtained by replacing the original fields in the canonical Hamiltonian by these modified fields. Performing the momentum integrations, we obtain the corresponding first class Lagrangian in the configuration space.
Formal Symplectic Groupoid of a Deformation Quantization
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
2005-08-01
We give a self-contained algebraic description of a formal symplectic groupoid over a Poisson manifold M. To each natural star product on M we then associate a canonical formal symplectic groupoid over M. Finally, we construct a unique formal symplectic groupoid ‘with separation of variables’ over an arbitrary Kähler-Poisson manifold.
Quark and gluon production from a boost-invariantly expanding color electric field
NASA Astrophysics Data System (ADS)
Taya, Hidetoshi
2017-07-01
Particle production from an expanding classical color electromagnetic field is extensively studied, motivated by the early stage dynamics of ultrarelativistic heavy ion collisions. We develop a formalism at one-loop order to compute the particle spectra by canonically quantizing quark, gluon, and ghost fluctuations under the presence of such an expanding classical color background field; the canonical quantization is done in the τ-η coordinates in order to take into account manifestly the expanding geometry. As a demonstration, we model the expanding classical color background field by a boost-invariantly expanding homogeneous color electric field with lifetime T, for which we obtain analytically the quark and gluon production spectra by solving the equations of motion of QCD nonperturbatively with respect to the color electric field. In this paper we study (i) the finite lifetime effect, which is found to modify significantly the particle spectra from those expected from the Schwinger formula; (ii) the difference between the quark and gluon production; and (iii) the quark mass dependence of the production spectra. Implications of these results to ultrarelativistic heavy ion collisions are also discussed.
Canonical quantization of constrained systems and coadjoint orbits of Diff(S¹)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherer, W.M.
It is shown that Dirac's treatment of constrained Hamiltonian systems and Schwinger's action principle quantization lead to identical commutation relations. An explicit relation between the Lagrange multipliers in the action principle approach and the additional terms in the Dirac bracket is derived. The equivalence of the two methods is demonstrated in the case of the non-linear sigma model. Dirac's method is extended to superspace and this extension is applied to the chiral superfield. The Dirac brackets of the massive interacting chiral superfield are derived and shown to give the correct commutation relations for the component fields. The Hamiltonian of the theory is given and the Hamiltonian equations of motion are computed. They agree with the component field results. An infinite sequence of differential operators which are covariant under the coadjoint action of Diff(S¹) and analogous to Hill's operator is constructed. They map conformal fields of negative integer and half-integer weight to their dual space. Some properties of these operators are derived and possible applications are discussed. The Korteweg-de Vries equation is formulated as a coadjoint orbit of Diff(S¹).
Quantization of the Szekeres system
NASA Astrophysics Data System (ADS)
Paliathanasis, A.; Zampeli, Adamantia; Christodoulakis, T.; Mustafa, M. T.
2018-06-01
We study quantum corrections to the Szekeres system in the context of canonical quantization in the presence of symmetries. We start from an effective point-like Lagrangian with two integrals of motion, one corresponding to the Hamiltonian and the other to a second-rank Killing tensor. Imposing their quantum version on the wave function results in a solution which is then interpreted in the context of Bohmian mechanics. In this semiclassical approach, it is shown that there are no quantum corrections; thus the classical trajectories of the Szekeres system are not affected at this level. Finally, we define a probability function and show that a stationary surface of the probability corresponds to an exact classical solution.
Mathematics of Quantization and Quantum Fields
NASA Astrophysics Data System (ADS)
Dereziński, Jan; Gérard, Christian
2013-03-01
Preface; 1. Vector spaces; 2. Operators in Hilbert spaces; 3. Tensor algebras; 4. Analysis in L2(Rd); 5. Measures; 6. Algebras; 7. Anti-symmetric calculus; 8. Canonical commutation relations; 9. CCR on Fock spaces; 10. Symplectic invariance of CCR in finite dimensions; 11. Symplectic invariance of the CCR on Fock spaces; 12. Canonical anti-commutation relations; 13. CAR on Fock spaces; 14. Orthogonal invariance of CAR algebras; 15. Clifford relations; 16. Orthogonal invariance of the CAR on Fock spaces; 17. Quasi-free states; 18. Dynamics of quantum fields; 19. Quantum fields on space-time; 20. Diagrammatics; 21. Euclidean approach for bosons; 22. Interacting bosonic fields; Subject index; Symbols index.
Quantization of spinor fields. III. Fermions on coherent (Bose) domains
NASA Astrophysics Data System (ADS)
Garbaczewski, Piotr
1983-02-01
A formulation of the c-number classics-quanta correspondence rule for spinor systems requires all elements of the quantum field algebra to be expanded into power series with respect to the generators of the canonical commutation relation (CCR) algebra. On the other hand, the asymptotic completeness demand would result in the (Haag) expansions with respect to the canonical anticommutation relation (CAR) generators. We establish the conditions under which the above correspondence rule can be reconciled with the existence of Haag expansions in terms of asymptotic free Fermi fields. Then, the CAR become represented on the state space of the Bose (CCR) system.
Causal Set Approach to a Minimal Invariant Length
NASA Astrophysics Data System (ADS)
Raut, Usha
2007-04-01
Any attempt to quantize gravity would necessarily introduce a minimal observable length scale of the order of the Planck length. This conclusion is based on several different studies and thought experiments and appears to be an inescapable feature of all quantum gravity theories, irrespective of the method used to quantize gravity. Over the last few years there has been growing concern that such a minimal length might lead to a contradiction with the basic postulates of special relativity, in particular the Lorentz-FitzGerald contraction. A few years ago, Rovelli et al. attempted to reconcile an invariant minimal length with special relativity, using the framework of loop quantum gravity. However, the inherently canonical formalism of the loop quantum approach is plagued by a variety of problems, many brought on by the separation of space and time coordinates. In this paper we use a completely different approach. Using the framework of the causal set paradigm, along with a statistical measure of closeness between Lorentzian manifolds, we re-examine the issue of introducing a minimal observable length that is not at odds with the postulates of special relativity.
Generalized centripetal force law and quantization of motion constrained on 2D surfaces
NASA Astrophysics Data System (ADS)
Liu, Q. H.; Zhang, J.; Lian, D. K.; Hu, L. D.; Li, Z.
2017-03-01
For a particle of mass μ moving on a 2D surface f(x) = 0 embedded in the 3D Euclidean space of coordinates x, it is an open and controversial problem whether Dirac's canonical quantization scheme for constrained motion allows for the geometric potential that has been experimentally confirmed. We note that Dirac's scheme hypothesizes that the symmetries indicated by the classical brackets among the positions x, the momenta p and the Hamiltonian H_c remain in quantum mechanics, i.e., that the Dirac brackets [x, H_c]_D and [p, H_c]_D hold true after quantization, in addition to the fundamental ones [x, x]_D, [x, p]_D and [p, p]_D. This set of hypotheses implies that the Hamiltonian operator is determined simultaneously during the quantization. The quantum mechanical relations corresponding to the classical ones p/μ = [x, H_c]_D directly give the geometric momenta. The time derivative of the momenta, ṗ = [p, H_c]_D, is in classical mechanics the generalized centripetal force law for a particle on the 2D surface, which in quantum mechanics permits both the geometric momenta and the geometric potential.
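In conventional notation, the enlarged set of quantization conditions described in this abstract can be sketched as follows (a schematic transcription only; operator-ordering subtleties are left unspecified):

```latex
\begin{align}
  \frac{\mathbf{p}}{\mu} = [\mathbf{x}, H_c]_D
    \;&\longrightarrow\;
  \frac{\hat{\mathbf{p}}}{\mu} = \frac{1}{i\hbar}\,[\hat{\mathbf{x}}, \hat{H}],
  \\
  \dot{\mathbf{p}} = [\mathbf{p}, H_c]_D
    \;&\longrightarrow\;
  \dot{\hat{\mathbf{p}}} = \frac{1}{i\hbar}\,[\hat{\mathbf{p}}, \hat{H}],
\end{align}
% imposed alongside the fundamental brackets
% [x_i, x_j]_D, [x_i, p_j]_D, [p_i, p_j]_D,
% so that the Hamiltonian operator and the momenta are fixed together.
```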
Quantum self-gravitating collapsing matter in a quantum geometry
NASA Astrophysics Data System (ADS)
Campiglia, Miguel; Gambini, Rodolfo; Olmedo, Javier; Pullin, Jorge
2016-09-01
The problem of how space-time responds to gravitating quantum matter in full quantum gravity has been one of the main questions that any program of quantization of gravity should address. Here we analyze this issue by considering the quantization of a collapsing null shell coupled to spherically symmetric loop quantum gravity. We show that the constraint algebra of canonical gravity is Abelian both classically and when quantized using loop quantum gravity techniques. The Hamiltonian constraint is well defined, and suitable Dirac observables characterizing the problem are identified at the quantum level. We can write the metric as a parameterized Dirac observable at the quantum level and study the physics of the collapsing shell and black hole formation. We show how the singularity inside the black hole is eliminated by loop quantum gravity and how the shell can traverse it. The construction is compatible with a scenario in which the shell tunnels into a baby universe inside the black hole or one in which it could emerge through a white hole.
Chern-Simons Term: Theory and Applications.
NASA Astrophysics Data System (ADS)
Gupta, Kumar Sankar
1992-01-01
We investigate the quantization and applications of Chern-Simons theories to several systems of interest. Elementary canonical methods are employed for the quantization of abelian and nonabelian Chern-Simons actions, using ideas from gauge theories and quantum gravity. When the spatial slice is a disc, the theory yields quantum states at the edge of the disc carrying a representation of the Kac-Moody algebra. We next include sources in this model, and their quantum states are shown to be those of a conformal family. Vertex operators for both abelian and nonabelian sources are constructed. The regularized abelian Wilson line is proved to be a vertex operator. The spin-statistics theorem is established for Chern-Simons dynamics using purely geometrical techniques. The Chern-Simons action is associated with exotic spin and statistics in 2 + 1 dimensions. We study several systems in which the Chern-Simons action affects the spin and statistics. The first class of systems we study consists of G/H models. The solitons of these models are shown to obey anyonic statistics in the presence of a Chern-Simons term. The second system deals with the effect of the Chern-Simons term in a model for high temperature superconductivity. The coefficient of the Chern-Simons term is shown to be quantized, one of its possible values giving fermionic statistics to the solitons of this model. Finally, we study a system of spinning particles interacting with 2 + 1 gravity, the latter being described by an ISO(2,1) Chern-Simons term. An effective action for the particles is obtained by integrating out the gauge fields. Next we construct operators which exchange the particles. They are shown to satisfy the braid relations. There are ambiguities in the quantization of this system which can be exploited to give anyonic statistics to the particles. We also point out that at the level of the first quantized theory, the usual spin-statistics relation need not apply to these particles.
Quantization and instability of the damped harmonic oscillator subject to a time-dependent force
NASA Astrophysics Data System (ADS)
Majima, H.; Suzuki, A.
2011-12-01
We consider the one-dimensional motion of a particle immersed in a potential field U(x) under the influence of a frictional (dissipative) force linear in the velocity (−γẋ) and a time-dependent external force K(t). The dissipative system subject to these forces is discussed by introducing the extended Bateman system, described by the Lagrangian ℒ = mẋẏ − U(x + y/2) + U(x − y/2) + (γ/2)(xẏ − yẋ) − xK(t) + yK(t), which leads to the familiar classical equations of motion for the dissipative (open) system. The equation for the variable y is the time-reversed counterpart of the x motion. We discuss the extended Bateman dual Lagrangian and Hamiltonian by setting U(x ± y/2) = (k/2)(x ± y/2)², i.e. specifically for a dual extended damped-amplified harmonic oscillator subject to the time-dependent external force. We show how to quantize such dissipative systems, namely by canonical quantization of the extended Bateman Hamiltonian ℋ. The Heisenberg equations of motion obtained from the quantized Hamiltonian ℋ̂ lead to the equations of motion for the dissipative dynamical quantum systems, which are the quantum analogs of the corresponding classical systems. To discuss the stability of the quantum dissipative system under the influence of the external force K(t) and the dissipative force, we derive a formula for transition amplitudes of the dissipative system with the help of perturbation analysis. The formula is applied specifically to a damped-amplified harmonic oscillator subject to an impulsive force, and is used to study the influence of dissipation, such as the instability due to the dissipative force and/or the applied impulsive force.
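Assuming the truncated potential in the abstract is the harmonic choice U(x ± y/2) = (k/2)(x ± y/2)², the Euler-Lagrange equations of the stated Lagrangian can be checked directly (a short verification sketch, not taken from the paper):

```latex
\begin{align}
  \frac{d}{dt}\frac{\partial\mathcal{L}}{\partial\dot{y}}
    - \frac{\partial\mathcal{L}}{\partial y} = 0
  \;&\Longrightarrow\;
  m\ddot{x} + \gamma\dot{x} + kx = K(t),
  \\
  \frac{d}{dt}\frac{\partial\mathcal{L}}{\partial\dot{x}}
    - \frac{\partial\mathcal{L}}{\partial x} = 0
  \;&\Longrightarrow\;
  m\ddot{y} - \gamma\dot{y} + ky = -K(t),
\end{align}
```

so x obeys the familiar damped driven oscillator, while y obeys its time-reversed (amplified) image.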
Numerical simulation of transmission coefficient using c-number Langevin equation
NASA Astrophysics Data System (ADS)
Barik, Debashis; Bag, Bidhan Chandra; Ray, Deb Shankar
2003-12-01
We numerically implement the reactive flux formalism on the basis of a recently proposed c-number Langevin equation [Barik et al., J. Chem. Phys. 119, 680 (2003); Banerjee et al., Phys. Rev. E 65, 021109 (2002)] to calculate the transmission coefficient. The Kramers turnover, the T² enhancement of the rate at low temperatures, and other related features of the temporal behavior of the transmission coefficient over a range of temperatures down to absolute zero, noise correlation, and friction are examined for a double-well potential and compared with other known results. This simple method is based on canonical quantization and the Wigner quasiclassical phase-space function, and takes care of quantum effects due to the system order by order.
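The reactive-flux estimate of a transmission coefficient can be sketched with a plain classical Langevin integrator (a stand-in for the paper's c-number quantum Langevin equation; all parameter values here are illustrative assumptions):

```python
import numpy as np

# Reactive-flux sketch: trajectories launched from the barrier top of a
# double well, kappa = <v0 theta(x(t))> / <v0 theta(v0)>.
# Classical Langevin dynamics only; the paper's scheme adds quantum
# corrections order by order.  Parameters are illustrative assumptions.
rng = np.random.default_rng(0)

m, gamma, kT = 1.0, 0.5, 0.1          # mass, friction, temperature (k_B T)
dUdx = lambda x: -x + x**3            # U(x) = -x^2/2 + x^4/4, barrier at x = 0
dt, nsteps, ntraj = 0.01, 500, 4000

x = np.zeros(ntraj)                   # all trajectories start on the barrier top
v0 = rng.normal(0.0, np.sqrt(kT / m), ntraj)  # Maxwell-Boltzmann velocities
v = v0.copy()
sigma = np.sqrt(2.0 * gamma * kT * dt) / m    # Euler-Maruyama noise amplitude

for _ in range(nsteps):
    v += (-dUdx(x) - gamma * v) / m * dt + sigma * rng.normal(size=ntraj)
    x += v * dt

# kappa < 1 measures recrossing corrections to transition-state theory
kappa = np.mean(v0 * (x > 0)) / np.mean(v0 * (v0 > 0))
print(kappa)
```

Scanning `gamma` over several decades in such a sketch traces out the Kramers turnover between the energy-diffusion and spatial-diffusion regimes.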
Quantization and Quantum-Like Phenomena: A Number Amplitude Approach
NASA Astrophysics Data System (ADS)
Robinson, T. R.; Haven, E.
2015-12-01
Historically, quantization has meant turning the dynamical variables of classical mechanics that are represented by numbers into their corresponding operators. Thus the relationships between classical variables determine the relationships between the corresponding quantum mechanical operators. Here, we take a radically different approach to this conventional quantization procedure. Our approach does not rely on any relations based on classical Hamiltonian or Lagrangian mechanics, nor on any canonical quantization relations, nor even on any preconceptions of particle trajectories in space and time. Instead we examine the symmetry properties of certain Hermitian operators with respect to phase changes. This introduces harmonic operators that can be identified with a variety of cyclic systems, from clocks to quantum fields. These operators are shown to have the characteristics of creation and annihilation operators that constitute the primitive fields of quantum field theory. Such an approach not only allows us to recover the Hamiltonian equations of classical mechanics and the Schrödinger wave equation from the fundamental quantization relations, but also, by freeing the quantum formalism from any physical connotation, makes it more directly applicable to non-physical, so-called quantum-like systems. Over the past decade or so, there has been a rapid growth of interest in such applications. These include the use of the Schrödinger equation in finance; second quantization and the number operator in social interactions, population dynamics and financial trading; and quantum probability models in cognitive processes and decision-making. In this paper we try to look beyond physical analogies to provide a foundational underpinning of such applications.
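The creation/annihilation algebra that such harmonic operators must satisfy can be checked concretely in a truncated matrix representation (a generic numerical sketch, not the paper's phase-symmetry construction; the dimension `dim` is an assumption of the illustration):

```python
import numpy as np

# Truncated matrix representation of a harmonic annihilation operator a,
# with a|n> = sqrt(n)|n-1>.  dim is an illustrative truncation.
dim = 8
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
adag = a.conj().T               # creation operator

N = adag @ a                    # number operator, diag(0, 1, ..., dim-1)
comm = a @ adag - adag @ a      # canonical commutator [a, a†]

# Away from the truncation edge, [a, a†] acts as the identity,
# and N has the integer spectrum expected of a number operator.
print(np.allclose(comm[:dim - 1, :dim - 1], np.eye(dim - 1)))
print(np.allclose(np.diag(N), np.arange(dim)))
```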
NASA Astrophysics Data System (ADS)
Thibes, Ronaldo
2017-02-01
We perform the canonical and path integral quantizations of a lower-order derivative model describing Podolsky's generalized electrodynamics. The physical content of the model shows an auxiliary massive vector field coupled to the usual electromagnetic field. The equivalence with Podolsky's original model is studied at the classical and quantum levels. Concerning the dynamical time evolution, we obtain a theory with two first-class and two second-class constraints in phase space. We calculate explicitly the corresponding Dirac brackets involving both vector fields. We use the Senjanovic procedure to implement the second-class constraints and the Batalin-Fradkin-Vilkovisky path integral quantization scheme to deal with the symmetries generated by the first-class constraints. The physical interpretation of the results turns out to be simpler due to the reduced derivative order permeating the equations of motion, the Dirac brackets and the effective action.
BRST quantization of Yang-Mills theory: A purely Hamiltonian approach on Fock space
NASA Astrophysics Data System (ADS)
Öttinger, Hans Christian
2018-04-01
We develop the basic ideas and equations for the BRST quantization of Yang-Mills theories in an explicit Hamiltonian approach, without any reference to the Lagrangian approach at any stage of the development. We present a new representation of ghost fields that combines desirable self-adjointness properties with canonical anticommutation relations for ghost creation and annihilation operators, thus enabling us to characterize the physical states on a well-defined Fock space. The Hamiltonian is constructed by piecing together simple BRST invariant operators to obtain a minimal invariant extension of the free theory. It is verified that the evolution equations implied by the resulting minimal Hamiltonian provide a quantum version of the classical Yang-Mills equations. The modifications and requirements for the inclusion of matter are discussed in detail.
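The canonical anticommutation relations required of the ghost creation and annihilation operators can be illustrated in the smallest possible representation, a single two-level mode (a generic sketch of the CAR algebra, not the paper's specific self-adjoint ghost construction):

```python
import numpy as np

# Smallest representation of one fermionic (ghost) mode: a two-level system.
c = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # annihilation: c|1> = |0>
cdag = c.T                      # creation operator

anti = c @ cdag + cdag @ c      # anticommutator {c, c†}

print(np.allclose(anti, np.eye(2)))   # {c, c†} = 1
print(np.allclose(c @ c, 0))          # c² = 0: no double occupancy
```

A Fock space for several ghost modes is then built as a tensor product of such two-level factors, with Jordan-Wigner sign factors enforcing anticommutation between distinct modes.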
Constraining the loop quantum gravity parameter space from phenomenology
NASA Astrophysics Data System (ADS)
Brahma, Suddhasattwa; Ronco, Michele
2018-03-01
Development of quantum gravity theories rarely takes inputs from experimental physics. In this letter, we take a small step towards correcting this by establishing a paradigm for incorporating putative quantum corrections, arising from canonical quantum gravity (QG) theories, in deriving falsifiable modified dispersion relations (MDRs) for particles on a deformed Minkowski space-time. This allows us to differentiate and, hopefully, pick between several quantization choices via testable, state-of-the-art phenomenological predictions. Although a few explicit examples from loop quantum gravity (LQG) (such as the regularization scheme used or the representation of the gauge group) are shown here to establish the claim, our framework is more general and is capable of addressing other quantization ambiguities within LQG and also those arising from other similar QG approaches.
NASA Astrophysics Data System (ADS)
Wuthrich, Christian
My dissertation studies the foundations of loop quantum gravity (LQG), a candidate for a quantum theory of gravity based on classical general relativity. At the outset, I discuss two---and I claim separate---questions: first, do we need a quantum theory of gravity at all; and second, if we do, does it follow that gravity should or even must be quantized? My evaluation of different arguments either way suggests that while no argument can be considered conclusive, there are strong indications that gravity should be quantized. LQG attempts a canonical quantization of general relativity and thereby provokes a foundational interest as it must take a stance on many technical issues tightly linked to the interpretation of general relativity. Most importantly, it codifies general relativity's main innovation, the so-called background independence, in a formalism suitable for quantization. This codification pulls asunder what has been joined together in general relativity: space and time. It is thus a central issue whether or not general relativity's four-dimensional structure can be retrieved in the alternative formalism and how it fares through the quantization process. I argue that the rightful four-dimensional spacetime structure can only be partially retrieved at the classical level. What happens at the quantum level is an entirely open issue. Known examples of classically singular behaviour which gets regularized by quantization evoke an admittedly pious hope that the singularities which notoriously plague the classical theory may be washed away by quantization. This work scrutinizes pronouncements claiming that the initial singularity of classical cosmological models vanishes in quantum cosmology based on LQG and concludes that these claims must be severely qualified. In particular, I explicate why casting the quantum cosmological models in terms of a deterministic temporal evolution fails to capture the concepts at work adequately. 
Finally, a scheme is developed of how the re-emergence of the smooth spacetime from the underlying discrete quantum structure could be understood.
Relativistic Hamiltonian dynamics for N point particles
NASA Astrophysics Data System (ADS)
King, M. J.
1980-08-01
The theory is quantized canonically to give a relativistic quantum mechanics for N particles. The existence of such a theory has been in doubt since the proof of the no-interaction theorem. However, such a theory does exist and is generalized here. The dynamics is expressed in terms of N + 1 pairs of canonical four-vectors (center-of-momentum variables, or CMV). A gauge-independent reduction due to N + 3 first-class kinematic constraints leads to a (6N + 2)-dimensional minimal kinematic phase space K. The kinematics and dynamics of particles with intrinsic spin are also considered. To this end, known constraint techniques are generalized to make use of graded Lie algebras. The (Poincaré) invariant Hamiltonian is specified in terms of the gauge-invariant variables of K. The covariant worldline variables of each particle are found to be gauge dependent. As such they will usually not satisfy a canonical algebra. An exception exists for free particles. The no-interaction theorem therefore is not violated.
NASA Astrophysics Data System (ADS)
Xun, D. M.; Liu, Q. H.; Zhu, X. M.
2013-11-01
A generalization of Dirac's canonical quantization scheme for a system with second-class constraints is proposed, in which the fundamental commutation relations are constituted by all commutators between the positions, momenta and Hamiltonian, so that these are quantized simultaneously in a self-consistent manner, rather than by the commutators between positions and momenta alone, which leads to ambiguous forms of the Hamiltonian and the momenta. The application of the generalized scheme to quantum motion on a torus leads to a remarkable result: the quantum theory is inconsistent if built up in an intrinsic geometric manner, whereas it becomes consistent within an extrinsic examination of the torus as a submanifold in three-dimensional flat space with the use of the Cartesian coordinate system. The geometric momentum and potential are then reasonably reproduced.
Quantization of systems with temporally varying discretization. II. Local evolution moves
NASA Astrophysics Data System (ADS)
Höhn, Philipp A.
2014-10-01
Several quantum gravity approaches and field theory on an evolving lattice involve a discretization changing dynamics generated by evolution moves. Local evolution moves in variational discrete systems (1) are a generalization of the Pachner evolution moves of simplicial gravity models, (2) update only a small subset of the dynamical data, (3) change the number of kinematical and physical degrees of freedom, and (4) generate a dynamical (or canonical) coarse graining or refining of the underlying discretization. To systematically explore such local moves and their implications in the quantum theory, this article suitably expands the quantum formalism for global evolution moves, constructed in Paper I [P. A. Höhn, "Quantization of systems with temporally varying discretization. I. Evolving Hilbert spaces," J. Math. Phys. 55, 083508 (2014); e-print arXiv:1401.6062 [gr-qc]].
Unique Fock quantization of a massive fermion field in a cosmological scenario
NASA Astrophysics Data System (ADS)
Cortez, Jerónimo; Elizaga Navascués, Beatriz; Martín-Benito, Mercedes; Mena Marugán, Guillermo A.; Velhinho, José M.
2016-04-01
It is well known that the Fock quantization of field theories in general spacetimes suffers from an infinite ambiguity, owing to the inequivalent possibilities in the selection of a representation of the canonical commutation or anticommutation relations, but also owing to the freedom in the choice of variables to describe the field among all those related by linear time-dependent transformations, including the dependence through functions of the background. In this work we remove this ambiguity (up to unitary equivalence) in the case of a massive Dirac free field propagating in a spacetime with homogeneous and isotropic spatial sections of spherical topology. Two physically reasonable conditions are imposed in order to arrive at this result: (a) The invariance of the vacuum under the spatial isometries of the background, and (b) the unitary implementability of the dynamical evolution that dictates the Dirac equation. We characterize the Fock quantizations with a nontrivial fermion dynamics that satisfy these two conditions. Then, we provide a complete proof of the unitary equivalence of the representations in this class under very mild requirements on the time variation of the background, once a criterion to discern between particles and antiparticles has been set.
Semiclassical states on Lie algebras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsobanjan, Artur, E-mail: artur.tsobanjan@gmail.com
2015-03-15
The effective technique for analyzing representation-independent features of quantum systems based on the semiclassical approximation (developed elsewhere) has been successfully used in the context of the canonical (Weyl) algebra of the basic quantum observables. Here, we perform the important step of extending this effective technique to the quantization of a more general class of finite-dimensional Lie algebras. The case of a Lie algebra with a single central element (the Casimir element) is treated in detail by considering semiclassical states on the corresponding universal enveloping algebra. Restriction to an irreducible representation is performed by "effectively" fixing the Casimir condition, following the methods previously used for constrained quantum systems. We explicitly determine the conditions under which this restriction can be consistently performed alongside the semiclassical truncation.
The BRST complex of homological Poisson reduction
NASA Astrophysics Data System (ADS)
Müller-Lennert, Martin
2017-02-01
BRST complexes are differential graded Poisson algebras. They are associated with a coisotropic ideal J of a Poisson algebra P and provide a description of the Poisson algebra (P/J)^J as their cohomology in degree zero. Using the notion of stable equivalence introduced in Felder and Kazhdan (Contemporary Mathematics 610, Perspectives in representation theory, 2014), we prove that any two BRST complexes associated with the same coisotropic ideal are quasi-isomorphic in the case P = R[V] where V is a finite-dimensional symplectic vector space and the bracket on P is induced by the symplectic structure on V. As a corollary, the cohomology of the BRST complexes is canonically associated with the coisotropic ideal J in the symplectic case. We do not require any regularity assumptions on the constraints generating the ideal J. We finally quantize the BRST complex rigorously in the presence of infinitely many ghost variables and discuss the uniqueness of the quantization procedure.
NASA Astrophysics Data System (ADS)
Lin, Huey-Wen; Liu, Keh-Fei
2012-03-01
The author argues that the canonical form of the quark energy-momentum tensor, with a partial derivative instead of the covariant derivative, is the correct definition for the quark momentum and angular momentum fraction of the nucleon in covariant quantization. Although it is not manifestly gauge invariant, its matrix elements in the nucleon are nonvanishing and gauge invariant. We test this idea in the path-integral quantization by calculating correlation functions on the lattice with a gauge-invariant nucleon interpolation field and replacing the gauge link in the quark lattice momentum operator with unity, which corresponds to the partial derivative in the continuum. We find that the ratios of three-point to two-point functions are zero within errors for both the u and d quarks, contrary to the case without setting the gauge links to unity.
New variables for classical and quantum gravity in all dimensions: I. Hamiltonian analysis
NASA Astrophysics Data System (ADS)
Bodendorfer, N.; Thiemann, T.; Thurn, A.
2013-02-01
Loop quantum gravity (LQG) relies heavily on a connection formulation of general relativity such that (1) the connection Poisson commutes with itself and (2) the corresponding gauge group is compact. This can be achieved starting from the Palatini or Holst action when imposing the time gauge. Unfortunately, this method is restricted to D + 1 = 4 spacetime dimensions. However, interesting string theories and supergravity theories require higher dimensions and it would therefore be desirable to have higher dimensional supergravity loop quantizations at one’s disposal in order to compare these approaches. In this series of papers we take first steps toward this goal. The present first paper develops a classical canonical platform for a higher dimensional connection formulation of the purely gravitational sector. The new ingredient is a different extension of the ADM phase space than the one used in LQG which does not require the time gauge and which generalizes to any dimension D > 1. The result is a Yang-Mills theory phase space subject to Gauß, spatial diffeomorphism and Hamiltonian constraint as well as one additional constraint, called the simplicity constraint. The structure group can be chosen to be SO(1, D) or SO(D + 1) and the latter choice is preferred for purposes of quantization.
Resolving the issue of branched Hamiltonian in modified Lanczos-Lovelock gravity
NASA Astrophysics Data System (ADS)
Ruz, Soumendranath; Mandal, Ranajit; Debnath, Subhra; Sanyal, Abhik Kumar
2016-07-01
The Hamiltonian constraint H_c = NH = 0, where N is the lapse function, defines a diffeomorphism structure on spatial manifolds in the general theory of relativity. However, this is not manifest in Lanczos-Lovelock gravity, since the expression for the velocity in terms of the momentum is multivalued; the Hamiltonian is thus a branched function of the momentum. Here we propose an extended theory of Lanczos-Lovelock gravity, constructing a unique Hamiltonian in its minisuperspace version, which results in manifest diffeomorphism invariance and canonical quantization.
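A toy model (our illustration, not taken from the paper) shows how such branching arises whenever the momentum is a non-monotonic polynomial in the velocity, as happens for the higher-curvature terms of Lanczos-Lovelock gravity:

```latex
% Toy Lagrangian, quartic in the velocity:
\begin{equation}
  L(v) = \tfrac{1}{4}v^{4} - \tfrac{1}{2}v^{2},
  \qquad
  p = \frac{\partial L}{\partial v} = v^{3} - v .
\end{equation}
% For |p| < 2/(3\sqrt{3}) the cubic has three real roots v(p), so the
% Legendre transform H = p\,v - L is defined only branch by branch:
% the Hamiltonian is a multivalued ("branched") function of p.
```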
Connections between ’t Hooft’s beables and canonical descriptions of dissipative systems
NASA Astrophysics Data System (ADS)
Schuch, Dieter; Blasone, Massimo
2017-08-01
According to a proposal by ’t Hooft, information loss introduced by constraints in certain classical dissipative systems may lead to quantization. This scheme can be realized within the Bateman model of two coupled oscillators, one damped and one accelerated. In this paper we analyze the links of this approach to effective Hamiltonians where the environmental degrees of freedom do not appear explicitly but their effect leads to the same friction force appearing in the Bateman model. In particular, it is shown that by imposing constraints, the Bateman Hamiltonian can be transformed into an effective one expressed in expanding coordinates. This one can be transformed via a canonical transformation into Caldirola and Kanai’s effective Hamiltonian that can be linked to the conventional system-plus-reservoir approach, for example, in a form used by Caldeira and Leggett.
Direct Images, Fields of Hilbert Spaces, and Geometric Quantization
NASA Astrophysics Data System (ADS)
Lempert, László; Szőke, Róbert
2014-04-01
Geometric quantization often produces not one Hilbert space to represent the quantum states of a classical system but a whole family H s of Hilbert spaces, and the question arises if the spaces H s are canonically isomorphic. Axelrod et al. (J. Diff. Geo. 33:787-902, 1991) and Hitchin (Commun. Math. Phys. 131:347-380, 1990) suggest viewing H s as fibers of a Hilbert bundle H, introduce a connection on H, and use parallel transport to identify different fibers. Here we explore to what extent this can be done. First we introduce the notion of smooth and analytic fields of Hilbert spaces, and prove that if an analytic field over a simply connected base is flat, then it corresponds to a Hermitian Hilbert bundle with a flat connection and path independent parallel transport. Second we address a general direct image problem in complex geometry: pushing forward a Hermitian holomorphic vector bundle along a non-proper map. We give criteria for the direct image to be a smooth field of Hilbert spaces. Third we consider quantizing an analytic Riemannian manifold M by endowing TM with the family of adapted Kähler structures from Lempert and Szőke (Bull. Lond. Math. Soc. 44:367-374, 2012). This leads to a direct image problem. When M is homogeneous, we prove the direct image is an analytic field of Hilbert spaces. For certain such M—but not all—the direct image is even flat; which means that in those cases quantization is unique.
The Noncommutative Doplicher-Fredenhagen-Roberts-Amorim Space
NASA Astrophysics Data System (ADS)
Abreu, Everton M. C.; Mendes, Albert C. R.; Oliveira, Wilson; Zangirolami, Adriano O.
2010-10-01
This work is an effort to compose a pedestrian review of the recently elaborated Doplicher, Fredenhagen, Roberts and Amorim (DFRA) noncommutative (NC) space, which is a minimal extension of the DFR space. In this DFRA space, the object of noncommutativity (θμν) is a variable of the NC system and has a canonical conjugate momentum. For instance, in NC quantum mechanics we will show that θij (i,j = 1,2,3) is an operator in Hilbert space, and we will explore the consequences of this so-called ''operationalization''. The DFRA formalism is constructed in an extended space-time with independent degrees of freedom associated with the object of noncommutativity θμν. We will study the symmetry properties of an extended x + θ space-time, given by the group P', which has the Poincaré group P as a subgroup. The Noether formalism adapted to such an extended x + θ (D = 4 + 6) space-time is depicted. A consistent algebra involving the enlarged set of canonical operators is described, which permits one to construct theories that are dynamically invariant under the action of the rotation group. In this framework it is also possible to give dynamics to the NC operator sector, resulting in new features. A consistent classical mechanics formulation is analyzed in such a way that, under quantization, it furnishes an NC quantum theory with interesting results. The Dirac formalism for constrained Hamiltonian systems is considered, and the object of noncommutativity θij plays a fundamental role as an independent quantity. Next, we explain the dynamical spacetime symmetries in NC relativistic theories by using the DFRA algebra. The generalized Dirac equation is also discussed: the fermionic field depends not only on the ordinary coordinates but on θμν as well. The dynamical symmetry content of this fermionic theory is discussed, and we show that its action is invariant under P'. In the last part of this work we analyze the complex scalar fields using this new framework.
As said above, in a first quantized formalism, θμν and its canonical momentum πμν are seen as operators living in some Hilbert space. In a second quantized formalism perspective, we show an explicit form for the extended Poincaré generators and the same algebra is generated via generalized Heisenberg relations. We also consider a source term and construct the general solution for the complex scalar fields using the Green function technique.
q-bosons and the q-analogue quantized field
NASA Technical Reports Server (NTRS)
Nelson, Charles A.
1995-01-01
The q-analogue coherent states are used to identify physical signatures for the presence of a q-analogue quantized radiation field in the q-CS classical limits where |z| is large. In this quantum-optics-like limit, the fractional uncertainties of most physical quantities (momentum, position, amplitude, phase) which characterize the quantum field are O(1). They only vanish as O(1/|z|) when q = 1. However, for the number operator N and the N-Hamiltonian for a free q-boson gas, H_N = ħω(N + 1/2), the fractional uncertainties do still approach zero. A signature for q-boson counting statistics is that (ΔN)²/⟨N⟩ → 0 as |z| → ∞. Except for its O(1) fractional uncertainty, the q-generalization of the Hermitian phase operator of Pegg and Barnett, φ_q, still exhibits normal classical behavior. The standard number-phase uncertainty relation, ΔN Δφ_q ≥ 1/2, and the approximate commutation relation, [N, φ_q] = i, still hold for the single-mode q-analogue quantized field. So N and φ_q are almost canonically conjugate operators in the q-CS classical limit. The q-analogue CS's minimize this uncertainty relation for moderate |z|².
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escalante, Alberto, E-mail: aescalan@ifuap.buap.mx; Manuel-Cabrera, J., E-mail: jmanuel@ifuap.buap.mx
2015-10-15
A detailed Faddeev–Jackiw quantization of an Abelian and a non-Abelian exotic action for gravity in three dimensions is performed. For the theories under study we obtain the constraints, the gauge transformations, and the generalized Faddeev–Jackiw brackets, and we perform the counting of physical degrees of freedom. In addition, we compare our results with those found in the literature where the canonical analysis is developed; in particular, we show that the generalized Faddeev–Jackiw brackets and Dirac's brackets coincide with each other. Finally we discuss some remarks and prospects. - Highlights: • A detailed Faddeev–Jackiw analysis for the exotic action of gravity is performed. • We show that Dirac's brackets and generalized [FJ] brackets are equivalent. • Without gauge fixing, the exotic action is a non-commutative theory. • The fundamental gauge transformations of the theory are found. • Dirac and Faddeev–Jackiw approaches are compared.
Quantization of wave equations and hermitian structures in partial differential varieties
Paneitz, S. M.; Segal, I. E.
1980-01-01
Sufficiently close to 0, the solution variety of a nonlinear relativistic wave equation—e.g., of the form □ϕ + m2ϕ + gϕp = 0—admits a canonical Lorentz-invariant hermitian structure, uniquely determined by the consideration that the action of the differential scattering transformation in each tangent space be unitary. Similar results apply to linear time-dependent equations or to equations in a curved asymptotically flat space-time. A close relation of the Riemannian structure to the determination of vacuum expectation values is developed and illustrated by an explicit determination of a perturbative 2-point function for the case of interaction arising from curvature. The theory underlying these developments is in part a generalization of that of M. G. Krein and collaborators concerning stability of differential equations in Hilbert space and in part a precise relation between the unitarization of given symplectic linear actions and their full probabilistic quantization. The unique causal structure in the infinite symplectic group is instrumental in these developments.
Non-commutative Chern numbers for generic aperiodic discrete systems
NASA Astrophysics Data System (ADS)
Bourne, Chris; Prodan, Emil
2018-06-01
The search for strong topological phases in generic aperiodic materials and meta-materials is now vigorously pursued by the condensed matter physics community. In this work, we first introduce the concept of patterned resonators as a unifying theoretical framework for topological electronic, photonic, phononic etc (aperiodic) systems. We then discuss, in physical terms, the philosophy behind an operator theoretic analysis used to systematize such systems. A model calculation of the Hall conductance of a 2-dimensional amorphous lattice is given, where we present numerical evidence of its quantization in the mobility gap regime. Motivated by such facts, we then present the main result of our work, which is the extension of the Chern number formulas to Hamiltonians associated to lattices without a canonical labeling of the sites, together with index theorems that assure the quantization and stability of these Chern numbers in the mobility gap regime. Our results cover a broad range of applications, in particular, those involving quasi-crystalline, amorphous as well as synthetic (i.e. algorithmically generated) lattices.
Quantum canonical ensemble: A projection operator approach
NASA Astrophysics Data System (ADS)
Magnus, Wim; Lemmens, Lucien; Brosens, Fons
2017-09-01
Knowing the exact number of particles N, and taking this knowledge into account, the quantum canonical ensemble imposes a constraint on the occupation number operators. The constraint particularly hampers the systematic calculation of the partition function and any relevant thermodynamic expectation value for arbitrary but fixed N. On the other hand, fixing only the average number of particles, one may remove the above constraint and simply factorize the traces in Fock space into traces over single-particle states. As is well known, that would be the strategy of the grand-canonical ensemble which, however, comes with an additional Lagrange multiplier to impose the average number of particles. The appearance of this multiplier can be avoided by invoking a projection operator that enables a constraint-free computation of the partition function and its derived quantities in the canonical ensemble, at the price of an angular or contour integration. Introduced in the recent past to handle various issues related to particle-number projected statistics, the projection operator approach proves beneficial to a wide variety of problems in condensed matter physics for which the canonical ensemble offers a natural and appropriate environment. In this light, we present a systematic treatment of the canonical ensemble that embeds the projection operator into the formalism of second quantization while explicitly fixing N, the very number of particles rather than the average. Being applicable to both bosonic and fermionic systems in arbitrary dimensions, transparent integral representations are provided for the partition function ZN and the Helmholtz free energy FN as well as for two- and four-point correlation functions. The chemical potential is not a Lagrange multiplier regulating the average particle number but can be extracted from FN+1 -FN, as illustrated for a two-dimensional fermion gas.
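The angular-integral projection described above can be made concrete for non-interacting fermions: the canonical partition function Z_N is the N-th Fourier coefficient of the grand-canonical product with the fugacity replaced by a phase. The sketch below is a generic illustration of this idea (the level energies and inverse temperature are invented for the example, not taken from the paper), checked against brute-force enumeration of occupations:

```python
import itertools
import cmath
import math

def zN_projection(energies, N, beta, steps=2048):
    """Canonical partition function of free fermions at fixed N via the
    angular projection integral
        Z_N = (1/2pi) Int_0^{2pi} dphi e^{-i N phi} Prod_k (1 + e^{i phi - beta e_k}).
    The discrete Riemann sum is exact here because the integrand is a
    trigonometric polynomial of low degree."""
    total = 0j
    for s in range(steps):
        phi = 2.0 * math.pi * s / steps
        prod = 1.0 + 0j
        for e in energies:
            prod *= 1.0 + cmath.exp(1j * phi - beta * e)
        total += cmath.exp(-1j * N * phi) * prod
    return (total / steps).real

def zN_exact(energies, N, beta):
    # Brute-force check: sum exp(-beta * E) over all N-particle occupation
    # patterns (at most one particle per level, by the Pauli principle).
    return sum(math.exp(-beta * sum(occ))
               for occ in itertools.combinations(energies, N))
```

The projection trades the occupation-number constraint for a single angular integration, exactly as the abstract describes.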
Diffeomorphisms as symplectomorphisms in history phase space: Bosonic string model
NASA Astrophysics Data System (ADS)
Kouletsis, I.; Kuchař, K. V.
2002-06-01
The structure of the history phase space G of a covariant field system and its history group (in the sense of Isham and Linden) is analyzed on an example of a bosonic string. The history space G includes the time map
From classical to quantum mechanics: ``How to translate physical ideas into mathematical language''
NASA Astrophysics Data System (ADS)
Bergeron, H.
2001-09-01
Following previous works by E. Prugovečki [Physica A 91A, 202 (1978) and Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)] on common features of classical and quantum mechanics, we develop a unified mathematical framework for classical and quantum mechanics (based on L2-spaces over classical phase space), in order to investigate to what extent quantum mechanics can be obtained as a simple modification of classical mechanics (on both logical and analytical levels). To obtain this unified framework, we split quantum theory into two parts: (i) general quantum axiomatics (a system is described by a state in a Hilbert space, observables are self-adjoint operators, and so on) and (ii) quantum mechanics proper, which specifies the Hilbert space as L2(Rn), the Heisenberg rule [p_i, q_j] = -iℏδ_ij with p = -iℏ∇, the free Hamiltonian H = -ℏ²Δ/2m, and so on. We show that general quantum axiomatics (up to a supplementary "axiom of classicity") can be used as a nonstandard mathematical ground to formulate the physical ideas and equations of ordinary classical statistical mechanics. So the question of a "true quantization" with "ℏ" must be seen as an independent physical problem, not directly related to the quantum formalism. At this stage, we show that this nonstandard formulation of classical mechanics exhibits a new kind of operation that has no classical counterpart: this operation is related to the "quantization process," and we show why quantization physically depends on group theory (the Galilei group). This analytical procedure of quantization replaces the "correspondence principle" (or canonical quantization) and allows us to map classical mechanics into quantum mechanics, giving all the operators of quantum dynamics and the Schrödinger equation. The great advantage of this point of view is that quantization is based on concrete physical arguments and not derived from some "pure algebraic rule" (we also exhibit some limits of the correspondence principle).
Moreover, spins for particles are naturally generated, including an approximation of their interaction with magnetic fields. We also recover by this approach the semi-classical formalism developed by E. Prugovečki [Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)].
Poisson traces, D-modules, and symplectic resolutions
NASA Astrophysics Data System (ADS)
Etingof, Pavel; Schedler, Travis
2018-03-01
We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.
Constructive tensorial group field theory I: The U(1)-T^4_3 model
NASA Astrophysics Data System (ADS)
Lahoche, Vincent
2018-05-01
The loop vertex expansion (LVE) is a constructive technique using canonical combinatorial tools. It works well for quantum field theories without renormalization, which is the case of the field theory studied in this paper. Tensorial group field theories (TGFTs) are a new class of field theories proposed to quantize gravity. This paper is devoted to a very simple TGFT for rank-three tensors with U(1) group and quartic interactions, hence nicknamed U(1)-T^4_3. It has no ultraviolet divergence, and we show, with the LVE, that it is Borel summable in its coupling constant.
Equivariant branes and equivariant homological mirror symmetry
NASA Astrophysics Data System (ADS)
Ashwinkumar, Meer; Tan, Meng-Chwan
2018-03-01
We describe supersymmetric A-branes and B-branes in open N =(2 ,2 ) dynamically gauged nonlinear sigma models (GNLSM), placing emphasis on toric manifold target spaces. For a subset of toric manifolds, these equivariant branes have a mirror description as branes in gauged Landau-Ginzburg models with neutral matter. We then study correlation functions in the topological A-twisted version of the GNLSM and identify their values with open Hamiltonian Gromov-Witten invariants. Supersymmetry breaking can occur in the A-twisted GNLSM due to nonperturbative open symplectic vortices, and we canonically Becchi-Rouet-Stora-Tyutin quantize the mirror theory to analyze this phenomenon.
Simultaneous fault detection and control design for switched systems with two quantized signals.
Li, Jian; Park, Ju H; Ye, Dan
2017-01-01
The problem of simultaneous fault detection and control design for switched systems with two quantized signals is presented in this paper. Dynamic quantizers are employed, respectively, before the output is passed to the fault detector and before the control input is transmitted to the switched system. Taking the quantization errors into account, the robust performance for this kind of system is given. Furthermore, sufficient conditions for the existence of the fault detector/controller are presented in the framework of linear matrix inequalities, and the fault detector/controller gains and the supremum of the quantizer range are derived by a convex optimization method. Finally, two illustrative examples demonstrate the effectiveness of the proposed method.
Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming
2013-01-01
In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of the dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for generalized Gaussian distribution. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method is of high stability, low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB of JM 16.0's method, with an average rate control error of 1.95% and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.
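The dead-zone plus uniform threshold scalar quantizer underlying this source model is short to state in code. The sketch below is a generic illustration, not the paper's JM 16.0 configuration; the rounding offset f is a free parameter (1/6 is only a typical inter-frame choice in H.264 reference software):

```python
def deadzone_quantize(x, step, f=1.0 / 6.0):
    # Level = sign(x) * floor(|x|/step + f). Inputs with |x| < (1 - f)*step
    # land in the widened central dead zone and map to level 0.
    sign = -1 if x < 0 else 1
    return sign * int(abs(x) / step + f)

def reconstruct(level, step):
    # Nearly uniform reconstruction: levels map back onto multiples of step.
    return level * step
```

Shrinking f widens the dead zone, which is what trades a little distortion for a lower rate on near-zero transform coefficients.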
Rate and power efficient image compressed sensing and transmission
NASA Astrophysics Data System (ADS)
Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan
2016-01-01
This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, for which the first problem can be solved with relaxation and the second problem can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which have shown up to 10 dB improvement over the method without the rate and power optimization in medium and low signal-to-noise ratio cases.
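The first-stage problem — minimizing total quantization distortion for a fixed bit budget — has a classic closed-form Lagrange/KKT solution under the high-rate model D_i ≈ σ_i² 2^(−2b_i). The sketch below is that textbook solution, not the paper's exact sub-band formulation, and it ignores the non-negativity constraints on the b_i for brevity:

```python
import math

def allocate_bits(variances, total_bits):
    # High-rate model: D_i = var_i * 2^(-2 b_i). Minimizing sum(D_i) subject
    # to sum(b_i) = B via a Lagrange multiplier gives
    #   b_i = B/n + 0.5 * log2(var_i / geometric_mean(variances)),
    # which equalizes the per-band distortions.
    n = len(variances)
    log_gm = sum(math.log2(v) for v in variances) / n
    return [total_bits / n + 0.5 * (math.log2(v) - log_gm) for v in variances]
```

High-variance sub-bands receive more bits, and at the optimum every band contributes the same distortion.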
Dynamic optimization and its relation to classical and quantum constrained systems
NASA Astrophysics Data System (ADS)
Contreras, Mauricio; Pellicer, Rely; Villena, Marcelo
2017-08-01
We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it correctly. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop λ-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Ψ(x, t) = e^{iS(x, t)} in the quantum Schrödinger equation, a non-linear partial differential equation is obtained for the S function. For the right-hand-side quantization, this is the Hamilton-Jacobi-Bellman equation, when S(x, t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation in Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.
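The substitution step quoted in the abstract can be made explicit in the simplest setting. As a one-dimensional illustration with ħ = m = 1 (a generic textbook computation, not the paper's full model), inserting the phase ansatz into the Schrödinger equation gives:

```latex
% Schrödinger equation and the phase ansatz
i\,\partial_t \Psi = -\tfrac{1}{2}\,\partial_x^2 \Psi + V\Psi,
\qquad \Psi(x,t) = e^{iS(x,t)} .

% Derivatives of the ansatz
\partial_t \Psi = i S_t\, e^{iS},
\qquad \partial_x^2 \Psi = \left( i S_{xx} - S_x^2 \right) e^{iS} .

% Dividing by e^{iS} yields a non-linear PDE for S:
S_t + \tfrac{1}{2} S_x^2 + V = \tfrac{i}{2}\, S_{xx} .
```

Dropping the imaginary diffusion term on the right recovers a Hamilton-Jacobi(-Bellman)-type equation for S, which is the identification the abstract refers to.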
Quantizing and sampling considerations in digital phased-locked loops
NASA Technical Reports Server (NTRS)
Hurst, G. T.; Gupta, S. C.
1974-01-01
The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
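The uniform white-sequence model invoked here predicts an error variance of Δ²/12 for a quantizer with step Δ, which is easy to check empirically. This is a generic illustration of the noise model, unrelated to the specific DPLL equations of the paper:

```python
import random

def quantizer_error_variance(step, samples=200000, seed=7):
    # Quantize uniformly distributed inputs spanning many quantization cells;
    # under the uniform white-sequence model the error e = x - q(x) is
    # uniform on [-step/2, step/2] with variance step**2 / 12.
    rng = random.Random(seed)
    errors = []
    for _ in range(samples):
        x = rng.uniform(-10.0, 10.0)
        q = step * round(x / step)
        errors.append(x - q)
    mean = sum(errors) / samples
    return sum((e - mean) ** 2 for e in errors) / samples
```

Treating this variance as an added white-noise power is what yields the "effective SNR" used to predict quantized loop performance from infinitely fine quantized results.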
Video data compression using artificial neural network differential vector quantization
NASA Technical Reports Server (NTRS)
Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.
1991-01-01
An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in superior robustness to channel bit errors compared with methods that use variable-length codes.
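Frequency-Sensitive Competitive Learning can be sketched in scalar form: each codeword's win count inflates its effective distance, so rarely winning codewords stay competitive and the codebook does not collapse onto one cluster. This is a minimal 1-D illustration with invented data and learning rate, not the authors' VLSI implementation:

```python
def fscl_train(data, codebook, lr=0.1):
    # Frequency-Sensitive Competitive Learning: the winner minimizes
    # count * distance, and only the winner moves toward the input.
    counts = [1] * len(codebook)
    for x in data:
        scores = [counts[i] * abs(x - c) for i, c in enumerate(codebook)]
        w = scores.index(min(scores))
        codebook[w] += lr * (x - codebook[w])
        counts[w] += 1
    return codebook, counts
```

Even when both codewords start on the same side of the data, the count penalty forces them apart onto separate clusters.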
Nonperturbative light-front Hamiltonian methods
NASA Astrophysics Data System (ADS)
Hiller, J. R.
2016-09-01
We examine the current state-of-the-art in nonperturbative calculations done with Hamiltonians constructed in light-front quantization of various field theories. The language of light-front quantization is introduced, and important (numerical) techniques, such as Pauli-Villars regularization, discrete light-cone quantization, basis light-front quantization, the light-front coupled-cluster method, the renormalization group procedure for effective particles, sector-dependent renormalization, and the Lanczos diagonalization method, are surveyed. Specific applications are discussed for quenched scalar Yukawa theory, ϕ4 theory, ordinary Yukawa theory, supersymmetric Yang-Mills theory, quantum electrodynamics, and quantum chromodynamics. The content should serve as an introduction to these methods for anyone interested in doing such calculations and as a rallying point for those who wish to solve quantum chromodynamics in terms of wave functions rather than random samplings of Euclidean field configurations.
Coherent states for the relativistic harmonic oscillator
NASA Technical Reports Server (NTRS)
Aldaya, Victor; Guerrero, J.
1995-01-01
Recently we have obtained, on the basis of a group approach to quantization, a Bargmann-Fock-like realization of the Relativistic Harmonic Oscillator as well as a generalized Bargmann transform relating Fock wave functions and a set of relativistic Hermite polynomials. Nevertheless, the relativistic creation and annihilation operators satisfy typical relativistic commutation relations of the Lie product [ẑ, ẑ†] ≈ Energy (an SL(2,R) algebra). Here we find higher-order polarization operators on the SL(2,R) group, providing canonical creation and annihilation operators satisfying the Lie product [â, â†] = 1, the eigenstates of which are 'true' coherent states.
Mixing properties of the one-atom maser
NASA Astrophysics Data System (ADS)
Bruneau, Laurent
2014-06-01
We study the relaxation properties of the quantized electromagnetic field in a cavity under repeated interactions with single two-level atoms, the so-called one-atom maser. We improve the ergodic results obtained in Bruneau and Pillet (J Stat Phys 134(5-6):1071-1095, 2009) and prove that, whenever the atoms are initially distributed according to the canonical ensemble at a given temperature, all the invariant states are mixing. Under some non-resonance condition this invariant state is known to be thermal equilibrium at some renormalized temperature, and we prove that the mixing is then arbitrarily slow, in other words that there is no lower bound on the relaxation speed.
On the phase form of a deformation quantization with separation of variables
NASA Astrophysics Data System (ADS)
Karabegov, Alexander
2016-06-01
Given a star product with separation of variables on a pseudo-Kähler manifold, we obtain a new formal (1, 1)-form from its classifying form and call it the phase form of the star product. The cohomology class of a star product with separation of variables equals the class of its phase form. We show that the phase forms can be arbitrary and they bijectively parametrize the star products with separation of variables. We also describe the action of a change of the formal parameter on a star product with separation of variables, its formal Berezin transform, classifying form, phase form, and canonical trace density.
Quantum Computing and Second Quantization
Makaruk, Hanna Ewa
2017-02-10
Quantum computers are by their nature many-particle quantum systems. Both the many-particle arrangement and being quantum are necessary for the existence of the entangled states, which are responsible for the parallelism of quantum computers. Second quantization is a very important approximate method of describing such systems. This lecture presents the general idea of second quantization and briefly discusses some of its most important formulations.
Brynolfsson, Patrik; Nilsson, David; Torheim, Turid; Asklund, Thomas; Karlsson, Camilla Thellenberg; Trygg, Johan; Nyholm, Tufve; Garpebring, Anders
2017-06-22
In recent years, texture analysis of medical images has become increasingly popular in studies investigating diagnosis, classification and treatment response assessment of cancerous disease. Despite numerous applications in oncology and medical imaging in general, there is no consensus regarding texture analysis workflow, or reporting of parameter settings crucial for replication of results. The aim of this study was to assess how sensitive Haralick texture features of apparent diffusion coefficient (ADC) MR images are to changes in five parameters related to image acquisition and pre-processing: noise, resolution, how the ADC map is constructed, the choice of quantization method, and the number of gray levels in the quantized image. We found that noise, resolution, choice of quantization method and the number of gray levels in the quantized images had a significant influence on most texture features, and that the effect size varied between different features. Different methods for constructing the ADC maps did not have an impact on any texture feature. Based on our results, we recommend using images with similar resolutions and noise levels, using one quantization method, and the same number of gray levels in all quantized images, to make meaningful comparisons of texture feature results between different subjects.
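The study's recommendation to fix one quantization method is easy to motivate: the two common choices, equal-width and equal-frequency binning, can assign the same pixels to different gray levels. The sketch below is a generic illustration of both methods, not the study's actual pre-processing code:

```python
def quantize_equal_width(values, levels):
    # Equal-width binning: split [min, max] into `levels` equal intervals.
    lo, hi = min(values), max(values)
    width = (hi - lo) / levels or 1.0   # guard against a constant image
    return [min(int((v - lo) / width), levels - 1) for v in values]

def quantize_equal_freq(values, levels):
    # Equal-frequency binning: each gray level receives roughly the same
    # number of pixels (a histogram-equalized quantization).
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0] * len(values)
    for rank, i in enumerate(order):
        out[i] = min(rank * levels // len(values), levels - 1)
    return out
```

On skewed intensity distributions, such as ADC maps with outliers, the two methods produce different quantized images and hence different co-occurrence matrices and Haralick features.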
Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space.
Kalathil, Shaeen; Elias, Elizabeth
2015-11-01
This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. The CMFB admits an easy and efficient design approach: a non-uniform decomposition can be obtained simply by merging the appropriate filters of a uniform filter bank, and only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation, and weighted constrained least-squares approximation. The coefficients are quantized into CSD using a look-up table. The finite-precision CSD rounding deteriorates the filter bank performance. The performance of the filter bank is improved using suitably modified meta-heuristic algorithms. The meta-heuristic algorithms modified and used in this paper are the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm, and the Genetic Algorithm; they result in filter banks with lower implementation complexity, power consumption, and area requirements when compared with those of the conventional continuous-coefficient non-uniform CMFB.
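Canonic signed-digit recoding itself is a short algorithm: each coefficient is rewritten with digits in {−1, 0, +1} such that no two adjacent digits are nonzero, which minimizes the number of adders in a multiplierless filter. The self-contained sketch below illustrates the standard recoding (the paper's look-up-table rounding pipeline is more involved):

```python
def to_csd(n):
    # Canonic signed-digit encoding of an integer, least-significant digit
    # first. Digits lie in {-1, 0, +1}; whenever a nonzero digit d is
    # emitted, n - d is divisible by 4, so the next digit is always 0 --
    # hence no two adjacent digits are nonzero.
    digits = []
    while n != 0:
        if n % 2 == 0:
            digits.append(0)
        else:
            d = 2 - (n % 4)   # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            digits.append(d)
            n -= d
        n //= 2
    return digits
```

For example 7 = 8 − 1 becomes [−1, 0, 0, 1], i.e. one subtraction instead of the three additions of binary 111.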
A recursive technique for adaptive vector quantization
NASA Technical Reports Server (NTRS)
Lindsay, Robert A.
1989-01-01
Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery including video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches for designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that simultaneously designs the codebook as the data is being encoded or quantized. This is done by computing the centroid as a recursive moving average, where the centroids move after every vector is encoded. When computed over a fixed set of vectors, the recursive centroid is identical to the conventional batch centroid calculation. This method of centroid calculation can easily be combined with VQ encoding techniques. The quantizer changes after every encoded vector by recursively updating the centroid of minimum distance, which is the one selected by the encoder. Since the quantizer is changing definition, or state, after every encoded vector, the decoder must receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
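The recursive moving-average update described here is the running-mean identity c ← c + (x − c)/n applied to the winning codeword. A minimal sketch follows, under the assumption (a design choice for the example, not necessarily the author's) that each initial codeword counts as one prior sample:

```python
def encode_adaptive(vectors, codebook):
    # Adaptive VQ: encode each vector with the nearest codeword, then move
    # that codeword to the running mean of everything it has absorbed.
    counts = [1] * len(codebook)   # initial codeword counts as one sample
    indices = []
    for x in vectors:
        dists = [sum((a - b) ** 2 for a, b in zip(x, c)) for c in codebook]
        w = dists.index(min(dists))
        indices.append(w)
        counts[w] += 1
        # Recursive moving average: c <- c + (x - c)/n
        codebook[w] = [c + (a - c) / counts[w] for a, c in zip(x, codebook[w])]
    return indices, codebook
```

Because the decoder can replay the same deterministic update from the transmitted indices, only occasional codebook refreshes need to be sent as side information.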
Hao, Li-Ying; Yang, Guang-Hong
2013-09-01
This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By combining a novel matrix full-rank factorization technique with sliding surface design, the total failure of certain actuators can be handled under a special actuator redundancy assumption. To compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of the design parameters. Compared with existing results, the derived inequality condition yields stronger fault-tolerance ability and a much wider scope of applicability. With a static adjustment policy for the quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances, and parameter uncertainties without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated on a structural-acoustic model of a rocket fairing.
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostically relevant information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. An optimal quadtree method was then employed to partition each subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as reference algorithms. Experimental results show that the proposed method improves compression performance and achieves a balance between compression ratio and visual image quality. PMID:23049544
Educational Information Quantization for Improving Content Quality in Learning Management Systems
ERIC Educational Resources Information Center
Rybanov, Alexander Aleksandrovich
2014-01-01
The article offers an educational information quantization method for improving content quality in Learning Management Systems. The paper considers the analysis of the quality of quantized presentation of educational information, based on quantitative text parameters: average frequencies of the parts of speech used in the text; formal…
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize, and hence compress, the input (or feature) space. Unlike sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, based on a simple online vector quantization method. An analytical study of the mean square convergence is carried out: the energy conservation relation for QKLMS is established, and on this basis we derive a sufficient condition for mean square convergence as well as lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
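The QKLMS idea described above can be sketched as follows. This is a minimal reconstruction from the abstract, not the authors' code: the Gaussian kernel, the step size, and the quantization threshold `eps` are assumed parameter choices.

```python
import math

def gauss(x, y, sigma=1.0):
    """Gaussian (RBF) kernel between two equal-length vectors."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))

class QKLMS:
    """Quantized kernel LMS sketch: a "redundant" input (one falling within
    eps of an existing center) updates that center's coefficient instead of
    growing the radial basis function network."""

    def __init__(self, step=0.5, eps=0.3, sigma=1.0):
        self.step, self.eps, self.sigma = step, eps, sigma
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * gauss(x, c, self.sigma)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, d):
        """One online step on input x with desired output d; returns the error."""
        e = d - self.predict(x)
        if not self.centers:
            self.centers.append(list(x)); self.alphas.append(self.step * e)
            return e
        dists = [math.dist(x, c) for c in self.centers]
        j = dists.index(min(dists))
        if dists[j] <= self.eps:          # quantize: merge into nearest center
            self.alphas[j] += self.step * e
        else:                             # novel region: allocate a new center
            self.centers.append(list(x)); self.alphas.append(self.step * e)
        return e
```

With `eps = 0`, every sample adds a center and the sketch reduces to ordinary kernel LMS; a larger `eps` trades accuracy for a more compact network.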
Hao, Li-Ying; Park, Ju H; Ye, Dan
2017-09-01
In this paper, a new robust fault-tolerant compensation control method for uncertain linear systems over networks is proposed, where only quantized signals are assumed to be available. The approach is based on the integral sliding mode (ISM) method, in which two kinds of integral sliding surfaces are constructed: a continuous-state-dependent surface used for sliding mode stability analysis, and a quantized-state-dependent surface used for ISM controller design. A scheme combining the adaptive ISM controller with a quantization-parameter adjustment strategy is then proposed. Using H∞ analysis techniques, it is shown that once the system is in the sliding mode, disturbance attenuation and fault tolerance are achieved from the initial time without requiring any fault information. Finally, the effectiveness of the proposed ISM fault-tolerant control schemes against quantization errors is demonstrated in simulation.
Image-adapted visually weighted quantization matrices for digital image compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1994-01-01
A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts, or customizes, the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and by an error-pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
Quantized phase coding and connected region labeling for absolute phase retrieval.
Chen, Xiangcheng; Wang, Yuwei; Wang, Yajun; Ma, Mengchao; Zeng, Chunnian
2016-12-12
This paper proposes an absolute phase retrieval method for complex object measurement based on quantized phase coding and connected region labeling. A specific code sequence is embedded into the quantized phase of three coded fringes. Connected regions of different codes are labeled and assigned 3-digit codes combining the current period and its neighbors. Wrapped phase spanning more than 36 periods can be restored with reference to the code sequence. Experimental results verify the capability of the proposed method to measure multiple isolated objects.
High Order Entropy-Constrained Residual VQ for Lossless Compression of Images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen
1995-01-01
High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
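The residual (multi-stage) quantization at the heart of the method above can be sketched as follows; the stage codebooks here are toy placeholders, and the entropy-constrained codeword selection of the actual method is omitted for brevity.

```python
def nearest(vec, codebook):
    """Return the codeword minimizing squared Euclidean distance to vec."""
    return min(codebook, key=lambda c: sum((v - w) ** 2 for v, w in zip(vec, c)))

def residual_vq(vec, stages):
    """Multi-stage (residual) VQ: each stage quantizes the residual left by
    the previous stages, and the reconstruction is the sum of the selected
    stage codewords.  The final residual is what a first-order entropy coder
    would encode in the lossless scheme described above."""
    residual, recon = list(vec), [0.0] * len(vec)
    for codebook in stages:
        cw = nearest(residual, codebook)
        recon = [r + c for r, c in zip(recon, cw)]
        residual = [v - c for v, c in zip(residual, cw)]
    return recon, residual
```

Each stage only needs a small codebook, so a deep cascade reaches fine precision without the exponential codebook growth of a single-stage quantizer at the same rate.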
NASA Astrophysics Data System (ADS)
Karyakin, Yu. E.; Nekhozhin, M. A.; Pletnev, A. A.
2013-07-01
A method for calculating the quantity of moisture in a metal-concrete container during its charging with spent nuclear fuel is proposed. A computational method, and results obtained with it, for conservatively estimating the vacuum drying time of a container charged with spent nuclear fuel are presented for loading technologies with and without quantization of the lower fuel element cluster. It is shown that the absence of quantization in loading spent fuel increases the vacuum drying time of the metal-concrete container severalfold.
On the Path Integral in Non-Commutative (nc) Qft
NASA Astrophysics Data System (ADS)
Dehne, Christoph
2008-09-01
As is generally known, different quantization schemes applied to field theory on NC spacetime lead to Feynman rules with different physical properties if time does not commute with space. In particular, the Feynman rules derived from the path integral corresponding to the T*-product (the so-called naïve Feynman rules) violate the causal time-ordering property. Within the Hamiltonian approach to quantum field theory, we show that we can (formally) modify the time ordering encoded in the above path integral. The resulting Feynman rules are identical to those obtained in the canonical approach via the Gell-Mann-Low formula (with T-ordering); they thus preserve unitarity and causal time ordering.
NASA Technical Reports Server (NTRS)
Tsue, Yasuhiko
1994-01-01
A general framework for a time-dependent variational approach in terms of squeezed coherent states is constructed, with the aim of describing quantal systems by means of classical mechanics including higher-order quantal effects, with the aid of the canonicity conditions developed in time-dependent Hartree-Fock theory. The Maslov phase occurring in the semi-classical quantization rule is investigated in this framework. In the semi-classical limit of this approach, it is shown definitively that the Maslov phase has a geometric nature analogous to the Berry phase. It is also indicated that this squeezed coherent state approach is a possible way to go beyond the usual WKB approximation.
Quantum gravitational corrections from the Wheeler–DeWitt equation for scalar–tensor theories
NASA Astrophysics Data System (ADS)
Steinwachs, Christian F.; van der Wild, Matthijs L.
2018-07-01
We perform the canonical quantization of a general scalar–tensor theory and derive the first quantum gravitational corrections following from a semiclassical expansion of the Wheeler–DeWitt equation. The non-minimal coupling of the scalar field to gravity induces a derivative coupling between the scalar field and the gravitational degrees of freedom, which prevents a direct application of the expansion scheme. We address this technical difficulty by transforming the theory from the Jordan frame to the Einstein frame. We find that a large non-minimal coupling can have strong effects on the quantum gravitational correction terms. We briefly discuss these effects in the context of the specific model of Higgs inflation.
Quantum theory of structured monochromatic light
NASA Astrophysics Data System (ADS)
Punnoose, Alexander; Tu, J. J.
2017-08-01
Applications that envisage utilizing the orbital angular momentum (OAM) at the single photon level assume that the OAM degrees of freedom of the photons are orthogonal. To test this critical assumption, we quantize the beam-like solutions of the vector Helmholtz equation from first principles. We show that although the photon operators of a diffracting monochromatic beam do not in general satisfy the canonical commutation relations, implying that the photon states in Fock space are not orthogonal, the states are bona fide eigenstates of the number and Hamiltonian operators. As a result, the representation for the photon operators presented in this work form a natural basis to study structured monochromatic light at the single photon level.
Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information
NASA Technical Reports Server (NTRS)
Pence, William D.; White, R. L.; Seaman, R.
2010-01-01
We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
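The dithered quantization step described above might be sketched as follows. This is a minimal illustration of subtractive dithering with a seeded generator, not the actual fpack algorithm (which works on scaled integers with its own dither tables); `delta` and the seed are assumed parameters.

```python
import random

def quantize_dithered(pixels, delta, seed=1):
    """Quantize pixel values to integer levels of width delta, adding a
    reproducible uniform dither offset in [-0.5, 0.5) before rounding."""
    rng = random.Random(seed)
    return [round(x / delta + rng.random() - 0.5) for x in pixels]

def dequantize_dithered(levels, delta, seed=1):
    """Restore pixel values by subtracting the same seeded dither stream,
    so the dither adds no bias -- only bounded, signal-independent noise."""
    rng = random.Random(seed)
    return [(q - (rng.random() - 0.5)) * delta for q in levels]
```

Because the decoder regenerates the identical dither stream, the round-trip error per pixel is bounded by `delta / 2` and is decorrelated from the signal, which is why dithering preserves photometric precision even at coarse quantization.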
On Correspondence of BRST-BFV, Dirac, and Refined Algebraic Quantizations of Constrained Systems
NASA Astrophysics Data System (ADS)
Shvedov, O. Yu.
2002-11-01
The correspondence between BRST-BFV, Dirac, and refined algebraic (group averaging, projection operator) approaches to quantizing constrained systems is analyzed. For the closed-algebra case, it is shown that the component of the BFV wave function corresponding to the maximal (minimal) value of the number of ghosts and antighosts in the Schrödinger representation may be viewed as a wave function in the refined algebraic (Dirac) quantization approach. The Giulini-Marolf group averaging formula for the inner product in the refined algebraic quantization approach is obtained from the Batalin-Marnelius prescription for the BRST-BFV inner product, which should generally be modified due to topological problems. The considered prescription for the correspondence of states is observed to be applicable to the open-algebra case. The refined algebraic quantization approach is then generalized to the case of nontrivial structure functions. A simple example is discussed. The correspondence of observables for the different quantization methods is also investigated.
Generalized radiation-field quantization method and the Petermann excess-noise factor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Y.-J.; Siegman, A.E.; E.L. Ginzton Laboratory, Stanford University, Stanford, California 94305
2003-10-01
We propose a generalized radiation-field quantization formalism in which quantization does not have to be referenced to a set of power-orthogonal eigenmodes, as conventionally required. This formalism can be used to directly quantize the true system eigenmodes, which can be non-power-orthogonal due to the open nature of the system or the gain/loss medium involved. We apply this generalized field quantization to the laser linewidth problem, in particular to lasers with non-power-orthogonal oscillation modes, and derive the excess-noise factor in a fully quantum-mechanical framework. We also show that, despite the excess-noise factor for oscillating modes, the total spatially averaged decay rate for the laser atoms remains unchanged.
BFV approach to geometric quantization
NASA Astrophysics Data System (ADS)
Fradkin, E. S.; Linetsky, V. Ya.
1994-12-01
A gauge-invariant approach to geometric quantization is developed. It yields a complete quantum description for dynamical systems with non-trivial geometry and topology of the phase space. The method is a global version of the gauge-invariant approach to quantization of second-class constraints developed by Batalin, Fradkin and Fradkina (BFF). Physical quantum states and quantum observables are respectively described by covariantly constant sections of the Fock bundle and the bundle of hermitian operators over the phase space with a flat connection defined by the nilpotent BFV-BRST operator. Perturbative calculation of the first non-trivial quantum correction to the Poisson brackets leads to the Chevalley cocycle known in deformation quantization. Consistency conditions lead to a topological quantization condition with metaplectic anomaly.
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Watson, Andrew B.
1994-01-01
The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial-frequency-related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
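The scale-and-pool step described above can be sketched as follows. The threshold values, the Minkowski exponent `beta`, and the flattened (per-coefficient) layout are illustrative assumptions; the actual model derives thresholds from display parameters and adjusts them per block.

```python
def perceptual_error(coeffs, qmatrix, thresholds, beta=4.0):
    """Pool DCT quantization errors into one perceptual error number:
    each coefficient's quantization error is scaled by its visual threshold,
    then the scaled errors are combined with a Minkowski (beta-norm) sum."""
    total = 0.0
    for c, q, t in zip(coeffs, qmatrix, thresholds):
        err = abs(c - q * round(c / q))   # quantization error of this coefficient
        total += (err / t) ** beta        # scale by threshold, accumulate
    return total ** (1.0 / beta)
```

An optimizer can then search over `qmatrix` entries for the smallest bit rate meeting a target value of this pooled error, or vice versa, which is the trade-off the abstract describes.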
Hamiltonian thermodynamics of three-dimensional dilatonic black holes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dias, Goncalo A. S.; Lemos, Jose P. S.
2008-08-15
The action for a class of three-dimensional dilaton-gravity theories with a negative cosmological constant can be recast as a Brans-Dicke type action with free parameter ω. These theories have static spherically symmetric black holes. Those with well-formulated asymptotics are studied through a Hamiltonian formalism, and their thermodynamic properties are determined. The theories studied are general relativity (ω → ∞), a dimensionally reduced cylindrical four-dimensional general relativity theory (ω = 0), and a theory representing a class of theories (ω = -3). The Hamiltonian formalism is set up in three dimensions through foliations on the right region of the Carter-Penrose diagram, with the bifurcation 1-sphere as the left boundary and anti-de Sitter infinity as the right boundary. The metric functions on the foliated hypersurfaces are the canonical coordinates. The Hamiltonian action is written, the Hamiltonian being a sum of constraints. One finds a new action which yields an unconstrained theory with one pair of canonical coordinates (M, P_M), M being the mass parameter and P_M its conjugate momentum. The resulting Hamiltonian is a sum of boundary terms only. A quantization of the theory is performed: the Schrödinger evolution operator is constructed, the trace is taken, and the partition function of the canonical ensemble is obtained. The black hole entropies differ, in general, from the usual quarter of the horizon area due to the dilaton.
Interferometric tests of Planckian quantum geometry models
Kwon, Ohkyung; Hogan, Craig J.
2016-04-19
The effect of Planck-scale quantum geometrical effects on measurements with interferometers is estimated with standard physics and with a variety of proposed extensions. It is shown that the effects are negligible in standard field theory with canonically quantized gravity. Statistical noise levels are estimated in a variety of proposals for nonstandard metric fluctuations, and these alternatives are constrained using upper bounds on stochastic metric fluctuations from LIGO. Idealized models of several interferometer system architectures are used to predict signal noise spectra in a quantum geometry that cannot be described by a fluctuating metric, in which position noise arises from holographic bounds on directional information. Lastly, predictions in this case are shown to be close to current and projected experimental bounds.
Quantum cluster algebras and quantum nilpotent algebras.
Goodearl, Kenneth R; Yakimov, Milen T
2014-07-08
A major direction in the theory of cluster algebras is to construct (quantum) cluster algebra structures on the (quantized) coordinate rings of various families of varieties arising in Lie theory. We prove that all algebras in a very large axiomatically defined class of noncommutative algebras possess canonical quantum cluster algebra structures. Furthermore, they coincide with the corresponding upper quantum cluster algebras. We also establish analogs of these results for a large class of Poisson nilpotent algebras. Many important families of coordinate rings are subsumed in the class we are covering, which leads to a broad range of applications of the general results to the above-mentioned types of problems. As a consequence, we prove the Berenstein-Zelevinsky conjecture [Berenstein A, Zelevinsky A (2005) Adv Math 195:405-455] for the quantized coordinate rings of double Bruhat cells and construct quantum cluster algebra structures on all quantum unipotent groups, extending the theorem of Geiß et al. [Geiß C, et al. (2013) Selecta Math 19:337-397] for the case of symmetric Kac-Moody groups. Moreover, we prove that the upper cluster algebras of Berenstein et al. [Berenstein A, et al. (2005) Duke Math J 126:1-52] associated with double Bruhat cells coincide with the corresponding cluster algebras.
Relational particle models: I. Reconciliation with standard classical and quantum theory
NASA Astrophysics Data System (ADS)
Anderson, Edward
2006-04-01
This paper concerns the absolute versus relative motion debate. The Barbour and Bertotti (1982) work may be viewed as an indirectly set up relational formulation of a portion of Newtonian mechanics. I consider further direct formulations of this and argue that the portion in question—universes with zero total angular momentum that are conservative and with kinetic terms that are (homogeneous) quadratic in their velocities—is capable of accommodating a wide range of classical physics phenomena. Furthermore, as I develop in paper II, this relational particle model is a useful toy model for canonical general relativity. I consider what happens if one quantizes relational rather than absolute mechanics, indeed whether the latter is misleading. By exploiting Jacobi coordinates, I show how to access many examples of quantized relational particle models and then interpret these from a relational perspective. By these means, previous suggestions of bad semiclassicality for such models can be eluded. I show how small (particle number) universe relational particle model examples display eigenspectrum truncation, gaps, energy interlocking and counterbalanced total angular momentum. These features mean that these small universe models make interesting toy models for some aspects of closed-universe quantum cosmology. Meanwhile, these features do not compromise the recovery of reality as regards the practicalities of experimentation in a large universe such as our own.
Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.
Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann
2017-01-01
Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application to big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme which adapts its output to the input data density. With this quantization scheme, a large data set is quantized to a small subset, generally with considerable sample-size reduction. In particular, this reduction can save significant computational cost when the quantized subset is used for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
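The single-threshold quantization idea above can be sketched as follows; this is a plain online distance-threshold rule, and the threshold value is an assumed parameter (the actual DQS chooses it in a density-dependent way).

```python
def density_quantize(samples, threshold):
    """Quantize a data set to a small representative subset: a sample is kept
    only if it lies farther than `threshold` from every sample already kept,
    so dense regions of the input contribute few representatives."""
    kept = []
    for x in samples:
        if all(sum((a - b) ** 2 for a, b in zip(x, k)) ** 0.5 > threshold
               for k in kept):
            kept.append(x)
    return kept
```

The kept subset is what a Nyström-style feature approximation would then be built on, which is where the sample-size reduction translates into computational savings.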
Image data compression having minimum perceptual error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1995-01-01
A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts, or customizes, the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and by an error-pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
Dynamic State Estimation of Power Systems With Quantization Effects: A Recursive Filter Approach.
Hu, Liang; Wang, Zidong; Liu, Xiaohui
2016-08-01
In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with such kind of introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
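The logarithmic quantizer mentioned above can be sketched in its standard quantized-control form, with levels u_i = ρ^i · u0; the parameter values are illustrative and the paper's exact parameterization may differ.

```python
import math

def log_quantize(v, u0=1.0, rho=0.5):
    """Logarithmic quantizer with levels u_i = rho**i * u0 (i any integer),
    mapping v to the level whose sector contains it.  It guarantees
    |q(v) - v| <= delta * |v| with delta = (1 - rho) / (1 + rho), i.e. the
    quantization error scales with the signal -- the property that lets the
    filter treat quantization errors as norm-bounded uncertainties."""
    if v == 0.0:
        return 0.0
    delta = (1 - rho) / (1 + rho)
    mag = abs(v)
    # Largest integer i with u_i / (1 + delta) < mag <= u_i / (1 - delta):
    i = math.floor(math.log(mag * (1 - delta) / u0) / math.log(rho))
    return math.copysign(rho ** i * u0, v)
```

A smaller ρ gives coarser levels and a larger relative-error bound δ, which is exactly the trade-off a quantization-parameter adjustment strategy tunes.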
Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.
Li, Yeqing; Liu, Wei; Huang, Junzhou
2018-06-01
Recently with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and performing similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.
Image Coding Based on Address Vector Quantization.
NASA Astrophysics Data System (ADS)
Feng, Yushu
Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done using a table-lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the Kohonen neural network for codebook design. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme but at about 1/2 to 1/3 the bit rate of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed.
In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self -Organizing Adaptive VQ Technique" is presented. In addition to chapters 2 through 6 which report on new work, this dissertation includes one chapter (chapter 1) and part of chapter 2 which review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in conclusion.
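The basic VQ pipeline described in the abstract above (codebook training by iterative clustering, nearest-codeword encoding, and table-lookup decoding) can be sketched as follows; the codebook size and vector dimension are arbitrary illustrative choices, not those used in the thesis:

```python
import numpy as np

def build_codebook(training_vectors, k, iters=20, seed=0):
    """Toy generalized-Lloyd (k-means) codebook training."""
    rng = np.random.default_rng(seed)
    codebook = training_vectors[rng.choice(len(training_vectors), k, replace=False)]
    for _ in range(iters):
        # nearest-codeword assignment
        d = ((training_vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # centroid update (keep the old codeword if a cell is empty)
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = training_vectors[labels == j].mean(0)
    return codebook

def vq_encode(vectors, codebook):
    """Replace each vector by the index of its best-matching codeword."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

def vq_decode(indices, codebook):
    """Table lookup: each index is simply an address into the codebook."""
    return codebook[indices]
```

Only the indices travel over the channel; compression comes from the index costing log2(k) bits instead of a full vector.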
Quantum games of opinion formation based on the Marinatto-Weber quantum game scheme
NASA Astrophysics Data System (ADS)
Deng, Xinyang; Deng, Yong; Liu, Qi; Shi, Lei; Wang, Zhen
2016-06-01
Quantization has become a new way to investigate classical game theory since quantum strategies and quantum games were proposed. In the existing studies, many typical game models, such as the prisoner's dilemma, battle of the sexes, and the Hawk-Dove game, have been extensively explored using the quantization approach. Along similar lines, here several game models of opinion formation are quantized on the basis of the Marinatto-Weber quantum game scheme, a frequently used scheme for converting classical games to quantum versions. Our results show that quantization can remarkably change the properties of some classical opinion formation game models so as to generate win-win outcomes.
Improved image decompression for reduced transform coding artifacts
NASA Technical Reports Server (NTRS)
Orourke, Thomas P.; Stevenson, Robert L.
1994-01-01
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
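The projection step described above, mapping an estimated coefficient back into the quantization partition cell defined by the compressed image, can be sketched for a uniform scalar quantizer; the uniform cell geometry and the step size are simplifying assumptions for illustration:

```python
import numpy as np

def project_to_cell(coeffs, quant_indices, step):
    """Project estimated transform coefficients back into the quantization
    partition cells defined by the compressed image.  For a uniform scalar
    quantizer, index q corresponds to the cell [(q-0.5)*step, (q+0.5)*step],
    so the projection is a per-coefficient clip."""
    lo = (quant_indices - 0.5) * step
    hi = (quant_indices + 0.5) * step
    return np.clip(coeffs, lo, hi)
```

In the iterative scheme of the paper, a gradient step driven by the MRF image model would alternate with a projection of this kind, so the final estimate both fits the model and remains consistent with the transmitted quantization indices.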
NASA Astrophysics Data System (ADS)
Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui
2017-01-01
A quantized block compressive sensing (QBCS) framework, which incorporates the universal measurement, quantization/inverse quantization, entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS, with entropy-aware projected Landweber (QBCS-EPL), which leverages the full-image sparse transform without Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. Through analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm can obtain better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
NASA Astrophysics Data System (ADS)
Ivanov, K. A.; Nikolaev, V. V.; Gubaydullin, A. R.; Kaliteevski, M. A.
2017-10-01
Based on the scattering matrix formalism, we have developed a method of quantization of an electromagnetic field in two-dimensional photonic nanostructures (S-quantization in the two-dimensional case). In this method, the fields at the boundaries of the quantization box are expanded into a Fourier series and are related with each other by the scattering matrix of the system, which is the product of matrices describing the propagation of plane waves in empty regions of the quantization box and the scattering matrix of the photonic structure (or an arbitrary inhomogeneity). The quantization condition (similarly to the one-dimensional case) is formulated as follows: the eigenvalues of the scattering matrix are equal to unity, which corresponds to the fact that the set of waves that are incident on the structure (components of the expansion into the Fourier series) is equal to the set of waves that travel away from the structure (outgoing waves). The coefficients of the matrix of scattering through the inhomogeneous structure have been calculated using the following procedure: the structure is divided into parallel layers such that the permittivity in each layer varies only along the axis that is perpendicular to the layers. Using the Fourier transform, the Maxwell equations have been written in the form of a matrix that relates the Fourier components of the electric field at the boundaries of neighboring layers. The product of these matrices is the transfer matrix in the basis of the Fourier components of the electric field. Represented in a block form, it is composed of matrices that contain the reflection and transmission coefficients for the Fourier components of the field, which, in turn, constitute the scattering matrix. The developed method considerably simplifies the calculation scheme for the analysis of the behavior of the electromagnetic field in structures with a two-dimensional inhomogeneity.
In addition, this method makes it possible to obviate difficulties that arise in the analysis of the Purcell effect because of the divergence of the integral describing the effective volume of the mode in open systems.
Subband directional vector quantization in radiological image compression
NASA Astrophysics Data System (ADS)
Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel
1992-05-01
The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform domain (DCT). Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.
NASA Astrophysics Data System (ADS)
Motoyui, Nobuyuki; Yamada, Mitsuru
We investigate a two-dimensional N = 2 supersymmetric model which consists of n chiral superfields with a Kähler potential. When we define quantum observables, we are always plagued by the operator ordering problem. Among various ways to fix the operator order, we rely upon supersymmetry. We demonstrate that the correct operator order is given by requiring the super-Poincaré algebra and carrying out the canonical Dirac bracket quantization. This is shown to be also true when the supersymmetry algebra has a central extension due to the presence of a topological soliton. It is also shown that the path of the soliton is a straight line in the complex plane of the superpotential W and that a triangular mass inequality holds. One half of the supersymmetry is broken by the presence of the soliton.
Quantized impedance dealing with the damping behavior of the one-dimensional oscillator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Jinghao; Zhang, Jing; Li, Yuan
2015-11-15
A quantized impedance is proposed to theoretically establish the relationship between the atomic eigenfrequency and the intrinsic frequency of the one-dimensional oscillator in this paper. The classical oscillator is modified by the idea that the electron transition is treated as a charge-discharge process of a suggested capacitor with the capacitive energy equal to the energy level difference of the jumping electron. The quantized capacitance of the impedance interacting with the jumping electron can lead the resonant frequency of the oscillator to the same as the atomic eigenfrequency. The quantized resistance reflects that the damping coefficient of the oscillator is the mean collision frequency of the transition electron. In addition, the first and third order electric susceptibilities based on the oscillator are accordingly quantized. Our simulation of the hydrogen atom emission spectrum based on the proposed method agrees well with the experimental one. Our results exhibit that the one-dimensional oscillator with the quantized impedance may become useful in the estimation of the refractive index and one- or multi-photon absorption coefficients of some nonmagnetic media composed of hydrogen-like atoms.
Quantized impedance dealing with the damping behavior of the one-dimensional oscillator
NASA Astrophysics Data System (ADS)
Zhu, Jinghao; Zhang, Jing; Li, Yuan; Zhang, Yong; Fang, Zhengji; Zhao, Peide; Li, Erping
2015-11-01
A quantized impedance is proposed to theoretically establish the relationship between the atomic eigenfrequency and the intrinsic frequency of the one-dimensional oscillator in this paper. The classical oscillator is modified by the idea that the electron transition is treated as a charge-discharge process of a suggested capacitor with the capacitive energy equal to the energy level difference of the jumping electron. The quantized capacitance of the impedance interacting with the jumping electron can lead the resonant frequency of the oscillator to the same as the atomic eigenfrequency. The quantized resistance reflects that the damping coefficient of the oscillator is the mean collision frequency of the transition electron. In addition, the first and third order electric susceptibilities based on the oscillator are accordingly quantized. Our simulation of the hydrogen atom emission spectrum based on the proposed method agrees well with the experimental one. Our results exhibit that the one-dimensional oscillator with the quantized impedance may become useful in the estimation of the refractive index and one- or multi-photon absorption coefficients of some nonmagnetic media composed of hydrogen-like atoms.
Image compression system and method having optimized quantization tables
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)
1998-01-01
A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also, methods are provided for generating a rate-distortion-optimal quantization table, performing discrete cosine transform-based digital image compression, and operating a discrete cosine transform-based digital image compression and decompression system.
Three paths toward the quantum angle operator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gazeau, Jean Pierre, E-mail: gazeau@apc.univ-paris7.fr; Szafraniec, Franciszek Hugon, E-mail: franciszek.szafraniec@uj.edu.pl
2016-12-15
We examine mathematical questions around the angle (or phase) operator associated with a number operator through a short list of basic requirements. We implement three methods of construction of the quantum angle. The first one is based on operator theory and parallels the definition of angle for the upper half-circle through its cosine and completed by a sign inversion. The two other methods are integral quantizations generalizing in a certain sense the Berezin–Klauder approaches. One method pertains to Weyl–Heisenberg integral quantization of the plane viewed as the phase space of the motion on the line. It depends on a family of “weight” functions on the plane. The third method rests upon coherent state quantization of the cylinder viewed as the phase space of the motion on the circle. The construction of these coherent states depends on a family of probability distributions on the line.
Electroweak standard model with very special relativity
NASA Astrophysics Data System (ADS)
Alfaro, Jorge; González, Pablo; Ávila, Ricardo
2015-05-01
The very special relativity electroweak Standard Model (VSR EW SM) is a theory with SU(2)_L × U(1)_R symmetry, with the same number of leptons and gauge fields as in the usual Weinberg-Salam model. No new particles are introduced. The model is renormalizable and unitarity is preserved. However, photons obtain mass and the massive bosons obtain different masses for different polarizations. Besides, neutrino masses are generated. A VSR-invariant term will produce neutrino oscillations and new processes are allowed. In particular, we compute the rate of the decays μ → e + γ. All these processes, which are forbidden in the electroweak Standard Model, put stringent bounds on the parameters of our model and measure the violation of Lorentz invariance. We investigate the canonical quantization of this nonlocal model. Second quantization is carried out, and we obtain a well-defined particle content. Additionally, we do a counting of the degrees of freedom associated with the gauge bosons involved in this work, after spontaneous symmetry breaking has been realized. Violations of Lorentz invariance have been predicted by several theories of quantum gravity [J. Alfaro, H. Morales-Tecotl, and L. F. Urrutia, Phys. Rev. Lett. 84, 2318 (2000); Phys. Rev. D 65, 103509 (2002)]. It is a remarkable possibility that the low-energy effects of Lorentz violation induced by quantum gravity could be contained in the nonlocal terms of the VSR EW SM.
Colliding holes in Riemann surfaces and quantum cluster algebras
NASA Astrophysics Data System (ADS)
Chekhov, Leonid; Mazzocco, Marta
2018-01-01
In this paper, we describe a new type of surgery for non-compact Riemann surfaces that naturally appears when colliding two holes or two sides of the same hole in an orientable Riemann surface with boundary (and possibly orbifold points). As a result of this surgery, bordered cusps appear on the boundary components of the Riemann surface. In Poincaré uniformization, these bordered cusps correspond to ideal triangles in the fundamental domain. We introduce the notion of bordered cusped Teichmüller space and endow it with a Poisson structure, quantization of which is achieved with a canonical quantum ordering. We give a complete combinatorial description of the bordered cusped Teichmüller space by introducing the notion of maximal cusped lamination, a lamination consisting of geodesic arcs between bordered cusps and closed geodesics homotopic to the boundaries such that it triangulates the Riemann surface. We show that each bordered cusp carries a natural decoration, i.e. a choice of a horocycle, so that the lengths of the arcs in the maximal cusped lamination are defined as λ-lengths in Thurston-Penner terminology. We compute the Goldman bracket explicitly in terms of these λ-lengths and show that the groupoid of flip morphisms acts as a generalized cluster algebra mutation. From the physical point of view, our construction provides an explicit coordinatization of moduli spaces of open/closed string worldsheets and their quantization.
Image Data Compression Having Minimum Perceptual Error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1997-01-01
A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
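A minimal sketch of the core operation above, quantizing DCT coefficients by per-entry division with a quantization matrix; an identity matrix stands in here for the perceptually weighted matrix of the invention, and `scipy` is assumed available:

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, qmatrix):
    """Quantize one 8x8 block: forward DCT, then divide each coefficient
    by its quantization-matrix entry and round to the nearest integer."""
    coeffs = dctn(block, norm="ortho")
    return np.round(coeffs / qmatrix).astype(int)

def dequantize_block(qcoeffs, qmatrix):
    """Inverse: rescale the integer indices and apply the inverse DCT."""
    return idctn(qcoeffs * qmatrix, norm="ortho")
```

Larger entries in `qmatrix` discard more information in the corresponding frequency; a perceptual matrix assigns each frequency the largest step size whose error remains invisible at the target viewing conditions.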
Symplectic Quantization of a Vector-Tensor Gauge Theory with Topological Coupling
NASA Astrophysics Data System (ADS)
Barcelos-Neto, J.; Silva, M. B. D.
We use the symplectic formalism to quantize a gauge theory where vector and tensor fields are coupled in a topological way. This is an example of a reducible theory, and a ghosts-of-ghosts-like procedure of the BFV method is applied, but in terms of Lagrange multipliers. Our final results are in agreement with the ones found in the literature by using the Dirac method.
NASA Astrophysics Data System (ADS)
Menezes, G.; Svaiter, N. F.
2006-07-01
We use the method of stochastic quantization in a topological field theory defined in a Euclidean space, assuming a Langevin equation with a memory kernel. We show that our procedure for the Abelian Chern-Simons theory converges regardless of the nature of the Chern-Simons coefficient.
Symplectic Quantization of a Reducible Theory
NASA Astrophysics Data System (ADS)
Barcelos-Neto, J.; Silva, M. B. D.
We use the symplectic formalism to quantize the Abelian antisymmetric tensor gauge field. It is a reducible theory in the sense that not all of its constraints are independent. A ghosts-of-ghosts-like procedure of the BFV method has to be used, but in terms of Lagrange multipliers.
Du, Baoqiang; Dong, Shaofeng; Wang, Yanfeng; Guo, Shuting; Cao, Lingzhi; Zhou, Wei; Zuo, Yandi; Liu, Dan
2013-11-01
A wide-frequency, high-resolution frequency measurement method based on the quantized phase-step law is presented in this paper. Utilizing the variation law of the phase differences, direct different-frequency phase processing, and the phase group synchronization phenomenon, combined with an A/D converter and the adaptive phase-shifting principle, a counter gate is established at the phase coincidences at one-group intervals, which eliminates the ±1 counter error of the traditional frequency measurement method. More importantly, direct phase comparison, measurement, and control between arbitrary periodic signals have been realized without frequency normalization in this method. Experimental results show that sub-picosecond resolution can easily be obtained in frequency measurement, frequency standard comparison, and phase-locked control based on the phase quantization processing technique. The method may be widely used in navigation and positioning, space techniques, communication, radar, astronomy, atomic frequency standards, and other high-tech fields.
Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.
1990-01-01
A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.
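The threshold-inversion idea in the last sentence can be sketched as follows; the 4x4 Bayer matrix and the two-channel setup are illustrative assumptions, not the specific configuration of the paper:

```python
import numpy as np

# 4x4 Bayer ordered-dither threshold matrix, scaled into (0, 1)
BAYER4 = (1 + np.array([[ 0,  8,  2, 10],
                        [12,  4, 14,  6],
                        [ 3, 11,  1,  9],
                        [15,  7, 13,  5]])) / 17.0

def ordered_dither(channel, thresholds):
    """Binarize one color channel (values in [0, 1]) against a tiled
    threshold matrix."""
    h, w = channel.shape
    th, tw = thresholds.shape
    tiled = np.tile(thresholds, (h // th + 1, w // tw + 1))[:h, :w]
    return (channel > tiled).astype(float)

def dither_rg(red, green, anticorrelate=True):
    """Dither two channels; inverting the threshold matrix for one channel
    negatively correlates the two quantization-error patterns, pushing the
    error out of the luminance channel and into chrominance."""
    tr = BAYER4
    tg = 1.0 - BAYER4 if anticorrelate else BAYER4
    return ordered_dither(red, tr), ordered_dither(green, tg)
```

For a uniform mid-gray input, the inverted thresholds make the two binary channels exactly complementary, so their sum (a luminance proxy) is spatially constant: the residual error is purely chromatic, where spatial acuity is lower.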
Wang, Yan-Wu; Bian, Tao; Xiao, Jiang-Wen; Wen, Changyun
2015-10-01
This paper studies the global synchronization of complex dynamical networks (CDNs) under digital communication with limited bandwidth. To realize the digital communication, so-called uniform-quantizer-sets are introduced to quantize the states of the nodes, which are then encoded and decoded by newly designed encoders and decoders. To meet the bandwidth constraint, a scaling function is utilized to guarantee that the quantizers have bounded inputs and thus achieve bounded real-time quantization levels. Moreover, a new type of vector norm is introduced to simplify the expression of the bandwidth limit. Through mathematical induction, a sufficient condition is derived to ensure global synchronization of the CDNs. The lower bound on the sum of the real-time quantization levels is analyzed for different cases. An optimization method is employed to relax the requirements on the network topology and to determine the minimum of such a lower bound for each case. Simulation examples are also presented to illustrate the established results.
Application of heterogeneous pulse coupled neural network in image quantization
NASA Astrophysics Data System (ADS)
Huang, Yi; Ma, Yide; Li, Shouliang; Zhan, Kun
2016-11-01
On the basis of the different strengths of synaptic connections between actual neurons, this paper proposes a heterogeneous pulse coupled neural network (HPCNN) algorithm to perform quantization on images. HPCNNs are developed from traditional pulse coupled neural network (PCNN) models and have different parameters corresponding to different image regions. This allows pixels of different gray levels to be classified broadly into two categories: background regions and object regions. Moreover, an HPCNN also accords with human visual characteristics. The parameters of the HPCNN model are calculated automatically according to these categories, and the quantized results are optimal and more suitable for human observation. At the same time, experimental results on natural images from a standard image library show the validity and efficiency of our proposed quantization method.
Tampered Region Localization of Digital Color Images Based on JPEG Compression Noise
NASA Astrophysics Data System (ADS)
Wang, Wei; Dong, Jing; Tan, Tieniu
With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm which can locate tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered region and the unchanged region respond differently to JPEG compression: the tampered region has stronger high-frequency quantization noise than the unchanged region. We employ PCA to separate quantization noise at different spatial frequencies, i.e., low-, medium-, and high-frequency quantization noise, and extract the high-frequency quantization noise for tampered region localization. Post-processing is involved to obtain the final localization result. The experimental results prove the effectiveness of our proposed method.
Quantum particles in general spacetimes: A tangent bundle formalism
NASA Astrophysics Data System (ADS)
Wohlfarth, Mattias N. R.
2018-06-01
Using tangent bundle geometry we construct an equivalent reformulation of classical field theory on flat spacetimes which simultaneously encodes the perspectives of multiple observers. Its generalization to curved spacetimes realizes a new type of nonminimal coupling of the fields and is shown to admit a canonical quantization procedure. For the resulting quantum theory we demonstrate the emergence of a particle interpretation, fully consistent with general relativistic geometry. The path dependency of parallel transport forces each observer to carry their own quantum state; we find that the communication of the corresponding quantum information may generate extra particles on curved spacetimes. A speculative link between quantum information and spacetime curvature is discussed which might lead to novel explanations for quantum decoherence and vanishing interference in double-slit or interaction-free measurement scenarios, in the mere presence of additional observers.
Spin-foam models and the physical scalar product
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alesci, Emanuele; Centre de Physique Theorique de Luminy, Universite de la Mediterranee, F-13288 Marseille; Noui, Karim
2008-11-15
This paper aims at clarifying the link between loop quantum gravity and spin-foam models in four dimensions. Starting from the canonical framework, we construct an operator P acting on the space of cylindrical functions Cyl(γ), where γ is the four-simplex graph, such that its matrix elements are, up to some normalization factors, the vertex amplitude of spin-foam models. The spin-foam models we are considering are the topological model, the Barrett-Crane model, and the Engle-Pereira-Rovelli model. If one of these spin-foam models provides a covariant quantization of gravity, then the associated operator P should be the so-called "projector" into physical states and its matrix elements should give the physical scalar product. We discuss the possibility to extend the action of P to any cylindrical functions on the space manifold.
NASA Astrophysics Data System (ADS)
Tip, A.
1998-06-01
Starting from Maxwell's equations for a linear, nonconducting, absorptive, and dispersive medium, characterized by the constitutive equations D(x,t) = ε₁(x)E(x,t) + ∫_{-∞}^{t} ds χ(x,t-s)E(x,s) and H(x,t) = B(x,t), a unitary time evolution and canonical formalism is obtained. Given the complex, coordinate- and frequency-dependent electric permeability ε(x,ω), no further assumptions are made. The procedure leads to a proper definition of band gaps in the periodic case and a new continuity equation for energy flow. An S-matrix formalism for scattering from lossy objects is presented in full detail. A quantized version of the formalism is derived and applied to the generation of Čerenkov and transition radiation as well as atomic decay. The last case suggests a useful generalization of the density of states to the absorptive situation.
Treatment of constraints in the stochastic quantization method and covariantized Langevin equation
NASA Astrophysics Data System (ADS)
Ikegami, Kenji; Kimura, Tadahiko; Mochizuki, Riuji
1993-04-01
We study the treatment of constraints in the stochastic quantization method. We improve the treatment of the stochastic consistency condition proposed by Namiki et al. by suitably taking into account the Ito calculus. We then obtain an improved Langevin equation and the Fokker-Planck equation which naturally leads to the correct path integral quantization of the constrained system as the stochastic equilibrium state. This treatment is applied to the O(N) non-linear σ model, and it is shown that singular terms appearing in the improved Langevin equation cancel out the δⁿ(0) divergences at one-loop order. We also ascertain that the above Langevin equation, rewritten in terms of independent variables, is actually equivalent to the one in the general-coordinate-transformation-covariant and vielbein-rotation-invariant formalism.
NASA Astrophysics Data System (ADS)
Song, Haiyu; Yu, Li; Zhang, Dan; Zhang, Wen-An
2012-12-01
This paper is concerned with the finite-time quantized H∞ control problem for a class of discrete-time switched time-delay systems with time-varying exogenous disturbances. By using the sector bound approach and the average dwell time method, sufficient conditions are derived for the switched system to be finite-time bounded and ensure a prescribed H∞ disturbance attenuation level, and a mode-dependent quantized state feedback controller is designed by solving an optimization problem. Two illustrative examples are provided to demonstrate the effectiveness of the proposed theoretical results.
Learning binary code via PCA of angle projection for image retrieval
NASA Astrophysics Data System (ADS)
Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong
2018-01-01
With the benefits of low storage cost and high query speed, binary code representation methods are widely researched for efficiently retrieving large-scale data. In image hashing, learning a hashing function that embeds high-dimensional features into Hamming space is the key step for accurate retrieval. Principal component analysis (PCA) is widely used in compact hashing methods: most of these methods adopt PCA projection functions to project the original data onto several real-valued dimensions, and each projected dimension is then quantized into one bit by thresholding. The variances of the projected dimensions differ, and real-valued projection produces more quantization error. To avoid real-valued projection with large quantization error, in this paper we propose to use a cosine-similarity projection for each dimension; the angle projection preserves the original structure and is more compact with cosine values. We combine our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
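For context, a minimal sketch of the PCA-projection-plus-thresholding baseline described above (not the proposed angle-projection variant); the bit count and threshold-at-zero choice are illustrative assumptions:

```python
import numpy as np

def pca_hash_train(X, nbits):
    """Learn a PCA projection for binary hashing: project centered data
    onto the top-`nbits` principal directions."""
    mean = X.mean(0)
    Xc = X - mean
    # principal directions via SVD of the centered data matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:nbits].T          # (dim, nbits) projection matrix
    return mean, W

def pca_hash_encode(X, mean, W):
    """Project and quantize each dimension into one bit by thresholding at 0."""
    return ((X - mean) @ W > 0).astype(np.uint8)
```

Retrieval then reduces to Hamming distance between code rows; the paper's point is that the real-valued projected coordinates have unequal variances, so this naive per-dimension thresholding incurs avoidable quantization error.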
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonçalves, L.A.; Olavo, L.S.F., E-mail: olavolsf@gmail.com
Dissipation in Quantum Mechanics took some time to become a robust field of investigation after the birth of the field. The main issue hindering developments is that the quantization process was always tightly connected to the Hamiltonian formulation of Classical Mechanics. In this paper we present a quantization process that does not depend upon the Hamiltonian formulation of Classical Mechanics (although it still departs from Classical Mechanics) and thus overcome the problem of finding, from first principles, a completely general Schrödinger equation encompassing dissipation. This generalized process of quantization is shown to be nothing but an extension of a more restricted version that is shown to produce the Schrödinger equation for Hamiltonian systems from first principles (even for Hamiltonian velocity-dependent potentials). Highlights: • A quantization process independent of the Hamiltonian formulation of Classical Mechanics is proposed. • This quantization method is applied to dissipative or absorptive systems. • A dissipative Schrödinger equation is derived from first principles.
Can one ADM quantize relativistic bosonic strings and membranes?
NASA Astrophysics Data System (ADS)
Moncrief, Vincent
2006-04-01
The standard methods for quantizing relativistic strings diverge significantly from the Dirac-Wheeler-DeWitt program for quantization of generally covariant systems, and one wonders whether the latter could be successfully implemented as an alternative to the former. As a first step in this direction, we consider the possibility of quantizing strings (and also relativistic membranes) via a partially gauge-fixed ADM (Arnowitt, Deser and Misner) formulation of the reduced field equations for these systems. By exploiting some (Euclidean signature) Hamilton-Jacobi techniques that Mike Ryan and I had developed previously for the quantization of Bianchi IX cosmological models, I show how to construct Diff(S¹)-invariant (or Diff(Σ)-invariant in the case of membranes) ground state wave functionals for the cases of co-dimension one strings and membranes embedded in Minkowski spacetime. I also show that the reduced Hamiltonian density operators for these systems weakly commute when applied to physical (i.e. Diff(S¹)- or Diff(Σ)-invariant) states. While many open questions remain, these preliminary results seem to encourage further research along the same lines.
Effect of signal intensity and camera quantization on laser speckle contrast analysis
Song, Lipei; Elson, Daniel S.
2012-01-01
Laser speckle contrast analysis (LASCA) is limited to being a qualitative method for the measurement of blood flow and tissue perfusion, as it is sensitive to the measurement configuration. The signal intensity is one of the parameters that can affect the contrast values, owing to the quantization of the signals by the camera and analog-to-digital converter (ADC). In this paper we deduce the theoretical relationship between signal intensity and contrast values based on the probability density function (PDF) of the speckle pattern and simplify it to a rational function. A simple method to correct this contrast error is suggested. The experimental results demonstrate that this relationship can effectively compensate for the bias in contrast values induced by the quantized signal intensity and correct for bias induced by signal intensity variations across the field of view. PMID:23304650
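The contrast statistic underlying LASCA, and the way quantization perturbs it, can be illustrated with a minimal sketch. The synthetic exponential speckle, the window size, and the flooring-as-ADC model below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(I, win=7):
    """Local speckle contrast K = sigma / mean over a sliding window."""
    w = sliding_window_view(I, (win, win))
    return w.std(axis=(-1, -2)) / w.mean(axis=(-1, -2))

# Fully developed speckle has an exponential intensity PDF, so K ~ 1.
# Flooring the intensities mimics ADC quantization; at low signal levels
# this shifts the contrast estimate, which is the bias the paper models.
rng = np.random.default_rng(0)
I = rng.exponential(scale=50.0, size=(64, 64))
K_float = speckle_contrast(I).mean()          # ideal (unquantized) contrast
K_quant = speckle_contrast(np.floor(I)).mean()  # contrast after 'ADC' flooring
```

Reducing `scale` (fewer counts per pixel) makes the quantized estimate diverge further from the unquantized one, mirroring the signal-intensity dependence the paper corrects.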
Landau quantization of Dirac fermions in graphene and its multilayers
NASA Astrophysics Data System (ADS)
Yin, Long-Jing; Bai, Ke-Ke; Wang, Wen-Xiao; Li, Si-Yu; Zhang, Yu; He, Lin
2017-08-01
When electrons are confined in a two-dimensional (2D) system, typical quantum-mechanical phenomena such as Landau quantization can be detected. Graphene systems, including the single atomic layer and few-layer stacked crystals, are ideal 2D materials for studying a variety of quantum-mechanical problems. In this article, we review the experimental progress on the unusual Landau quantized behaviors of Dirac fermions in monolayer and multilayer graphene by using scanning tunneling microscopy (STM) and scanning tunneling spectroscopy (STS). Through STS measurements in strong magnetic fields, distinct Landau-level spectra and rich level-splitting phenomena are observed in different graphene layers. These unique properties provide an effective method for identifying the number of layers, as well as the stacking orders, and for investigating the fundamental physical phenomena of graphene. Moreover, in the presence of strain and charged defects, the Landau quantization of graphene can be significantly modified, leading to unusual spectroscopic and electronic properties.
Wang, Chenliang; Wen, Changyun; Hu, Qinglei; Wang, Wei; Zhang, Xiuyu
2018-06-01
This paper is devoted to distributed adaptive containment control for a class of nonlinear multiagent systems with input quantization. By employing a matrix factorization and a novel matrix normalization technique, some assumptions involving control gain matrices in existing results are relaxed. By fusing the techniques of sliding mode control and backstepping control, a two-step design method is proposed to construct controllers and, with the aid of neural networks, all system nonlinearities are allowed to be unknown. Moreover, a linear time-varying model and a similarity transformation are introduced to circumvent the obstacle brought by quantization, and the controllers need no information about the quantizer parameters. The proposed scheme is able to ensure the boundedness of all closed-loop signals and steer the containment errors into an arbitrarily small residual set. The simulation results illustrate the effectiveness of the scheme.
Model predictive control of non-linear systems over networks with data quantization and packet loss.
Yu, Jimin; Nan, Liangsheng; Tang, Xiaoming; Wang, Ping
2015-11-01
This paper studies model predictive control (MPC) for non-linear systems in a networked environment where both data quantization and packet loss may occur. The non-linear controlled plant in the networked control system (NCS) is represented by a Takagi-Sugeno (T-S) model. The sensed data and control signal are quantized in both links and described as sector-bound uncertainties by applying the sector bound approach. The quantized data are then transmitted over the communication networks and may suffer from packet losses, which are modeled as a Bernoulli process. A fuzzy predictive controller which guarantees the stability of the closed-loop system is obtained by solving a set of linear matrix inequalities (LMIs). A numerical example is given to illustrate the effectiveness of the proposed method.
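The sector bound approach mentioned above is commonly illustrated with a logarithmic quantizer, whose relative error is bounded by a sector constant so the quantizer can be absorbed into the robustness analysis as a bounded uncertainty. The sketch below is the generic textbook construction, not the paper's controller; the parameters are illustrative.

```python
import math

def log_quantizer(v, rho=0.8, u0=1.0):
    """Logarithmic quantizer with levels u0 * rho**i. By construction the
    error obeys the sector bound |q(v) - v| <= delta * |v| with
    delta = (1 - rho) / (1 + rho), which is what lets quantization be
    recast as a sector-bounded uncertainty in closed-loop analysis.
    (Generic sector-bound illustration, not the paper's design.)"""
    if v == 0.0:
        return 0.0
    delta = (1 - rho) / (1 + rho)
    # pick the level u0*rho**i whose interval (u_i/(1+delta), u_i/(1-delta)]
    # contains |v|; these intervals tile the positive axis exactly
    i = math.floor(math.log(abs(v) * (1 - delta) / u0) / math.log(rho))
    return math.copysign(u0 * rho ** i, v)
```

A denser quantizer (rho closer to 1) gives a smaller sector constant delta, at the cost of more quantization levels per decade.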
NASA Astrophysics Data System (ADS)
Yang, Shuyu; Mitra, Sunanda
2002-05-01
Due to the huge volumes of radiographic images to be managed in hospitals, efficient compression techniques yielding no perceptual loss in the reconstructed images are becoming a requirement for the storage and management of such datasets. A wavelet-based multi-scale vector quantization scheme that generates a global codebook for efficient storage and transmission of medical images is presented in this paper. The results obtained show that even at low bit rates one is able to obtain reconstructed images with perceptual quality higher than that of the state-of-the-art scalar quantization method, set partitioning in hierarchical trees (SPIHT).
NASA Astrophysics Data System (ADS)
Procopio, Lorenzo M.; Rozema, Lee A.; Dakić, Borivoje; Walther, Philip
2017-09-01
In his recent article [Phys. Rev. A 95, 060101(R) (2017), 10.1103/PhysRevA.95.060101], Adler questions the usefulness of the bound found in our experimental search for genuine effects of hypercomplex quantum mechanics [Nat. Commun. 8, 15044 (2017), 10.1038/ncomms15044]. Our experiment was performed using a black-box (instrumentalist) approach to generalized probabilistic theories; therefore, it does not assume a priori any particular underlying mechanism. From that point of view our experimental results do indeed place meaningful bounds on the possible effects of "postquantum theories," including quaternionic quantum mechanics. In his article, Adler compares our experiment to nonrelativistic and Möller formal scattering theories within quaternionic quantum mechanics. With a particular set of assumptions, he finds that quaternionic effects would likely not manifest themselves in general. Although these assumptions are justified in the nonrelativistic case, a proper calculation for relativistic particles is still missing. Here, we provide a concrete relativistic example of Klein-Gordon scattering wherein the quaternionic effects persist. We note that when the Klein-Gordon equation is formulated using a Hamiltonian formalism it displays a so-called "indefinite metric," a characteristic feature of relativistic quantum wave equations. In Adler's example this is directly forbidden by his assumptions, and therefore our present example is not in contradiction to his work. In complex quantum mechanics this problem of an indefinite metric is solved in a second quantization. Unfortunately, there is no known algorithm for canonical field quantization in quaternionic quantum mechanics.
NASA Astrophysics Data System (ADS)
Salisbury, Donald; Renn, Jürgen; Sundermeyer, Kurt
2016-02-01
Classical background independence is reflected in Lagrangian general relativity through covariance under the full diffeomorphism group. We show how this independence can be maintained in a Hamilton-Jacobi approach that does not accord special privilege to any geometric structure. Intrinsic space-time curvature-based coordinates grant equal status to all geometric backgrounds. They play an essential role as a starting point for inequivalent semiclassical quantizations. The scheme calls into question Wheeler’s geometrodynamical approach and the associated Wheeler-DeWitt equation in which 3-metrics are featured geometrical objects. The formalism deals with variables that are manifestly invariant under the full diffeomorphism group. Yet, perhaps paradoxically, the liberty in selecting intrinsic coordinates is precisely as broad as is the original diffeomorphism freedom. We show how various ideas from the past five decades concerning the true degrees of freedom of general relativity can be interpreted in light of this new constrained Hamiltonian description. In particular, we show how the Kuchař multi-fingered time approach can be understood as a means of introducing full four-dimensional diffeomorphism invariants. Every choice of new phase space variables yields new Einstein-Hamilton-Jacobi constraining relations, and corresponding intrinsic Schrödinger equations. We show how to implement this freedom by canonical transformation of the intrinsic Hamiltonian. We also reinterpret and rectify significant work by Dittrich on the construction of “Dirac observables.”
BOOK REVIEW: Quantum Gravity (2nd edn)
NASA Astrophysics Data System (ADS)
Husain, Viqar
2008-06-01
There has been a flurry of books on quantum gravity in the past few years. The first edition of Kiefer's book appeared in 2004, about the same time as Carlo Rovelli's book with the same title. This was soon followed by Thomas Thiemann's 'Modern Canonical Quantum General Relativity'. Although the main focus of each of these books is non-perturbative and non-string approaches to the quantization of general relativity, they are quite orthogonal in temperament, style, subject matter and mathematical detail. Rovelli and Thiemann focus primarily on loop quantum gravity (LQG), whereas Kiefer attempts a broader introduction and review of the subject that includes chapters on string theory and decoherence. Kiefer's second edition attempts an even wider and somewhat ambitious sweep with 'new sections on asymptotic safety, dynamical triangulation, primordial black holes, the information-loss problem, loop quantum cosmology, and other topics'. The presentation of these current topics is necessarily brief given the size of the book, but effective in encapsulating the main ideas in some cases. For instance the few pages devoted to loop quantum cosmology describe how the mini-superspace reduction of the quantum Hamiltonian constraint of LQG becomes a difference equation, whereas the discussion of 'dynamical triangulations', an approach to defining a discretized Lorentzian path integral for quantum gravity, is less detailed. The first few chapters of the book provide, in a roughly historical sequence, the covariant and canonical metric variable approach to the subject developed in the 1960s and 70s. The problem(s) of time in quantum gravity are nicely summarized in the chapter on quantum geometrodynamics, followed by a detailed and effective introduction of the WKB approach and the semi-classical approximation. These topics form the traditional core of the subject. The next three chapters cover LQG, quantization of black holes, and quantum cosmology. 
Of these the chapter on LQG is the shortest at fourteen pages—a reflection perhaps of the fact that there are two books and a few long reviews of the subject available written by the main protagonists in the field. The chapters on black holes and cosmology provide a more or less standard introduction to black hole thermodynamics, Hawking and Unruh radiation, quantization of the Schwarzschild metric and mini-superspace collapse models, and the DeWitt, Hartle-Hawking and Vilenkin wavefunctions. The chapter on string theory is an essay-like overview of its quantum gravitational aspects. It provides a nice introduction to selected ideas and a guide to the literature. Here a prescient student may be left wondering why there is no quantum cosmology in string theory, perhaps a deliberate omission to avoid the 'landscape' and its fauna. In summary, I think this book succeeds in its purpose of providing a broad introduction to quantum gravity, and nicely complements some of the other books on the subject.
Regularized Generalized Canonical Correlation Analysis
ERIC Educational Resources Information Center
Tenenhaus, Arthur; Tenenhaus, Michel
2011-01-01
Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…
Compositions and methods for the expression of selenoproteins in eukaryotic cells
Gladyshev, Vadim [Lincoln, NE; Novoselov, Sergey [Puschino, RU
2012-09-25
Recombinant nucleic acid constructs for the efficient expression of eukaryotic selenoproteins and related methods for production of recombinant selenoproteins are provided. The nucleic acid constructs comprise novel selenocysteine insertion sequence (SECIS) elements. Certain novel SECIS elements of the invention contain non-canonical quartet sequences. Other novel SECIS elements provided by the invention are chimeric SECIS elements comprising a canonical SECIS element that contains a non-canonical quartet sequence and chimeric SECIS elements comprising a non-canonical SECIS element that contains a canonical quartet sequence. The novel SECIS elements of the invention facilitate the insertion of selenocysteine residues into recombinant polypeptides.
Coherent states for quantum compact groups
NASA Astrophysics Data System (ADS)
Jurčo, B.; Šťovíček, P.
1996-12-01
Coherent states are introduced and their properties are discussed for the simple quantum compact groups A_l, B_l, C_l and D_l. The multiplicative form of the canonical element for the quantum double is used to introduce holomorphic coordinates on a general quantum dressing orbit. The coherent state is interpreted as a holomorphic function on this orbit with values in the carrier Hilbert space of an irreducible representation of the corresponding quantized enveloping algebra. Using the Gauss decomposition, the commutation relations for the holomorphic coordinates on the dressing orbit are derived explicitly and given in a compact R-matrix formulation (generalizing in this way the q-deformed Grassmann and flag manifolds). The antiholomorphic realization of the irreducible representations of a compact quantum group (the analogue of the Borel-Weil construction) is described using the concept of coherent state. The relation between representation theory and non-commutative differential geometry is suggested.
Shape from sound: toward new tools for quantum gravity.
Aasen, David; Bhamre, Tejal; Kempf, Achim
2013-03-22
To unify general relativity and quantum theory is hard in part because they are formulated in two very different mathematical languages, differential geometry and functional analysis. A natural candidate for bridging this language gap, at least in the case of the Euclidean signature, is the discipline of spectral geometry. It aims at describing curved manifolds in terms of the spectra of their canonical differential operators. As an immediate benefit, this would offer a clean gauge-independent identification of the metric's degrees of freedom in terms of invariants that should be ready to quantize. However, spectral geometry is itself hard and has been plagued by ambiguities. Here, we regularize and break up spectral geometry into small, finite-dimensional and therefore manageable steps. We constructively demonstrate that this strategy works at least in two dimensions. We can now calculate the shapes of two-dimensional objects from their vibrational spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livine, Etera R.
We introduce the set of framed (convex) polyhedra with N faces as the symplectic quotient C^{2N}//SU(2). A framed polyhedron is then parametrized by N spinors living in C^2 satisfying suitable closure constraints and defines a usual convex polyhedron plus extra U(1) phases attached to each face. We show that there is a natural action of the unitary group U(N) on this phase space, which changes the shape of faces and allows one to map any (framed) polyhedron onto any other with the same total (boundary) area. This identifies the space of framed polyhedra with the Grassmannian space U(N)/(SU(2)×U(N−2)). We show how to write averages of geometrical observables (polynomials in the faces' areas and the angles between them) over the ensemble of polyhedra (distributed uniformly with respect to the Haar measure on U(N)) as polynomial integrals over the unitary group, and we provide a few methods to compute these integrals systematically. We also use the Itzykson-Zuber formula from matrix models as the generating function for these averages and correlations. In the quantum case, a canonical quantization of the framed polyhedron phase space leads to the Hilbert space of SU(2) intertwiners (or, in other words, SU(2)-invariant states in tensor products of irreducible representations). The total boundary area as well as the individual face areas are quantized as half-integers (spins), and the Hilbert spaces for fixed total area form irreducible representations of U(N). We define semi-classical coherent intertwiner states peaked on classical framed polyhedra and transforming consistently under U(N) transformations, and we show how the U(N) character formula for unitary transformations is to be considered as an extension of the Itzykson-Zuber formula to the quantum level, generating the traces of all polynomial observables over the Hilbert space of intertwiners.
We finally apply the same formalism to two dimensions and show that classical (convex) polygons can be described in a similar fashion, trading the unitary group for the orthogonal group. We conclude with a discussion of the possible (deformation) dynamics that one can define on the space of polygons or polyhedra. This work is a priori useful in the context of discrete geometry, but it should hopefully also be relevant to (loop) quantum gravity in 2+1 and 3+1 dimensions when the quantum geometry is defined in terms of gluing of (quantized) polygons and polyhedra.
NASA Astrophysics Data System (ADS)
Bornyakov, V. G.; Boyda, D. L.; Goy, V. A.; Molochkov, A. V.; Nakamura, Atsushi; Nikolaev, A. A.; Zakharov, V. I.
2017-05-01
We propose and test a new approach to the computation of canonical partition functions in lattice QCD at finite density. We suggest a few-step procedure. We first compute numerically the quark number density for imaginary chemical potential iμ_q^I. Then we restore the grand canonical partition function for imaginary chemical potential using a fitting procedure for the quark number density. Finally, we compute the canonical partition functions using a high-precision numerical Fourier transformation. Additionally, we compute the canonical partition functions using the known method of the hopping parameter expansion and compare the results obtained by the two methods in the deconfining as well as in the confining phase. The agreement between the two methods indicates the validity of the new method. Our numerical results are obtained in two-flavor lattice QCD with clover-improved Wilson fermions.
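The Fourier step of this procedure rests on the fact that the canonical partition functions Z_n are the Fourier coefficients of the grand canonical partition function evaluated at imaginary chemical potential, Z_n = (1/2π) ∫ dθ e^{-inθ} Z_GC(iθ). In the sketch below, the toy Z_GC is built from assumed coefficients purely to check that the transform recovers them; the paper instead reconstructs Z_GC by fitting the measured quark number density.

```python
import numpy as np

# Assumed toy canonical partition functions (invented for illustration).
Z_true = {0: 1.0, 1: 0.4, -1: 0.4, 2: 0.05, -2: 0.05}

# Grand canonical partition function at imaginary chemical potential:
# Z_GC(i*theta) = sum_n Z_n * exp(i*n*theta), sampled on a uniform grid.
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
Z_gc = sum(z * np.exp(1j * n * theta) for n, z in Z_true.items())

def canonical_Z(n):
    """Recover Z_n as the n-th Fourier coefficient of Z_GC(i*theta)."""
    return (np.exp(-1j * n * theta) * Z_gc).mean().real

Z1 = canonical_Z(1)  # should reproduce the assumed value 0.4
```

On a uniform grid the discrete average is exact for the low modes present, so the coefficients are recovered to machine precision; with noisy fitted data, the high-precision Fourier transform the authors emphasize becomes the delicate step.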
q-Derivatives, quantization methods and q-algebras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Twarock, Reidun
1998-12-15
Using the example of Borel quantization on S^1, we discuss the relation between quantization methods and q-algebras. In particular, it is shown that a q-deformation of the Witt algebra with generators labeled by Z is realized by q-difference operators. This leads to a discrete quantum mechanics. Because of Z, the discretization is equidistant. As an approach to a non-equidistant discretization of quantum mechanics, one can change the Witt algebra by using as labels not the number field Z but a quadratic extension of Z characterized by an irrational number τ. This extension is called a quasi-crystal Lie algebra, because of its relation to one-dimensional quasicrystals. The q-deformation of this quasicrystal Lie algebra is discussed. It is pointed out that quasicrystal Lie algebras can also be considered as a 'deformed' Witt algebra with a 'deformation' of the labeling number field. Their application to the theory is discussed.
Uniform quantized electron gas
NASA Astrophysics Data System (ADS)
Høye, Johan S.; Lomba, Enrique
2016-10-01
In this work we study the correlation energy of the quantized electron gas of uniform density at temperature T = 0. To do so we utilize methods from classical statistical mechanics. The basis for this is the Feynman path integral for the partition function of quantized systems. With this representation the quantum-mechanical problem can be interpreted as, and is equivalent to, a classical polymer problem in four dimensions, where the fourth dimension is imaginary time. Thus methods, results, and properties obtained in the statistical mechanics of classical fluids can be utilized. From this viewpoint we recover the well known RPA (random phase approximation). To improve upon it, we modify the RPA by requiring the corresponding correlation function to be such that electrons with equal spins cannot be at the same position. Numerical evaluations are compared with well known results of a standard parameterization of Monte Carlo correlation energies.
Conditional Entropy-Constrained Residual VQ with Application to Image Coding
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1996-01-01
This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.
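The multistage residual VQ structure this abstract builds on can be sketched as follows: each stage vector-quantizes the residual left by the previous stage, so distortion shrinks stage by stage. This is a hedged sketch with plain k-means codebooks and no entropy constraint or conditioning; the stage count and codebook size are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's k-means for building a stage codebook."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        idx = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(idx == j):
                C[j] = X[idx == j].mean(axis=0)
    return C

def residual_vq(X, stages=3, k=8):
    """Multistage residual VQ: stage t quantizes the residual of stage t-1."""
    codebooks, R = [], X.copy()
    for _ in range(stages):
        C = kmeans(R, k)
        idx = np.argmin(((R[:, None] - C[None]) ** 2).sum(-1), axis=1)
        R = R - C[idx]          # residual passed on to the next stage
        codebooks.append(C)
    return codebooks, R

X = np.random.default_rng(1).normal(size=(200, 4))
codebooks, R = residual_vq(X)
```

The conditional entropy coding of the paper then exploits statistical dependence between the stage indices of neighboring vectors, which this plain sketch ignores.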
Functional Multiple-Set Canonical Correlation Analysis
ERIC Educational Resources Information Center
Hwang, Heungsun; Jung, Kwanghee; Takane, Yoshio; Woodward, Todd S.
2012-01-01
We propose functional multiple-set canonical correlation analysis for exploring associations among multiple sets of functions. The proposed method includes functional canonical correlation analysis as a special case when only two sets of functions are considered. As in classical multiple-set canonical correlation analysis, computationally, the…
NASA Astrophysics Data System (ADS)
Zhou, Chi-Chun; Dai, Wu-Sheng
2018-02-01
In statistical mechanics, for a system with a fixed number of particles, e.g. a finite-size system, strictly speaking, thermodynamic quantities need to be calculated in the canonical ensemble. Nevertheless, the calculation of the canonical partition function is difficult. In this paper, based on the mathematical theory of symmetric functions, we suggest a method for the calculation of the canonical partition function of ideal quantum gases, including ideal Bose, Fermi, and Gentile gases. Moreover, we express the canonical partition functions of interacting classical and quantum gases, given by the classical and quantum cluster expansion methods, in terms of the Bell polynomial in mathematics. The virial coefficients of ideal Bose, Fermi, and Gentile gases are calculated from the exact canonical partition function. The virial coefficients of interacting classical and quantum gases are calculated from the canonical partition function by using the expansion of the Bell polynomial, rather than from the grand canonical potential.
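For ideal gases, the Bell-polynomial expression is closely related to a well-known exact recursion for the canonical partition function. A minimal sketch for the Bose case, assuming a given single-particle spectrum (this is the standard recursion, not necessarily the paper's own derivation):

```python
import numpy as np

def canonical_Z_bose(energies, beta, N):
    """Exact canonical partition function of an ideal Bose gas via the
    standard recursion  Z_N = (1/N) * sum_{k=1..N} z_k * Z_{N-k},
    where z_k = sum_i exp(-k*beta*eps_i) and Z_0 = 1 -- the recursive
    structure that the Bell polynomial expansion encodes in closed form."""
    z = [np.exp(-k * beta * np.asarray(energies)).sum() for k in range(N + 1)]
    Z = [1.0]
    for n in range(1, N + 1):
        Z.append(sum(z[k] * Z[n - k] for k in range(1, n + 1)) / n)
    return Z[N]

# Two bosons in a two-level system (eps = 0, 1) at beta = 1: the allowed
# occupations {00}, {01}, {11} give Z_2 = 1 + e^-1 + e^-2.
Z2 = canonical_Z_bose([0.0, 1.0], 1.0, 2)
```

For fermions the same recursion holds with an extra factor (-1)^{k+1} on z_k, which is how exchange symmetry enters.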
Prediction-guided quantization for video tone mapping
NASA Astrophysics Data System (ADS)
Le Dauphin, Agnès; Boitard, Ronan; Thoreau, Dominique; Olivier, Yannick; Francois, Edouard; Le Léannec, Fabrice
2014-09-01
Tone Mapping Operators (TMOs) compress High Dynamic Range (HDR) content to address Low Dynamic Range (LDR) displays. However, before reaching the end-user, this tone mapped content is usually compressed for broadcasting or storage purposes. Any TMO includes a quantization step to convert floating point values to integer ones. In this work, we propose to adapt this quantization, in the loop of an encoder, to reduce the entropy of the tone mapped video content. Our technique provides an appropriate quantization for each mode of both the Intra and Inter-prediction that is performed in the loop of a block-based encoder. The mode that minimizes a rate-distortion criterion uses its associated quantization to provide integer values for the rest of the encoding process. The method has been implemented in HEVC and was tested over two different scenarios: the compression of tone mapped LDR video content (using the HM10.0) and the compression of perceptually encoded HDR content (HM14.0). Results show an average bit-rate reduction under the same PSNR for all the sequences and TMO considered of 20.3% and 27.3% for tone mapped content and 2.4% and 2.7% for HDR content.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovchavtsev, A. P., E-mail: kap@isp.nsc.ru; Tsarenko, A. V.; Guzev, A. A.
The influence of electron energy quantization in a space-charge region on the accumulation capacitance of InAs-based metal-oxide-semiconductor capacitors (MOSCAPs) has been investigated by modeling and comparison with experimental data from Au/anodic layer (4-20 nm)/n-InAs(111)A MOSCAPs. The accumulation capacitance has been calculated by solving the Poisson equation under different assumptions and by the self-consistent solution of the Schrödinger and Poisson equations with quantization taken into account. It is shown that quantization should be taken into consideration in MOSCAP accumulation capacitance calculations for the correct determination of the interface state density by the Terman method and for the evaluation of the gate dielectric thickness from capacitance-voltage measurements.
The uniform quantized electron gas revisited
NASA Astrophysics Data System (ADS)
Lomba, Enrique; Høye, Johan S.
2017-11-01
In this article we continue and extend our recent work on the correlation energy of the quantized electron gas of uniform density at temperature T=0 . As before, we utilize the methods, properties, and results obtained by means of classical statistical mechanics. These were extended to quantized systems via the Feynman path integral formalism. The latter translates the quantum problem into a classical polymer problem in four dimensions. Again, the well known RPA (random phase approximation) is recovered as a basic result which we then modify and improve upon. Here we analyze the condition of thermodynamic self-consistency. Our numerical calculations exhibit a remarkable agreement with well known results of a standard parameterization of Monte Carlo correlation energies.
Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method
NASA Astrophysics Data System (ADS)
Gilbreth, C. N.; Alhassid, Y.
2015-03-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
Memory-efficient decoding of LDPC codes
NASA Technical Reports Server (NTRS)
Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon
2005-01-01
We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
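The idea of choosing quantizer parameters to maximize mutual information can be illustrated on a toy channel: BPSK over AWGN with the output quantized into bins, comparing a 3-bit quantizer against a 1-bit hard decision. This is a generic illustration only; the paper optimizes the quantizer for belief-propagation messages inside the decoder, and the noise level and bin edges here are arbitrary.

```python
import numpy as np

def quantized_mi(edges, sigma=0.8, n=200_000, seed=0):
    """Empirical mutual information I(X;Q) for equiprobable BPSK bits X
    over AWGN, with the channel output quantized into the bins defined
    by `edges`. (Toy setting for 'optimize the quantizer to maximize
    mutual information'; not the paper's decoder-message quantizer.)"""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, n)                       # equiprobable bits
    y = (2.0 * x - 1.0) + rng.normal(0, sigma, n)   # BPSK + Gaussian noise
    q = np.digitize(y, edges)                       # quantizer output index
    joint = np.zeros((2, len(edges) + 1))
    np.add.at(joint, (x, q), 1.0)                   # empirical joint counts
    p = joint / n
    px, pq = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return (p[nz] * np.log2(p[nz] / (px @ pq)[nz])).sum()

mi_3bit = quantized_mi(np.linspace(-1.75, 1.75, 7))  # 8 levels (3 bits)
mi_1bit = quantized_mi(np.array([0.0]))              # hard decision
```

Sweeping the edge positions and keeping the set with the largest I(X;Q) is the same optimization principle, applied here to the channel output rather than to decoder messages.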
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Liang; Yang, Yi; Harley, Ronald Gordon
A system is provided for a plurality of different electric load types. The system includes a plurality of sensors structured to sense a voltage signal and a current signal for each of the different electric loads, and a processor. The processor acquires a voltage and current waveform from the sensors for a corresponding one of the different electric load types; calculates a power or current RMS profile of the waveform; quantizes the power or current RMS profile into a set of quantized state-values; evaluates a state-duration for each of the quantized state-values; evaluates a plurality of state-types based on the power or current RMS profile and the quantized state-values; generates a state-sequence that describes a corresponding finite state machine model of a generalized load start-up or transient profile for the corresponding electric load type; and identifies the corresponding electric load type.
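The quantize-then-segment step of this pipeline can be sketched as follows. The bin width, the glitch threshold, and the toy start-up profile are invented for illustration; the patent does not specify these parameters here.

```python
import numpy as np

def state_sequence(rms_profile, bin_width=50.0, min_len=3):
    """Sketch of the described pipeline: quantize a power RMS profile into
    state-values, then collapse the run into (state, duration) pairs --
    the symbols of a finite-state-machine start-up signature.
    (bin_width and min_len are hypothetical, not the patent's values.)"""
    states = np.round(np.asarray(rms_profile) / bin_width).astype(int)
    seq = []
    for s in states:
        if seq and seq[-1][0] == s:
            seq[-1][1] += 1            # extend the current state's duration
        else:
            seq.append([s, 1])         # enter a new quantized state
    # drop very short runs, which act as measurement glitches
    return [(s, d) for s, d in seq if d >= min_len]

# A toy motor start-up: off, inrush spike, then steady running load.
profile = [0] * 5 + [600] * 4 + [200] * 20
sig = state_sequence(profile)
```

Matching such (state, duration) sequences against stored finite-state-machine templates is then what identifies the load type.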
The electronic structure of Au25 clusters: between discrete and continuous
NASA Astrophysics Data System (ADS)
Katsiev, Khabiboulakh; Lozova, Nataliya; Wang, Lu; Sai Krishna, Katla; Li, Ruipeng; Mei, Wai-Ning; Skrabalak, Sara E.; Kumar, Challa S. S. R.; Losovyj, Yaroslav
2016-08-01
Here, an approach based on synchrotron resonant photoemission is employed to explore the transition between quantization and hybridization of the electronic structure in atomically precise ligand-stabilized nanoparticles. While the presence of ligands maintains quantization in Au25 clusters, their removal renders increased hybridization of the electronic states in the vicinity of the Fermi level. These observations are supported by DFT studies. Electronic supplementary information (ESI) available: experimental details including chemicals, sample preparation, and characterization methods; computation techniques; SV-AUC, GIWAXS, XPS, UPS, MALDI-TOF, and ESI data of Au25 clusters. See DOI: 10.1039/c6nr02374f
Information preserving coding for multispectral data
NASA Technical Reports Server (NTRS)
Duan, J. R.; Wintz, P. A.
1973-01-01
A general formulation of the data compression system is presented. A method of instantaneous expansion of quantization levels, which reserves two codewords in the codebook to perform a fold-over in quantization, is implemented for error-free coding of data with incomplete knowledge of the probability density function. Results for simple DPCM with folding and for an adaptive transform coding technique followed by DPCM are compared using ERTS-1 data.
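A minimal DPCM codec sketch may clarify the baseline the paper extends; the fold-over mechanism that reserves two codewords to re-center out-of-range residuals is not reproduced here.

```python
def dpcm_encode(x, step):
    """Basic closed-loop DPCM: quantize the residual against a
    previous-sample predictor that tracks the decoder's state, so
    quantization error does not accumulate."""
    pred, codes = 0.0, []
    for s in x:
        q = int(round((s - pred) / step))
        codes.append(q)
        pred = pred + q * step      # reconstruction, as the decoder sees it
    return codes

def dpcm_decode(codes, step):
    pred, out = 0.0, []
    for q in codes:
        pred = pred + q * step
        out.append(pred)
    return out
```

Because the encoder predicts from the reconstructed value rather than the original, per-sample error stays bounded by half a quantization step.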
A Variant of the Mukai Pairing via Deformation Quantization
NASA Astrophysics Data System (ADS)
Ramadoss, Ajay C.
2012-06-01
Let X be a smooth projective complex variety. The Hochschild homology HH•( X) of X is an important invariant of X, which is isomorphic to the Hodge cohomology of X via the Hochschild-Kostant-Rosenberg isomorphism. On HH•( X), one has the Mukai pairing constructed by Caldararu. An explicit formula for the Mukai pairing at the level of Hodge cohomology was proven by the author in an earlier work (following ideas of Markarian). This formula implies a similar explicit formula for a closely related variant of the Mukai pairing on HH•( X). The latter pairing on HH•( X) is intimately linked to the study of Fourier-Mukai transforms of complex projective varieties. We give a new method to prove a formula computing the aforementioned variant of Caldararu's Mukai pairing. Our method is based on some important results in the area of deformation quantization. In particular, we use part of the work of Kashiwara and Schapira on Deformation Quantization modules together with an algebraic index theorem of Bressler, Nest and Tsygan. Our new method explicitly shows that the "Noncommutative Riemann-Roch" implies the classical Riemann-Roch. Further, it is hoped that our method would be useful for generalization to settings involving certain singular varieties.
NASA Astrophysics Data System (ADS)
Sasano, Koji; Okajima, Hiroshi; Matsunaga, Nobutomo
Recently, the fractional-order PID (FO-PID) control, an extension of PID control, has attracted attention. Although FO-PID requires a high-order filter, realizing such a filter is difficult owing to the memory limitations of digital computers. Implementing FO-PID therefore requires approximating the fractional integrator and differentiator. The short memory principle (SMP) is one effective approximation method; however, a filter approximated with the SMP cannot eliminate steady-state error. To address this problem, we introduce a distributed implementation of the integrator and a dynamic quantizer to make efficient use of the permissible memory. The objective of this study is to clarify how to implement an accurate FO-PID controller with limited memory. In this paper, we propose an implementation method for FO-PID under memory constraints using a dynamic quantizer, and we examine the trade-off between the approximation of the fractional elements and the quantized data size so that the response approaches that of the ideal FO-PID. The effectiveness of the proposed method is evaluated by a numerical example and by an experiment on temperature control of a heat plate.
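The short memory principle can be sketched with the standard Grünwald-Letnikov discretization (parameter values below are illustrative): truncating the memory to L past samples makes a fractional integrator of a constant input saturate instead of ramping, which is precisely the steady-state error discussed above.

```python
def gl_weights(alpha, n):
    """Grünwald-Letnikov binomial weights w_k for the fractional
    operator D^alpha (alpha < 0 gives an integrator)."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def fractional_op(x, alpha, h, memory=None):
    """Apply D^alpha to samples x with step h. `memory` truncates the
    convolution sum (short memory principle). A sketch, not the
    paper's distributed implementation."""
    L = len(x) if memory is None else memory
    w = gl_weights(alpha, L)
    y = []
    for n in range(len(x)):
        kmax = min(n + 1, L)
        acc = sum(w[k] * x[n - k] for k in range(kmax))
        y.append(acc / h ** alpha)
    return y
```

For alpha = -1 (an ordinary integrator) and constant input 1, the untruncated operator reproduces the ramp t exactly, while a 5-sample memory freezes the output at 5h: the steady-state error the SMP introduces.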
Chamberlin, Ralph V; Davis, Bryce F
2013-10-01
Disordered systems show deviations from the standard Debye theory of specific heat at low temperatures. These deviations are often attributed to two-level systems of uncertain origin. We find that a source of excess specific heat comes from correlations between quanta of energy if excitations are localized on an intermediate length scale. We use simulations of a simplified Creutz model for a system of Ising-like spins coupled to a thermal bath of Einstein-like oscillators. One feature of this model is that energy is quantized in both the system and its bath, ensuring conservation of energy at every step. Another feature is that the exact entropies of both the system and its bath are known at every step, so that their temperatures can be determined independently. We find that there is a mismatch in canonical temperature between the system and its bath. In addition to the usual finite-size effects in the Bose-Einstein and Fermi-Dirac distributions, if excitations in the heat bath are localized on an intermediate length scale, this mismatch is independent of system size up to at least 10^6 particles. We use a model for correlations between quanta of energy to adjust the statistical distributions and yield a thermodynamically consistent temperature. The model includes a chemical potential for units of energy, as is often used for other types of particles that are quantized and conserved. Experimental evidence for this model comes from its ability to characterize the excess specific heat of imperfect crystals at low temperatures.
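The energy-quantized, exactly conserving dynamics can be illustrated with the original single-demon Creutz update (a sketch of the idea only; the paper instead couples the spins to a bath of Einstein-like oscillators rather than one demon).

```python
import random

def energy(spins, J=1):
    """Nearest-neighbor Ising energy on a ring."""
    return -J * sum(spins[i] * spins[(i + 1) % len(spins)]
                    for i in range(len(spins)))

def creutz_step(spins, demon, J=1, emax=4):
    """One microcanonical Creutz update: a spin flips only if the demon
    can supply or absorb the (quantized) energy change while staying in
    [0, emax], so system + demon energy is conserved exactly."""
    i = random.randrange(len(spins))
    dE = 2 * J * spins[i] * (spins[i - 1] + spins[(i + 1) % len(spins)])
    if 0 <= demon - dE <= emax:
        spins[i] = -spins[i]
        demon -= dE
    return demon
```

Because every accepted move transfers an integer number of energy quanta between spins and demon, total energy is conserved at every step, mirroring the exact conservation emphasized in the abstract.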
The Casalbuoni-Brink-Schwarz superparticle with covariant, reducible constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dayi, O.F.
1992-04-30
This paper discusses the fermionic constraints of the massless Casalbuoni-Brink-Schwarz superparticle in d = 10, which are separated covariantly into first- and second-class constraints, both infinitely reducible. Although the reducibility conditions of the second-class constraints include the first-class ones, a consistent quantization is possible. The ghost structure needed to quantize the system by BFV-BRST methods is given, and unitarity is shown.
Atomic-scale epitaxial aluminum film on GaAs substrate
NASA Astrophysics Data System (ADS)
Fan, Yen-Ting; Lo, Ming-Cheng; Wu, Chu-Chun; Chen, Peng-Yu; Wu, Jenq-Shinn; Liang, Chi-Te; Lin, Sheng-Di
2017-07-01
Atomic-scale metal films exhibit intriguing size-dependent film stability, electrical conductivity, superconductivity, and chemical reactivity. With advancing methods for preparing ultra-thin and atomically smooth metal films, clear evidence of the quantum size effect has been collected experimentally over the past two decades. However, the problems of small-area fabrication, film oxidation in air, and highly sensitive interfaces between the metal, substrate, and capping layer have seriously limited the use of quantized metallic films for further ex-situ investigations and applications. To this end, we develop a large-area fabrication method for continuous atomic-scale aluminum films. The self-limited oxidation of aluminum protects and quantizes the metallic film and enables ex-situ characterization and device processing in air. Structural analysis and electrical measurements on the prepared films imply the quantum size effect in the atomic-scale aluminum film. Our work opens the way for further physics studies and device applications using the quantized electronic states in metals.
Hamiltonian thermodynamics of charged three-dimensional dilatonic black holes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dias, Goncalo A. S.; Lemos, Jose P. S.; Centro Multidisciplinar de Astrofisica-CENTRA, Departamento de Fisica, Instituto Superior Tecnico-IST, Universidade Tecnica de Lisboa-UTL, Avenida Rovisco Pais 1, 1049-001 Lisboa
2008-10-15
The action for a class of three-dimensional dilaton-gravity theories, with an electromagnetic Maxwell field and a cosmological constant, can be recast as a Brans-Dicke-Maxwell type action with a free ω parameter. For a negative cosmological constant, these theories have static, electrically charged, spherically symmetric black hole solutions. Those theories with well formulated asymptotics are studied through a Hamiltonian formalism, and their thermodynamic properties are found. The theories studied are general relativity (ω → ±∞), a dimensionally reduced cylindrical four-dimensional general relativity theory (ω = 0), and a theory representing a class of theories (ω = -3), all with a Maxwell term. The Hamiltonian formalism is set up in three dimensions through foliations on the right region of the Carter-Penrose diagram, with the bifurcation 1-sphere as the left boundary and anti-de Sitter infinity as the right boundary. The metric functions on the foliated hypersurfaces and the radial component of the vector potential one-form are the canonical coordinates. The Hamiltonian action is written, the Hamiltonian being a sum of constraints. One finds a new action which yields an unconstrained theory with two pairs of canonical coordinates (M, P_M; Q, P_Q), where M is the mass parameter, which for ω < -3/2 and for ω = ±∞ needs a careful renormalization, P_M is the momentum conjugate to M, Q is the charge parameter, and P_Q is its conjugate momentum. The resulting Hamiltonian is a sum of boundary terms only. A quantization of the theory is performed. The Schrödinger evolution operator is constructed, the trace is taken, and the partition function of the grand canonical ensemble is obtained, where the chemical potential is the scalar electric field φ. As in the uncharged cases studied previously, the charged black hole entropies differ, in general, from the usual quarter of the horizon area due to the dilaton.
Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters
NASA Astrophysics Data System (ADS)
Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi
A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It produces no false-negative matches regardless of differences in coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization-step sizes, a feature not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero bit-planes, which can be extracted from the JPEG 2000 codestream by parsing only the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.
Perspectives of Light-Front Quantized Field Theory: Some New Results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srivastava, Prem P.
1999-08-13
A review of some basic topics in the light-front (LF) quantization of relativistic field theory is made. It is argued that LF quantization is as appropriate as the conventional one and that, assuming the microcausality principle, they lead to the same physical content. This is confirmed in studies on the LF of spontaneous symmetry breaking (SSB), of the degenerate vacua in the Schwinger model (SM) and Chiral SM (CSM), of the chiral boson theory, and of QCD in covariant gauges, among others. The discussion on the LF is more economical and more transparent than that found in the conventional equal-time quantized theory. The removal of the constraints on the LF phase space by following the Dirac method, in fact, results in a substantially reduced number of independent dynamical variables. Consequently, the descriptions of the physical Hilbert space and the vacuum structure, for example, become more tractable. In the context of the Dyson-Wick perturbation theory the relevant propagators in the front form theory are causal. The Wick rotation can then be performed to employ the Euclidean space integrals in momentum space. The lack of manifest covariance becomes tractable, and still more so if we employ, as discussed in the text, the Fourier transform of the fermionic field based on a special construction of the LF spinor. The fact that the hyperplanes x^± = 0 constitute characteristic surfaces of the hyperbolic partial differential equation is found irrelevant in the quantized theory; it seems sufficient to quantize the theory on one of the characteristic hyperplanes.
Generalized noise terms for the quantized fluctuational electrodynamics
NASA Astrophysics Data System (ADS)
Partanen, Mikko; Häyrynen, Teppo; Tulkki, Jukka; Oksanen, Jani
2017-03-01
The quantization of optical fields in vacuum has been known for decades, but extending the field quantization to lossy and dispersive media in nonequilibrium conditions has proven to be complicated due to the position-dependent electric and magnetic responses of the media. In fact, consistent position-dependent quantum models for the photon number in resonant structures have only been formulated very recently, and only for dielectric media. Here we present a general position-dependent quantized fluctuational electrodynamics (QFED) formalism that extends the consistent field quantization to describe the photon number also in the presence of magnetic field-matter interactions. It is shown that the magnetic fluctuations provide an additional degree of freedom in media where the magnetic coupling to the field is prominent. Therefore, the field quantization requires an additional independent noise operator that commutes with the conventional bosonic noise operator describing the polarization current fluctuations in dielectric media. In addition to allowing the detailed description of field fluctuations, our methods provide practical tools for modeling optical energy transfer and the formation of thermal balance in general dielectric and magnetic nanodevices. We use QFED to investigate the magnetic properties of microcavity systems to demonstrate an example geometry in which it is possible to probe fields arising from the electric and magnetic source terms. We show that, as a consequence of the magnetic Purcell effect, tuning the position of an emitter layer placed inside a vacuum cavity can make the emissivity of a magnetic emitter exceed that of a corresponding electric emitter.
NASA Astrophysics Data System (ADS)
Clarke, Peter; Varghese, Philip; Goldstein, David
2018-01-01
A discrete velocity method is developed for gas mixtures of diatomic molecules with both rotational and vibrational energy states. A fully quantized model is described, and rotation-translation and vibration-translation energy exchanges are simulated using a Larsen-Borgnakke exchange model. Elastic and inelastic molecular interactions are modeled during every simulated collision to help produce smooth internal energy distributions. The method is verified by comparing simulations of homogeneous relaxation by our discrete velocity method with numerical solutions of the Jeans and Landau-Teller equations, and with direct simulation Monte Carlo. We compute the structure of a 1D shock using this method, and determine how the rotational energy distribution varies with spatial location in the shock and with position in velocity space.
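The Landau-Teller benchmark used for verification integrates a simple relaxation ODE for the mean vibrational energy; a minimal sketch with illustrative parameter values:

```python
def landau_teller(e0, e_eq, tau, dt, steps):
    """Integrate de_v/dt = (e_eq - e_v)/tau by forward Euler: the
    Landau-Teller relaxation of mean vibrational energy toward its
    equilibrium value e_eq with relaxation time tau."""
    e, out = e0, [e0]
    for _ in range(steps):
        e += dt * (e_eq - e) / tau
        out.append(e)
    return out
```

A discrete velocity or DSMC simulation of homogeneous vibrational relaxation can then be checked against this exponential approach to equilibrium.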
NASA Astrophysics Data System (ADS)
Huang, Wen-Min; Mou, Chung-Yu; Chang, Cheng-Hung
2010-02-01
While the scattering phase for several one-dimensional potentials can be exactly derived, less is known in multi-dimensional quantum systems. This work provides a method to extend the one-dimensional phase knowledge to multi-dimensional quantization rules. The extension is illustrated in the example of Bogomolny's transfer operator method applied in two quantum wells bounded by step potentials of different heights. This generalized semiclassical method accurately determines the energy spectrum of the systems, which indicates the substantial role of the proposed phase correction. Theoretically, the result can be extended to other semiclassical methods, such as Gutzwiller trace formula, dynamical zeta functions, and semiclassical Landauer-Büttiker formula. In practice, this recipe enhances the applicability of semiclassical methods to multi-dimensional quantum systems bounded by general soft potentials.
Quantization selection in the high-throughput H.264/AVC encoder based on the RD
NASA Astrophysics Data System (ADS)
Pastuszak, Grzegorz
2013-10-01
In a hardware video encoder, quantization is responsible for quality losses; on the other hand, it allows bit rates to be reduced to the target. If mode selection is based on the rate-distortion criterion, the quantization can also be adjusted to obtain better compression efficiency. In particular, using a Lagrangian function with a given multiplier enables the encoder to select the most suitable quantization step, determined by the quantization parameter QP. Moreover, the quantization offset added before discarding the fractional part after quantization can be adjusted. To select the best quantization parameter and offset in real time, the HD/SD encoder should be implemented in hardware. In particular, the hardware architecture should embed transformation and quantization modules able to process the same residuals many times. In this work, such an architecture is used. Experimental results show what improvements in compression efficiency are achievable for intra coding.
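The Lagrangian selection rule J = D + λR can be sketched in a few lines; the rate and distortion models in the test are toy stand-ins for values a real encoder would measure per candidate QP.

```python
def best_qp(candidates, rate, dist, lam):
    """Pick the quantization parameter minimizing the Lagrangian
    J = D + lambda * R, as in rate-distortion-optimized QP selection.
    `rate` and `dist` are caller-supplied models mapping QP to the
    measured bit cost and distortion."""
    return min(candidates, key=lambda qp: dist(qp) + lam * rate(qp))
```

With a larger multiplier λ, rate is penalized more heavily and the rule selects coarser quantization; this is the same trade-off the hardware architecture evaluates by re-processing the same residuals at several QPs.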
Full Spectrum Conversion Using Traveling Pulse Wave Quantization
2017-03-01
Kappes, Michael S.; Waltari, Mikko E. (IQ-Analog Corporation, San Diego, California)
… temporal-domain quantization technique called Traveling Pulse Wave Quantization (TPWQ). Full spectrum conversion is defined as the complete … pulse width measurements that are continuously generated, hence the name "traveling" pulse wave quantization. Our TPWQ-based ADC is composed of a …
On-line gas chromatographic analysis of airborne particles
Hering, Susanne V [Berkeley, CA; Goldstein, Allen H [Orinda, CA
2012-01-03
A method and apparatus for the in-situ, chemical analysis of an aerosol. The method may include the steps of: collecting an aerosol; thermally desorbing the aerosol into a carrier gas to provide desorbed aerosol material; transporting the desorbed aerosol material onto the head of a gas chromatography column; analyzing the aerosol material using a gas chromatograph, and quantizing the aerosol material as it evolves from the gas chromatography column. The apparatus includes a collection and thermal desorption cell, a gas chromatograph including a gas chromatography column, heated transport lines coupling the cell and the column; and a quantization detector for aerosol material evolving from the gas chromatography column.
Kelvin-Helmholtz instability in a single-component atomic superfluid
NASA Astrophysics Data System (ADS)
Baggaley, A. W.; Parker, N. G.
2018-05-01
We demonstrate an experimentally feasible method for generating the classical Kelvin-Helmholtz instability in a single-component atomic Bose-Einstein condensate. By progressively reducing a potential barrier between two counterflowing channels, we seed a line of quantized vortices, which proceed to form progressively larger clusters, mimicking the classical roll-up behavior of the Kelvin-Helmholtz instability. This cluster formation leads to an effective superfluid shear layer, formed through the collective motion of many quantized vortices. From this we demonstrate a straightforward method to measure the effective viscosity of a turbulent quantum fluid in a system with a moderate number of vortices, within the range of current experimental capabilities.
Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Lu, Xinghai; Xuan, Li
2009-09-28
A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors (DLCWFCs) for atmospheric turbulence correction is reported. A simple formula describing the relationship between pixel number, DLCWFC aperture, quantization level, and atmospheric coherence length was derived from atmospheric turbulence wavefronts calculated using Kolmogorov atmospheric turbulence theory. It was found that the pixel number across the DLCWFC aperture is a linear function of the telescope aperture and the quantization level, and an exponential function of the atmospheric coherence length. These results are useful for applying DLCWFCs to atmospheric turbulence correction in large-aperture telescopes.
Multipurpose image watermarking algorithm based on multistage vector quantization.
Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He
2005-06-01
The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility.
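The multistage VQ structure underlying the scheme can be sketched as follows; the watermark embedding itself (e.g. modulating the stage indices) is omitted, so this shows only the quantizer cascade the method builds on.

```python
import numpy as np

def msvq_encode(x, codebooks):
    """Multistage VQ: each stage quantizes the residual left by the
    previous stage; the tuple of per-stage indices is the code."""
    idxs, residual = [], x.astype(float)
    for cb in codebooks:
        d = np.linalg.norm(residual[None, :] - cb, axis=1)
        j = int(np.argmin(d))        # nearest codeword in this stage
        idxs.append(j)
        residual = residual - cb[j]  # pass the residual downstream
    return idxs

def msvq_decode(idxs, codebooks):
    """Reconstruction is the sum of the selected codewords."""
    return sum(cb[j] for cb, j in zip(codebooks, idxs))
```

Because later stages refine earlier ones, a semi-fragile watermark can live in a fine (late) stage while a robust watermark lives in a coarse (early) stage, which is the intuition behind embedding different watermarks in different VQ stages.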
Grand canonical ensemble Monte Carlo simulation of the dCpG/proflavine crystal hydrate.
Resat, H; Mezei, M
1996-09-01
The grand canonical ensemble Monte Carlo molecular simulation method is used to investigate hydration patterns in the crystal hydrate structure of the dCpG/proflavine intercalated complex. The objective of this study is to show by example that the recently advocated grand canonical ensemble simulation is a computationally efficient method for determining the positions of the hydrating water molecules in protein and nucleic acid structures. A detailed molecular simulation convergence analysis and an analogous comparison of the theoretical results with experiments clearly show that the grand ensemble simulations can be far more advantageous than the comparable canonical ensemble simulations.
Zhang, Yu; Wu, Jianxin; Cai, Jianfei
2016-05-01
In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise. Discarding them via feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm covering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection has achieved both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
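A hedged sketch of the unsupervised variant of this pipeline (variance ranking stands in for the paper's importance sorting; the supervised version would rank dimensions using label information instead):

```python
import numpy as np

def select_and_binarize(X, k):
    """Rank dimensions by variance, keep the top-k, then 1-bit
    quantize each kept dimension by thresholding at its mean.
    Returns the kept dimension indices and the 0/1 codes."""
    order = np.argsort(X.var(axis=0))[::-1][:k]
    bits = (X[:, order] >= X[:, order].mean(axis=0)).astype(np.uint8)
    return order, bits
```

The stored representation shrinks from 32 bits per dimension to 1 bit per kept dimension, which is the storage saving the paper exploits when combining selection with 1-bit quantization.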
Group field theories for all loop quantum gravity
NASA Astrophysics Data System (ADS)
Oriti, Daniele; Ryan, James P.; Thürigen, Johannes
2015-02-01
Group field theories represent a second quantized reformulation of the loop quantum gravity state space and a completion of the spin foam formalism. States of the canonical theory, in the traditional continuum setting, have support on graphs of arbitrary valence. On the other hand, group field theories have usually been defined in a simplicial context, thus dealing with a restricted set of graphs. In this paper, we generalize the combinatorics of group field theories to cover all the loop quantum gravity state space. As an explicit example, we describe the group field theory formulation of the KKL spin foam model, as well as a particular modified version. We show that the use of tensor model tools allows for the most effective construction. In order to clarify the mathematical basis of our construction and of the formalisms with which we deal, we also give an exhaustive description of the combinatorial structures entering spin foam models and group field theories, both at the level of the boundary states and of the quantum amplitudes.
Black holes in loop quantum gravity.
Perez, Alejandro
2017-12-01
This is a review of results on black hole physics in the context of loop quantum gravity. The key feature underlying these results is the discreteness of geometric quantities at the Planck scale predicted by this approach to quantum gravity. Quantum discreteness follows directly from the canonical quantization prescription when applied to the action of general relativity that is suitable for the coupling of gravity with gauge fields, and especially with fermions. Planckian discreteness and causal considerations provide the basic structure for the understanding of the thermal properties of black holes close to equilibrium. Discreteness also provides a fresh new look at more (at the moment) speculative issues, such as those concerning the fate of information in black hole evaporation. The hypothesis of discreteness leads, also, to interesting phenomenology with possible observational consequences. The theory of loop quantum gravity is a developing program; this review reports its achievements and open questions in a pedagogical manner, with an emphasis on quantum aspects of black hole physics.
Modeling and analysis of energy quantization effects on single electron inverter performance
NASA Astrophysics Data System (ADS)
Dan, Surya Shankar; Mahapatra, Santanu
2009-08-01
In this paper, the effects of energy quantization on single electron transistor (SET) inverter performance are analyzed for the first time through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly changes the Coulomb blockade region and drain current of SET devices and thus affects the noise margin, power dissipation, and propagation delay of the SET inverter. A new analytical model for the noise margin of the SET inverter is proposed which includes the energy quantization effects. Using the noise margin as a metric, the robustness of the SET inverter is studied against the effects of energy quantization. A compact expression is developed for a novel parameter, the quantization threshold, introduced for the first time in this paper. The quantization threshold explicitly defines the maximum energy quantization that an SET inverter logic circuit can withstand before its noise margin falls below a specified tolerance level. It is found that an SET inverter designed with CT : CG = 1/3 (where CT and CG are the tunnel junction and gate capacitances, respectively) offers maximum robustness against energy quantization.
Spatio-Chromatic Adaptation via Higher-Order Canonical Correlation Analysis of Natural Images
Gutmann, Michael U.; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús
2014-01-01
Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis explains similarly chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and also to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation. PMID:24533049
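The classical linear CCA that the paper generalizes can be computed by whitening each data set and taking an SVD of the cross-covariance of the whitened variables; a minimal sketch (assumes centered, full-rank data):

```python
import numpy as np

def cca(X, Y, k):
    """Classical linear CCA: whiten each data set via its SVD, then
    SVD the cross-covariance of the whitened variables. The singular
    values s are the canonical correlations; A and B project the
    original data onto the canonical variates."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    def whiten(A):
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return U, Vt.T / s          # whitened data, whitening map
    Ux, Wx = whiten(X)
    Uy, Wy = whiten(Y)
    U, s, Vt = np.linalg.svd(Ux.T @ Uy)
    return Wx @ U[:, :k], Wy @ Vt.T[:, :k], s[:k]
```

The higher-order method described above replaces the purely linear correlation in this construction with independent components related across data sets by linear or higher-order correlations.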
Zhang, Wanli; Yang, Shiju; Li, Chuandong; Zhang, Wei; Yang, Xinsong
2018-08-01
This paper focuses on stochastic exponential synchronization of delayed memristive neural networks (MNNs) with the aid of interval-parameter systems established using the concept of the Filippov solution. A new intermittent controller and an adaptive controller with logarithmic quantization are constructed to deal simultaneously with the difficulties induced by time-varying delays, interval parameters, and stochastic perturbations. Moreover, these controllers not only reduce control cost but also save communication channels and bandwidth. Based on novel Lyapunov functions and new analytical methods, several synchronization criteria are established to realize the exponential synchronization of MNNs with stochastic perturbations via intermittent control and adaptive control, with or without logarithmic quantization. Finally, numerical simulations are offered to substantiate our theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
Spacetime algebra as a powerful tool for electromagnetism
NASA Astrophysics Data System (ADS)
Dressel, Justin; Bliokh, Konstantin Y.; Nori, Franco
2015-08-01
We present a comprehensive introduction to spacetime algebra that emphasizes its practicality and power as a tool for the study of electromagnetism. We carefully develop this natural (Clifford) algebra of the Minkowski spacetime geometry, with a particular focus on its intrinsic (and often overlooked) complex structure. Notably, the scalar imaginary that appears throughout the electromagnetic theory properly corresponds to the unit 4-volume of spacetime itself, and thus has physical meaning. The electric and magnetic fields are combined into a single complex and frame-independent bivector field, which generalizes the Riemann-Silberstein complex vector that has recently resurfaced in studies of the single photon wavefunction. The complex structure of spacetime also underpins the emergence of electromagnetic waves, circular polarizations, the normal variables for canonical quantization, the distinction between electric and magnetic charge, complex spinor representations of Lorentz transformations, and the dual (electric-magnetic field exchange) symmetry that produces helicity conservation in vacuum fields. This latter symmetry manifests as an arbitrary global phase of the complex field, motivating the use of a complex vector potential, along with an associated transverse and gauge-invariant bivector potential, as well as complex (bivector and scalar) Hertz potentials. Our detailed treatment aims to encourage the use of spacetime algebra as a readily available and mature extension to existing vector calculus and tensor methods that can greatly simplify the analysis of fundamentally relativistic objects like the electromagnetic field.
NASA Astrophysics Data System (ADS)
Aghamaleki, Javad Abbasi; Behrad, Alireza
2018-01-01
Double compression detection is a crucial stage in digital image and video forensics. However, the detection of double compressed videos is challenging when the video forger uses the same quantization matrix and a synchronized group of pictures (GOP) structure during the recompression history to conceal tampering effects. A passive approach is proposed for detecting double compressed MPEG videos with the same quantization matrix and synchronized GOP structure. To devise the proposed algorithm, the effects of recompression on P frames are mathematically studied. Then, based on the obtained guidelines, a feature vector is proposed to detect double compressed frames at the GOP level. Subsequently, sparse representations of the feature vectors are used for dimensionality reduction and to enrich the traces of recompression. Finally, a support vector machine classifier is employed to detect and localize double compression in the temporal domain. The experimental results show that the proposed algorithm achieves an accuracy of more than 95%. In addition, comparisons of the results of the proposed method with those of other methods reveal the efficiency of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Tavousi, Alireza; Mansouri-Birjandi, Mohammad Ali; Saffari, Mehdi
2016-09-01
Implementing photonic sampling and quantizing analog-to-digital converters (ADCs) enables us to extract a single binary word from optical signals without the need for extra electronic assisting parts. This would enormously increase the sampling and quantizing speed as well as decrease the consumed power. To this end, based on the concept of the successive approximation method, a 4-bit all-optical ADC that operates using the intensity-dependent Kerr-like nonlinearity in a two-dimensional photonic crystal (2DPhC) platform is proposed. Silicon (Si) nanocrystal is chosen because of its suitable nonlinear material characteristics. An optical limiter is used for the clamping and quantization of each successive level that represents the ADC bits. In the proposal, an energy-efficient optical ADC circuit is implemented by controlling system parameters such as the ring-to-waveguide coupling coefficients, the ring's nonlinear refractive index, and the ring's length. The performance of the ADC structure is verified by simulation using the finite-difference time-domain (FDTD) method.
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted via a memoryless noisy channel, including the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed for no channel errors.
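The bit-assignment step described above can be illustrated with a simpler greedy allocator under the high-rate Gaussian quantizer model D(b) = σ²·2^(−2b), where each bit goes to the coefficient whose distortion would drop the most. This is a hedged stand-in for intuition, not the paper's channel-optimized steepest-descent algorithm:

```python
import numpy as np

def allocate_bits(variances, total_bits):
    """Greedy bit allocation across transform coefficients.

    Assumes the high-rate model D(b) = var * 2**(-2*b): adding one bit
    divides a coefficient's distortion by 4, so each of the total_bits
    is given to the coefficient with the largest marginal gain.
    """
    variances = np.asarray(variances, dtype=float)
    bits = np.zeros(len(variances), dtype=int)
    dist = variances.copy()              # current per-coefficient distortion
    for _ in range(total_bits):
        gain = dist - dist / 4.0         # distortion drop from one more bit
        i = int(np.argmax(gain))
        bits[i] += 1
        dist[i] /= 4.0
    return bits
```

High-variance (low-frequency) coefficients naturally receive more bits, matching the usual DCT coding practice.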
Canonical decomposition of magnetotelluric responses: Experiment on 1D anisotropic structures
NASA Astrophysics Data System (ADS)
Guo, Ze-qiu; Wei, Wen-bo; Ye, Gao-feng; Jin, Sheng; Jing, Jian-en
2015-08-01
Horizontal electrical heterogeneity of the subsurface earth mostly originates from structural complexity and electrical anisotropy, and local near-surface electrical heterogeneity can severely distort regional electromagnetic responses. Conventional distortion analyses for magnetotelluric soundings are primarily physical decomposition methods with respect to isotropic models, which mostly presume that the geoelectric distribution of geological structures follows local and regional patterns represented by 3D/2D models. Given the widespread anisotropy of earth media, the confusion between 1D anisotropic responses and 2D isotropic responses, and the defects of physical decomposition methods, we propose to conduct modeling experiments with canonical decomposition in terms of 1D layered anisotropic models. Canonical decomposition is a mathematical decomposition method based on eigenstate analyses, as distinguished from distortion analyses, and it can be used to recover electrical information such as strike directions and maximum and minimum conductivity. We tested this method with numerical simulation experiments on several 1D synthetic models, which showed that canonical decomposition is quite effective at revealing geological anisotropic information. Finally, given the background of anisotropy established by previous geological and seismological studies, canonical decomposition is applied to real data acquired in the North China Craton for 1D anisotropy analyses. The result shows that, with effective modeling and cautious interpretation, canonical decomposition can be another good method to detect the anisotropy of geological media.
Grand canonical ensemble Monte Carlo simulation of the dCpG/proflavine crystal hydrate.
Resat, H; Mezei, M
1996-01-01
The grand canonical ensemble Monte Carlo molecular simulation method is used to investigate hydration patterns in the crystal hydrate structure of the dCpG/proflavine intercalated complex. The objective of this study is to show by example that the recently advocated grand canonical ensemble simulation is a computationally efficient method for determining the positions of the hydrating water molecules in protein and nucleic acid structures. A detailed molecular simulation convergence analysis and an analogous comparison of the theoretical results with experiments clearly show that the grand ensemble simulations can be far more advantageous than the comparable canonical ensemble simulations. PMID:8873992
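As a toy illustration of the ensemble involved (not the authors' simulation code), a grand canonical Monte Carlo chain for a non-interacting gas reduces to simple particle insertion/deletion moves with textbook acceptance rules; the average particle count should converge to the activity times the volume:

```python
import random

def gcmc_ideal_gas(activity_volume, steps=200000, seed=1):
    """Grand canonical Monte Carlo for an ideal gas (no interactions).

    activity_volume = z*V, where z is the activity exp(mu/kT)/Lambda^3.
    Insertions are accepted with min(1, zV/(N+1)) and deletions with
    min(1, N/(zV)); the stationary distribution of N is Poisson with
    mean zV, so <N> should equal zV.
    """
    rng = random.Random(seed)
    N = 0
    total = 0
    for _ in range(steps):
        if rng.random() < 0.5:   # attempt an insertion
            if rng.random() < min(1.0, activity_volume / (N + 1)):
                N += 1
        else:                    # attempt a deletion
            if N > 0 and rng.random() < min(1.0, N / activity_volume):
                N -= 1
        total += N
    return total / steps
```

Real GCMC for water in a crystal hydrate adds an interaction energy term to both acceptance ratios; the move structure stays the same.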
NASA Astrophysics Data System (ADS)
Chernyak, Vladimir Y.; Klein, John R.; Sinitsyn, Nikolai A.
2012-04-01
This article studies the Markovian stochastic motion of a particle on a graph with a finite number of nodes and periodically time-dependent transition rates that satisfy the detailed balance condition at all times. We show that under general conditions, the currents in the system on average become quantized or fractionally quantized for adiabatic driving at sufficiently low temperature. We develop the quantitative theory of this quantization and interpret it in terms of topological invariants. By applying the celebrated Kirchhoff theorem we derive a general and explicit formula for the average generated current that serves as an efficient tool for treating current quantization effects.
Speech coding at low to medium bit rates
NASA Astrophysics Data System (ADS)
Leblanc, Wilfred Paul
1992-09-01
Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short-term filter are developed by applying a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be robust both against input characteristics and in the presence of channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for the excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost by imposing significant structure on the excitation codebooks while greatly reducing the search complexity. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short-term filter, the adaptive codebook, and the excitation. Improvements in signal-to-noise ratio of 1-2 dB are realized in practice.
NASA Astrophysics Data System (ADS)
Lowney, Joseph Daniel
Methods to generate, manipulate, and measure optical and atomic fields with global or local angular momentum have a wide range of applications in both fundamental physics research and technology development. In optics, the engineering of angular momentum states of light can aid studies of orbital angular momentum (OAM) exchange between light and matter. The engineering of optical angular momentum states can also be used to increase the bandwidth of optical communications or serve as a means to distribute quantum keys, for example. Similar capabilities in Bose-Einstein condensates (BECs) are being investigated to improve our understanding of superfluid dynamics, superconductivity, and turbulence, the last of which is widely considered to be one of the most ubiquitous yet poorly understood subjects in physics. The first part of this two-part dissertation presents an analysis of techniques for measuring and manipulating quantized vortices in BECs. The second part presents theoretical and numerical analyses of new methods to engineer the OAM spectra of optical beams. The superfluid dynamics of a BEC are often well described by a nonlinear Schrodinger equation. The nonlinearity arises from interatomic scattering and enables BECs to support quantized vortices, which have quantized circulation and are fundamental structural elements of quantum turbulence. With the experimental tools to dynamically manipulate and measure quantized vortices, BECs are proving to be a useful medium for testing the theoretical predictions of quantum turbulence. In this dissertation we first analyze a method for making minimally destructive in situ observations of quantized vortices in a BEC. Second, we numerically study a mechanism to imprint vortex dipoles in a BEC. With these advancements, more robust experiments of vortex dynamics and quantum turbulence will be within reach.
A more complete understanding of quantum turbulence will enable principles of microscopic fluid flow to be related to the statistical properties of turbulence in a superfluid. In the second part of this dissertation we explore frequency mixing, a subset of nonlinear optical processes in which one or more input optical beams are converted into one or more output beams with different optical frequencies. The ability of parametric nonlinear processes such as second harmonic generation or parametric amplification to manipulate the OAM spectra of optical beams is an active area of research. In a theoretical and numerical investigation, two complementary methods for sculpting the OAM spectra are developed. The first method employs second harmonic generation with two non-collinear input beams to develop a broad spectrum of OAM states in an optical field. The second method utilizes parametric amplification with collinear input beams to develop an OAM-dependent gain or attenuation, termed dichroism for OAM, to effectively narrow the OAM spectrum of an optical beam. The theoretical principles developed in this dissertation enhance our understanding of how nonlinear processes can be used to engineer the OAM spectra of optical beams and could serve as methods to increase the bandwidth of an optical signal by multiplexing over a range of OAM states.
Zhang, Senlin; Chen, Huayan; Liu, Meiqin; Zhang, Qunfei
2017-11-07
Target tracking is one of the broad applications of underwater wireless sensor networks (UWSNs). However, as a result of the temporal and spatial variability of acoustic channels, underwater acoustic communications suffer from an extremely limited bandwidth. In order to reduce network congestion, it is important to shorten the length of the data transmitted from local sensors to the fusion center by quantization. Although quantization can reduce bandwidth cost, it also degrades tracking performance because of the information lost in quantization. To solve this problem, this paper proposes an optimal quantization-based target tracking scheme. It improves the tracking performance of low-bit quantized measurements by minimizing the additional covariance caused by quantization. The simulations demonstrate that our scheme performs much better than the conventional uniform quantization-based target tracking scheme, and that increasing the data length affects our scheme only slightly: its tracking performance improves by only 4.4% from 2-bit to 3-bit quantization, which means that our scheme depends only weakly on the number of data bits. Moreover, our scheme also depends only weakly on the number of participating sensors, so it can work well in sparse sensor networks. In a 6 × 6 × 6 sensor network, compared with a 4 × 4 × 4 sensor network, the number of participating sensors increases by 334.92%, while the tracking accuracy using 1-bit quantized measurements improves by only 50.77%. Overall, our optimal quantization-based target tracking scheme achieves data efficiency, which fits the requirements of low-bandwidth UWSNs.
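A minimal n-bit uniform quantizer, of the kind used as the "conventional" baseline the abstract compares against, might look like the sketch below (illustrative, not the paper's scheme):

```python
import numpy as np

def uniform_quantize(x, lo, hi, n_bits):
    """Uniform n-bit quantizer over [lo, hi].

    Returns integer codes (what a sensor would transmit) and the
    reconstructed values (what the fusion center would decode).
    For in-range inputs the reconstruction error is at most step/2.
    """
    levels = 2 ** n_bits
    step = (hi - lo) / levels
    codes = np.clip(np.floor((np.asarray(x) - lo) / step), 0, levels - 1).astype(int)
    recon = lo + (codes + 0.5) * step   # mid-point reconstruction
    return codes, recon
```

With 2 bits the step over [0, 1] is 0.25, so each transmitted measurement carries at most 0.125 of reconstruction error; the paper's optimal scheme instead chooses quantizer parameters to minimize the covariance this error adds to the track estimate.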
Discriminant analysis in wildlife research: Theory and applications
Williams, B.K.; Capen, D.E.
1981-01-01
Discriminant analysis, a method of analyzing grouped multivariate data, is often used in ecological investigations. It has both a predictive and an explanatory function, the former aiming at classification of individuals of unknown group membership. The goal of the latter function is to exhibit group separation by means of linear transforms, and the corresponding method is called canonical analysis. This discussion focuses on the application of canonical analysis in ecology. In order to clarify its meaning, a parametric approach is taken instead of the usual data-based formulation. For certain assumptions the data-based canonical variates are shown to result from maximum likelihood estimation, thus ensuring consistency and asymptotic efficiency. The distorting effects of covariance heterogeneity are examined, as are certain difficulties which arise in interpreting the canonical functions. A 'distortion metric' is defined, by means of which distortions resulting from the canonical transformation can be assessed. Several sampling problems which arise in ecological applications are considered. It is concluded that the method may prove valuable for data exploration, but is of limited value as an inferential procedure.
Perceptual Optimization of DCT Color Quantization Matrices
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Statler, Irving C. (Technical Monitor)
1994-01-01
Many image compression schemes employ a block Discrete Cosine Transform (DCT) and uniform quantization. Acceptable rate/distortion performance depends upon proper design of the quantization matrix. In previous work, we showed how to use a model of the visibility of DCT basis functions to design quantization matrices for arbitrary display resolutions and color spaces. Subsequently, we showed how to optimize greyscale quantization matrices for individual images, for optimal rate/perceptual distortion performance. Here we describe extensions of this optimization algorithm to color images.
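The basic quantization-matrix operation that such perceptual optimization tunes can be sketched as follows; the matrix entries here are illustrative numbers, not a perceptually optimized matrix:

```python
import numpy as np

def quantize_block(dct_coeffs, Q):
    """Quantize a block of DCT coefficients with quantization matrix Q.

    Each coefficient is divided by its matrix entry and rounded;
    dequantization multiplies back. Larger Q entries mean coarser
    quantization, with per-coefficient error bounded by Q/2.
    """
    q = np.round(np.asarray(dct_coeffs, dtype=float) / Q)
    return q, q * Q
```

Designing Q is then a matter of choosing each entry so the resulting error stays at or below the visibility threshold for that frequency and color channel.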
Scalets, wavelets and (complex) turning point quantization
NASA Astrophysics Data System (ADS)
Handy, C. R.; Brooks, H. A.
2001-05-01
Despite the many successes of wavelet analysis in image and signal processing, the incorporation of continuous wavelet transform theory within quantum mechanics has lacked a compelling, first principles, motivating analytical framework, until now. For arbitrary one-dimensional rational fraction Hamiltonians, we develop a simple, unified formalism, which clearly underscores the complementary, and mutually interdependent, role played by moment quantization theory (i.e. via scalets, as defined herein) and wavelets. This analysis involves no approximation of the Hamiltonian within the (equivalent) wavelet space, and emphasizes the importance of (complex) multiple turning point contributions in the quantization process. We apply the method to three illustrative examples. These include the (double-well) quartic anharmonic oscillator potential problem, V(x) = Z2x2 + gx4, the quartic potential, V(x) = x4, and the very interesting and significant non-Hermitian potential V(x) = -(ix)3, recently studied by Bender and Boettcher.
Perceptually-Based Adaptive JPEG Coding
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)
1996-01-01
An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
Wavelet/scalar quantization compression standard for fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1996-06-12
The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (the wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Peterson, Heidi A.
1993-01-01
The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression, and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.
Quantization and training of object detection networks with low-precision weights and activations
NASA Astrophysics Data System (ADS)
Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie
2018-01-01
As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of the weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of the weights and activations. As the most computationally expensive convolutions can be replaced by effective fixed-point operations, the proposed method can drastically reduce computation complexity and memory footprint. Applied to the tiny you-only-look-once (tiny YOLO) and YOLO architectures, the proposed method achieves accuracy comparable to their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal visual object classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
Accelerating simulation for the multiple-point statistics algorithm using vector quantization
NASA Astrophysics Data System (ADS)
Zuo, Chen; Pan, Zhibin; Liang, Hao
2018-03-01
Multiple-point statistics (MPS) is a prominent algorithm for simulating categorical variables based on a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated simulation method for MPS based on vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables amenable to vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of handling multifacies, nonstationary, and 3D simulations based on 2D TIs.
From the Weyl quantization of a particle on the circle to number–phase Wigner functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Przanowski, Maciej, E-mail: maciej.przanowski@p.lodz.pl; Brzykcy, Przemysław, E-mail: 800289@edu.p.lodz.pl; Tosiek, Jaromir, E-mail: jaromir.tosiek@p.lodz.pl
2014-12-15
A generalized Weyl quantization formalism for a particle on the circle is shown to supply an effective method for defining the number–phase Wigner function in quantum optics. A Wigner function for the state ϱ̂ and the kernel K for a particle on the circle is defined and its properties are analysed. Then it is shown how this Wigner function can be easily modified to give the number–phase Wigner function in quantum optics. Some examples of such number–phase Wigner functions are considered.
Bit-wise arithmetic coding for data compression
NASA Technical Reports Server (NTRS)
Kiely, A. B.
1994-01-01
This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
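The core idea above, assigning fixed-length codewords and letting an arithmetic coder treat each bit position as independent, implies a total rate equal to the sum of per-position binary entropies. The sketch below estimates that rate from data (a simplification for intuition, not the article's coder):

```python
import numpy as np

def bitwise_rate(symbols, n_bits):
    """Estimate the rate of bit-wise arithmetic coding in bits/symbol.

    Quantizer outputs are given fixed-length n_bits codewords; modeling
    each bit position i independently, an ideal arithmetic coder spends
    the binary entropy H(p_i) on it, where p_i = P(bit i == 1).
    """
    symbols = np.asarray(symbols, dtype=int)
    rate = 0.0
    for i in range(n_bits):
        p = np.mean((symbols >> i) & 1)      # empirical P(bit i == 1)
        if 0.0 < p < 1.0:
            rate += -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return rate
```

For a uniformly distributed source the estimate equals n_bits exactly; for peaked sources (e.g. quantized Gaussian or Laplacian, as in the article's simulations) it falls below n_bits, which is where the compression comes from.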
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1994-01-01
A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
NASA Astrophysics Data System (ADS)
Mazzola, F.; Wells, J. W.; Pakpour-Tabrizi, A. C.; Jackman, R. B.; Thiagarajan, B.; Hofmann, Ph.; Miwa, J. A.
2018-01-01
We demonstrate simultaneous quantization of conduction band (CB) and valence band (VB) states in silicon using ultrashallow, high-density, phosphorus doping profiles (so-called Si:P δ layers). We show that, in addition to the well-known quantization of CB states within the dopant plane, the confinement of VB-derived states between the subsurface P dopant layer and the Si surface gives rise to a simultaneous quantization of VB states in this narrow region. We also show that the VB quantization can be explained using a simple particle-in-a-box model, and that the number and energy separation of the quantized VB states depend on the depth of the P dopant layer beneath the Si surface. Since the quantized CB states do not show a strong dependence on the dopant depth (but rather on the dopant density), it is straightforward to exhibit control over the properties of the quantized CB and VB states independently of each other by choosing the dopant density and depth accordingly, thus offering new possibilities for engineering quantum matter.
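The particle-in-a-box model invoked above gives quantized levels E_n = n²π²ħ²/(2 m* L²). The sketch below uses an assumed heavy-hole effective mass of 0.49 m_e and treats the dopant depth as the well width; both are illustrative choices, not values from the paper:

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # J per eV

def box_levels(width_nm, n_levels=3, m_eff=0.49):
    """Energy levels (in eV) of an infinite square well.

    E_n = n^2 * pi^2 * hbar^2 / (2 * m_eff * m_e * L^2), modeling the
    confinement region between the P dopant layer and the Si surface.
    """
    L = width_nm * 1e-9
    n = np.arange(1, n_levels + 1)
    E = (n ** 2) * (np.pi ** 2) * HBAR ** 2 / (2 * m_eff * M_E * L ** 2)
    return E / EV
```

The n² scaling and the 1/L² dependence reproduce the paper's qualitative finding: a deeper dopant layer (wider well) gives more, more closely spaced quantized VB states.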
An adaptive vector quantization scheme
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1990-01-01
Vector quantization is known to be an effective compression scheme to achieve a low bit rate so as to minimize communication channel bandwidth and also to reduce digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because it has a simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.
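The addition/subtraction-only property mentioned above can be illustrated with nearest-codeword search under the L1 (sum of absolute differences) metric, which avoids multiplications entirely. This is a generic sketch of that idea, not the article's adaptive algorithm:

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Nearest-codeword search with the L1 distance.

    Sum-of-absolute-differences needs only additions and subtractions,
    which is what makes this style of VQ friendly to simple hardware.
    Returns the index of the best codeword for each input vector.
    """
    vectors = np.asarray(vectors, dtype=float)
    codebook = np.asarray(codebook, dtype=float)
    # |v - c| summed over components, for every (vector, codeword) pair
    d = np.abs(vectors[:, None, :] - codebook[None, :, :]).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Decoding is a plain table lookup into the codebook."""
    return np.asarray(codebook)[indices]
```

Encoding transmits only the index (log2 of the codebook size in bits per vector), and decoding is a table lookup, which is why VQ achieves low rates with trivial decoders.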
Null hypersurface quantization, electromagnetic duality and asymptotic symmetries of Maxwell theory
NASA Astrophysics Data System (ADS)
Bhattacharyya, Arpan; Hung, Ling-Yan; Jiang, Yikun
2018-03-01
In this paper we introduce careful regularization in the quantization of Maxwell theory at asymptotic null infinity. This allows systematic discussion of the commutators under various boundary conditions, and the application of Dirac brackets accordingly in a controlled manner. This method is most useful when we consider asymptotic charges that are not localized at the boundary u → ±∞, such as large gauge transformations. We show that our method reproduces the operator algebra in known cases, and that it can be applied to other space-time symmetry charges such as the BMS transformations. We also obtain the asymptotic form of the U(1) charge following from the electromagnetic duality in an explicitly EM-symmetric Schwarz-Sen-type action. Using our regularization method, we demonstrate that the charge generates the expected transformation of a helicity operator. Our method promises applications in more generic theories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serwer, Philip, E-mail: serwer@uthscsa.edu; Wright, Elena T.; Liu, Zheng
DNA packaging of phages phi29, T3 and T7 sometimes produces incompletely packaged DNA with quantized lengths, based on gel electrophoretic band formation. We discover here a packaging ATPase-free, in vitro model for packaged DNA length quantization. We use directed evolution to isolate a five-site T3 point mutant that hyper-produces tail-free capsids with mature DNA (heads). Three tail gene mutations, but no head gene mutations, are present. A variable-length DNA segment leaks from some mutant heads, based on DNase I-protection assay and electron microscopy. The protected DNA segment has quantized lengths, based on restriction endonuclease analysis: six sharp bands of DNA missing 3.7–12.3% of the last end packaged. Native gel electrophoresis confirms quantized DNA expulsion and, after removal of external DNA, provides evidence that capsid radius is the quantization-ruler. Capsid-based DNA length quantization possibly evolved via selection for stalling that provides time for feedback control during DNA packaging and injection. - Graphical abstract: Highlights: • We implement directed evolution- and DNA-sequencing-based phage assembly genetics. • We purify stable, mutant phage heads with a partially leaked mature DNA molecule. • Native gels and DNase-protection show leaked DNA segments to have quantized lengths. • Native gels after DNase I-removal of leaked DNA reveal the capsids to vary in radius. • Thus, we hypothesize leaked DNA quantization via variably quantized capsid radius.
Clinical Trials With Large Numbers of Variables: Important Advantages of Canonical Analysis.
Cleophas, Ton J
2016-01-01
Canonical analysis assesses the combined effects of a set of predictor variables on a set of outcome variables, but it is little used in clinical trials despite the omnipresence of multiple variables. The aim of this study was to assess the performance of canonical analysis as compared with traditional multivariate methods using multivariate analysis of covariance (MANCOVA). As an example, a simulated data file with 12 gene expression levels and 4 drug efficacy scores was used. The correlation coefficient between the 12 predictor and 4 outcome variables was 0.87 (P = 0.0001), meaning that 76% of the variability in the outcome variables was explained by the 12 covariates. Repeated testing after the removal of 5 unimportant predictor variables and 1 outcome variable produced virtually the same overall result. The MANCOVA identified identical unimportant variables, but it was unable to provide overall statistics. (1) Canonical analysis is remarkable because it can handle many more variables than traditional multivariate methods such as MANCOVA can. (2) At the same time, it accounts for the relative importance of the separate variables, their interactions, and differences in units. (3) Canonical analysis provides overall statistics of the effects of sets of variables, whereas traditional multivariate methods only provide the statistics of the separate variables. (4) Unlike other methods for combining the effects of multiple variables, such as factor analysis/partial least squares, canonical analysis is scientifically entirely rigorous. (5) Limitations include that it is less flexible than factor analysis/partial least squares, because only 2 sets of variables are used and because multiple solutions instead of one are offered. We hope that this article will stimulate clinical investigators to start using this remarkable method.
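For readers unfamiliar with the method, canonical correlations between two variable sets can be computed the textbook way: whiten each set by its covariance and take the singular values of the whitened cross-covariance. The sketch below mimics the study's setup (12 predictors, 4 outcomes) with simulated data; it is a minimal numpy illustration, not the software used in the article.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two centered variable sets.

    Textbook construction: whiten each set by its covariance, then take
    the singular values of the whitened cross-covariance matrix.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx, Syy = X.T @ X / (n - 1), Y.T @ Y / (n - 1)
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):  # inverse matrix square root (assumes full rank)
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)  # descending, each in [0, 1]

# Simulated analogue of the study's setup: 12 "gene expression" predictors,
# 4 "drug efficacy" outcomes driven by the first 4 predictors plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
Y = X[:, :4] + 0.5 * rng.normal(size=(200, 4))
r = canonical_correlations(X, Y)      # min(12, 4) = 4 canonical correlations
```

The number of canonical correlations equals the size of the smaller variable set, which is why the method scales naturally to many predictors.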
Face recognition algorithm using extended vector quantization histogram features.
Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu
2018-01-01
In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.
Dimensional quantization effects in the thermodynamics of conductive filaments
NASA Astrophysics Data System (ADS)
Niraula, D.; Grice, C. R.; Karpov, V. G.
2018-06-01
We consider the physical effects of dimensional quantization in conductive filaments that underlie operations of some modern electronic devices. We show that, as a result of quantization, a sufficiently thin filament acquires a positive charge. Several applications of this finding include the host material polarization, the stability of filament constrictions, the equilibrium filament radius, polarity in device switching, and quantization of conductance.
Nearly associative deformation quantization
NASA Astrophysics Data System (ADS)
Vassilevich, Dmitri; Oliveira, Fernando Martins Costa
2018-04-01
We study several classes of non-associative algebras as possible candidates for deformation quantization in the direction of a Poisson bracket that does not satisfy Jacobi identities. We show that in fact alternative deformation quantization algebras require the Jacobi identities on the Poisson bracket and, under very general assumptions, are associative. At the same time, flexible deformation quantization algebras exist for any Poisson bracket.
Dimensional quantization effects in the thermodynamics of conductive filaments.
Niraula, D; Grice, C R; Karpov, V G
2018-06-29
We consider the physical effects of dimensional quantization in conductive filaments that underlie operations of some modern electronic devices. We show that, as a result of quantization, a sufficiently thin filament acquires a positive charge. Several applications of this finding include the host material polarization, the stability of filament constrictions, the equilibrium filament radius, polarity in device switching, and quantization of conductance.
Distance learning in discriminative vector quantization.
Schneider, Petra; Biehl, Michael; Hammer, Barbara
2009-10-01
Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.
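The Euclidean-distance baseline these extensions start from is plain LVQ1, which can be sketched in a few lines: the nearest prototype is attracted to a training sample when their labels agree and repelled otherwise. The data, prototype initialization, and learning rate below are illustrative, not from the letter.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    """Basic LVQ1: attract the nearest prototype if labels match, repel otherwise.

    Euclidean distance is used here; the implicit isotropy assumption of
    this baseline is what relevance and matrix learning variants relax.
    """
    P = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            d = ((P - x) ** 2).sum(axis=1)
            w = np.argmin(d)                  # winner prototype
            sign = 1.0 if proto_labels[w] == label else -1.0
            P[w] += sign * lr * (x - P[w])    # attract or repel the winner
    return P

def lvq1_predict(X, prototypes, proto_labels):
    d = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return proto_labels[np.argmin(d, axis=1)]

# Two well-separated Gaussian clusters, one prototype per class.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 0.3, (50, 2)),
               rng.normal([2, 2], 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
labels = np.array([0, 1])
P = lvq1_train(X, y, np.array([[0.5, 0.5], [1.5, 1.5]]), labels)
acc = (lvq1_predict(X, P, labels) == y).mean()
```

The classifier remains interpretable because each prototype is a point in the data space representing its class, which is the appeal of the whole LVQ family.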
Using a binaural biomimetic array to identify bottom objects ensonified by echolocating dolphins
Helweg, D.A.; Moore, P.W.; Martin, S.W.; Dankiewicz, L.A.
2006-01-01
The development of a unique dolphin biomimetic sonar produced data that were used to study signal processing methods for object identification. Echoes from four metallic objects proud on the bottom, and a substrate-only condition, were generated by bottlenose dolphins trained to ensonify the targets in very shallow water. Using the two-element ('binaural') receive array, object echo spectra were collected and submitted for identification to four neural network architectures. Identification accuracy was evaluated over two receive array configurations, and five signal processing schemes. The four neural networks included backpropagation, learning vector quantization, genetic learning and probabilistic network architectures. The processing schemes included four methods that capitalized on the binaural data, plus a monaural benchmark process. All the schemes resulted in above-chance identification accuracy when applied to learning vector quantization and backpropagation. Beam-forming or concatenation of spectra from both receive elements outperformed the monaural benchmark, with higher sensitivity and lower bias. Ultimately, best object identification performance was achieved by the learning vector quantization network supplied with beam-formed data. The advantages of multi-element signal processing for object identification are clearly demonstrated in this development of a first-ever dolphin biomimetic sonar. © 2006 IOP Publishing Ltd.
Bottini, Silvia; Hamouda-Tekaya, Nedra; Tanasa, Bogdan; Zaragosi, Laure-Emmanuelle; Grandjean, Valerie; Repetto, Emanuela; Trabucchi, Michele
2017-05-19
Experimental evidence indicates that about 60% of miRNA-binding activity does not follow the canonical rule of seed matching between miRNA and target mRNAs, but rather reflects non-canonical miRNA targeting activity outside the seed or with seed-like motifs. Here, we propose a new unbiased method to identify canonical and non-canonical miRNA-binding sites from peaks identified by Ago2 Cross-Linked ImmunoPrecipitation associated with high-throughput sequencing (CLIP-seq). Since the quality of peaks is of pivotal importance for the final output of the proposed method, we provide a comprehensive benchmarking of four peak detection programs, namely CIMS, PIPE-CLIP, Piranha and Pyicoclip, on four publicly available Ago2-HITS-CLIP datasets and one unpublished in-house Ago2 dataset in stem cells. We measured the sensitivity, the specificity and the positional accuracy of miRNA-binding-site identification, and the agreement with TargetScan. Secondly, we developed a new pipeline, called miRBShunter, to identify canonical and non-canonical miRNA-binding sites based on de novo motif identification from Ago2 peaks and prediction of miRNA::RNA heteroduplexes. miRBShunter was tested and experimentally validated on the in-house Ago2 dataset and on an Ago2-PAR-CLIP dataset in human stem cells. Overall, we provide guidelines for choosing a suitable peak detection program and a new method for miRNA-target identification. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Bottini, Silvia; Hamouda-Tekaya, Nedra; Tanasa, Bogdan; Zaragosi, Laure-Emmanuelle; Grandjean, Valerie; Repetto, Emanuela
2017-01-01
Abstract Experimental evidence indicates that about 60% of miRNA-binding activity does not follow the canonical rule of seed matching between miRNA and target mRNAs, but rather reflects non-canonical miRNA targeting activity outside the seed or with seed-like motifs. Here, we propose a new unbiased method to identify canonical and non-canonical miRNA-binding sites from peaks identified by Ago2 Cross-Linked ImmunoPrecipitation associated with high-throughput sequencing (CLIP-seq). Since the quality of peaks is of pivotal importance for the final output of the proposed method, we provide a comprehensive benchmarking of four peak detection programs, namely CIMS, PIPE-CLIP, Piranha and Pyicoclip, on four publicly available Ago2-HITS-CLIP datasets and one unpublished in-house Ago2 dataset in stem cells. We measured the sensitivity, the specificity and the positional accuracy of miRNA-binding-site identification, and the agreement with TargetScan. Secondly, we developed a new pipeline, called miRBShunter, to identify canonical and non-canonical miRNA-binding sites based on de novo motif identification from Ago2 peaks and prediction of miRNA::RNA heteroduplexes. miRBShunter was tested and experimentally validated on the in-house Ago2 dataset and on an Ago2-PAR-CLIP dataset in human stem cells. Overall, we provide guidelines for choosing a suitable peak detection program and a new method for miRNA-target identification. PMID:28108660
Topological quantization in units of the fine structure constant.
Maciejko, Joseph; Qi, Xiao-Liang; Drew, H Dennis; Zhang, Shou-Cheng
2010-10-15
Fundamental topological phenomena in condensed matter physics are associated with a quantized electromagnetic response in units of fundamental constants. Recently, it has been predicted theoretically that the time-reversal invariant topological insulator in three dimensions exhibits a topological magnetoelectric effect quantized in units of the fine structure constant α=e²/ℏc. In this Letter, we propose an optical experiment to directly measure this topological quantization phenomenon, independent of material details. Our proposal also provides a way to measure the half-quantized Hall conductances on the two surfaces of the topological insulator independently of each other.
Late onset canonical babbling: a possible early marker of abnormal development.
Oller, D K; Eilers, R E; Neal, A R; Cobo-Lewis, A B
1998-11-01
By their 10th month of life, typically developing infants produce canonical babbling, which includes the well-formed syllables required for meaningful speech. Research suggests that emerging speech or language-related disorders might be associated with late onset of canonical babbling. Onset of canonical babbling was investigated for 1,536 high-risk infants at about 10 months corrected age. Parental report by open-ended questionnaire was found to be an efficient method for ascertaining babbling status. Although delays were infrequent, they were often associated with genetic, neurological, anatomical, and/or physiological abnormalities. Over half the cases of late canonical babbling were not, at the time they were discovered, associated with prior significant medical diagnoses. Late onset of canonical babbling may be a predictor of later developmental disabilities, including problems in speech, language, and reading.
NASA Technical Reports Server (NTRS)
Wrigley, Chris J.; Hancock, Bruce R.; Newton, Kenneth W.; Cunningham, Thomas J.
2013-01-01
Single-slope analog-to-digital converters (ADCs) are particularly useful for on-chip digitization in focal plane arrays (FPAs) because of their inherent monotonicity, relative simplicity, and efficiency for column-parallel applications, but they are comparatively slow. Square-root encoding can allow the number of code values to be reduced without loss of signal-to-noise ratio (SNR) by keeping the quantization noise just below the signal shot noise. This encoding can be implemented directly by using a quadratic ramp. The reduction in the number of code values can substantially increase the quantization speed. However, in an FPA, the fixed pattern noise (FPN) limits the use of small quantization steps at low signal levels. If the zero-point is adjusted so that the lowest column is on-scale, the other columns, including those at the center of the distribution, will be pushed up the ramp where the quantization noise is higher. Additionally, the finite frequency response of the ramp buffer amplifier and the comparator distort the shape of the ramp, so that the effective ramp value at the time the comparator trips differs from the intended value, resulting in errors. Allowing increased settling time decreases the quantization speed, while increasing the bandwidth increases the noise. The FPN problem is solved by breaking the ramp into two portions, with some fraction of the available code values allocated to a linear ramp and the remainder to a quadratic ramp. To avoid large transients, both the value and the slope of the linear and quadratic portions should be equal where they join. The span of the linear portion must cover the minimum offset, but not necessarily the maximum, since the fraction of the pixels above the upper limit will still be correctly quantized, albeit with increased quantization noise.
The required linear span, maximum signal, and ratio of quantization noise to shot noise at high signal, along with the continuity requirement, determine the number of code values that must be allocated to each portion. The distortion problem is solved by using a lookup table to convert captured code values back to signal levels. The values in this table will be similar to the intended ramp value, but with a correction for the finite bandwidth effects. Continuous-time comparators are used, and their bandwidth is set below the step rate, which smooths the ramp and reduces the noise. No settling time is needed, as would be the case for clocked comparators, but the low bandwidth enhances the distortion of the non-linear portion. This is corrected by use of a return lookup table, which differs from the one used to generate the ramp. The return lookup table is obtained by calibrating against a stepped precision DC reference. This results in a residual non-linearity well below the quantization noise. This method can also compensate for differential non-linearity (DNL) in the DAC used to generate the ramp. The use of a ramp with a combination of linear and quadratic portions for a single-slope ADC is novel. The number of steps is minimized by keeping the step size just below the photon shot noise. This in turn maximizes the speed of the conversion. High resolution is maintained by keeping small quantization steps at low signals, and noise is minimized by allowing the lowest analog bandwidth, all without increasing the quantization noise. A calibrated return lookup table allows the system to maintain excellent linearity.
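The continuity requirement described above (equal value and slope where the linear and quadratic portions join) can be made concrete with a small ramp generator. The parameterization below is an illustrative guess, not the flight design: the quadratic coefficient is simply chosen so that the slope is continuous at the joint.

```python
import numpy as np

def build_ramp(n_codes, n_linear, step):
    """Piecewise single-slope ADC ramp: linear start, quadratic tail.

    The two pieces match in both value and slope at code n_linear, so
    the ramp has no kink that would cause large transients. Parameter
    names and values are illustrative, not from the original design.
    """
    k = np.arange(n_codes, dtype=float)
    ramp = np.where(
        k <= n_linear,
        step * k,                                    # linear: covers FPN offsets
        step * n_linear + step * (k - n_linear)      # continuous value and slope
        + 0.5 * (k - n_linear) ** 2 * step / n_linear,  # quadratic growth
    )
    return ramp

ramp = build_ramp(n_codes=256, n_linear=64, step=1.0)
steps = np.diff(ramp)  # per-code step size: constant, then growing linearly
```

In the quadratic region the step size grows linearly with code value, so the ramp value grows quadratically, which is what keeps the quantization step roughly tracking the square root of the signal (the shot-noise scaling the abstract describes).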
NASA Astrophysics Data System (ADS)
Wu, Xiongwu; Brooks, Bernard R.
2011-11-01
The self-guided Langevin dynamics (SGLD) is a method to accelerate conformational searching. This method is unique in that it selectively enhances and suppresses molecular motions based on their frequency to accelerate conformational searching without modifying energy surfaces or raising temperatures. It has been applied to studies of many long-time-scale events, such as protein folding. Recent progress in the understanding of the conformational distribution in SGLD simulations makes SGLD also an accurate method for quantitative studies. The SGLD partition function provides a way to convert the SGLD conformational distribution to the canonical ensemble distribution and to calculate ensemble average properties through reweighting. Based on the SGLD partition function, this work presents a force-momentum-based self-guided Langevin dynamics (SGLDfp) simulation method to directly sample the canonical ensemble. This method includes interaction forces in its guiding force to compensate for the perturbation caused by the momentum-based guiding force, so that it approximately samples the canonical ensemble. Using several example systems, we demonstrate that SGLDfp simulations can approximately maintain the canonical ensemble distribution and significantly accelerate conformational searching. With optimal parameters, SGLDfp and SGLD simulations can cross energy barriers of more than 15 kT and 20 kT, respectively, at rates similar to those at which LD simulations cross energy barriers of 10 kT. The SGLDfp method is size extensive and works well for large systems. For studies where preserving accessible conformational space is critical, such as free energy calculations and protein folding studies, SGLDfp is an efficient approach to search and sample the conformational space.
On the Dequantization of Fedosov's Deformation Quantization
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
2003-08-01
To each natural deformation quantization on a Poisson manifold M we associate a Poisson morphism from the formal neighborhood of the zero section of the cotangent bundle to M to the formal neighborhood of the diagonal of the product M x M~, where M~ is a copy of M with the opposite Poisson structure. We call it dequantization of the natural deformation quantization. Then we "dequantize" Fedosov's quantization.
Ustinov, E A; Do, D D
2012-08-21
We present, for the first time in the literature, a new kinetic Monte Carlo scheme applied to a grand canonical ensemble, which we call hereafter GC-kMC. It was shown recently that the kinetic Monte Carlo (kMC) scheme is a very effective tool for the analysis of equilibrium systems. It had been applied in a canonical ensemble to describe vapor-liquid equilibrium of argon over a wide range of temperatures, gas adsorption on an open graphite surface and in graphitic slit pores. However, in spite of the correspondence between canonical and grand canonical ensembles, the latter is more relevant for the correct description of open systems; for example, the hysteresis loop observed in adsorption of gases in pores under sub-critical conditions can only be described with a grand canonical ensemble. Therefore, the present paper is aimed at an extension of kMC to open systems. The developed GC-kMC was shown to be consistent with the results obtained with the canonical kMC (C-kMC) for argon adsorption on a graphite surface at 77 K and in graphitic slit pores at 87.3 K. We show that in slit micropores hexagonal packing in the layers adjacent to the pore walls is observed at high loadings, even at temperatures above the triple point of the bulk phase. The potential and applicability of GC-kMC are further shown with the correct description of the heat of adsorption and the pressure tensor of the adsorbed phase.
Integrability, Quantization and Moduli Spaces of Curves
NASA Astrophysics Data System (ADS)
Rossi, Paolo
2017-07-01
This paper presents, in an organic way, a new approach to integrable (1+1)-dimensional field systems and their systematic quantization emerging from the intersection theory of the moduli space of stable algebraic curves and, in particular, from cohomological field theories, Hodge classes and double ramification cycles. These methods are an alternative to the traditional Witten-Kontsevich framework and its generalizations by Dubrovin and Zhang and, among other advantages, have the merit of encompassing quantum integrable systems. Most of this material originates from an ongoing collaboration with A. Buryak, B. Dubrovin and J. Guéré.
Nano-Transistor Modeling: Two Dimensional Green's Function Method
NASA Technical Reports Server (NTRS)
Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan
2001-01-01
Two quantum mechanical effects that impact the operation of nanoscale transistors are inversion layer energy quantization and ballistic transport. While the qualitative effects of these features are reasonably understood, a comprehensive study of device physics in two dimensions is lacking. Our work addresses this shortcoming and provides: (a) a framework to quantitatively explore device physics issues such as the source-drain and gate leakage currents, DIBL (Drain-Induced Barrier Lowering), and threshold voltage shift due to quantization, and (b) a means of benchmarking quantum corrections to semiclassical models (such as density-gradient and quantum-corrected MEDICI).
Positive geometries and canonical forms
NASA Astrophysics Data System (ADS)
Arkani-Hamed, Nima; Bai, Yuntao; Lam, Thomas
2017-11-01
Recent years have seen a surprising connection between the physics of scattering amplitudes and a class of mathematical objects — the positive Grassmannian, positive loop Grassmannians, tree and loop Amplituhedra — which have been loosely referred to as "positive geometries". The connection between the geometry and physics is provided by a unique differential form canonically determined by the property of having logarithmic singularities (only) on all the boundaries of the space, with residues on each boundary given by the canonical form on that boundary. The structures seen in the physical setting of the Amplituhedron are both rigid and rich enough to motivate an investigation of the notions of "positive geometries" and their associated "canonical forms" as objects of study in their own right, in a more general mathematical setting. In this paper we take the first steps in this direction. We begin by giving a precise definition of positive geometries and canonical forms, and introduce two general methods for finding forms for more complicated positive geometries from simpler ones — via "triangulation" on the one hand, and "push-forward" maps between geometries on the other. We present numerous examples of positive geometries in projective spaces, Grassmannians, and toric, cluster and flag varieties, both for the simplest "simplex-like" geometries and the richer "polytope-like" ones. We also illustrate a number of strategies for computing canonical forms for large classes of positive geometries, ranging from a direct determination exploiting knowledge of zeros and poles, to the use of the general triangulation and push-forward methods, to the representation of the form as volume integrals over dual geometries and contour integrals over auxiliary spaces. These methods yield interesting representations for the canonical forms of wide classes of positive geometries, ranging from the simplest Amplituhedra to new expressions for the volume of arbitrary convex polytopes.
On the use and computation of the Jordan canonical form in system theory
NASA Technical Reports Server (NTRS)
Sridhar, B.; Jordan, D.
1974-01-01
This paper investigates various aspects of the application of the Jordan canonical form of a matrix in system theory and develops a computational approach to determining the Jordan form for a given matrix. Applications include pole placement, controllability and observability studies, serving as an intermediate step in yielding other canonical forms, and theorem proving. The computational method developed in this paper is both simple and efficient. The method is based on the definition of a generalized eigenvector and a natural extension of Gauss elimination techniques. Examples are included for demonstration purposes.
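The construction the paper describes (an ordinary eigenvector, then a generalized eigenvector obtained by solving a singular linear system) can be sketched numerically for a small defective matrix. The example matrix is arbitrary, and the minimum-norm least-squares solve is used here as a simple stand-in for the Gauss-elimination step.

```python
import numpy as np

# A defective matrix: double eigenvalue 2 but only one ordinary
# eigenvector, so diagonalization fails and the Jordan form has a
# single 2x2 Jordan block.
A = np.array([[3.0, 1.0],
              [-1.0, 1.0]])
lam = 2.0
v1 = np.array([1.0, -1.0])        # ordinary eigenvector: (A - lam*I) v1 = 0

# Generalized eigenvector: solve the singular system (A - lam*I) v2 = v1.
# lstsq returns the minimum-norm solution of this consistent system.
N = A - lam * np.eye(2)
v2, *_ = np.linalg.lstsq(N, v1, rcond=None)

# The Jordan chain (v1, v2) reduces A to its Jordan canonical form.
P = np.column_stack([v1, v2])
J = np.linalg.inv(P) @ A @ P      # [[2, 1], [0, 2]] up to rounding
```

For larger matrices the chain construction is repeated per eigenvalue and per block, which is where the careful elimination strategy discussed in the paper matters.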
BSIFT: toward data-independent codebook for large scale image search.
Zhou, Wengang; Li, Houqiang; Hong, Richang; Lu, Yijuan; Tian, Qi
2015-03-01
The Bag-of-Words (BoW) model based on the Scale Invariant Feature Transform (SIFT) has been widely used in large-scale image retrieval applications. Feature quantization by vector quantization plays a crucial role in the BoW model: it generates visual words from the high-dimensional SIFT features so as to adapt to the inverted file structure for scalable retrieval. Traditional feature quantization approaches suffer from several issues, such as the necessity of visual codebook training, limited reliability, and update inefficiency. To avoid these problems, in this paper a novel feature quantization scheme is proposed to efficiently quantize each SIFT descriptor to a descriptive and discriminative bit-vector, called binary SIFT (BSIFT). Our quantizer is independent of image collections. In addition, by taking the first 32 bits of BSIFT as the code word, the generated BSIFT naturally lends itself to the classic inverted file structure for image indexing. Moreover, the quantization error is reduced by feature filtering, code word expansion, and query-sensitive mask shielding. Without any explicit codebook for quantization, our approach can be readily applied to image search in resource-limited scenarios. We evaluate the proposed algorithm for large-scale image search on two public image data sets. Experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
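The flavor of a codebook-free, data-independent binary signature can be conveyed with a toy scheme that thresholds each descriptor element against the descriptor's own median. This is an illustrative stand-in, not the actual BSIFT bit layout or thresholding rule.

```python
import numpy as np

def binary_signature(desc):
    """Quantize a SIFT-like descriptor to a bit-vector by thresholding
    each element against the descriptor's own median.

    Illustrative only: the point is that no trained codebook is needed,
    and the bits are comparable with cheap Hamming distance.
    """
    return (desc > np.median(desc)).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two bit-vectors."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(3)
d1 = rng.random(128)                  # a 128-D descriptor
d2 = d1 + 0.01 * rng.random(128)      # slightly perturbed copy of d1
d3 = rng.random(128)                  # unrelated descriptor
b1, b2, b3 = map(binary_signature, (d1, d2, d3))
```

Near-duplicate descriptors map to bit-vectors with a small Hamming distance, while unrelated descriptors land far apart, which is the property an inverted-file index exploits.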
Passive forensics for copy-move image forgery using a method based on DCT and SVD.
Zhao, Jie; Guo, Jichang
2013-12-10
As powerful image editing tools are widely used, the demand for identifying the authenticity of an image has much increased. Copy-move forgery is one of the most frequently used tampering techniques. Most existing techniques to expose this forgery need to improve their robustness to common post-processing operations, and they fail to precisely locate the tampered region, especially when there are large similar or flat regions in the image. In this paper, a robust method based on DCT and SVD is proposed to detect this specific artifact. Firstly, the suspicious image is divided into fixed-size overlapping blocks and the 2D-DCT is applied to each block; the DCT coefficients are then quantized by a quantization matrix to obtain a more robust representation of each block. Secondly, each quantized block is divided into non-overlapping sub-blocks and SVD is applied to each sub-block; features are then extracted to reduce the dimension of each block using its largest singular value. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are matched by a predefined shift-frequency threshold. Experimental results demonstrate that our proposed method can effectively detect multiple copy-move forgeries and precisely locate the duplicated regions, even when an image has been distorted by Gaussian blurring, AWGN, JPEG compression, or their mixed operations. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
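The pipeline above (overlapping blocks, quantized 2D-DCT, sub-block SVD features, lexicographic sort) can be condensed into a toy detector. The block size, quantization step, and quadrant split below are simplifications of the paper's parameters, and the shift-frequency threshold is replaced by a simple minimum-distance check.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] /= np.sqrt(2)
    return D * np.sqrt(2.0 / n)

def block_features(img, bs=8, q=10.0):
    """Per-block features: quantized 2-D DCT, summarized by the largest
    singular value of each quadrant (a simplified stand-in for the
    paper's sub-block SVD step)."""
    D = dct_matrix(bs)
    feats, pos = [], []
    h, w = img.shape
    for i in range(h - bs + 1):
        for j in range(w - bs + 1):
            B = img[i:i + bs, j:j + bs].astype(float)
            C = np.round(D @ B @ D.T / q)         # quantized 2-D DCT
            half = bs // 2
            f = [np.linalg.svd(C[a:a + half, b:b + half], compute_uv=False)[0]
                 for a in (0, half) for b in (0, half)]
            feats.append(f)
            pos.append((i, j))
    return np.array(feats), pos

def find_duplicates(feats, pos, min_shift=8):
    """Lexicographic sort, then flag adjacent rows with identical features
    whose blocks are far enough apart to rule out trivial overlap."""
    order = np.lexsort(feats.T[::-1])
    pairs = []
    for a, b in zip(order[:-1], order[1:]):
        if np.array_equal(feats[a], feats[b]):
            (i1, j1), (i2, j2) = pos[a], pos[b]
            if abs(i1 - i2) + abs(j1 - j2) >= min_shift:
                pairs.append((pos[a], pos[b]))
    return pairs

# Toy forgery: copy an 8x8 patch of a random image to a distant location.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, (24, 24))
img[12:20, 12:20] = img[0:8, 0:8]                 # the "copy-move"
feats, pos = block_features(img)
pairs = find_duplicates(feats, pos)               # should flag (0,0) vs (12,12)
```

Real detectors add the robustness machinery the abstract mentions (coarser quantization, shift-vector voting) so that matches survive blurring and JPEG compression; this sketch only demonstrates the exact-match core.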
Probability Quantization for Multiplication-Free Binary Arithmetic Coding
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A method has been developed to improve on Witten's binary arithmetic coding procedure of tracking a high value and a low value. The new method approximates the probability of the less probable symbol, which improves the worst-case coding efficiency.
Generic absence of strong singularities in loop quantum Bianchi-IX spacetimes
NASA Astrophysics Data System (ADS)
Saini, Sahil; Singh, Parampreet
2018-03-01
We study the generic resolution of strong singularities in loop quantized effective Bianchi-IX spacetime in two different quantizations—the connection operator based ‘A’ quantization and the extrinsic curvature based ‘K’ quantization. We show that in the effective spacetime description with arbitrary matter content, it is necessary to include inverse triad corrections to resolve all the strong singularities in the ‘A’ quantization. Whereas in the ‘K’ quantization these results can be obtained without including inverse triad corrections. Under these conditions, the energy density, expansion and shear scalars for both of the quantization prescriptions are bounded. Notably, both the quantizations can result in potentially curvature divergent events if matter content allows divergences in the partial derivatives of the energy density with respect to the triad variables at a finite energy density. Such events are found to be weak curvature singularities beyond which geodesics can be extended in the effective spacetime. Our results show that all potential strong curvature singularities of the classical theory are forbidden in Bianchi-IX spacetime in loop quantum cosmology and geodesic evolution never breaks down for such events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in
2016-10-01
In this work, we present a consistent Hamiltonian analysis of cosmological perturbations for generalized non-canonical scalar fields. In order to do so, we introduce a new phase-space variable that is uniquely defined for different non-canonical scalar fields. We also show that this is the simplest and most efficient way of expressing the Hamiltonian. We extend the Hamiltonian approach of [1] to non-canonical scalar fields and obtain a unique expression for the speed of sound in terms of the phase-space variable. In order to invert generalized phase-space Hamilton's equations to Euler-Lagrange equations of motion, we prescribe a general inversion formula and show that our approach for non-canonical scalar fields is consistent. We also obtain the third- and fourth-order interaction Hamiltonians for generalized non-canonical scalar fields and briefly discuss the extension of our method to generalized Galilean scalar fields.
Observables and density matrices embedded in dual Hilbert spaces
NASA Astrophysics Data System (ADS)
Prosen, T.; Martignon, L.; Seligman, T. H.
2015-06-01
The introduction of operator states and of observables in various fields of quantum physics has raised questions about the mathematical structures of the corresponding spaces. In the framework of third quantization it had been conjectured that we deal with Hilbert spaces although the mathematical background was not entirely clear, particularly, when dealing with bosonic operators. This in turn caused some doubts about the correct way to combine bosonic and fermionic operators or, in other words, regular and Grassmann variables. In this paper we present a formal answer to the problems on a simple and very general basis. We illustrate the resulting construction by revisiting the Bargmann transform and finding the known connection between {{L}}2({{R}}) and the Bargmann-Hilbert space. We pursue this line of thinking one step further and discuss the representations of complex extensions of linear canonical transformations as isometries between dual Hilbert spaces. We then use the formalism to give an explicit formulation for Fock spaces involving both fermions and bosons thus solving the problem at the origin of our considerations.
NASA Astrophysics Data System (ADS)
Amooshahi, Majid; Shoughi, Ali
2018-05-01
A fully canonical quantization of the electromagnetic field in the presence of a bi-anisotropic absorbing magneto-dielectric slab is demonstrated. The electric and the magnetic polarization densities of the magneto-dielectric slab are defined in terms of the dynamical variables modeling the slab and the coupling tensors that couple the electromagnetic field to the slab. The four susceptibility tensors of the bi-anisotropic magneto-dielectric slab are expressed in terms of the coupling tensors that couple an electromagnetic field to the slab. It is shown that the four susceptibility tensors of the bi-anisotropic magneto-dielectric slab satisfy Kramers-Kronig relations. Maxwell’s equations are solved exactly in the presence of the bi-anisotropic magneto-dielectric slab. The tangential and the normal components of the Casimir forces exerted on the bi-anisotropic magneto-dielectric slab are calculated exactly in the vacuum state and thermal state of the total system. It is shown that the tangential components of the Casimir forces vanish when the bi-anisotropic slab is converted to an isotropic slab.
Weak field equations and generalized FRW cosmology on the tangent Lorentz bundle
NASA Astrophysics Data System (ADS)
Triantafyllopoulos, A.; Stavrinos, P. C.
2018-04-01
We study field equations for a weak anisotropic model on the tangent Lorentz bundle TM of a spacetime manifold. A geometrical extension of general relativity (GR) is considered by introducing the concept of local anisotropy, i.e. a direct dependence of geometrical quantities on observer 4‑velocity. In this approach, we consider a metric on TM as the sum of an h-Riemannian metric structure and a weak anisotropic perturbation; field equations with extra terms are obtained for this model. In addition, extended Raychaudhuri equations are studied in the framework of Finsler-like extensions. The canonical momentum and the mass-shell equation are also generalized in relation to their GR counterparts. Quantization of the mass-shell equation leads to a generalization of the Klein–Gordon equation and of the dispersion relation for a scalar field. In this model the accelerated expansion of the universe can be attributed to the geometry itself. A cosmological bounce is modeled with the introduction of an anisotropic scalar field. Also, the electromagnetic field equations are directly incorporated in this framework.
Entanglement Criteria of Two Two-Level Atoms Interacting with Two Coupled Modes
NASA Astrophysics Data System (ADS)
Baghshahi, Hamid Reza; Tavassoly, Mohammad Kazem; Faghihi, Mohammad Javad
2015-08-01
In this paper, we study the interaction between two two-level atoms and two coupled modes of a quantized radiation field in the form of a parametric frequency converter inside an optical cavity enclosed by a medium with Kerr nonlinearity. It is demonstrated that, by applying the Bogoliubov-Valatin canonical transformation, the introduced model is reduced to a well-known form of the generalized Jaynes-Cummings model. Then, under particular initial conditions for the atoms (a coherent superposition of their ground and upper states) and the fields (standard coherent states) which may be prepared, the time evolution of the state vector of the entire system is analytically evaluated. In order to quantify the degree of entanglement between subsystems (atom-field and atom-atom), the dynamics of entanglement is evaluated through different measures, namely von Neumann reduced entropy, concurrence and negativity. In each case, the effects of the Kerr nonlinearity and the detuning parameter on the above measures are numerically analyzed in detail. It is illustrated that the amount of entanglement can be tuned by appropriately choosing the relevant parameters.
Visually Lossless JPEG 2000 for Remote Image Browsing
Oh, Han; Bilgin, Ali; Marcellin, Michael
2017-01-01
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112
Quantization-Based Adaptive Actor-Critic Tracking Control With Tracking Error Constraints.
Fan, Quan-Yong; Yang, Guang-Hong; Ye, Dan
2018-04-01
In this paper, the problem of adaptive actor-critic (AC) tracking control is investigated for a class of continuous-time nonlinear systems with unknown nonlinearities and quantized inputs. Different from the existing results based on reinforcement learning, the tracking error constraints are considered and new critic functions are constructed to improve the performance further. To ensure that the tracking errors keep within the predefined time-varying boundaries, a tracking error transformation technique is used to constitute an augmented error system. Specific critic functions, rather than the long-term cost function, are introduced to supervise the tracking performance and tune the weights of the AC neural networks (NNs). A novel adaptive controller with a special structure is designed to reduce the effect of the NN reconstruction errors, input quantization, and disturbances. Based on the Lyapunov stability theory, the boundedness of the closed-loop signals and the desired tracking performance can be guaranteed. Finally, simulations on two connected inverted pendulums are given to illustrate the effectiveness of the proposed method.
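The abstract above does not specify its input quantizer in detail; the following is a minimal sketch of the kind of uniform quantizer commonly assumed in quantized-control analyses (the function name and step size are illustrative, not taken from the paper):

```python
def quantize(u, step=0.1):
    """Uniform mid-tread quantizer: maps input u to the nearest
    multiple of `step` (a common model of a quantized control input)."""
    return step * round(u / step)

# The quantization error is bounded by half a step,
# which is what robustness analyses exploit.
u = 0.237
q = quantize(u)
assert abs(u - q) <= 0.05 + 1e-12
```

The bounded quantization error is then treated as a disturbance term in the Lyapunov analysis, alongside the neural-network reconstruction errors.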
Methods and apparatuses for self-generating fault-tolerant keys in spread-spectrum systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moradi, Hussein; Farhang, Behrouz; Subramanian, Vijayarangam
Self-generating fault-tolerant keys for use in spread-spectrum systems are disclosed. At a communication device, beacon signals are received from another communication device and impulse responses are determined from the beacon signals. The impulse responses are circularly shifted to place a largest sample at a predefined position. The impulse responses are converted to a set of frequency responses in a frequency domain. The frequency responses are shuffled with a predetermined shuffle scheme to develop a set of shuffled frequency responses. A set of phase differences is determined as a difference between an angle of the frequency response and an angle of the shuffled frequency response at each element of the corresponding sets. Each phase difference is quantized to develop a set of secret-key quantized phases and a set of spreading codes is developed wherein each spreading code includes a corresponding phase of the set of secret-key quantized phases.
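The key-generation pipeline described above (circular shift to the peak, conversion to the frequency domain, shuffling, phase differencing, quantization) can be sketched as follows. This is a toy reconstruction under stated assumptions: a plain DFT stands in for the unspecified frequency transform, and the `shuffle` scheme, the 2-bit quantizer, and all data are hypothetical.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (stand-in for the
    unspecified frequency-domain conversion)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def circular_shift_to_peak(h):
    # Rotate so the largest-magnitude sample sits at index 0
    p = max(range(len(h)), key=lambda i: abs(h[i]))
    return h[p:] + h[:p]

def phase_key(h, shuffle, bits=2):
    """Derive quantized-phase key symbols from an impulse response."""
    H = dft(circular_shift_to_peak(h))
    Hs = [H[i] for i in shuffle]          # shuffled copy of the responses
    levels = 2 ** bits
    key = []
    for a, b in zip(H, Hs):
        d = (cmath.phase(a) - cmath.phase(b)) % (2 * cmath.pi)
        key.append(int(d / (2 * cmath.pi) * levels) % levels)
    return key

h = [0.1, 0.9, 0.2, -0.3]          # hypothetical measured impulse response
shuffle = [2, 3, 0, 1]             # hypothetical shared shuffle scheme
key = phase_key(h, shuffle)
assert len(key) == 4 and all(0 <= k < 4 for k in key)
```

Because both devices measure (nearly) reciprocal channels, each can run this pipeline independently and arrive at the same key symbols; coarse quantization is what provides the fault tolerance against small channel-estimate mismatches.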
Submonolayer Quantum Dot Infrared Photodetector
NASA Technical Reports Server (NTRS)
Ting, David Z.; Bandara, Sumith V.; Gunapala, Sarath D.; Chang, Yia-Chang
2010-01-01
A method has been developed for inserting submonolayer (SML) quantum dots (QDs) or SML QD stacks, instead of conventional Stranski-Krastanov (S-K) QDs, into the active region of intersubband photodetectors. A typical configuration would be InAs SML QDs embedded in thin layers of GaAs, surrounded by AlGaAs barriers. Here, the GaAs and the AlGaAs have nearly the same lattice constant, while InAs has a larger lattice constant. In a QD infrared photodetector, the important quantization directions lie in the plane perpendicular to the normally incident radiation. In-plane quantization is what enables the absorption of normal incidence radiation. The height of the S-K QD controls the positions of the quantized energy levels, but is not critically important to the desired normal incidence absorption properties. The SML QD or SML QD stack configurations give more control over the grown structure, retain normal incidence absorption properties, and decrease the strain build-up to allow thicker active layers for higher quantum efficiency.
Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.
Selvaraj, Lokesh; Ganesan, Balakrishnan
2014-01-01
Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic algorithm based codebook generation in vector quantization. The initial populations are created by selecting random code vectors from the training set for the codebooks in the genetic algorithm process, and IP-HMM performs the recognition. Variation is introduced through the genetic crossover operation. The proposed speech recognition technique offers 97.14% accuracy.
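The vector-quantization stage above can be illustrated with a minimal nearest-neighbor encoder. The codebook here is a stand-in for one produced by the genetic algorithm described in the abstract; all names and data are illustrative.

```python
def vq_encode(vectors, codebook):
    """Map each feature vector to the index of its nearest codeword
    (squared Euclidean distance) -- the core vector-quantization step."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: d2(v, codebook[i]))
            for v in vectors]

codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # e.g. output of the GA stage
feats = [(0.1, -0.1), (0.9, 1.2), (0.2, 0.8)]    # toy 2-D feature vectors
assert vq_encode(feats, codebook) == [0, 1, 2]
```

The resulting index sequence, rather than the raw feature vectors, is what gets fed to the HMM, which is the usual way VQ compresses the observation space for discrete-observation HMMs.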
Pseudo-Kähler Quantization on Flag Manifolds
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
A unified approach to geometric, symbol and deformation quantizations on a generalized flag manifold endowed with an invariant pseudo-Kähler structure is proposed. In particular cases we arrive at Berezin's quantization via covariant and contravariant symbols.
Instant-Form and Light-Front Quantization of Field Theories
NASA Astrophysics Data System (ADS)
Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James
2018-05-01
In this work we consider the instant-form and light-front quantization of some field theories. As an example, we consider a class of gauged non-linear sigma models with different regularizations. In particular, we present the path integral quantization of the gauged non-linear sigma model in the Faddeevian regularization. We also make a comparison of the possible differences in the instant-form and light-front quantization at appropriate places.
Quantization improves stabilization of dynamical systems with delayed feedback
NASA Astrophysics Data System (ADS)
Stepan, Gabor; Milton, John G.; Insperger, Tamas
2017-11-01
We show that an unstable scalar dynamical system with time-delayed feedback can be stabilized by quantizing the feedback. The discrete time model corresponds to a previously unrecognized case of the microchaotic map in which the fixed point is both locally and globally repelling. In the continuous-time model, stabilization by quantization is possible when the fixed point in the absence of feedback is an unstable node, and in the presence of feedback, it is an unstable focus (spiral). The results are illustrated with numerical simulation of the unstable Hayes equation. The solutions of the quantized Hayes equation take the form of oscillations in which the amplitude is a function of the size of the quantization step. If the quantization step is sufficiently small, the amplitude of the oscillations can be small enough to practically approximate the dynamics around a stable fixed point.
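A toy version of the mechanism can be written down by assuming the standard micro-chaotic map form x_{n+1} = a·x_n − b·⌊x_n⌋ with a = b (parameters chosen for illustration, not taken from the paper): although the linear part is unstable (a > 1), the quantized feedback term traps the orbit in a bounded band.

```python
import math

def microchaotic(x, a=1.2, b=1.2):
    """One step of the micro-chaotic map x -> a*x - b*floor(x):
    unstable linear growth (a > 1) plus quantized (floor) feedback."""
    return a * x - b * math.floor(x)

x = 0.7
orbit = []
for _ in range(1000):
    x = microchaotic(x)
    orbit.append(x)

# Without feedback, x would diverge since |a| > 1; with the quantized
# term the orbit stays trapped in [0, a) forever.
assert max(abs(v) for v in orbit) < 1.2
```

With a = b the map reduces to x → a·frac(x), so every iterate lies in [0, a) regardless of the starting point, which is the bounded small-amplitude oscillation the abstract describes.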
Perceptual compression of magnitude-detected synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Werness, Susan A.
1994-01-01
A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.
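The shrinkage-based segregation step can be sketched with a soft-threshold rule: coefficients below the threshold are zeroed (texture-like), the rest survive (mean/edge-like). The threshold value and data below are illustrative, not from the paper.

```python
def shrink(c, t):
    """Soft-threshold wavelet shrinkage: zero out small (texture-like)
    coefficients and pull larger (edge-like) ones toward zero by t."""
    if c > t:
        return c - t
    if c < -t:
        return c + t
    return 0.0

coeffs = [1.30, -0.72, 0.05, 0.21, -0.18, 0.33]  # toy wavelet coefficients
t = 0.25                                          # hypothetical threshold
edges   = [c for c in coeffs if shrink(c, t) != 0.0]
texture = [c for c in coeffs if shrink(c, t) == 0.0]
assert edges == [1.30, -0.72, 0.33]
assert texture == [0.05, 0.21, -0.18]
```

After segregation, the surviving coefficients would be scalar-quantized at high rate and the zeroed texture band handed to the low-rate vector quantizer, mirroring the hybrid allocation described above.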
Sarkar, Sujit
2018-04-12
An attempt is made to study and understand the behavior of the quantization of the geometric phase of a quantum Ising chain with long range interaction. We show the existence of integer and fractional topological characterizations for this model Hamiltonian, with different quantization conditions and correspondingly different quantized values of the geometric phase. The quantum critical lines behave differently from the perspective of topological characterization. The results of duality and its relation to the topological quantization are presented here. The symmetry study for this model Hamiltonian is also presented. Our results indicate that the Zak phase is not the proper physical parameter to describe the topological characterization of a system with long range interaction. We also present quite a few exact solutions with physical explanations. Finally we present the relation between duality, symmetry and topological characterization. Our work provides a new perspective on topological quantization.
Face Recognition Using Local Quantized Patterns and Gabor Filters
NASA Astrophysics Data System (ADS)
Khryashchev, V.; Priorov, A.; Stepanova, O.; Nikitin, A.
2015-05-01
The problem of face recognition in a natural or artificial environment has received a great deal of researchers' attention over the last few years. A lot of methods for accurate face recognition have been proposed. Nevertheless, these methods often fail to accurately recognize the person in difficult scenarios, e.g. low resolution, low contrast, pose variations, etc. We therefore propose an approach for accurate and robust face recognition by using local quantized patterns and Gabor filters. The estimation of the eye centers is used as a preprocessing stage. The evaluation of our algorithm on different samples from a standardized FERET database shows that our method is invariant to the general variations of lighting, expression, occlusion and aging. The proposed approach allows about 20% correct recognition accuracy increase compared with the known face recognition algorithms from the OpenCV library. The additional use of Gabor filters can significantly improve the robustness to changes in lighting conditions.
Vortex filament method as a tool for computational visualization of quantum turbulence
Hänninen, Risto; Baggaley, Andrew W.
2014-01-01
The vortex filament model has become a standard and powerful tool to visualize the motion of quantized vortices in helium superfluids. In this article, we present an overview of the method and highlight its impact in aiding our understanding of quantum turbulence, particularly superfluid helium. We present an analysis of the structure and arrangement of quantized vortices. Our results are in agreement with previous studies showing that under certain conditions, vortices form coherent bundles, which allows for classical vortex stretching, giving quantum turbulence a classical nature. We also offer an explanation for the differences between the observed properties of counterflow and pure superflow turbulence in a pipe. Finally, we suggest a mechanism for the generation of coherent structures in the presence of normal fluid shear. PMID:24704873
Noncommutative gerbes and deformation quantization
NASA Astrophysics Data System (ADS)
Aschieri, Paolo; Baković, Igor; Jurčo, Branislav; Schupp, Peter
2010-11-01
We define noncommutative gerbes using the language of star products. Quantized twisted Poisson structures are discussed as an explicit realization in the sense of deformation quantization. Our motivation is the noncommutative description of D-branes in the presence of topologically non-trivial background fields.
Restoring canonical partition functions from imaginary chemical potential
NASA Astrophysics Data System (ADS)
Bornyakov, V. G.; Boyda, D.; Goy, V.; Molochkov, A.; Nakamura, A.; Nikolaev, A.; Zakharov, V. I.
2018-03-01
Using GPGPU techniques and multi-precision calculation we developed code to study the QCD phase transition line in the canonical approach. The canonical approach is a powerful tool to investigate the sign problem in Lattice QCD. The central part of the canonical approach is the fugacity expansion of the grand canonical partition function. Canonical partition functions Zn(T) are the coefficients of this expansion. Using various methods we study properties of Zn(T). At the last step we perform a cubic-spline fit to the temperature dependence of Zn(T) at fixed n and compute the baryon number susceptibility χB/T² as a function of temperature. After that we compute ∂χ/∂T numerically and restore the crossover line in the QCD phase diagram. We use improved Wilson fermions and the Iwasaki gauge action on a 16³ × 4 lattice with mπ/mρ = 0.8 as a sandbox to check the canonical approach. In this framework we obtain the coefficient in the parametrization of the crossover line Tc(µB²) = Tc(1 − κµB²/Tc²) with κ = −0.0453 ± 0.0099.
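The fugacity expansion at the heart of the canonical approach can be illustrated with a toy set of canonical partition functions Zn (numbers invented for illustration); the net baryon density then follows by differentiating ln Z_GC with respect to µ/T.

```python
import math

# Toy canonical partition functions Z_n (illustrative numbers only),
# symmetric in n -> -n as required by charge conjugation.
Z = {-2: 0.01, -1: 0.1, 0: 1.0, 1: 0.1, 2: 0.01}

def grand_Z(mu_over_T):
    """Fugacity expansion: Z_GC(mu, T) = sum_n Z_n * exp(n * mu/T)."""
    return sum(zn * math.exp(n * mu_over_T) for n, zn in Z.items())

def density(mu_over_T):
    """Mean net baryon number <n> = d ln Z_GC / d(mu/T)."""
    return sum(n * zn * math.exp(n * mu_over_T)
               for n, zn in Z.items()) / grand_Z(mu_over_T)

assert abs(density(0.0)) < 1e-12   # vanishes at mu = 0 by n -> -n symmetry
assert density(0.5) > 0            # positive net density at mu > 0
```

In the actual lattice calculation the Zn(T) are extracted from simulations at imaginary chemical potential; susceptibilities such as χB follow from one more derivative of ln Z_GC.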
Interframe vector wavelet coding technique
NASA Astrophysics Data System (ADS)
Wus, John P.; Li, Weiping
1997-01-01
Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC method in conjunction with the FSVQ system and lattice VQ, a high quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ scheme, where the current state is determined by the previous channel symbol only, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings are done in this tree-like structure from the lower subbands to the higher subbands in order to exploit the nature of subband analysis in terms of the parent-child relationship. Class A and Class B video sequences from the MPEG-IV testing evaluations are used in the evaluation of this coding method.
Chakrabartty, Shantanu; Shaga, Ravi K; Aono, Kenji
2013-04-01
Analog circuits that are calibrated using digital-to-analog converters (DACs) use a digital signal processor-based algorithm for real-time adaptation and programming of system parameters. In this paper, we first show that this conventional framework for adaptation yields suboptimal calibration properties because of artifacts introduced by quantization noise. We then propose a novel online stochastic optimization algorithm called noise-shaping or ΣΔ gradient descent, which can shape the quantization noise out of the frequency regions spanning the parameter adaptation trajectories. As a result, the proposed algorithms demonstrate superior parameter search properties compared to floating-point gradient methods and better convergence properties than conventional quantized gradient methods. In the second part of this paper, we apply the ΣΔ gradient descent algorithm to two examples of real-time digital calibration: 1) balancing and tracking of bias currents, and 2) frequency calibration of a band-pass Gm-C biquad filter biased in weak inversion. For each of these examples, the circuits have been prototyped in a 0.5-μm complementary metal-oxide-semiconductor process, and we demonstrate that the proposed algorithm is able to find the optimal solution even in the presence of spurious local minima, which are introduced by the nonlinear and non-monotonic response of calibration DACs.
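A minimal sketch of the noise-shaping idea, assuming a first-order error-feedback loop wrapped around a uniform gradient quantizer (the function names, step sizes and test problem are illustrative, not the authors' circuit-level implementation):

```python
def quantize(g, step):
    """Uniform quantizer modeling the finite DAC resolution."""
    return step * round(g / step)

def sigma_delta_gd(grad, x0, lr, step, iters):
    """Gradient descent with quantized updates plus first-order noise
    shaping: the quantization residual e is fed back into the next
    step, so the time-averaged quantized gradient tracks the true one."""
    x, e = x0, 0.0
    for _ in range(iters):
        g = grad(x) + e          # add accumulated quantization residual
        q = quantize(g, step)    # coarse (DAC-limited) update
        e = g - q                # residual carried to the next iteration
        x -= lr * q
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3)
x = sigma_delta_gd(lambda x: 2 * (x - 3), x0=0.0, lr=0.1, step=0.5, iters=200)
assert abs(x - 3) < 0.5
```

Without the error feedback (`e` fixed at 0) the iterate would limit-cycle inside one quantization band; feeding the residual forward dithers the updates so their average matches the true gradient, which is the essence of ΣΔ noise shaping.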
Visibility of wavelet quantization noise
NASA Technical Reports Server (NTRS)
Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.
1997-01-01
The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-λ), where r is display visual resolution in pixels/degree, and λ is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
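The level-to-frequency relation, together with a parabolic-in-log-frequency threshold model of the general kind described, can be sketched as follows; the model constants are hypothetical placeholders, not the fitted values from the paper.

```python
import math

def wavelet_frequency(r, level):
    """Spatial frequency (cycles/degree) of DWT level `level` on a
    display with visual resolution r pixels/degree: f = r * 2**(-level)."""
    return r * 2 ** (-level)

def threshold(f, a=0.1, k=0.7, f0=3.0):
    """Hypothetical detection-threshold model: a parabola in log spatial
    frequency with its minimum (peak sensitivity) at f0. The constants
    a, k, f0 are illustrative placeholders, not fitted values."""
    return a * 10 ** (k * (math.log10(f) - math.log10(f0)) ** 2)

# Each deeper wavelet level halves the spatial frequency ...
assert wavelet_frequency(32, 1) == 16.0
assert wavelet_frequency(32, 5) == 1.0
# ... and thresholds rise away from the most sensitive frequency
assert threshold(12.0) > threshold(3.0)
```

A perceptually lossless quantization matrix then simply assigns each (level, orientation) band a step size at or below its modeled threshold.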
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiao; Science and Technology on Electronic Information Control Laboratory, 610036, Chengdu, Sichuan; Wei, Chaozhen
2014-11-15
In this paper we use the Dirac function to construct a fractional operator called the fractional corresponding operator, which is the general form of the momentum corresponding operator. Then we give a judging theorem for this operator, and with this judging theorem we prove that the R–L, G–L, Caputo and Riesz fractional derivative operators and the fractional derivative operator based on generalized functions, which are the most popular ones, coincide with the fractional corresponding operator. As a typical application, we use the fractional corresponding operator to construct a new fractional quantization scheme and then derive a uniform fractional Schrödinger equation in form. Additionally, we find that the five forms of the fractional Schrödinger equation belong to the particular cases. As another main result of this paper, we use the fractional corresponding operator to generalize the fractional quantization scheme by using the Lévy path integral and use it to derive the corresponding general form of the fractional Schrödinger equation, which consequently proves that these two quantization schemes are equivalent. Meanwhile, relations between the theory in fractional quantum mechanics and that in classic quantum mechanics are also discussed. As a physical example, we consider a particle in an infinite potential well. We give its wave functions and energy spectra in two ways and find that both results are the same.
Magnetic neutron star cooling and microphysics
NASA Astrophysics Data System (ADS)
Potekhin, A. Y.; Chabrier, G.
2018-01-01
Aims: We study the relative importance of several recent updates of microphysics input to the neutron star cooling theory and the effects brought about by superstrong magnetic fields of magnetars, including the effects of the Landau quantization in their crusts. Methods: We use a finite-difference code for simulation of neutron-star thermal evolution on timescales from hours to megayears with an updated microphysics input. The consideration of short timescales (≲1 yr) is made possible by a treatment of the heat-blanketing envelope without the quasistationary approximation inherent to its treatment in traditional neutron-star cooling codes. For the strongly magnetized neutron stars, we take into account the effects of Landau quantization on thermodynamic functions and thermal conductivities. We simulate cooling of ordinary neutron stars and magnetars with non-accreted and accreted crusts and compare the results with observations. Results: Suppression of radiative and conductive opacities in strongly quantizing magnetic fields and formation of a condensed radiating surface substantially enhance the photon luminosity at early ages, making the life of magnetars brighter but shorter. These effects together with the effect of strong proton superfluidity, which slows down the cooling of kiloyear-aged neutron stars, can explain thermal luminosities of about a half of magnetars without invoking heating mechanisms. Observed thermal luminosities of other magnetars are still higher than theoretical predictions, which implies heating, but the effects of quantizing magnetic fields and baryon superfluidity help to reduce the discrepancy.
Thermal field theory and generalized light front quantization
NASA Astrophysics Data System (ADS)
Weldon, H. Arthur
2003-04-01
The dependence of thermal field theory on the surface of quantization and on the velocity of the heat bath is investigated by working in general coordinates that are arbitrary linear combinations of the Minkowski coordinates. In the general coordinates the metric tensor ḡμν is nondiagonal. The Kubo-Martin-Schwinger condition requires periodicity in thermal correlation functions when the temporal variable changes by an amount −i/(T ḡ⁰⁰). Light-front quantization fails since ḡ⁰⁰ = 0; however, various related quantizations are possible.
New excitations in the Thirring model
NASA Astrophysics Data System (ADS)
Cortés, J. L.; Gamboa, J.; Schmidt, I.; Zanelli, J.
1998-12-01
The quantization of the massless Thirring model in the light-cone using functional methods is considered. The need to compactify the coordinate x⁻ in the light-cone spacetime implies that the quantum effective action for left-handed fermions contains excitations similar to abelian instantons produced by composites of left-handed fermions. Right-handed fermions do not have a similar effective action. Thus, quantum mechanically, chiral symmetry must be broken as a result of the topological excitations. The conserved charge associated with the topological states is quantized. Different cases with only fermionic excitations, only bosonic excitations, or both can occur depending on the boundary conditions and the value of the coupling.
Functional integral for non-Lagrangian systems
NASA Astrophysics Data System (ADS)
Kochan, Denis
2010-02-01
A functional integral formulation of quantum mechanics for non-Lagrangian systems is presented. The approach, which we call “stringy quantization,” is based solely on classical equations of motion and is free of any ambiguity arising from Lagrangian and/or Hamiltonian formulation of the theory. The functionality of the proposed method is demonstrated on several examples. Special attention is paid to the stringy quantization of systems with a general A-power friction force −κq̇^A. Results for A = 1 are compared with those obtained in the approaches by Caldirola-Kanai, Bateman, and Kostin. Relations to the Caldeira-Leggett model and to the Feynman-Vernon approach are discussed as well.
Stochastic quantization of (λϕ⁴)_d scalar theory: Generalized Langevin equation with memory kernel
NASA Astrophysics Data System (ADS)
Menezes, G.; Svaiter, N. F.
2007-02-01
The method of stochastic quantization for a scalar field theory is reviewed. A brief survey of the case of a self-interacting scalar field, implementing stochastic perturbation theory up to the one-loop level, is presented. Then a colored random noise is introduced in the Einstein relations, a common prescription employed by one of the stochastic regularizations to control the ultraviolet divergences of the theory. This formalism is extended to the case where a Langevin equation with a memory kernel is used. It is shown that, maintaining the Einstein relations with a colored noise, there is convergence to a non-regularized theory.
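The Langevin evolution underlying stochastic quantization can be illustrated in a zero-dimensional toy model (white noise, no memory kernel); for the free case the fictitious-time average of φ² should reproduce the exact Euclidean result 1/m². All parameters below are illustrative.

```python
import math
import random

def stochastic_quantization(m2=1.0, lam=0.0, dt=0.01, steps=200_000, seed=1):
    """Langevin (stochastic quantization) evolution in fictitious time
    for a zero-dimensional phi^4 'field':
        dphi = -S'(phi) dt + sqrt(2 dt) * eta,
        S(phi) = m2 * phi**2 / 2 + lam * phi**4 / 4.
    Equilibrium averages estimate <phi^2> of the Euclidean theory."""
    rng = random.Random(seed)
    phi, acc, n = 0.0, 0.0, 0
    for i in range(steps):
        drift = -(m2 * phi + lam * phi ** 3)
        phi += drift * dt + math.sqrt(2 * dt) * rng.gauss(0.0, 1.0)
        if i > steps // 10:          # discard thermalization stretch
            acc += phi * phi
            n += 1
    return acc / n

# Free case (lam = 0, m2 = 1): exact <phi^2> = 1/m2 = 1
assert abs(stochastic_quantization() - 1.0) < 0.2
```

Replacing the white noise by a colored one (and the plain Langevin equation by one with a memory kernel) modifies only the noise term and drift here; that is the regularization discussed in the abstract.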
The quantization of the chiral Schwinger model based on the BFT-BFV formalism II
NASA Astrophysics Data System (ADS)
Park, Mu-In; Park, Young-Jai; Yoon, Sean J.
1998-12-01
We apply an improved version of Batalin-Fradkin-Tyutin Hamiltonian method to the a = 1 chiral Schwinger model, which is much more nontrivial than the a>1 one. Furthermore, through the path integral quantization, we newly resolve the problem of the nontrivial 0954-3899/24/12/002/img6-function as well as that of the unwanted Fourier parameter 0954-3899/24/12/002/img7 in the measure. As a result, we explicitly obtain the fully gauge invariant partition function, which includes a new type of Wess-Zumino term irrelevant to the gauge symmetry as well as the usual WZ action.
Channel estimation based on quantized MMP for FDD massive MIMO downlink
NASA Astrophysics Data System (ADS)
Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie
2016-10-01
In this paper, we consider channel estimation for Massive MIMO systems operating in frequency division duplexing mode. By exploiting the sparsity of propagation paths in the Massive MIMO channel, we develop a compressed sensing (CS) based channel estimator which can reduce the pilot overhead. Compared with conventional least squares (LS) and linear minimum mean square error (LMMSE) estimation, the proposed algorithm, based on quantized multipath matching pursuit (MMP), reduces the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods including the LS, LMMSE, CoSaMP and conventional MMP estimators.
ERIC Educational Resources Information Center
Prouty, Kenneth E.
2004-01-01
This essay examines how jazz educators construct methods for teaching the art of improvisation in institutionalized jazz studies programs. Unlike previous studies of the processes and philosophies of jazz instruction, I examine such processes from a cultural standpoint, to identify why certain methods might be favored over others. Specifically,…
ERIC Educational Resources Information Center
Jones, Gerald L.; Westen, Risdon J.
The multivariate approach of canonical correlation was used to assess selection procedures of the Air Force Academy. It was felt that improved student selection methods might reduce the number of dropouts while maintaining or improving the quality of graduates. The method of canonical correlation was designed to maximize prediction of academic…
Deformation quantization of fermi fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galaviz, I.; Garcia-Compean, H.; Departamento de Fisica, Centro de Investigacion y de Estudios Avanzados del IPN, P.O. Box 14-740, 07000 Mexico, D.F.
2008-04-15
Deformation quantization for any Grassmann scalar free field is described via the Weyl-Wigner-Moyal formalism. The Stratonovich-Weyl quantizer, the Moyal *-product and the Wigner functional are obtained by extending the formalism proposed recently in [I. Galaviz, H. Garcia-Compean, M. Przanowski, F.J. Turrubiates, Weyl-Wigner-Moyal Formalism for Fermi Classical Systems, arXiv:hep-th/0612245] to fermionic systems with an infinite number of degrees of freedom. In particular, this formalism is applied to quantize the free Dirac field. It is observed that the use of suitable oscillator variables considerably facilitates the procedure. The Stratonovich-Weyl quantizer, the Moyal *-product, the Wigner functional, the normal ordering operator, and finally the Dirac propagator have been found with the use of these variables.
Quantized Rabi oscillations and circular dichroism in quantum Hall systems
NASA Astrophysics Data System (ADS)
Tran, D. T.; Cooper, N. R.; Goldman, N.
2018-06-01
The dissipative response of a quantum system upon periodic driving can be exploited as a probe of its topological properties. Here we explore the implications of such phenomena in two-dimensional gases subjected to a uniform magnetic field. It is shown that a filled Landau level exhibits a quantized circular dichroism, which can be traced back to its underlying nontrivial topology. Based on selection rules, we find that this quantized effect can be suitably described in terms of Rabi oscillations, whose frequencies satisfy simple quantization laws. We discuss how quantized dissipative responses can be probed locally, both in the bulk and at the boundaries of the system. This work suggests alternative forms of topological probes based on circular dichroism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundararaman, Ravishankar; Goddard, III, William A.; Arias, Tomas A.
First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Lastly, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.
2017-03-21
Instabilities caused by floating-point arithmetic quantization.
NASA Technical Reports Server (NTRS)
Phillips, C. L.
1972-01-01
It is shown that an otherwise stable digital control system can be made unstable by signal quantization when the controller operates in floating-point arithmetic. Sufficient conditions for instability are determined, and an example of loss of stability with a single quantizer in operation is treated.
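A toy illustration of the underlying effect (rounding-induced limit cycles in a feedback loop, not the paper's floating-point analysis; the coefficient and initial state are invented): a stable first-order recursion y[n] = a·y[n-1] decays to zero in exact arithmetic, but quantizing the product at each step traps the state in a sustained oscillation.

```python
# Exact vs. quantized feedback recursion y[n] = a*y[n-1], |a| < 1.
a = -0.9
y_exact, y_quant = 10.0, 10
for _ in range(50):
    y_exact = a * y_exact
    y_quant = round(a * y_quant)   # quantizer inside the feedback loop
print(abs(y_exact) < 0.1, abs(y_quant))   # → True 4
```

The exact state decays toward zero, while the quantized loop locks into a ±4 limit cycle: the quantizer's nonlinearity defeats the linear stability analysis.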
Direct comparison of fractional and integer quantized Hall resistance
NASA Astrophysics Data System (ADS)
Ahlers, Franz J.; Götz, Martin; Pierz, Klaus
2017-08-01
We present precision measurements of the fractional quantized Hall effect, where the quantized resistance R[1/3] in the fractional quantum Hall state at filling factor 1/3 was compared with a quantized resistance R[2], represented by an integer quantum Hall state at filling factor 2. A cryogenic current comparator bridge capable of currents down to the nanoampere range was used to directly compare two resistance values of two GaAs-based devices located in two cryostats. A value of 1 − (5.3 ± 6.3) × 10⁻⁸ (95% confidence level) was obtained for the ratio R[1/3]/(6 R[2]). This constitutes the most precise comparison of integer resistance quantization (in terms of h/e²) in single-particle systems and of fractional quantization in fractionally charged quasi-particle systems. While not relevant for practical metrology, such a test of the validity of the underlying physics is of significance in the context of the upcoming revision of the SI.
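The factor 6 in the compared ratio follows from the ideal quantized values: at filling factor ν the Hall resistance is h/(νe²), so R[1/3] = 3h/e² and R[2] = h/(2e²), and the ideal ratio is exactly one. A quick exact-arithmetic check:

```python
# Ideal quantized Hall resistances in units of h/e^2: R[nu] = 1/nu.
from fractions import Fraction

R_13 = Fraction(3)        # filling factor 1/3  ->  3 h/e^2
R_2 = Fraction(1, 2)      # filling factor 2    ->  h/(2 e^2)
print(R_13 / (6 * R_2))   # → 1
```

The measured deviation of this ratio from 1, at the 10⁻⁸ level, is what bounds any difference between integer and fractional resistance quantization.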
Quantization noise in digital speech. M.S. Thesis- Houston Univ.
NASA Technical Reports Server (NTRS)
Schmidt, O. L.
1972-01-01
The amount of quantization noise generated in a digital-to-analog converter depends on the number of bits, or quantization levels, used to digitize the analog signal in the analog-to-digital converter. The minimum number of quantization levels and the minimum sample rate were derived for a digital voice channel. A sample rate of 6000 samples per second and lowpass filters with a 3 dB cutoff of 2400 Hz are required for 100 percent sentence intelligibility. Consonant sounds are the first speech components to be degraded by quantization noise. A compression amplifier can be used to increase the weighting of the consonant sound amplitudes in the analog-to-digital converter. An expansion network must be installed at the output of the digital-to-analog converter to restore the original weighting of the consonant sounds. This technique results in 100 percent sentence intelligibility for a sample rate of 5000 samples per second, eight quantization levels, and lowpass filters with a 3 dB cutoff of 2000 Hz.
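The compression-amplifier/expansion-network pair described above is a companding scheme. A minimal sketch, assuming μ-law companding (the compressor law, tone amplitude, and level count here are illustrative, not taken from the thesis): a quiet, consonant-like tone survives an eight-level quantizer when companded, but is wiped out by direct uniform quantization.

```python
import numpy as np

def mu_law(x, mu=255.0):
    # Compressor: boosts small amplitudes before quantization.
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_inv(y, mu=255.0):
    # Expander: restores the original weighting after conversion back.
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def quantize(x, levels):
    # Uniform mid-tread quantizer on [-1, 1].
    step = 2.0 / levels
    return np.clip(np.round(x / step) * step, -1.0, 1.0)

def snr_db(ref, est):
    return 10.0 * np.log10(np.sum(ref**2) / np.sum((ref - est)**2))

t = np.linspace(0.0, 1.0, 5000, endpoint=False)
x = 0.05 * np.sin(2 * np.pi * 300 * t)           # quiet, consonant-like tone

direct = quantize(x, 8)                          # 8 levels, no companding
companded = mu_law_inv(quantize(mu_law(x), 8))   # compress -> quantize -> expand
print(snr_db(x, companded) > snr_db(x, direct))  # → True
```

With only eight uniform levels the quiet tone falls entirely inside the dead zone around zero, while the compressor lifts it into resolvable levels before quantization.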
NASA Astrophysics Data System (ADS)
Myrheim, J.
Contents 1 Introduction 1.1 The concept of particle statistics 1.2 Statistical mechanics and the many-body problem 1.3 Experimental physics in two dimensions 1.4 The algebraic approach: Heisenberg quantization 1.5 More general quantizations 2 The configuration space 2.1 The Euclidean relative space for two particles 2.2 Dimensions d=1,2,3 2.3 Homotopy 2.4 The braid group 3 Schroedinger quantization in one dimension 4 Heisenberg quantization in one dimension 4.1 The coordinate representation 5 Schroedinger quantization in dimension d ≥ 2 5.1 Scalar wave functions 5.2 Homotopy 5.3 Interchange phases 5.4 The statistics vector potential 5.5 The N-particle case 5.6 Chern-Simons theory 6 The Feynman path integral for anyons 6.1 Eigenstates for position and momentum 6.2 The path integral 6.3 Conjugation classes in SN 6.4 The non-interacting case 6.5 Duality of Feynman and Schroedinger quantization 7 The harmonic oscillator 7.1 The two-dimensional harmonic oscillator 7.2 Two anyons in a harmonic oscillator potential 7.3 More than two anyons 7.4 The three-anyon problem 8 The anyon gas 8.1 The cluster and virial expansions 8.2 First and second order perturbative results 8.3 Regularization by periodic boundary conditions 8.4 Regularization by a harmonic oscillator potential 8.5 Bosons and fermions 8.6 Two anyons 8.7 Three anyons 8.8 The Monte Carlo method 8.9 The path integral representation of the coefficients GP 8.10 Exact and approximate polynomials 8.11 The fourth virial coefficient of anyons 8.12 Two polynomial theorems 9 Charged particles in a constant magnetic field 9.1 One particle in a magnetic field 9.2 Two anyons in a magnetic field 9.3 The anyon gas in a magnetic field 10 Interchange phases and geometric phases 10.1 Introduction to geometric phases 10.2 One particle in a magnetic field 10.3 Two particles in a magnetic field 10.4 Interchange of two anyons in potential wells 10.5 Laughlin's theory of the fractional quantum Hall effect
Decoherence in quantum lossy systems: superoperator and matrix techniques
NASA Astrophysics Data System (ADS)
Yazdanpanah, Navid; Tavassoly, Mohammad Kazem; Moya-Cessa, Hector Manuel
2017-06-01
Due to the unavoidably dissipative interaction between quantum systems and their environments, decoherence inevitably flows into the systems. Therefore, to better understand how decoherence affects damped systems, a fundamental investigation of the master equation is required. In this regard, recovering the information lost through the irreversibility of dissipative systems is also of practical importance in quantum information science. Motivated by these facts, in this work we use superoperator and matrix techniques to illustrate two methods for obtaining the explicit form of the density operators of damped systems at arbitrary temperature T ≥ 0. To establish the potential of the suggested methods, we apply them to deduce the density operator of some well-known practical quantum systems. Using the superoperator techniques, we first obtain the density operator of a damped system consisting of a qubit interacting with a single-mode quantized field within an optical cavity. As the second system, we study the decoherence of a quantized field within a damped optical cavity. We also use our proposed matrix method to study the decoherence of a system of two qubits interacting with each other via a dipole-dipole interaction and, at the same time, with a quantized field in a lossy cavity. The influence of dissipation on the decoherence of the dynamical properties of these systems is also investigated numerically. Finally, the advantages of the proposed superoperator techniques over the matrix method are explained.
NASA Astrophysics Data System (ADS)
Mobarakeh, Pouyan Shakeri; Grinchenko, Victor T.
2015-06-01
The majority of practical acoustics problems require solving boundary problems in non-canonical domains. The construction of analytical solutions of mathematical physics boundary problems for non-canonical domains is therefore both attractive from the academic viewpoint and instrumental for elaborating efficient algorithms for quantitative estimation of the field characteristics under study. One of the main solution ideologies for such problems is based on the superposition method, which allows one to analyze a wide class of specific problems with domains constructed as the union of canonically-shaped subdomains. It is also assumed that an analytical solution (or quasi-solution) can be constructed for each subdomain in one form or another. However, this approach entails some difficulties in the construction of calculation algorithms, insofar as the boundary conditions are incompletely defined on the intervals where the functions appearing in the general solution are orthogonal to each other. We discuss several typical examples of problems with such difficulties, study their nature, and identify the optimal methods to overcome them.
Instanton rate constant calculations close to and above the crossover temperature.
McConnell, Sean; Kästner, Johannes
2017-11-15
Canonical instanton theory is known to overestimate the rate constant close to a system-dependent crossover temperature and is inapplicable above that temperature. We compare the accuracy of the reaction rate constants calculated using recent semi-classical rate expressions to those from canonical instanton theory. We show that rate constants calculated purely from solving the stability matrix for the action in degrees of freedom orthogonal to the instanton path are not applicable at arbitrarily low temperatures, and we use two methods to overcome this. Furthermore, as a by-product of the developed methods, we derive a simple correction to canonical instanton theory that can alleviate the known overestimation of rate constants close to the crossover temperature. The combined methods accurately reproduce the rate constants of the canonical theory along the whole temperature range without the spurious overestimation near the crossover temperature. We calculate and compare rate constants for three different reactions: H in the Müller-Brown potential, methylhydroxycarbene → acetaldehyde, and H2 + OH → H + H2O. © 2017 Wiley Periodicals, Inc.
Application of Classification Models to Pharyngeal High-Resolution Manometry
ERIC Educational Resources Information Center
Mielens, Jason D.; Hoffman, Matthew R.; Ciucci, Michelle R.; McCulloch, Timothy M.; Jiang, Jack J.
2012-01-01
Purpose: The authors present 3 methods of performing pattern recognition on spatiotemporal plots produced by pharyngeal high-resolution manometry (HRM). Method: Classification models, including the artificial neural networks (ANNs) multilayer perceptron (MLP) and learning vector quantization (LVQ), as well as support vector machines (SVM), were…
Multicollinearity in canonical correlation analysis in maize.
Alves, B M; Cargnelutti Filho, A; Burin, C
2017-03-30
The objective of this study was to evaluate the effects of multicollinearity under two methods of canonical correlation analysis (with and without elimination of variables) in maize (Zea mays L.) crop. Seventy-six maize genotypes were evaluated in three experiments, conducted in a randomized block design with three replications, during the 2009/2010 crop season. Eleven agronomic variables (number of days from sowing until female flowering, number of days from sowing until male flowering, plant height, ear insertion height, ear placement, number of plants, number of ears, ear index, ear weight, grain yield, and one thousand grain weight), 12 protein-nutritional variables (crude protein, lysine, methionine, cysteine, threonine, tryptophan, valine, isoleucine, leucine, phenylalanine, histidine, and arginine), and 6 energetic-nutritional variables (apparent metabolizable energy, apparent metabolizable energy corrected for nitrogen, ether extract, crude fiber, starch, and amylose) were measured. A phenotypic correlation matrix was first generated among the 29 variables for each of the experiments. A multicollinearity diagnosis was later performed within each group of variables using methodologies such as variance inflation factor and condition number. Canonical correlation analysis was then performed, with and without the elimination of variables, among groups of agronomic and protein-nutritional, and agronomic and energetic-nutritional variables. The canonical correlation analysis in the presence of multicollinearity (without elimination of variables) overestimates the variability of canonical coefficients. The elimination of variables is an efficient method to circumvent multicollinearity in canonical correlation analysis.
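The variance inflation factor (VIF) used in the diagnosis above can be computed directly: regress each variable on all the others and take 1/(1 − R²). A minimal sketch (the data here are synthetic; the common rule of thumb flags VIF > 10 as serious multicollinearity):

```python
import numpy as np

def vif(X):
    # Variance inflation factor of each column of X: 1 / (1 - R^2) of the
    # regression of that column on all remaining columns.
    X = (X - X.mean(0)) / X.std(0)
    out = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1.0 - (resid @ resid) / (X[:, j] @ X[:, j])
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Synthetic traits: column 2 is almost a copy of column 0.
rng = np.random.default_rng(1)
a = rng.standard_normal(200)
b = rng.standard_normal(200)
X = np.column_stack([a, b, a + 0.05 * rng.standard_normal(200)])
v = vif(X)
print(v[2] > 10, v[1] < 2)   # collinear column inflated, independent one not
```

Eliminating one variable of a near-duplicate pair before canonical correlation analysis, as the authors do, brings such inflated coefficients back to stable values.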
NASA Astrophysics Data System (ADS)
Ni, Fang; Nakatsukasa, Takashi
2018-04-01
To describe quantal collective phenomena, it is useful to requantize the time-dependent mean-field dynamics. We study the time-dependent Hartree-Fock-Bogoliubov (TDHFB) theory for the two-level pairing Hamiltonian and compare the results of different quantization methods. The method constructing microscopic wave functions from TDHFB trajectories fulfilling the Einstein-Brillouin-Keller quantization condition turns out to be the most accurate. The method is based on the stationary-phase approximation to the path integral. We also examine the performance of the collective model which assumes that the pairing gap parameter is the collective coordinate. The applicability of the collective model is limited for nuclear pairing with a small number of single-particle levels, because the pairing gap parameter represents only half of the pairing collective space.
Britton, Jr., Charles L.; Wintenberg, Alan L.
1993-01-01
A radiation detection method and system for continuously correcting the quantization of detected charge during pulse pile-up conditions. Charge pulses from a radiation detector responsive to the energy of detected radiation events are converted, by means of a charge-sensitive preamplifier, to voltage pulses of predetermined shape whose peak amplitudes are proportional to the quantity of charge of each corresponding detected event. These peak amplitudes are sampled and stored sequentially in accordance with their respective times of occurrence. Based on the stored peak amplitudes and times of occurrence, a correction factor is generated which represents the fraction of a previous pulse's influence on the following pulse's peak amplitude. This correction factor is subtracted from the following pulse's amplitude in a summing amplifier, whose output then represents the corrected charge quantity measurement.
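The correction step can be sketched numerically. This assumes an exponential pulse tail with an invented time constant; the patent only specifies a "predetermined shape", so both the shape and all amplitudes below are illustrative:

```python
import math

TAU = 2.0  # assumed shaping-time constant of the voltage-pulse tail

def tail(amplitude, dt):
    # Residual of an earlier pulse, dt after its peak (assumed exponential).
    return amplitude * math.exp(-dt / TAU)

# Two piled-up events: a 5.0-unit pulse, then a 3.0-unit pulse 1.5 units later.
a1_true, a2_true, dt = 5.0, 3.0, 1.5
measured_a2 = a2_true + tail(a1_true, dt)       # tails add under pile-up
corrected_a2 = measured_a2 - tail(a1_true, dt)  # subtract the predicted residue
print(round(corrected_a2, 6))  # → 3.0
```

Subtracting the predicted residue of the first pulse from the second pulse's sampled peak recovers the true charge of the second event, which is the role of the summing amplifier in the system.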
BFV quantization on hermitian symmetric spaces
NASA Astrophysics Data System (ADS)
Fradkin, E. S.; Linetsky, V. Ya.
1995-02-01
Gauge-invariant BFV approach to geometric quantization is applied to the case of hermitian symmetric spaces G/H. In particular, gauge-invariant quantization on the Lobachevski plane and sphere is carried out. Due to the presence of symmetry, the master equations for the first-class constraints, quantum observables and physical quantum states are exactly solvable. The BFV-BRST operator defines a flat G-connection in the Fock bundle over G/H. Physical quantum states are covariantly constant sections with respect to this connection and are shown to coincide with the generalized coherent states for the group G. Vacuum expectation values of the quantum observables commuting with the quantum first-class constraints reduce to the covariant symbols of Berezin. The gauge-invariant approach to quantization on symplectic manifolds synthesizes the geometric, deformation and Berezin quantization approaches.
An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming
2016-01-01
We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict protein-protein interactions (PPIs) from protein sequences. The main improvements result from representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using an RVM-based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets and achieve very high accuracies of 92.65% and 97.62%, respectively, significantly better than previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method is clearly better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can become an automatic decision support tool for future proteomics research.
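The 5-fold cross-validation protocol used for those accuracy figures can be sketched generically. The RVM/LPQ pipeline itself is not reproduced here; a nearest-centroid classifier on synthetic two-class data stands in for it, and every dataset parameter is invented:

```python
import numpy as np

def five_fold_accuracy(X, y, fit, predict):
    # Plain 5-fold cross-validation: shuffle, split, hold out each fold once.
    idx = np.arange(len(X))
    rng = np.random.default_rng(0)
    rng.shuffle(idx)
    folds = np.array_split(idx, 5)
    accs = []
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[m] for m in range(5) if m != k])
        model = fit(X[train], y[train])
        accs.append(float(np.mean(predict(model, X[test]) == y[test])))
    return float(np.mean(accs))

# Toy stand-in classifier: nearest class centroid.
def fit(X, y):
    return {c: X[y == c].mean(0) for c in np.unique(y)}

def predict(model, X):
    classes = sorted(model)
    D = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[np.argmin(D, axis=0)]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(3, 1, (100, 4))])
y = np.array([0] * 100 + [1] * 100)
print(five_fold_accuracy(X, y, fit, predict) > 0.9)
```

Averaging held-out accuracy over all five folds, as above, is what guards the reported 92.65% and 97.62% figures against overfitting to a single train/test split.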
Quantum Structure of Space and Time
NASA Astrophysics Data System (ADS)
Duff, M. J.; Isham, C. J.
2012-07-01
Foreword Abdus Salam; Preface; List of participants; Part I. Quantum Gravity, Fields and Topology: 1. Some remarks on gravity and quantum mechanics Roger Penrose; 2. An experimental test of quantum gravity Don N. Page and C. D. Geilker; 3. Quantum mechanical origin of the sandwich theorem in classical gravitation theory Claudio Teitelboim; 4. θ-States induced by the diffeomorphism group in canonically quantized gravity C. J. Isham; 5. Strong coupling quantum gravity: an introduction Martin Pilati; 6. Quantizing fourth order gravity theories S. M. Christensen; 7. Green's functions, states and renormalisation M. R. Brown and A. C. Ottewill; 8. Introduction to quantum regge calculus Martin Roček and Ruth Williams; 9. Spontaneous symmetry breaking in curved space-time D. J. Toms; 10. Spontaneous symmetry breaking near a black hole M. S. Fawcett and B. F. Whiting; 11. Yang-Mills vacua in a general three-space G. Kunstatter; 12. Fermion fractionization in physics R. Jackiw; Part II. Supergravity: 13. The new minimal formulation of N=1 supergravity and its tensor calculus M. F. Sohnius and P. C. West; 14. A new deteriorated energy-momentum tensor M. J. Duff and P. K. Townsend; 15. Off-shell N=2 and N=4 supergravity in five dimensions P. Howe; 16. Supergravity in high dimensions P. van Niewenhuizen; 17. Building linearised extended supergravities J. G. Taylor; 18. (Super)gravity in the complex angular momentum plane M. T. Grisaru; 19. The multiplet structure of solitons in the O(2) supergravity theory G. W. Gibbons; 20. Ultra-violet properties of supersymmetric gauge theory S. Ferrara; 21. Extended supercurrents and the ultra-violet finiteness of N=4 supersymmetric Yang-Mills theories K. S. Stelle; 22. Duality rotations B. Zumino; Part III. Cosmology and the Early Universe: 23. Energy, stability and cosmological constant S. Deser; 24. Phase transitions in the early universe T. W. B. Kibble; 25. Complete cosmological theories L. P. Grishchuk and Ya. B. Zeldovich; 26. 
The cosmological constant and the weak anthropic principle S. W. Hawking.
NASA Astrophysics Data System (ADS)
Sakuraba, Takao
The approach to quantum physics via current algebra and unitary representations of the diffeomorphism group is established. This thesis studies possible infinite Bose gas systems using this approach. Systems of locally finite configurations and systems of configurations with accumulation points are considered, with the main emphasis on the latter. In Chapter 2, canonical quantization, quantization via current algebra and unitary representations of the diffeomorphism group are reviewed. In Chapter 3, a new definition of the space of configurations is proposed and an axiom for general configuration spaces is abstracted. Various subsets of the configuration space, including those specifying the number of points in a Borel set and those specifying the number of accumulation points in a Borel set are proved to be measurable using this axiom. In Chapter 4, known results on the space of locally finite configurations and Poisson measure are reviewed in the light of the approach developed in Chapter 3, including the approach to current algebra in the Poisson space by Albeverio, Kondratiev, and Rockner. Goldin and Moschella considered unitary representations of the group of diffeomorphisms of the line based on self-similar random processes, which may describe infinite quantum gas systems with clusters. In Chapter 5, the Goldin-Moschella theory is developed further. Their construction of measures quasi-invariant under diffeomorphisms is reviewed, and a rigorous proof of their conjectures is given. It is proved that their measures with distinct correlation parameters are mutually singular. A quasi-invariant measure constructed by Ismagilov on the space of configurations with accumulation points on the circle is proved to be singular with respect to the Goldin-Moschella measures. Finally a generalization of the Goldin-Moschella measures to the higher-dimensional case is studied, where the notion of covariance matrix and the notion of condition number play important roles. 
A rigorous construction of measures quasi-invariant under the group of diffeomorphisms of d-dimensional space stabilizing a point is given.
Quantized vortices in arbitrary dimensions and the normal-to-superfluid phase transition
NASA Astrophysics Data System (ADS)
Bora, Florin
The structure and energetics of superflow around quantized vortices, and the motion inherited by these vortices from this superflow, are explored in the general setting of a superfluid in arbitrary dimensions. The vortices may be idealized as objects of co-dimension two, such as one-dimensional loops and two-dimensional closed surfaces, respectively, in the cases of three- and four-dimensional superfluidity. By using the analogy between vortical superflow and Ampere-Maxwell magnetostatics, the equilibrium superflow containing any specified collection of vortices is constructed. The energy of the superflow is found to take on a simple form for vortices that are smooth and asymptotically large compared with the vortex core size. The motion of vortices is analyzed in general, as well as for the special cases of hyper-spherical and weakly distorted hyper-planar vortices. In all dimensions, vortex motion reflects vortex geometry. In dimension four and higher, this includes not only extrinsic but also intrinsic aspects of the vortex shape, which enter via the first and second fundamental forms of classical geometry. For hyper-spherical vortices, which generalize the vortex rings of three-dimensional superfluidity, the energy-momentum relation is determined. Simple scaling arguments recover the essential features of these results, up to numerical and logarithmic factors. Extending these results to systems containing multiple vortices is elementary due to the linearity of the theory. The energy for multiple vortices is thus a sum of self-energies and power-law interaction terms. The statistical mechanics of a system containing vortices is addressed via the grand canonical partition function. A renormalization-group analysis, in which the low-energy excitations are integrated out approximately, is used to compute certain critical coefficients. The exponents obtained via this approximate procedure are compared with values obtained previously by other means.
For dimensions higher than three the superfluid density is found to vanish as the critical temperature is approached from below.
Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances
Zhang, Yan; Inouye, Hideyo; Crowley, Michael; ...
2016-10-14
Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debye formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. As a result, this algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.
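The binned Debye summation can be sketched as follows. This is a minimal single-atom-type version (with equal scattering factors, the paper's atom-type-labelled arrays collapse to one array); the coordinates, bin width, and q value are invented for illustration:

```python
import numpy as np

def debye_direct(coords, f, q):
    # Reference: full Debye double sum, I(q) = sum_ij f_i f_j sin(q r)/(q r).
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    s = np.sinc(q * d / np.pi)          # np.sinc(x) = sin(pi x)/(pi x)
    return float((f[:, None] * f[None, :] * s).sum())

def debye_binned(coords, f, q, bin_width=0.001):
    # Quantized pair distances: histogram the i<j distances (weighted by
    # f_i f_j), then evaluate one Debye term per bin instead of per pair.
    i, j = np.triu_indices(len(coords), k=1)
    d = np.linalg.norm(coords[i] - coords[j], axis=1)
    w = f[i] * f[j]
    edges = np.arange(0.0, d.max() + bin_width, bin_width)
    counts, _ = np.histogram(d, bins=edges, weights=w)
    centers = 0.5 * (edges[:-1] + edges[1:])
    self_term = float((f**2).sum())     # i == j pairs, sinc(0) = 1
    return self_term + 2.0 * float((counts * np.sinc(q * centers / np.pi)).sum())

rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, size=(100, 3))   # toy 100-atom cluster
f = np.ones(100)                             # one atom type, f = 1
q = 1.0
print(abs(debye_direct(coords, f, q) - debye_binned(coords, f, q)) < 0.5)
```

The binned sum touches one term per occupied histogram bin rather than one per atom pair, which is where the computational saving comes from; the bin width controls the accuracy trade-off.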
Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yan; Inouye, Hideyo; Crowley, Michael
Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debyemore » formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. This algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.« less
Quantum transport in graphene Hall bars: Effects of side gates
NASA Astrophysics Data System (ADS)
Petrović, M. D.; Peeters, F. M.
2017-05-01
Quantum electron transport in side-gated graphene Hall bars is investigated in the presence of quantizing external magnetic fields. The asymmetric potential of four side-gates distorts the otherwise flat bands of the relativistic Landau levels, and creates new propagating states in the Landau spectrum (i.e. snake states). The existence of these new states leads to an interesting modification of the bend and Hall resistances, with new quantizing plateaus appearing in close proximity to the Landau levels. The electron guiding in this system can be understood by studying the current density profiles of the incoming and outgoing modes. From the fact that guided electrons fully transmit without any backscattering (similarly to edge states), we are able to analytically predict the values of the quantized resistances, and they match the resistance data we obtain with our numerical (tight-binding) method. These insights into electron guiding will be useful in predicting the resistances for other side-gate configurations, and possibly in other system geometries, as long as there is no backscattering of the guided states.
Quantization of Electromagnetic Fields in Cavities
NASA Technical Reports Server (NTRS)
Kakazu, Kiyotaka; Oshiro, Kazunori
1996-01-01
A quantization procedure for the electromagnetic field in a rectangular cavity with perfect conductor walls is presented, where a decomposition formula of the field plays an essential role. All vector mode functions are obtained by using the decomposition. After expanding the field in terms of the vector mode functions, we get the quantized electromagnetic Hamiltonian.
Quantization Distortion in Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Boden, A. F.
1995-01-01
The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
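The transform-quantize-encode pipeline described above can be illustrated with a toy 1-D example. This is a sketch, not JPEG itself: JPEG uses an 8x8 2-D DCT and per-coefficient quantization tables, whereas here a 1-D orthonormal DCT-II and a single step size stand in for both.

```python
import math

def dct(block):
    """Orthonormal 1-D DCT-II (toy stand-in for JPEG's 8x8 2-D transform)."""
    N = len(block)
    return [math.sqrt((1 if k == 0 else 2) / N) *
            sum(x * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n, x in enumerate(block))
            for k in range(N)]

def idct(coeffs):
    """Inverse (DCT-III) of the orthonormal DCT-II above."""
    N = len(coeffs)
    return [sum(c * math.sqrt((1 if k == 0 else 2) / N) *
                math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for k, c in enumerate(coeffs))
            for n in range(N)]

def quantize(coeffs, step):
    # The lossy step: rounding to a grid lowers entropy, enabling compression.
    return [round(c / step) for c in coeffs]

def dequantize(indices, step):
    return [i * step for i in indices]
```

Because the transform is orthonormal, the per-coefficient rounding error of at most step/2 translates into a bounded reconstruction error in the pixel domain, which is the distortion the abstract's generic model characterizes.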
Low-rate image coding using vector quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makur, A.
1990-01-01
This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
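A conventional vector quantization coder of the kind this thesis modifies starts from a codebook designed with the LBG (Linde-Buzo-Gray) algorithm. The sketch below is a generic textbook LBG with splitting initialization and plain squared-error distortion, not the thesis's modified coder; the perturbation factor `eps` and iteration count are arbitrary choices, and `codebook_size` is assumed to be a power of two.

```python
def lbg(vectors, codebook_size, iters=20, eps=1e-3):
    """LBG codebook design: start from the global centroid, split every
    codeword into a perturbed pair, then refine each level with Lloyd
    iterations. vectors: list of equal-length tuples."""
    dim = len(vectors[0])
    centroid = tuple(sum(v[d] for v in vectors) / len(vectors)
                     for d in range(dim))
    codebook = [centroid]
    while len(codebook) < codebook_size:
        # Split step: each codeword becomes two slightly perturbed copies.
        codebook = [tuple(c * (1 + s * eps) for c in cw)
                    for cw in codebook for s in (+1, -1)]
        for _ in range(iters):
            # Lloyd step 1: partition training vectors by nearest codeword.
            cells = [[] for _ in codebook]
            for v in vectors:
                best = min(range(len(codebook)),
                           key=lambda i: sum((a - b) ** 2
                                             for a, b in zip(v, codebook[i])))
                cells[best].append(v)
            # Lloyd step 2: move each codeword to its cell's centroid.
            codebook = [tuple(sum(v[d] for v in cell) / len(cell)
                              for d in range(dim)) if cell else cw
                        for cell, cw in zip(cells, codebook)]
    return codebook
```

The encoder's cost is dominated by the nearest-codeword search in the partition step, which is exactly the computation the thesis's modifications aim to keep affordable at low bit rates.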
Probabilistic distance-based quantizer design for distributed estimation
NASA Astrophysics Data System (ADS)
Kim, Yoon Hak
2016-12-01
We consider an iterative design of independently operating local quantizers at nodes that must cooperate without interaction to achieve application objectives in distributed estimation systems. As a new cost function we suggest a probabilistic distance between the posterior distribution and its quantized version, expressed as the Kullback-Leibler (KL) divergence. We first show that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing, on average, the logarithm of the quantized posterior distribution, which can be further simplified computationally in our iterative design. We propose an iterative design algorithm that seeks to maximize this simplified version of the quantized posterior distribution, and argue that our algorithm converges to a global optimum due to the convexity of the cost function and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. We finally demonstrate, through extensive experiments, a clear advantage in estimation performance as compared with typical designs and novel design techniques previously published.
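The cost function above can be made concrete on a discretized posterior. The sketch below is an illustration of the idea, not the paper's algorithm: a quantizer with given cell boundaries replaces the posterior by its average within each cell (the best a cell-constant description can convey), and the KL divergence between the original and quantized posteriors measures the information lost. Refining the cells can only shrink this cost, which is what the iterative Lloyd-style design exploits.

```python
import math

def kl_divergence(p, q):
    """D(p||q) for discrete distributions on the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def quantize_posterior(p, boundaries):
    """Replace the posterior p by its average inside each quantizer cell.
    boundaries: increasing cell-edge indices, e.g. [0, 2, 4] for two cells."""
    q = []
    for lo, hi in zip(boundaries[:-1], boundaries[1:]):
        cell = p[lo:hi]
        avg = sum(cell) / len(cell)
        q.extend([avg] * len(cell))
    return q
```

Note that the cell-averaged distribution still sums to one, so the KL divergence is well defined; a quantizer whose cells match the posterior's structure drives the cost toward zero.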
Application of Canonical Effective Methods to Background-Independent Theories
NASA Astrophysics Data System (ADS)
Buyukcam, Umut
Effective formalisms play an important role in analyzing phenomena above some given length scale when complete theories are not accessible. In diverse exotic but physically important cases, the usual path-integral techniques used in a standard Quantum Field Theory approach seldom serve as adequate tools. This thesis exposes a new effective method for quantum systems, called the Canonical Effective Method, which is particularly widely applicable in background-independent theories such as those describing gravitational phenomena. The central purpose of this work is to employ these techniques to obtain semi-classical dynamics from canonical quantum gravity theories. An application to non-associative quantum mechanics is developed and testable results are obtained. Types of non-associative algebras relevant for magnetic-monopole systems are discussed. Possible modifications of the hypersurface deformation algebra and the emergence of effective space-times are presented.
Light-cone quantization of two dimensional field theory in the path integral approach
NASA Astrophysics Data System (ADS)
Cortés, J. L.; Gamboa, J.
1999-05-01
A quantization condition due to the boundary conditions and the compactification of the light-cone space-time coordinate x- is identified at the level of the classical equations for the right-handed fermionic field in two dimensions. A detailed analysis of the implications of implementing this quantization condition at the quantum level is presented. In the case of the Thirring model one has selection rules on the excitations as a function of the coupling, and in the case of the Schwinger model a double integer structure of the vacuum is derived in the light-cone frame. Two different quantized chiral Schwinger models are found, one of them without a θ-vacuum structure. A generalization of the quantization condition to theories with several fermionic fields and to higher dimensions is presented.
Relational symplectic groupoid quantization for constant Poisson structures
NASA Astrophysics Data System (ADS)
Cattaneo, Alberto S.; Moshayedi, Nima; Wernli, Konstantin
2017-09-01
As a detailed application of the BV-BFV formalism for the quantization of field theories on manifolds with boundary, this note describes a quantization of the relational symplectic groupoid for a constant Poisson structure. The presence of mixed boundary conditions and the globalization of results are also addressed. In particular, the paper includes an extension to space-times with boundary of some formal geometry considerations in the BV-BFV formalism, and specifically introduces into the BV-BFV framework a "differential" version of the classical and quantum master equations. The quantization constructed in this paper induces Kontsevich's deformation quantization on the underlying Poisson manifold, i.e., the Moyal product, which is known in full detail. This allows focussing on the BV-BFV technology and testing it. For the inexperienced reader, this is also a practical and reasonably simple way to learn it.
A hybrid LBG/lattice vector quantizer for high quality image coding
NASA Technical Reports Server (NTRS)
Ramamoorthy, V.; Sayood, K.; Arikan, E. (Editor)
1991-01-01
It is well known that a vector quantizer is an efficient coder offering a good trade-off between quantization distortion and bit rate. The performance of a vector quantizer asymptotically approaches the optimum bound with increasing dimensionality. A vector quantized image suffers from the following types of degradations: (1) edge regions in the coded image contain staircase effects, (2) quasi-constant or slowly varying regions suffer from contouring effects, and (3) textured regions lose details and suffer from granular noise. All three of these degradations are due to the finite size of the code book, the distortion measures used in the design, and the finite training procedure involved in the construction of the code book. In this paper, we present an adaptive technique which attempts to ameliorate the edge distortion and contouring effects.
Molecular dynamics coupled with a virtual system for effective conformational sampling.
Hayami, Tomonori; Kasahara, Kota; Nakamura, Haruki; Higo, Junichi
2018-07-15
An enhanced conformational sampling method is proposed: virtual-system coupled canonical molecular dynamics (VcMD). Although VcMD enhances sampling along a reaction coordinate, the method does not require estimating a canonical distribution function along that coordinate. It introduces a virtual system that need not obey a physical law; to enhance sampling, the virtual system is coupled to the molecular system under study. The resulting snapshots produce a canonical ensemble. The method was applied to a system consisting of two short peptides in an explicit solvent. A conventional molecular dynamics simulation, ten times longer than the VcMD run, was performed along with adaptive umbrella sampling. Free-energy landscapes computed from the three simulations converged well with one another. VcMD provided quicker association/dissociation motions of the peptides than conventional molecular dynamics did. The VcMD method is applicable to various complicated systems because of its methodological simplicity. © 2018 Wiley Periodicals, Inc.
Electroencephalographic compression based on modulated filter banks and wavelet transform.
Bazán-Prieto, Carlos; Cárdenas-Barrera, Julián; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando
2011-01-01
Due to the large volume of information generated in an electroencephalographic (EEG) study, compression is needed for storage, processing, or transmission for analysis. In this paper we evaluate and compare two lossy compression techniques applied to EEG signals: schemes based on decomposition by filter banks and on wavelet packet transforms, seeking the best compression, the best quality, and the most efficient real-time implementation. Due to specific properties of EEG signals, we propose a quantization stage adapted to the dynamic range of each band, aiming for higher quality. The results show that the filter-bank compressor performs better than the transform methods. Quantization adapted to the dynamic range significantly enhances quality.
An approach to the quantization of black hole quasi-normal modes
NASA Astrophysics Data System (ADS)
Pal, Soham; Rajeev, Karthik; Shankaranarayanan, S.
2015-07-01
In this work, we derive the asymptotic quasi-normal modes of a Banados-Teitelboim-Zanelli (BTZ) black hole using a quantum field theoretic Lagrangian. The BTZ black hole is a very popular system in the context of 2 + 1-dimensional quantum gravity. However, to our knowledge the quasi-normal modes of the BTZ black hole have been studied only in the classical domain. Here we show a way to quantize the quasi-normal modes of the BTZ black hole by mapping it to the Bateman-Feschbach-Tikochinsky oscillator and the Caldirola-Kanai oscillator. We have also discussed a couple of other black hole potentials to which this method can be applied.
Low bit rate coding of Earth science images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1993-01-01
In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.
NASA Astrophysics Data System (ADS)
Lu, Li; Sheng, Wen; Liu, Shihua; Zhang, Xianzhi
2014-10-01
Ballistic missile hyperspectral data, as observed by an imaging spectrometer on a near-space platform, are generated by a numerical method. The characteristics of the ballistic missile hyperspectral data are extracted and matched using two different algorithms, called transverse counting and quantization coding, respectively. The simulation results show that both algorithms extract the characteristics of the ballistic missile adequately and accurately. The algorithm based on transverse counting has low complexity and can be implemented more easily than the algorithm based on quantization coding. The transverse counting algorithm also shows good immunity to disturbance signals and speeds up the matching and recognition of subsequent targets.
NASA Astrophysics Data System (ADS)
Shibata, K.; Yoshida, K.; Daiguji, K.; Sato, H.; Ii, T.; Hirakawa, K.
2017-10-01
An electric-field control of quantized conductance in metal (gold) quantum point contacts (QPCs) is demonstrated by adopting a liquid-gated electric-double-layer (EDL) transistor geometry. Atomic-scale gold QPCs were fabricated by applying the feedback-controlled electrical break junction method to the gold nanojunction. The electric conductance in gold QPCs shows quantized conductance plateaus and step-wise increase/decrease by the conductance quantum, G0 = 2e2/h, as EDL-gate voltage is swept, demonstrating a modulation of the conductance of gold QPCs by EDL gating. The electric-field control of conductance in metal QPCs may open a way for their application to local charge sensing at room temperature.
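The quantized plateaus reported above follow directly from the Landauer picture: each fully transmitting, spin-degenerate channel contributes one conductance quantum G0 = 2e^2/h. A small numeric check (using the exact 2019 SI values of e and h; the `plateau_conductance` helper is illustrative, not from the paper):

```python
# Conductance quantization in a quantum point contact: G = N * G0.
E = 1.602176634e-19      # elementary charge, C (exact, SI 2019)
H = 6.62607015e-34       # Planck constant, J s (exact, SI 2019)
G0 = 2 * E ** 2 / H      # conductance quantum, siemens (~7.748e-5 S)

def plateau_conductance(n_modes):
    """Conductance when n_modes spin-degenerate channels transmit fully
    with no backscattering, as on the quantized plateaus."""
    return n_modes * G0
```

The corresponding resistance quantum 1/G0 is about 12.9 kOhm, which sets the scale of the step-wise conductance changes seen as the EDL-gate voltage is swept.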
Cross-entropy embedding of high-dimensional data using the neural gas model.
Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi
2005-01-01
A cross-entropy approach to mapping high-dimensional data into a low-dimensional space embedding is presented. The method makes it possible to project simultaneously the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized by using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and the hierarchical approach of combining a vector quantizer such as the self-organizing feature map (SOM) or NG with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves a better mapping quality in terms of the topology preservation measure q(m).
Perturbative Quantum Gravity and its Relation to Gauge Theory.
Bern, Zvi
2002-01-01
In this review we describe a non-trivial relationship between perturbative gauge theory and gravity scattering amplitudes. At the semi-classical or tree-level, the scattering amplitudes of gravity theories in flat space can be expressed as a sum of products of well defined pieces of gauge theory amplitudes. These relationships were first discovered by Kawai, Lewellen, and Tye in the context of string theory, but hold more generally. In particular, they hold for standard Einstein gravity. A method based on D -dimensional unitarity can then be used to systematically construct all quantum loop corrections order-by-order in perturbation theory using as input the gravity tree amplitudes expressed in terms of gauge theory ones. More generally, the unitarity method provides a means for perturbatively quantizing massless gravity theories without the usual formal apparatus associated with the quantization of constrained systems. As one application, this method was used to demonstrate that maximally supersymmetric gravity is less divergent in the ultraviolet than previously thought.
Symmetries for Light-Front Quantization of Yukawa Model with Renormalization
NASA Astrophysics Data System (ADS)
Żochowski, Jan; Przeszowski, Jerzy A.
2017-12-01
In this work we discuss the Yukawa model with an extra self-interaction term for the scalar field in D = 1+3 dimensions. We present a method for deriving the light-front commutators and anti-commutators from the Heisenberg equations induced by the kinematical generating operator of the translation P+. These Heisenberg equations are the starting point for obtaining the algebra of the (anti-)commutators. Some discrepancies between the existing and the proposed methods of quantization are revealed. Lorentz and CPT symmetry, together with some features of the quantum theory, were applied to obtain the two-point Wightman function for the free fermions. Notably, these Wightman functions were computed without referring to the Fock expansion. The Gaussian effective potential for the Yukawa model was found in terms of the Wightman functions. It was regularized by the space-like point-splitting method. The coupling constants within the model were redefined. The optimum mass parameters remained regularization independent. Finally, the Gaussian effective potential was renormalized.
Segmentation of magnetic resonance images using fuzzy algorithms for learning vector quantization.
Karayiannis, N B; Pai, P I
1999-02-01
This paper evaluates a segmentation technique for magnetic resonance (MR) images of the brain based on fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive network through an unsupervised learning process. Segmentation of MR images is formulated as an unsupervised vector quantization process, where the local values of different relaxation parameters form the feature vectors which are represented by a relatively small set of prototypes. The experiments evaluate a variety of FALVQ algorithms in terms of their ability to identify different tissues and discriminate between normal tissues and abnormalities.
Splitting Times of Doubly Quantized Vortices in Dilute Bose-Einstein Condensates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huhtamäki, J. A. M.; Pietilä, V.; Virtanen, S. M. M.
2006-09-15
Recently, the splitting of a topologically created doubly quantized vortex into two singly quantized vortices was experimentally investigated in dilute atomic cigar-shaped Bose-Einstein condensates [Y. Shin et al., Phys. Rev. Lett. 93, 160406 (2004)]. In particular, the dependency of the splitting time on the peak particle density was studied. We present results of theoretical simulations which closely mimic the experimental setup. We show that the combination of gravitational sag and time dependency of the trapping potential alone suffices to split the doubly quantized vortex in time scales which are in good agreement with the experiments.
Response of two-band systems to a single-mode quantized field
NASA Astrophysics Data System (ADS)
Shi, Z. C.; Shen, H. Z.; Wang, W.; Yi, X. X.
2016-03-01
The response of topological insulators (TIs) to an external weakly classical field can be expressed in terms of Kubo formula, which predicts quantized Hall conductivity of the quantum Hall family. The response of TIs to a single-mode quantized field, however, remains unexplored. In this work, we take the quantum nature of the external field into account and define a Hall conductance to characterize the linear response of a two-band system to the quantized field. The theory is then applied to topological insulators. Comparisons with the traditional Hall conductance are presented and discussed.
Xiong, Wenjun; Yu, Xinghuo; Chen, Yao; Gao, Jie
2017-06-01
This brief investigates the quantized iterative learning problem for digital networks with time-varying topologies. The information is first encoded as symbolic data and then transmitted. After the data are received, a decoder is used by the receiver to get an estimate of the sender's state. Iterative learning quantized communication is considered in the process of encoding and decoding. A sufficient condition is then presented for solving the consensus tracking problem in a finite interval using the quantized iterative learning controllers. Finally, simulation results are given to illustrate the usefulness of the developed criterion.
A new local-global approach for classification.
Peres, R T; Pedreira, C E
2010-09-01
In this paper, we propose a new local-global pattern classification scheme that combines supervised and unsupervised approaches, taking advantage of both local and global environments. By global methods we mean those that construct a model for the whole problem space using the totality of the available observations; local methods focus on subregions of the space, possibly using an appropriately selected subset of the sample. In the proposed method, the sample is first divided into local cells using an unsupervised vector quantization algorithm, the LBG (Linde-Buzo-Gray). In a second stage, the resulting assemblage of much easier problems is solved locally with a scheme inspired by Bayes' rule. Four classification methods were implemented for comparison with the proposed scheme: Learning Vector Quantization (LVQ), feedforward neural networks, Support Vector Machines (SVM), and k-Nearest Neighbors. These four methods and the proposed scheme were evaluated on eleven datasets: two controlled experiments plus nine publicly available datasets from the UCI repository. The proposed method showed quite competitive performance when compared to these classical and widely used classifiers. Our method is simple to understand and implement and is based on very intuitive concepts.
Visual Typo Correction by Collocative Optimization: A Case Study on Merchandize Images.
Wei, Xiao-Yong; Yang, Zhen-Qun; Ngo, Chong-Wah; Zhang, Wei
2014-02-01
Near-duplicate retrieval (NDR) in merchandize images is of great importance to many online applications on e-commerce websites. In applications where response time is critical, however, the conventional techniques developed for general-purpose NDR are limited, because expensive post-processing like spatial verification or hashing is usually employed to compensate for the quantization errors among the visual words used for the images. In this paper, we argue that most of the errors are introduced during the quantization process, where the visual words are considered individually, ignoring the contextual relations among words. We propose a "spelling or phrase correction"-like process for NDR, which extends the concept of collocations to the visual domain for modeling contextual relations. Binary quadratic programming is used to enforce the contextual consistency of words selected for an image, so that the errors (typos) are eliminated and the quality of the quantization process is improved. The experimental results show that the proposed method can improve the efficiency of NDR by reducing the vocabulary size by 1000%, and that under the scenario of merchandize image NDR, the expensive local interest point feature used in conventional approaches can be replaced by a color-moment feature, which reduces the time cost by 9202% while maintaining performance comparable to the state-of-the-art methods.
NASA Astrophysics Data System (ADS)
DeBuvitz, William
2014-03-01
I am a volunteer reader at the Princeton unit of "Learning Ally" (formerly "Recording for the Blind & Dyslexic") and I recently discovered that high school students are introduced to the concept of quantization well before they take chemistry and physics. For the past few months I have been reading onto computer files a popular Algebra I textbook, and I was surprised and dismayed by how it treated simultaneous equations and quadratic equations. The coefficients are always simple integers in examples and exercises, even when they are related to physics. This is probably a good idea when these topics are first presented to the students. It makes it easy to solve simultaneous equations by the method of elimination of a variable. And it makes it easy to solve some quadratic equations by factoring. The textbook also discusses the method of substitution for linear equations and the use of the quadratic formula, but only with simple integers.
Estimation of color filter array data from JPEG images for improved demosaicking
NASA Astrophysics Data System (ADS)
Feng, Wei; Reeves, Stanley J.
2006-02-01
On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.
Universe creation from the third-quantized vacuum
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGuigan, M.
1989-04-15
Third quantization leads to a Hilbert space containing a third-quantized vacuum in which no universes are present as well as multiuniverse states. We consider the possibility of universe creation for the special case where the universe emerges in a no-particle state. The probability of such a creation is computed from both the path-integral and operator formalisms.
4D Sommerfeld quantization of the complex extended charge
NASA Astrophysics Data System (ADS)
Bulyzhenkov, Igor E.
2017-12-01
Gravitational fields and accelerations cannot change quantized magnetic flux in closed line contours due to the flat 3D section of curved 4D space-time-matter. The relativistic Bohr-Sommerfeld quantization of the imaginary charge reveals an electric analog of the Compton length, which can introduce quantitatively the fine structure constant and the Planck length.
Canonical Drude Weight for Non-integrable Quantum Spin Chains
NASA Astrophysics Data System (ADS)
Mastropietro, Vieri; Porta, Marcello
2018-03-01
The Drude weight is a central quantity for the transport properties of quantum spin chains. The canonical definition of the Drude weight is directly related to the Kubo formula for conductivity. However, the difficulty in evaluating such an expression has led to several alternative formulations, accessible to different methods. In particular, the Euclidean, or imaginary-time, Drude weight can be studied via rigorous renormalization group. As a result, in the past years several universality results have been proven for this quantity at zero temperature; remarkably, the proofs work for both integrable and non-integrable quantum spin chains. Here we establish the equivalence of the Euclidean and canonical Drude weights at zero temperature. Our proof is based on rigorous renormalization group methods, Ward identities, and complex analytic ideas.
Multiset canonical correlations analysis and multispectral, truly multitemporal remote sensing data.
Nielsen, Allan Aasbjerg
2002-01-01
This paper describes two- and multiset canonical correlations analysis (CCA) for data fusion, multisource, multiset, or multitemporal exploratory data analysis. These techniques transform multivariate multiset data into new orthogonal variables called canonical variates (CVs) which, when applied in remote sensing, exhibit ever-decreasing similarity (as expressed by correlation measures) over sets consisting of 1) spectral variables at fixed points in time (R-mode analysis), or 2) temporal variables with fixed wavelengths (T-mode analysis). The CVs are invariant to linear and affine transformations of the original variables within sets which means, for example, that the R-mode CVs are insensitive to changes over time in offset and gain in a measuring device. In a case study, CVs are calculated from Landsat Thematic Mapper (TM) data with six spectral bands over six consecutive years. Both R- and T-mode CVs clearly exhibit the desired characteristic: they show maximum similarity for the low-order canonical variates and minimum similarity for the high-order canonical variates. These characteristics are seen both visually and in objective measures. The results from the multiset CCA R- and T-mode analyses are very different. This difference is ascribed to the noise structure in the data. The CCA methods are related to partial least squares (PLS) methods. This paper very briefly describes multiset CCA-based multiset PLS. Also, the CCA methods can be applied as multivariate extensions to empirical orthogonal functions (EOF) techniques. Multiset CCA is well-suited for inclusion in geographical information systems (GIS).
NASA Astrophysics Data System (ADS)
Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang
2015-05-01
In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, i.e., Convolutional Neural Networks, for image recognition tasks. Low bit resolution is an important factor in bringing deep learning neural networks to hardware implementation, as it directly determines cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm called supervised iterative quantization to reduce the bit resolution of learned network weights. In the training stage, supervised iterative quantization is conducted on the server via two steps: apply k-means-based adaptive quantization to the learned network weights, and retrain the network based on the quantized weights. These two steps are alternated until a convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded onto the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt uniform quantization for the inputs and internal network responses (called feature maps) to maintain low on-chip cost. The Convolutional Neural Network with reduced weight and input/response precision is demonstrated in recognizing two types of images: hand-written digit images and real-life images in office scenarios. Both results show that the new network is able to match the performance of the neural network with full bit resolution, even though the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
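The server-side quantization step described above (k-means clustering of learned weights into a small shared codebook) can be sketched as follows. This shows only the clustering half of the alternation; the retraining pass is omitted, and `n_bits`, the initialization, and the random weights are illustrative assumptions, not the authors' code.

```python
import numpy as np

def kmeans_quantize(weights, n_bits=4, n_iter=20):
    """k-means quantization of network weights to 2**n_bits shared values.
    A sketch of the server-side clustering step only; the paper alternates
    this with retraining on the quantized weights."""
    w = weights.ravel()
    k = 2 ** n_bits
    # Initialize centroids uniformly over the weight range.
    centroids = np.linspace(w.min(), w.max(), k)
    for _ in range(n_iter):
        # Assign each weight to its nearest centroid, then update centroids.
        idx = np.abs(w[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(idx == j):
                centroids[j] = w[idx == j].mean()
    return centroids[idx].reshape(weights.shape), idx.reshape(weights.shape)

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 64))          # stand-in for a learned weight matrix
Wq, codes = kmeans_quantize(W, n_bits=4)
print(np.unique(Wq).size)              # at most 16 distinct weight values
```

After this step, only the 4-bit `codes` and the 16-entry codebook need to be shipped to the client, which is the storage saving the abstract refers to.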
Hierarchically clustered adaptive quantization CMAC and its learning convergence.
Teddy, S D; Lai, E M K; Quek, C
2007-11-01
The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network: (1) a constant output resolution over the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized. Therefore, efficient utilization of the CMAC memory is a crucial issue. One approach is to quantize the input space nonuniformly. For existing nonuniformly quantized CMAC systems, there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space to identify significant input segments and subsequently allocates more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by the proof of its learning convergence. The performance of the proposed network is benchmarked against the original CMAC network, as well as two other existing CMAC variants, on two real-life applications, namely, automated control of car maneuvers and modeling of human blood glucose dynamics. The experimental results demonstrate that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output, achieving better or comparable performance with smaller memory usage.
Index Terms-Cerebellar model articulation controller (CMAC), hierarchical clustering, hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC), learning convergence, nonuniform quantization.
A comparison of frame synchronization methods. [Deep Space Network
NASA Technical Reports Server (NTRS)
Swanson, L.
1982-01-01
Different methods are considered for frame synchronization of a concatenated block code/Viterbi link. Synchronization after Viterbi decoding and synchronization before Viterbi decoding based on hard-quantized channel symbols are among the methods compared. For each scheme, the probability under certain conditions of true detection of sync within four 10,000-bit frames is tabulated.
Imaging agents for monitoring changes of dopamine receptors and methods of using thereof
Mukherjee, Jogeshwar; Chandy, George; Milne, Norah; Wang, Ping H.; Easwaramoorthy, Balu; Mantil, Joseph; Garcia, Adriana
2017-05-30
The present invention relates generally to a method for screening subjects to identify those more likely to develop diabetes by quantification of insulin-producing cells. The present invention also relates to the diagnosis of diabetes and to the monitoring of disease progression or the treatment efficacy of candidate drugs.
A Method of Reducing Random Drift in the Combined Signal of an Array of Inertial Sensors
2015-09-30
stability of the collective output, Bayard et al, US Patent 6,882,964. The prior art methods rely upon the use of Kalman filtering and averaging...including scale-factor errors, quantization effects, temperature effects, random drift, and additive noise. A comprehensive account of all of these
NASA Technical Reports Server (NTRS)
Gray, Robert M.
1989-01-01
During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments are made on the state of the art and current research efforts.
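The standard codebook design idea underlying this survey can be illustrated with the generalized Lloyd (k-means) iteration: alternately partition the training vectors by nearest codeword and recompute each codeword as its cell's centroid. This toy sketch uses random initialization rather than the codebook-splitting tricks used in practice, and all names and data are illustrative.

```python
import numpy as np

def lloyd_vq(train, codebook_size, n_iter=30, seed=0):
    """Generalized Lloyd (k-means) design of a vector quantizer codebook --
    a toy sketch of the classical iterative design, not production code."""
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), codebook_size, replace=False)]
    for _ in range(n_iter):
        # Nearest-neighbour partition of the training set.
        d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(axis=1)
        # Centroid update for each occupied cell.
        for j in range(codebook_size):
            if np.any(idx == j):
                codebook[j] = train[idx == j].mean(axis=0)
    return codebook

rng = np.random.default_rng(2)
train = rng.normal(size=(2000, 2))   # 2-D training vectors
cb = lloyd_vq(train, 16)             # 16 codewords -> 4 bits per vector
# Encoding a vector means transmitting the index of its nearest codeword.
x = np.array([0.5, -0.2])
q = cb[((cb - x) ** 2).sum(-1).argmin()]
```

Transmitting the 4-bit index instead of two floating-point components is the rate saving; the residual `x - q` is the quantization distortion the design minimizes.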
Robust vector quantization for noisy channels
NASA Technical Reports Server (NTRS)
Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.
1988-01-01
The paper briefly discusses techniques for making vector quantizers more tolerant of transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment for the vector quantizer codewords without increasing the transmission rate. It is shown that a gain of about 4.5 dB over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system on a noisy channel, with a small loss of clean-channel performance.
Immirzi parameter without Immirzi ambiguity: Conformal loop quantization of scalar-tensor gravity
NASA Astrophysics Data System (ADS)
Veraguth, Olivier J.; Wang, Charles H.-T.
2017-10-01
Conformal loop quantum gravity provides an approach to loop quantization through an underlying conformal structure, i.e., a conformally equivalent class of metrics. The property that general relativity itself has no conformal invariance is reinstated with a constrained scalar field setting the physical scale. Conformally equivalent metrics have recently been shown to be amenable to loop quantization, including matter coupling. It has been suggested that conformal geometry may provide an extended symmetry allowing the reformulated Immirzi parameter necessary for loop quantization to behave like an arbitrary group parameter that requires no further fixing, unlike its present standard form. Here, we find that this can be naturally realized via conformal frame transformations in scalar-tensor gravity. Such a theory generally incorporates a dynamical scalar gravitational field and reduces to general relativity when the scalar field becomes a pure gauge. In particular, we introduce a conformal Einstein frame in which loop quantization is implemented. We then discuss how different Immirzi parameters under this description may be related by conformal frame transformations and yet share the same quantization, having, for example, the same area gaps, modulated by the scalar gravitational field.
Tribology of the lubricant quantized sliding state.
Castelli, Ivano Eligio; Capozza, Rosario; Vanossi, Andrea; Santoro, Giuseppe E; Manini, Nicola; Tosatti, Erio
2009-11-07
In the framework of Langevin dynamics, we demonstrate clear evidence of the peculiar quantized sliding state, previously found in a simple one-dimensional boundary lubricated model [A. Vanossi et al., Phys. Rev. Lett. 97, 056101 (2006)], for a substantially less idealized two-dimensional description of a confined multilayer solid lubricant under shear. This dynamical state, marked by a nontrivial "quantized" ratio of the averaged lubricant center-of-mass velocity to the externally imposed sliding speed, is recovered, and shown to be robust against the effects of thermal fluctuations, quenched disorder in the confining substrates, and over a wide range of loading forces. The lubricant softness, setting the width of the propagating solitonic structures, is found to play a major role in promoting in-registry commensurate regions beneficial to this quantized sliding. By evaluating the force instantaneously exerted on the top plate, we find that this quantized sliding represents a dynamical "pinned" state, characterized by significantly low values of the kinetic friction. While the quantized sliding occurs due to solitons being driven gently, the transition to ordinary unpinned sliding regimes can involve lubricant melting due to large shear-induced Joule heating, for example at large speed.
A Low Power Digital Accumulation Technique for Digital-Domain CMOS TDI Image Sensor.
Yu, Changwei; Nie, Kaiming; Xu, Jiangtao; Gao, Jing
2016-09-23
In this paper, an accumulation technique suitable for digital-domain CMOS time delay integration (TDI) image sensors is proposed to reduce power consumption without degrading the imaging rate. Exploiting the slight variation of quantization codes among different pixel exposures of the same object, the pixel array is divided into two groups: one for coarse quantization of the high bits only, and the other for fine quantization of the low bits. The complete quantization codes are then composed from the results of both the coarse and fine quantizations. This equivalent operation comparably reduces the total number of bits required for quantization. In a 0.18 µm CMOS process, two versions of 16-stage digital-domain CMOS TDI image sensor chains based on a 10-bit successive approximation register (SAR) analog-to-digital converter (ADC), with and without the proposed technique, are designed. The simulation results show that the average power consumption per slice of the two versions is 6.47 × 10⁻⁸ J/line and 7.4 × 10⁻⁸ J/line, respectively. Meanwhile, the linearities of the two versions are 99.74% and 99.99%, respectively.
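At the code level, the coarse-and-fine composition described above amounts to concatenating the high-bit and low-bit groups of the conversion. A schematic model (the 5/5 bit split and the sample value are illustrative assumptions, not the chip's actual circuit):

```python
def compose_code(coarse_high, fine_low, low_bits=5):
    """Recombine a coarse quantization of the high bits with a fine
    quantization of the low bits into one full-resolution code --
    an illustrative model of the split, not the sensor's hardware."""
    return (coarse_high << low_bits) | fine_low

# A 10-bit sample 0b1011000110 reconstructed from the two pixel groups:
full = compose_code(0b10110, 0b00110)
print(bin(full))  # 0b1011000110
```

Because each group resolves only half of the bits, each comparison ladder is shorter, which is where the power saving relative to a full 10-bit conversion per pixel comes from.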
Wigner Functions for the Bateman System on Noncommutative Phase Space
NASA Astrophysics Data System (ADS)
Heng, Tai-Hua; Lin, Bing-Sheng; Jing, Si-Cong
2010-09-01
We study an important dissipation system, i.e. the Bateman model on noncommutative phase space. Using the method of deformation quantization, we calculate the Exp functions, and then derive the Wigner functions and the corresponding energy spectra.
Can noncommutative effects account for the present speed up of the cosmic expansion?
NASA Astrophysics Data System (ADS)
Obregon, Octavio; Quiros, Israel
2011-08-01
In this paper we investigate to what extent noncommutativity, an intrinsically quantum property, may influence the Friedmann-Robertson-Walker cosmological dynamics at late times/large scales. For our purpose it will be enough to explore the asymptotic properties of the cosmological model in the phase space. Our recipe for building noncommutativity into the model is based on the approach of Ref. and can be summarized in the following steps: i) the Hamiltonian is derived from the Einstein-Hilbert action (plus a self-interacting scalar field action) for a Friedmann-Robertson-Walker space-time with flat spatial sections, ii) the canonical quantization recipe is applied, i.e., the mini-superspace variables are promoted to operators, and the WDW equation is written in terms of these variables, iii) noncommutativity in the mini-superspace is achieved through the replacement of the standard product of functions by the Moyal star product in the WDW equation, and, finally, iv) semiclassical cosmological equations are obtained by means of the WKB approximation applied to the (equivalent) modified Hamilton-Jacobi equation. We demonstrate, indeed, that noncommutative effects of the kind considered here can be responsible for the present speed-up of the cosmic expansion.
Lorentz covariance of loop quantum gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rovelli, Carlo; Speziale, Simone
2011-05-15
The kinematics of loop gravity can be given a manifestly Lorentz-covariant formulation: the conventional SU(2)-spin-network Hilbert space can be mapped to a space K of SL(2,C) functions, where Lorentz covariance is manifest. K can be described in terms of a certain subset of the projected spin networks studied by Livine, Alexandrov and Dupuis. It is formed by SL(2,C) functions completely determined by their restriction to SU(2). These are square-integrable in the SU(2) scalar product, but not in the SL(2,C) one. Thus, SU(2)-spin-network states can be represented by Lorentz-covariant SL(2,C) functions, as two-component photons can be described in the Lorentz-covariant Gupta-Bleuler formalism. As shown by Wolfgang Wieland in a related paper, this manifestly Lorentz-covariant formulation can also be directly obtained from canonical quantization. We show that the spinfoam dynamics of loop quantum gravity is locally SL(2,C)-invariant in the bulk, and yields states that are precisely in K on the boundary. This clarifies how the SL(2,C) spinfoam formalism yields an SU(2) theory on the boundary. These structures define a tidy Lorentz-covariant formalism for loop gravity.
Simplicity constraints: A 3D toy model for loop quantum gravity
NASA Astrophysics Data System (ADS)
Charles, Christoph
2018-05-01
In loop quantum gravity, tremendous progress has been made using the Ashtekar-Barbero variables. These variables, defined in a gauge fixing of the theory, correspond to a parametrization of the solutions of the so-called simplicity constraints. Their geometrical interpretation is however unsatisfactory as they do not constitute a space-time connection. It would be possible to resolve this point by using a full Lorentz connection or, equivalently, by using the self-dual Ashtekar variables. This leads however to simplicity constraints or reality conditions which are notoriously difficult to implement in the quantum theory. We explore in this paper the possibility of using completely degenerate actions to impose such constraints at the quantum level in the context of canonical quantization. To do so, we define a simpler model, in 3D, with similar constraints by extending the phase space to include an independent vielbein. We define the classical model and show that a precise quantum theory by gauge unfixing can be defined out of it, completely equivalent to the standard 3D Euclidean quantum gravity. We discuss possible future explorations around this model as it could help as a stepping stone to define full-fledged covariant loop quantum gravity.
NASA Astrophysics Data System (ADS)
Stumpf, Harald
2006-09-01
Based on the assumption that electroweak bosons, leptons and quarks possess a substructure of elementary fermionic constituents, in previous papers the effect of CP-symmetry breaking on the effective dynamics of these particles was calculated. Motivated by the phenomenological procedure, in this paper isospin symmetry breaking will be added and the physical consequences of these calculations will be discussed. The dynamical law of the fermionic constituents is given by a relativistically invariant nonlinear spinor field equation with local interaction, canonical quantization, selfregularization and probability interpretation. The corresponding effective dynamics is derived by algebraic weak mapping theorems. In contrast to the commonly applied modifications of the quark mass matrices, CP-symmetry breaking is introduced into this algebraic formalism by a vacuum inequivalent to that of the CP-invariant case, represented by a modified spinor field propagator. This leads to an extension of the standard model as an effective theory which contains, besides the "electric" electroweak bosons, additional "magnetic" electroweak bosons and corresponding interactions. If, furthermore, the isospin invariance of the propagator is broken too, it will be demonstrated in detail that in combination with CP-symmetry breaking this induces a considerable modification of electroweak nuclear reaction rates.
Effect of Vacuum Properties on Electroweak Processes - A Theoretical Interpretation of Experiments
NASA Astrophysics Data System (ADS)
Stumpf, Harald
2008-06-01
Recently for discharges in fluids induced nuclear transmutations have been observed. It is our hypothesis that these reactions are due to a symmetry breaking of the electroweak vacuum by the experimental arrangement. The treatment of this hypothesis is based on the assumption that electroweak bosons, leptons and quarks possess a substructure of elementary fermionic constituents. The dynamical law of these fermionic constituents is given by a relativistically invariant nonlinear spinor field equation with local interaction, canonical quantization, selfregularization and probability interpretation. Phenomenological quantities of electroweak processes follow from the derivation of corresponding effective theories obtained by algebraic weak mapping theorems where the latter theories depend on the spinor field propagator, i. e. a vacuum expectation value. This propagator and its equation are studied for conserved and for broken discrete symmetries. For combined CP- and isospin symmetry breaking it is shown that the propagator corresponds to the experimental arrangements under consideration. The modifications of the effective electroweak theory due to this modified propagator are discussed. Based on these results a mechanism is sketched which offers a qualitative interpretation of the appearance of induced nuclear transmutations. A numerical estimate of electron capture is given.
Quantized Majorana conductance
NASA Astrophysics Data System (ADS)
Zhang, Hao; Liu, Chun-Xiao; Gazibegovic, Sasa; Xu, Di; Logan, John A.; Wang, Guanzhong; van Loo, Nick; Bommer, Jouri D. S.; de Moor, Michiel W. A.; Car, Diana; Op Het Veld, Roy L. M.; van Veldhoven, Petrus J.; Koelling, Sebastian; Verheijen, Marcel A.; Pendharkar, Mihir; Pennachio, Daniel J.; Shojaei, Borzoyeh; Lee, Joon Sue; Palmstrøm, Chris J.; Bakkers, Erik P. A. M.; Sarma, S. Das; Kouwenhoven, Leo P.
2018-04-01
Majorana zero-modes—a type of localized quasiparticle—hold great promise for topological quantum computing. Tunnelling spectroscopy in electrical transport is the primary tool for identifying the presence of Majorana zero-modes, for instance as a zero-bias peak in differential conductance. The height of the Majorana zero-bias peak is predicted to be quantized at the universal conductance value of 2e2/h at zero temperature (where e is the charge of an electron and h is the Planck constant), as a direct consequence of the famous Majorana symmetry in which a particle is its own antiparticle. The Majorana symmetry protects the quantization against disorder, interactions and variations in the tunnel coupling. Previous experiments, however, have mostly shown zero-bias peaks much smaller than 2e2/h, with a recent observation of a peak height close to 2e2/h. Here we report a quantized conductance plateau at 2e2/h in the zero-bias conductance measured in indium antimonide semiconductor nanowires covered with an aluminium superconducting shell. The height of our zero-bias peak remains constant despite changing parameters such as the magnetic field and tunnel coupling, indicating that it is a quantized conductance plateau. We distinguish this quantized Majorana peak from possible non-Majorana origins by investigating its robustness to electric and magnetic fields as well as its temperature dependence. The observation of a quantized conductance plateau strongly supports the existence of Majorana zero-modes in the system, consequently paving the way for future braiding experiments that could lead to topological quantum computing.
Controlling charge quantization with quantum fluctuations.
Jezouin, S; Iftikhar, Z; Anthore, A; Parmentier, F D; Gennser, U; Cavanna, A; Ouerghi, A; Levkivskyi, I P; Idrisov, E; Sukhorukov, E V; Glazman, L I; Pierre, F
2016-08-04
In 1909, Millikan showed that the charge of electrically isolated systems is quantized in units of the elementary electron charge e. Today, the persistence of charge quantization in small, weakly connected conductors allows for circuits in which single electrons are manipulated, with applications in, for example, metrology, detectors and thermometry. However, as the connection strength is increased, the discreteness of charge is progressively reduced by quantum fluctuations. Here we report the full quantum control and characterization of charge quantization. By using semiconductor-based tunable elemental conduction channels to connect a micrometre-scale metallic island to a circuit, we explore the complete evolution of charge quantization while scanning the entire range of connection strengths, from a very weak (tunnel) to a perfect (ballistic) contact. We observe, when approaching the ballistic limit, that charge quantization is destroyed by quantum fluctuations, and scales as the square root of the residual probability for an electron to be reflected across the quantum channel; this scaling also applies beyond the different regimes of connection strength currently accessible to theory. At increased temperatures, the thermal fluctuations result in an exponential suppression of charge quantization and in a universal square-root scaling, valid for all connection strengths, in agreement with expectations. Besides being pertinent for the improvement of single-electron circuits and their applications, and for the metal-semiconductor hybrids relevant to topological quantum computing, knowledge of the quantum laws of electricity will be essential for the quantum engineering of future nanoelectronic devices.
Betel, Doron; Koppal, Anjali; Agius, Phaedra; Sander, Chris; Leslie, Christina
2010-01-01
mirSVR is a new machine learning method for ranking microRNA target sites by a down-regulation score. The algorithm trains a regression model on sequence and contextual features extracted from miRanda-predicted target sites. In a large-scale evaluation, miRanda-mirSVR is competitive with other target prediction methods in identifying target genes and predicting the extent of their downregulation at the mRNA or protein levels. Importantly, the method identifies a significant number of experimentally determined non-canonical and non-conserved sites.
Study of high-performance canonical molecular orbitals calculation for proteins
NASA Astrophysics Data System (ADS)
Hirano, Toshiyuki; Sato, Fumitoshi
2017-11-01
The canonical molecular orbital (CMO) calculation can help in understanding chemical properties and reactions in proteins. However, it is difficult to perform CMO calculations of proteins because of the self-consistent field (SCF) convergence problem and the expensive computational cost. To reliably obtain the CMOs of proteins, we carry out research and development of high-performance CMO applications and perform experimental studies. We have proposed a third-generation density-functional calculation method for solving the SCF problem, which is more advanced than the FILE and direct methods. Our method is based on Cholesky decomposition for the two-electron integrals and on the modified grid-free method for the pure-XC term evaluation. With the third-generation density-functional calculation method, the Coulomb, Fock-exchange, and pure-XC terms are all given by simple linear algebraic procedures in the SCF loop. Therefore, we can expect good parallel performance in solving the SCF problem by using a well-optimized linear algebra library such as BLAS on distributed-memory parallel computers. The third-generation density-functional calculation method is implemented in our program, ProteinDF. To compute the electronic structure of large molecules, one must not only overcome the expensive computational cost but also supply a good initial guess for safe SCF convergence. In order to prepare a precise initial guess for a macromolecular system, we have developed the quasi-canonical localized orbital (QCLO) method. A QCLO has the characteristics of both a localized and a canonical orbital in a certain region of the molecule. We have succeeded in CMO calculations of proteins by using the QCLO method. For simplified and semi-automated application of the QCLO method, we have also developed a Python-based program, QCLObot.
Evaluation of NASA speech encoder
NASA Technical Reports Server (NTRS)
1976-01-01
Techniques developed by NASA for spaceflight instrumentation were used in the design of a quantizer for speech decoding. Computer simulation of the actions of the quantizer was tested with synthesized and real speech signals. Results were evaluated by a phonetician. Topics discussed include the relationship between the number of quantizer levels and the required sampling rate; reconstruction of signals; digital filtering; and speech recording, sampling, storage, and processing results.
Petit and grand ensemble Monte Carlo calculations of the thermodynamics of the lattice gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murch, G.E.; Thorn, R.J.
1978-11-01
A direct Monte Carlo method for estimating the chemical potential in the petit canonical ensemble was applied to the simple cubic Ising-like lattice gas. The method is based on a simple relationship between the chemical potential and the potential-energy distribution in a lattice gas at equilibrium, as derived independently by Widom, and by Jackson and Klein. Results are presented here for the chemical potential at various compositions and temperatures above and below the zero-field ferromagnetic and antiferromagnetic critical points. The same lattice gas model was reconstructed in the form of a restricted grand canonical ensemble, and results at several temperatures were compared with those from the petit canonical ensemble. The agreement was excellent in these cases.
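The Widom-style relationship mentioned above can be illustrated with a toy canonical Monte Carlo simulation: sample particle hops at fixed N, and accumulate the Boltzmann factor of test insertions at empty sites to estimate the chemical potential. This is a generic 2D sketch in the spirit of the Widom / Jackson-Klein relation, not the paper's simple-cubic calculation; the lattice size, coupling, and temperature are illustrative assumptions.

```python
import math, random

def lattice_gas_mu(L=10, N=30, eps=-1.0, T=2.0, sweeps=200, seed=0):
    """Widom test-insertion estimate of the chemical potential of a 2D
    nearest-neighbour lattice gas at fixed particle number (toy sketch)."""
    rng = random.Random(seed)
    occ = [[0] * L for _ in range(L)]
    sites = [(i, j) for i in range(L) for j in range(L)]
    for i, j in rng.sample(sites, N):
        occ[i][j] = 1

    def nn(i, j):  # occupied nearest neighbours, periodic boundaries
        return (occ[(i + 1) % L][j] + occ[(i - 1) % L][j]
                + occ[i][(j + 1) % L] + occ[i][(j - 1) % L])

    beta, acc, tries = 1.0 / T, 0.0, 0
    for _ in range(sweeps * L * L):
        # Metropolis particle hop at fixed N (canonical sampling).
        i1, j1 = rng.choice(sites)
        i2, j2 = rng.choice(sites)
        if occ[i1][j1] == 1 and occ[i2][j2] == 0:
            occ[i1][j1] = 0
            dE = eps * (nn(i2, j2) - nn(i1, j1))
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                occ[i2][j2] = 1
            else:
                occ[i1][j1] = 1
        # Widom trial: Boltzmann factor of inserting at a random empty site.
        i, j = rng.choice(sites)
        if occ[i][j] == 0:
            acc += math.exp(-beta * eps * nn(i, j))
            tries += 1
    rho = N / (L * L)
    # Ideal lattice-gas part plus the excess part from the insertion average.
    return T * math.log(rho / (1 - rho)) - T * math.log(acc / tries)

print(round(lattice_gas_mu(), 3))  # negative at this density for attractive eps
```

With `eps = 0` the excess term vanishes exactly and the estimator returns the ideal lattice-gas value T ln[ρ/(1−ρ)], which is a convenient sanity check.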
On the relationships between higher and lower bit-depth system measurements
NASA Astrophysics Data System (ADS)
Burks, Stephen D.; Haefner, David P.; Doe, Joshua M.
2018-04-01
The quality of an imaging system can be assessed through controlled laboratory objective measurements. Currently, all imaging measurements require some form of digitization in order to evaluate a metric. Depending on the device, the amount of bits available, relative to a fixed dynamic range, will exhibit quantization artifacts. From a measurement standpoint, measurements are desired to be performed at the highest possible bit-depth available. In this correspondence, we described the relationship between higher and lower bit-depth measurements. The limits to which quantization alters the observed measurements will be presented. Specifically, we address dynamic range, MTF, SiTF, and noise. Our results provide guidelines to how systems of lower bit-depth should be characterized and the corresponding experimental methods.
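The quantization effects discussed above are easy to reproduce numerically: digitize a noisy flat-field signal with a uniform quantizer at decreasing bit depths and watch the measured noise statistic deviate from the true value once the quantization step exceeds the noise. The quantizer, the signal model, and the bit depths below are illustrative assumptions, not the authors' laboratory setup.

```python
import numpy as np

def quantize(signal, bits, full_scale=1.0):
    """Uniform quantizer over [0, full_scale] with 2**bits levels --
    a sketch for exploring how bit depth alters measured statistics."""
    levels = 2 ** bits - 1
    return (np.round(np.clip(signal, 0, full_scale) / full_scale * levels)
            / levels * full_scale)

rng = np.random.default_rng(3)
x = 0.5 + 0.01 * rng.normal(size=100_000)  # noisy flat-field "measurement"
for bits in (14, 8, 4):
    xq = quantize(x, bits)
    # At high bit depth the sample std tracks the true noise (0.01);
    # at low bit depth the quantization step dominates the estimate.
    print(bits, xq.std())
```

This mirrors the paper's point: characterize at the highest available bit depth, since statistics measured below a certain bit depth are artifacts of the quantizer rather than properties of the system.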
The FBI compression standard for digitized fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.
1996-10-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
Magnetic monopole in noncommutative space-time and Wu-Yang singularity-free gauge transformations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Långvik, Miklos; Salminen, Tapio; Tureanu, Anca
2011-04-15
We investigate the validity of the Dirac quantization condition for magnetic monopoles in noncommutative space-time. We use an approach which is based on an extension of the method introduced by Wu and Yang. To study the effects of noncommutativity of space-time, we consider the gauge transformations of U_⋆(1) gauge fields and use the corresponding deformed Maxwell's equations. Using a perturbation expansion in the noncommutativity parameter θ, we show that the Dirac quantization condition remains unmodified up to the first order in the expansion parameter. The result is obtained for a class of noncommutative source terms, which reduce to the Dirac delta function in the commutative limit.
Observational constraints on Tachyon and DBI inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Sheng; Liddle, Andrew R., E-mail: sl277@sussex.ac.uk, E-mail: arl@roe.ac.uk
2014-03-01
We present a systematic method for evaluation of perturbation observables in non-canonical single-field inflation models within the slow-roll approximation, which allied with field redefinitions enables predictions to be established for a wide range of models. We use this to investigate various non-canonical inflation models, including Tachyon inflation and DBI inflation. The Lambert W function will be used extensively in our method for the evaluation of observables. In the Tachyon case, in the slow-roll approximation the model can be approximated by a canonical field with a redefined potential, which yields predictions in better agreement with observations than the canonical equivalents. For DBI inflation models we consider contributions from both the scalar potential and the warp geometry. In the case of a quartic potential, we find a formula for the observables under both non-relativistic (sound speed c_s^2 ∼ 1) and relativistic (c_s^2 ≪ 1) behaviour of the scalar DBI inflaton. For a quadratic potential we find two branches in the non-relativistic c_s^2 ∼ 1 case, determined by the competition of model parameters, while for the relativistic case c_s^2 → 0, we find consistency with results already in the literature. We present a comparison to the latest Planck satellite observations. Most of the non-canonical models we investigate, including the Tachyon, are better fits to data than canonical models with the same potential, but we find that DBI models in the slow-roll regime have difficulty in matching the data.
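For the canonical-field comparison the paper draws on, the textbook slow-roll observables can be computed symbolically. The sketch below handles only the canonical case; the helper name and the evaluation point phi = 15 M_p are illustrative choices, not the paper's, and the paper's non-canonical machinery (including the Lambert W evaluations) is not reproduced.

```python
import sympy as sp

phi, Mp = sp.symbols('phi M_p', positive=True)

def slowroll_observables(V, phi0):
    """Textbook slow-roll parameters and observables for a canonical
    inflaton; the paper generalizes these to non-canonical kinetic terms."""
    eps = (Mp**2 / 2) * (sp.diff(V, phi) / V)**2   # first slow-roll parameter
    eta = Mp**2 * sp.diff(V, phi, 2) / V           # second slow-roll parameter
    ns = 1 - 6*eps + 2*eta                         # scalar spectral index
    r = 16*eps                                     # tensor-to-scalar ratio
    subs = {phi: phi0, Mp: 1}
    return sp.simplify(ns.subs(subs)), sp.simplify(r.subs(subs))

# Quartic potential V = lambda * phi^4, evaluated at phi = 15 M_p
lam = sp.symbols('lambda', positive=True)
ns, r = slowroll_observables(lam * phi**4, 15)
# For V ~ phi^4: eps = 8 Mp^2/phi^2, eta = 12 Mp^2/phi^2
```

Note that the coupling lambda drops out of both observables, as it must for a pure power-law potential.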
Random discrete linear canonical transform.
Wei, Deyun; Wang, Ruikui; Li, Yuan-Min
2016-12-01
Linear canonical transforms (LCTs) are a family of integral transforms with wide applications in optical, acoustical, electromagnetic, and other wave propagation problems. In this paper, we propose the random discrete linear canonical transform (RDLCT) by randomizing the kernel transform matrix of the discrete linear canonical transform (DLCT). The RDLCT inherits excellent mathematical properties from the DLCT along with some notable features of its own. It has a greater degree of randomness because of the randomization of both eigenvectors and eigenvalues. Numerical simulations demonstrate an important feature of the RDLCT: the magnitude and phase of its output are both random. As an important application, the RDLCT can be used for image encryption. The simulation results demonstrate that the proposed encryption method is a security-enhanced image encryption scheme.
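The randomization idea can be illustrated generically: build a unitary kernel from random eigenvectors and random unit-modulus eigenvalues, so the transform output has random magnitude and phase while remaining exactly invertible. This is a hedged sketch of the principle only, not the paper's DLCT parameterization.

```python
import numpy as np

def random_unitary_kernel(n, seed):
    """Sketch of the RDLCT idea: a unitary kernel with randomized
    eigenvectors (a random orthogonal matrix) and randomized unit-modulus
    eigenvalues. The paper randomizes the DLCT kernel specifically."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random eigenvectors
    phases = np.exp(1j * rng.uniform(0, 2*np.pi, n))   # random eigenvalues
    return (q * phases) @ q.T.conj()                   # K = Q diag(e^{i f}) Q^H

n = 64
K = random_unitary_kernel(n, seed=2024)
signal = np.linspace(0, 1, n)
cipher = K @ signal              # "encrypted" signal: random magnitude and phase
recovered = K.conj().T @ cipher  # unitarity gives exact inversion
assert np.allclose(recovered.real, signal, atol=1e-10)
```

The seed plays the role of the secret key: without it, the kernel (and hence the inverse transform) cannot be reconstructed.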
NASA Astrophysics Data System (ADS)
Jurčo, B.; Schlieker, M.
1995-07-01
In this paper, Fock-space representations (contragredient Verma modules) of quantized enveloping algebras, which are natural from the geometrical point of view, are constructed explicitly. To do so, one starts from the Gauss decomposition of the quantum group and introduces the differential operators on the corresponding q-deformed flag manifold (regarded as a left comodule for the quantum group) by projecting onto it the right action of the quantized enveloping algebra on the quantum group. Finally, the representatives of the elements of the quantized enveloping algebra corresponding to the left-invariant vector fields on the quantum group are expressed as first-order differential operators on the q-deformed flag manifold.
Investigation of valley-resolved transmission through gate defined graphene carrier guiders
NASA Astrophysics Data System (ADS)
Cao, Shi-Min; Zhou, Jiao-Jiao; Wei, Xuan; Cheng, Shu-Guang
2017-04-01
Massless charge carriers in gate potentials modulate graphene quantum well transport in the same way that an electromagnetic wave propagates in optical fibers. A recent experiment by Kim et al (2016 Nat. Phys. 12 1022) reports valley-symmetry-preserved transport in a graphene carrier guider. Based on a tight-binding model, the valley-resolved transport coefficients are calculated with the method of scattering matrix theory. For a straight potential well, the valley-resolved conductance is quantized in values of (2n + 1) × 2e^2/h with integer n. In the absence of disorder, intervalley scattering, occurring only at the two ends of the potential well, is weak. The propagating modes inside the potential well are analyzed with the help of the band structure and wave function distribution. The conductance is better preserved for a longer carrier guider. The quantized conductance is barely affected by boundaries of different types or by slightly changing the orientation of the carrier guider. For a curved guider, states with momentum close to the neutral point are more fragile against boundary scattering, and the quantized conductance is ruined as well.
Yang-Baxter maps, discrete integrable equations and quantum groups
NASA Astrophysics Data System (ADS)
Bazhanov, Vladimir V.; Sergeev, Sergey M.
2018-01-01
For every quantized Lie algebra there exists a map from the tensor square of the algebra to itself, which by construction satisfies the set-theoretic Yang-Baxter equation. This map allows one to define an integrable discrete quantum evolution system on quadrilateral lattices, where local degrees of freedom (dynamical variables) take values in a tensor power of the quantized Lie algebra. The corresponding equations of motion admit the zero curvature representation. The commuting Integrals of Motion are defined in the standard way via the Quantum Inverse Problem Method, utilizing Baxter's famous commuting transfer matrix approach. All elements of the above construction have a meaningful quasi-classical limit. As a result one obtains an integrable discrete Hamiltonian evolution system, where the local equations of motion are determined by a classical Yang-Baxter map and the action functional is determined by the quasi-classical asymptotics of the universal R-matrix of the underlying quantum algebra. In this paper we present detailed considerations of the above scheme for the example of the algebra U_q(sl(2)), leading to discrete Liouville equations; however, the approach is rather general and can be applied to any quantized Lie algebra.
BOOK REVIEW: Modern Canonical Quantum General Relativity
NASA Astrophysics Data System (ADS)
Kiefer, Claus
2008-06-01
The open problem of constructing a consistent and experimentally tested quantum theory of the gravitational field has its place at the heart of fundamental physics. The main approaches can be roughly divided into two classes: either one seeks a unified quantum framework of all interactions or one starts with a direct quantization of general relativity. In the first class, string theory (M-theory) is the only known example. In the second class, one can make an additional methodological distinction: while covariant approaches such as path-integral quantization use the four-dimensional metric as an essential ingredient of their formalism, canonical approaches start with a foliation of spacetime into spacelike hypersurfaces in order to arrive at a Hamiltonian formulation. The present book is devoted to one of the canonical approaches—loop quantum gravity. It is named modern canonical quantum general relativity by the author because it uses connections and holonomies as central variables, which are analogous to the variables used in Yang-Mills theories. In fact, the canonically conjugate variables are a holonomy of a connection and the flux of a non-Abelian electric field. This has to be contrasted with the older geometrodynamical approach in which the metric of three-dimensional space and the second fundamental form are the fundamental entities, an approach which is still actively being pursued. It is the author's ambition to present loop quantum gravity in a way in which every step is formulated in a mathematically rigorous form. In his own words: 'loop quantum gravity is an attempt to construct a mathematically rigorous, background-independent, non-perturbative quantum field theory of Lorentzian general relativity and all known matter in four spacetime dimensions, not more and not less'. The formal Leitmotiv of loop quantum gravity is background independence. Non-gravitational theories are usually quantized on a given non-dynamical background.
In contrast, due to the geometrical nature of gravity, no such background exists in quantum gravity. Instead, the notion of a background is supposed to emerge a posteriori as an approximate notion from quantum states of geometry. As a consequence, the standard ultraviolet divergences of quantum field theory do not show up because there is no limit of Δx → 0 to be taken in a given spacetime. On the other hand, it is open whether the theory is free of any type of divergences and anomalies. A central feature of any canonical approach, independent of the choice of variables, is the existence of constraints. In geometrodynamics, these are the Hamiltonian and diffeomorphism constraints. They also hold in loop quantum gravity, but are supplemented there by the Gauss constraint, which emerges due to the use of triads in the formalism. These constraints capture all the physics of the quantum theory because no spacetime is present anymore (analogous to the absence of trajectories in quantum mechanics), so no additional equations of motion are needed. This book presents a careful and comprehensive discussion of these constraints. In particular, the constraint algebra is calculated in a transparent and explicit way. The author makes the important assumption that a Hilbert-space structure is still needed on the fundamental level of quantum gravity. In ordinary quantum theory, such a structure is needed for the probability interpretation, in particular for the conservation of probability with respect to external time. It is thus interesting to see how far this concept can be extrapolated into the timeless realm of quantum gravity. On the kinematical level, that is, before the constraints are imposed, an essentially unique Hilbert space can be constructed in terms of spin-network states. Potentially problematic features are the implementation of the diffeomorphism and Hamiltonian constraints. 
The diffeomorphism constraint can throw states out of the kinematical Hilbert space, so the Hilbert space Hdiff of diffeomorphism-invariant states is not contained in it. Moreover, the Hamiltonian constraint does not seem to preserve Hdiff, so its implementation remains open. To avoid some of these problems, the author proposes his 'master constraint programme' in which the infinitely many local Hamiltonian constraints are combined into one master constraint. This is a subject of his current research. With regard to this situation, it is not surprising that the main results in loop quantum gravity are found on the kinematical level. Especially important are the discrete spectra of geometric operators such as the area operator. This quantifies the earlier heuristic ideas about a discreteness at the Planck scale. The hope is that these results survive the consistent implementation of all constraints. The status of loop quantum gravity is concisely and competently summarized in this volume, whose author is himself one of the pioneers of this approach. What is the relation of this book to the other monograph on loop quantum gravity, written by Carlo Rovelli and published in 2004 under the title Quantum Gravity with the same publisher? In the words of the present author: 'the two books are complementary in the sense that they can be regarded almost as volume I ('introduction and conceptual framework') and volume II ('mathematical framework and applications') of a general presentation of quantum general relativity in general and loop quantum gravity in particular'. In fact, the present volume gives a complete and self-contained presentation of the required mathematics, especially on the approximately 200 pages of chapters 18-33. As for the physical applications, the main topic is the microscopic derivation of the black-hole entropy. This is presented in a clear and detailed form.
Employing the concept of an isolated horizon (a local generalization of an event horizon), the counting of surface states gives an entropy proportional to the horizon area. It also contains the Barbero-Immirzi parameter β, which is a free parameter of the theory. Demanding, on the other hand, that the entropy be equal to the Bekenstein-Hawking entropy would fix this parameter. Other applications such as loop quantum cosmology are only briefly touched upon. Since loop quantum gravity is a very active field of research, the author warns that the present book can at best be seen as a snapshot. Part of the overall picture may thus in the future be subject to modifications. For example, recent work by the author using a concept of dust time is not yet covered here. Nevertheless, I expect that this volume will continue to serve as a valuable introduction and reference book. It is essential reading for everyone working on loop quantum gravity.
Magnetic resonance image compression using scalar-vector quantization
NASA Astrophysics Data System (ADS)
Mohsenian, Nader; Shahri, Homayoun
1995-12-01
A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from the error propagation typical of coding schemes that use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original when displayed on a monitor. This makes our SVQ-based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all-digital radiology environment in hospitals, where reliable transmission, storage, and high-fidelity reconstruction of images are desired.
Bulk-edge correspondence in topological transport and pumping
NASA Astrophysics Data System (ADS)
Imura, Ken-Ichiro; Yoshimura, Yukinori; Fukui, Takahiro; Hatsugai, Yasuhiro
2018-03-01
The bulk-edge correspondence (BEC) refers to a one-to-one relation between the bulk and edge properties ubiquitous in topologically nontrivial systems. Depending on the setup, BEC manifests in different forms and governs the spectral and transport properties of topological insulators and semimetals. Although the topological pump is an old theoretical concept, BEC in the pump was established only recently [1], motivated by state-of-the-art experiments using cold atoms [2, 3]. The center of mass (CM) of a system with boundaries shows a sequence of quantized jumps in the adiabatic limit associated with the edge states. Although the bulk is adiabatic, the edge is inevitably non-adiabatic in the experimental setup or in any numerical simulation. Still, the pumped charge is quantized and carried by the bulk. Its quantization is guaranteed by a compensation between the bulk and edges. We show that in the presence of disorder the pumped charge continues to be quantized despite the appearance of non-quantized jumps.
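The quantized pumped charge equals a Chern number over the (momentum, time) torus, which can be computed with the lattice field-strength method of Fukui, Hatsugai, and Suzuki (two of the present authors). The Rice-Mele pump parameters below are illustrative choices, not taken from the paper.

```python
import numpy as np

def rice_mele_h(k, t):
    """Two-band Rice-Mele pump Hamiltonian (illustrative parameters)."""
    v = 1.0 + 0.5 * np.cos(2*np.pi*t)   # modulated intracell hopping
    w = 1.0                              # intercell hopping
    u = 0.5 * np.sin(2*np.pi*t)          # staggered on-site potential
    off = v + w * np.exp(-1j*k)
    return np.array([[u, off], [off.conjugate(), -u]])

def pump_chern(nk=40, nt=40):
    """Chern number of the lower band over the (k, t) torus via the
    Fukui-Hatsugai-Suzuki lattice field-strength method; it counts the
    charge pumped through the bulk per cycle."""
    ks = np.linspace(0, 2*np.pi, nk, endpoint=False)
    ts = np.linspace(0, 1, nt, endpoint=False)
    psi = np.empty((nk, nt, 2), dtype=complex)
    for i, k in enumerate(ks):
        for j, t in enumerate(ts):
            _, vecs = np.linalg.eigh(rice_mele_h(k, t))
            psi[i, j] = vecs[:, 0]       # lower-band eigenvector
    def link(a, b):                      # U(1) link variable between sites
        z = np.vdot(a, b)
        return z / abs(z)
    c = 0.0
    for i in range(nk):
        for j in range(nt):
            u1 = link(psi[i, j], psi[(i+1) % nk, j])
            u2 = link(psi[(i+1) % nk, j], psi[(i+1) % nk, (j+1) % nt])
            u3 = link(psi[(i+1) % nk, (j+1) % nt], psi[i, (j+1) % nt])
            u4 = link(psi[i, (j+1) % nt], psi[i, j])
            c += np.angle(u1 * u2 * u3 * u4)   # lattice field strength
    return round(c / (2*np.pi))

C = pump_chern()
```

The chosen path encircles the gap-closing point (v = w, u = 0) of the Rice-Mele model, so |C| = 1; a path that does not encircle it would give C = 0 and no pumped charge.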
2-Step scalar deadzone quantization for bitplane image coding.
Auli-Llinas, Francesc
2013-12-01
Modern lossy image coding systems generate a quality-progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
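Baseline USDQ, which 2SDQ matches in coding performance, is simple to state: a central deadzone of width twice the step maps small coefficients to zero, and the decoder reconstructs at interval midpoints. A minimal sketch follows; the Laplacian test data is an illustrative stand-in for wavelet coefficients.

```python
import numpy as np

def usdq(coeffs, step):
    """Uniform scalar deadzone quantization as used with bitplane coding:
    a central deadzone of width 2*step maps small coefficients to zero."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step)

def usdq_dequant(idx, step):
    """Midpoint reconstruction of the quantization interval
    (zero indices stay zero because sign(0) == 0)."""
    return np.sign(idx) * (np.abs(idx) + 0.5) * step

rng = np.random.default_rng(1)
coeffs = rng.laplace(scale=2.0, size=10000)  # wavelet-like heavy-tailed data
step = 1.0
idx = usdq(coeffs, step)
rec = usdq_dequant(idx, step)
# Nonzero coefficients are reconstructed to within half a step.
err = np.abs(coeffs - rec)[idx != 0]
assert err.max() <= step / 2 + 1e-12
```

2SDQ's refinement, per the abstract, is to use two step sizes matched to the coefficient density instead of the single `step` here.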
Fast large-scale object retrieval with binary quantization
NASA Astrophysics Data System (ADS)
Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi
2015-11-01
The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates for searching locally within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which adapts naturally to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and the ID of the box containing the SIFT feature, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
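The bit-vector idea can be sketched as follows; the thresholding rule here (binarize each dimension against the descriptor mean) is an illustrative assumption, not necessarily the paper's exact quantizer.

```python
import numpy as np

def binary_quantize(desc):
    """Sketch of binary quantization: threshold each dimension of a
    SIFT-like descriptor at its mean to obtain a packed 128-bit vector."""
    bits = (desc > desc.mean(axis=-1, keepdims=True)).astype(np.uint8)
    return np.packbits(bits, axis=-1)

def hamming(a, b):
    """Hamming distance between packed bit-vectors."""
    return int(np.unpackbits(a ^ b).sum())

rng = np.random.default_rng(7)
query = rng.random(128)
near = query + rng.normal(scale=0.01, size=128)  # slightly perturbed copy
far = rng.random(128)                            # unrelated descriptor
q, n, f = (binary_quantize(d) for d in (query, near, far))
assert hamming(q, n) < hamming(q, f)
```

The packed 16-byte code is what would be stored in the inverted file alongside the box ID; candidate matches are ranked by Hamming distance, which is cheap on packed bits.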
Montoya-Castillo, Andrés; Reichman, David R
2017-01-14
We derive a semi-analytical form for the Wigner transform for the canonical density operator of a discrete system coupled to a harmonic bath based on the path integral expansion of the Boltzmann factor. The introduction of this simple and controllable approach allows for the exact rendering of the canonical distribution and permits systematic convergence of static properties with respect to the number of path integral steps. In addition, the expressions derived here provide an exact and facile interface with quasi- and semi-classical dynamical methods, which enables the direct calculation of equilibrium time correlation functions within a wide array of approaches. We demonstrate that the present method represents a practical path for the calculation of thermodynamic data for the spin-boson and related systems. We illustrate the power of the present approach by detailing the improvement of the quality of Ehrenfest theory for the correlation function C_zz(t) = Re⟨σ_z(0)σ_z(t)⟩ for the spin-boson model with systematic convergence to the exact sampling function. Importantly, the numerically exact nature of the scheme presented here and its compatibility with semiclassical methods allows for the systematic testing of commonly used approximations for the Wigner-transformed canonical density.
From black holes to white holes: a quantum gravitational, symmetric bounce
NASA Astrophysics Data System (ADS)
Olmedo, Javier; Saini, Sahil; Singh, Parampreet
2017-11-01
Recently, a consistent non-perturbative quantization of the Schwarzschild interior resulting in a bounce from black hole to white hole geometry has been obtained by loop quantizing the Kantowski-Sachs vacuum spacetime. As in other spacetimes where the singularity is dominated by the Weyl part of the spacetime curvature, the structure of the singularity is highly anisotropic in the Kantowski-Sachs vacuum spacetime. As a result, the bounce turns out to be in general asymmetric, creating a large mass difference between the parent black hole and the child white hole. In this manuscript, we investigate under what circumstances a symmetric bounce scenario can be constructed in the above quantization. Using the setting of Dirac observables and geometric clocks, we obtain a symmetric bounce condition which can be satisfied by a slight modification in the construction of loops over which holonomies are considered in the quantization procedure. These modifications can be viewed as quantization ambiguities, and are demonstrated in three different flavors, all of which lead to a non-singular black to white hole transition with identical masses. Our results show that quantization ambiguities can mitigate or even qualitatively change some key features of the physics of singularity resolution. Further, these results are potentially helpful in motivating and constructing symmetric black to white hole transition scenarios.
Design and evaluation of sparse quantization index modulation watermarking schemes
NASA Astrophysics Data System (ADS)
Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter
2008-08-01
In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are unauthorized copying and copyright issues, through which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of the deployment of error correction codes on the most promising configurations is examined. The utilization of BCH codes (Bose, Ray-Chaudhuri, Hocquenghem) results in an improved robustness as long as the capacity of the error codes is not exceeded (cliff effect).
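The basic (non-sparse) scalar QIM embedding behind such schemes quantizes a coefficient onto one of two interleaved lattices, offset by half a step, to encode one bit; decoding picks the nearer lattice. A minimal sketch with synthetic coefficients:

```python
import numpy as np

def qim_embed(coeff, bit, delta):
    """Embed one bit by quantizing the coefficient onto one of two
    interleaved lattices offset by delta/2 (basic scalar QIM; the paper
    applies this sparsely to selected wavelet coefficients)."""
    offset = bit * delta / 2.0
    return np.round((coeff - offset) / delta) * delta + offset

def qim_extract(coeff, delta):
    """Decode by choosing the nearest of the two lattices."""
    d0 = abs(coeff - np.round(coeff / delta) * delta)
    off = coeff - delta / 2.0
    d1 = abs(off - np.round(off / delta) * delta)
    return int(d1 < d0)

delta = 8.0
rng = np.random.default_rng(3)
coeffs = rng.uniform(-100, 100, size=64)
bits = rng.integers(0, 2, size=64)
marked = np.array([qim_embed(c, b, delta) for c, b in zip(coeffs, bits)])
noisy = marked + rng.uniform(-delta/5, delta/5, size=64)  # mild attack
decoded = np.array([qim_extract(c, delta) for c in noisy])
assert np.array_equal(decoded, bits)
```

Decoding survives any perturbation smaller than delta/4 (half the lattice separation), which is the robustness/imperceptibility tradeoff that the step size delta controls.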
From Weyl to Born-Jordan quantization: The Schrödinger representation revisited
NASA Astrophysics Data System (ADS)
de Gosson, Maurice A.
2016-03-01
The ordering problem has been one of the long-standing and much discussed questions in quantum mechanics from its very beginning. Nowadays, there is more or less a consensus among physicists that the right prescription is Weyl's rule, which is closely related to the Moyal-Wigner phase space formalism. We propose in this report an alternative approach by replacing Weyl quantization with the less well-known Born-Jordan quantization. This choice is actually natural if we want the Heisenberg and Schrödinger pictures of quantum mechanics to be mathematically equivalent. It turns out that, in addition, Born-Jordan quantization can be recovered from Feynman's path integral approach provided that one uses short-time propagators arising from correct formulas for the short-time action, as observed by Makri and Miller. These observations lead to a slightly different quantum mechanics, exhibiting some unexpected features, without affecting the bulk of the existing theory; for instance, quantizations of physical Hamiltonian functions are the same as in the Weyl correspondence. The differences are in fact of a more subtle nature; for instance, the quantum observables will not correspond in a one-to-one fashion to classical ones, and the dequantization of a Born-Jordan quantum operator is less straightforward than that of the corresponding Weyl operator. The use of Born-Jordan quantization moreover solves the "angular momentum dilemma", which already puzzled L. Pauling. Born-Jordan quantization has been known for some time (but not fully exploited) by mathematicians working in time-frequency analysis and signal analysis, but ignored by physicists. One of the aims of this report is to collect and synthesize these sporadic discussions, while analyzing the conceptual differences with Weyl quantization, which is also reviewed in detail.
Another striking feature is that the Born-Jordan formalism leads to a redefinition of phase space quantum mechanics, where the usual Wigner distribution has to be replaced with a new quasi-distribution reducing interference effects.
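One concrete difference between the two rules can be checked symbolically: applying the Weyl and Born-Jordan orderings of x^2 p^2 to an arbitrary wavefunction shows that they differ by the constant hbar^2/6. The check below is only a verification in the Schrödinger representation (p = -i hbar d/dx); it is not taken from the report.

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
f = sp.Function('f')(x)  # arbitrary test wavefunction

def P(g):
    """Momentum operator in the Schrödinger representation."""
    return -sp.I * hbar * sp.diff(g, x)

# Weyl ordering of x^2 p^2: (1/4) * sum_k binom(2,k) p^k x^2 p^(2-k)
weyl = (P(P(x**2 * f)) + 2 * P(x**2 * P(f)) + x**2 * P(P(f))) / 4
# Born-Jordan ordering: (1/3) * sum_k p^k x^2 p^(2-k), equal weights
bj = (P(P(x**2 * f)) + P(x**2 * P(f)) + x**2 * P(P(f))) / 3

difference = sp.simplify(weyl - bj)
# The two rules differ by a multiple of the identity: (hbar^2/6) * f
assert sp.simplify(difference - hbar**2 / 6 * f) == 0
```

Both orderings reduce to the same symmetrization for x p itself; the discrepancy first appears at quadratic-in-both monomials like this one, which is why the two quantizations agree on typical physical Hamiltonians.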
Kernel canonical-correlation Granger causality for multiple time series
NASA Astrophysics Data System (ADS)
Wu, Guorong; Duan, Xujun; Liao, Wei; Gao, Qing; Chen, Huafu
2011-04-01
Canonical-correlation analysis as a multivariate statistical technique has been applied to multivariate Granger causality analysis to infer information flow in complex systems. It shows unique appeal and great superiority over the traditional vector autoregressive method, due to the simplified procedure that detects causal interaction between multiple time series, and the avoidance of potential model estimation problems. However, it is limited to the linear case. Here, we extend the framework of canonical correlation to include the estimation of multivariate nonlinear Granger causality for drawing inference about directed interaction. Its feasibility and effectiveness are verified on simulated data.
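The linear Granger causality that the paper generalizes reduces, in the bivariate single-lag case, to comparing prediction-error variances with and without the other series' past. A minimal sketch (the coupled autoregressive process is synthetic test data, not from the paper):

```python
import numpy as np

def granger_gain(x, y, lag=1):
    """Log ratio of prediction-error variances of x without vs with past y:
    positive values suggest y Granger-causes x (linear, single-lag sketch;
    the paper's kernel CCA method generalizes this to the nonlinear,
    multivariate case)."""
    xt, xp, yp = x[lag:], x[:-lag], y[:-lag]
    # restricted model: x_t ~ x_{t-1}
    A = np.column_stack([xp, np.ones_like(xp)])
    r1 = xt - A @ np.linalg.lstsq(A, xt, rcond=None)[0]
    # full model: x_t ~ x_{t-1}, y_{t-1}
    B = np.column_stack([xp, yp, np.ones_like(xp)])
    r2 = xt - B @ np.linalg.lstsq(B, xt, rcond=None)[0]
    return np.log(np.var(r1) / np.var(r2))

rng = np.random.default_rng(0)
n = 5000
y = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):  # y drives x with one step of delay
    x[t] = 0.5 * x[t-1] + 0.8 * y[t-1] + 0.1 * rng.standard_normal()
assert granger_gain(x, y) > granger_gain(y, x)
```

The asymmetry of the two gains is the directed-interaction signature; the kernel CCA formulation replaces the least-squares fits with correlations in a feature space.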
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulmer, W
Purpose: During the past decade the quantization of coupled/forced electromagnetic circuits with or without Ohm’s resistance has become the subject of some fundamental studies, since even problems of quantum electrodynamics can be solved in an elegant manner, e.g. the creation of quantized electromagnetic fields. In this communication, we shall use these principles to describe optimization procedures in the design of klystrons, synchrotron radiation and high-energy bremsstrahlung. Methods: The basis is the Hamiltonian of an electromagnetic circuit and the extension to coupled circuits, which allows the study of symmetries and perturbed symmetries in a very apparent way (SU2, SU3, SU4). The introduction of resistance and forced oscillators for the emission and absorption in such coupled systems provides characteristic resonance conditions, and atomic orbitals can be described thereby. The extension to virtual orbitals leads to the creation of bremsstrahlung if the incident electron (velocity v nearly c) is described by a current, which is associated with its inductance, and the virtual orbital with the charge distribution (capacitance). Coupled systems with forced oscillators can be used to amplify drastically the resonance frequencies, to describe klystrons and synchrotron radiation. Results: The cross-section formula for bremsstrahlung given by the propagator method of Feynman can readily be derived. The design of klystrons and synchrotrons, including the radiation output, can be described and optimized by determining the mutual magnetic couplings between the oscillators induced by the currents. Conclusions: The presented methods of quantization of circuits including resistance provide a rather straightforward way to understand complex technical processes such as the creation of bremsstrahlung or the creation of radiation by klystrons and synchrotrons. They can be used both for optimization procedures and, last but not least, for pedagogical purposes with regard to a qualified understanding of radiation physics for students.
An analogue of Weyl’s law for quantized irreducible generalized flag manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matassa, Marco, E-mail: marco.matassa@gmail.com, E-mail: mmatassa@math.uio.no
2015-09-15
We prove an analogue of Weyl’s law for quantized irreducible generalized flag manifolds. This is formulated in terms of a zeta function which, similarly to the classical setting, satisfies the following two properties: as a functional on the quantized algebra it is proportional to the Haar state and its first singularity coincides with the classical dimension. The relevant formulas are given for the more general case of compact quantum groups.
Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R
2003-09-10
We present an investigation into the phase errors that occur in fringe pattern analysis and that are caused by quantization effects. When acquisition devices with a limited camera bit depth are used, only a limited number of quantization levels is available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also limit the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique. However, the principles can be applied equally well to other phase-measuring techniques to yield the phase error distribution caused by the camera bit depth.
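The effect can be reproduced in a few lines: quantize a fringe to a given camera bit depth, recover the phase by demodulating at the carrier, and compare with the true phase. This is a minimal stand-in for full Fourier fringe analysis; the signal model and parameters are illustrative.

```python
import numpy as np

def phase_error(bits, phase, n=1024, cycles=64):
    """Error of the phase recovered by Fourier demodulation of a fringe
    signal quantized to a given camera bit depth."""
    x = np.arange(n)
    fringe = 0.5 + 0.5 * np.cos(2*np.pi*cycles*x/n + phase)
    levels = 2**bits                              # camera ADC levels
    q = np.round(fringe * (levels - 1)) / (levels - 1)
    # demodulate at the carrier frequency to estimate the fringe phase
    est = np.angle(np.sum(q * np.exp(-2j*np.pi*cycles*x/n)))
    return np.angle(np.exp(1j * (est - phase)))   # wrap to [-pi, pi]

phases = np.linspace(0, 2*np.pi, 32, endpoint=False)
rms = {b: np.sqrt(np.mean([phase_error(b, p)**2 for p in phases]))
       for b in (2, 4, 8, 12)}
# deeper camera bit depth -> smaller instrumental phase error
assert rms[2] > rms[12]
```

Without quantization the demodulated phase is exact here (the carrier sits on an integer number of cycles); the residual error therefore isolates the quantization contribution the paper analyzes.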
Performance of customized DCT quantization tables on scientific data
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh; Livny, Miron
1994-01-01
We show that it is desirable to use data-specific or customized quantization tables for scaling the spatial-frequency coefficients obtained using the Discrete Cosine Transform (DCT). The DCT is widely used for image and video compression (MP89, PM93), but applications typically use default quantization matrices. Using actual scientific data gathered from diverse sources such as spacecraft and electron microscopes, we show that the default compression/quality tradeoffs can be significantly improved upon by using customized tables. We also show that significant improvements are possible for the standard test images Lena and Baboon. This work is part of an effort to develop a practical scheme for optimizing quantization matrices for any given image or video stream, under any given quality or compression constraints.
Gravitational surface Hamiltonian and entropy quantization
NASA Astrophysics Data System (ADS)
Bakshi, Ashish; Majhi, Bibhas Ranjan; Samanta, Saurav
2017-02-01
The surface Hamiltonian corresponding to the surface part of a gravitational action has an xp structure, where p is the momentum conjugate to x. Moreover, on the horizon of a black hole it reduces to TS, where T and S are the temperature and entropy of the horizon. Imposing a hermiticity condition, we quantize this Hamiltonian. This leads to an equidistant spectrum of its eigenvalues. Using this, we show that the entropy of the horizon is quantized. The analysis holds for any order of Lanczos-Lovelock gravity. For general relativity, the area spectrum is consistent with Bekenstein's observation. This provides a more robust confirmation of that earlier result, as the calculation is based on direct quantization of the Hamiltonian in the sense of usual quantum mechanics.
Thermodynamics of pairing in mesoscopic systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sumaryada, Tony; Volya, Alexander
Using numerical and analytical methods implemented for different models, we conduct a systematic study of the thermodynamic properties of pairing correlations in mesoscopic nuclear systems. Various quantities are calculated and analyzed using the exact solution of pairing. An in-depth comparison of canonical, grand canonical, and microcanonical ensembles is conducted. The nature of the pairing phase transition in a small system is of particular interest. We discuss the onset of discontinuity in the thermodynamic variables, fluctuations, and evolution of zeros of the canonical and grand canonical partition functions in the complex plane. The behavior of the invariant correlational entropy is also studied in the transitional region of interest. The change in the character of the phase transition due to the presence of a magnetic field is discussed along with studies of superconducting thermodynamics.
Algebra of constraints for a string in curved background
NASA Astrophysics Data System (ADS)
Wess, Julius
1990-06-01
A string field theory with curved background develops anomalies and Schwinger terms in the conformal algebra. It is generally believed that these Schwinger terms and anomalies are expressible in terms of the curvature tensor of the background metric and that, therefore, they are covariant under a change of coordinates in the target space. As far as I know, all the relevant computations have been done in special gauges, i.e. in Riemann normal coordinates. The question remains whether this is true in any gauge. We have tried to investigate this problem in a Hamiltonian formulation of the model. A classical Lagrangian serves to define the canonical variables and the classical constraints. They are expressed in terms of the canonical variables and, classically, they are first class. When quantized, an ordering prescription has to be imposed, which leads to anomalies and Schwinger terms. We then try to redefine the constraints in such a way that the Schwinger terms depend on the curvature tensor only. The redefinition of the constraints is limited by the requirement that it should be local and that the energy-momentum tensor should be conserved. In our approach, it is natural that the constraints are improved, order by order, in the number of derivatives: we find that, up to third order in the derivatives, Schwinger terms and anomalies are expressible in terms of the curvature tensor. At fourth order in the derivatives, however, we find a contribution to the Schwinger terms that cannot be removed by a redefinition and that cannot be cast in a covariant form. The anomaly, on the other hand, is fully expressible in terms of the curvature scalar. The energy-momentum tensor ceases to be symmetric, which indicates a Lorentz anomaly as well. The question remains whether the Schwinger terms take a covariant form if we allow Einstein anomalies as well.
Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.
Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao
2017-11-01
Hashing has proved to be an attractive technique for fast nearest neighbor search over big data. Compared with projection-based hashing methods, prototype-based ones are more powerful at generating discriminative binary codes for data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from ineffective coding that utilizes the complete set of binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small unique binary codes. Our alternating optimization adaptively discovers the prototype set and a code set of varying size in an efficient way, which together robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys training time linear in the number of training data. We further devise a distributed framework for large-scale learning, which can significantly speed up the training of ABQ in the distributed environments now widely deployed in many areas. Extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with relative performance gains of up to 58.84%.
Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina
2016-09-01
The main problem for information reconciliation in continuous variable Quantum Key Distribution (QKD) at low Signal to Noise Ratio (SNR) is the quantization and assignment of labels to the samples of the Gaussian Random Variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, given that the Gaussian variable is de facto zero mean, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization of the Gaussian samples in the very low SNR regime from an information theoretic point of view. We consider two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the resulting bit strings. The quantization threshold for the Most Significant Bit (MSB) should be chosen to maximize the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced, in the sense that their entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond two, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significant-bit level are rather high, demanding very powerful error correcting codes. While our calculations and simulations show that the mutual information between the LSBs at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits.
Hence, it is only by looking at the bits jointly that we are able to achieve a mutual information of 0.2217 bits which is 75.8% of maximum achievable. The implication is that only by coding both MSB and LSB jointly can we hope to get close to this 75.8% limit. Hence, non-binary codes are essential to achieve acceptable performance.
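The sign-bit part of this procedure can be illustrated with a toy simulation. The sketch below assumes a simple additive Gaussian noise channel between Alice and Bob at -3 dB and estimates the empirical mutual information between their sign (MSB) strings; the setup is a simplification, so the numbers are not meant to reproduce the paper's exact values.

```python
import math
import random

def mutual_information(pairs):
    """Empirical mutual information (bits) between two binary sequences."""
    n = len(pairs)
    joint, pa, pb = {}, {}, {}
    for a, b in pairs:
        joint[(a, b)] = joint.get((a, b), 0) + 1
        pa[a] = pa.get(a, 0) + 1
        pb[b] = pb.get(b, 0) + 1
    return sum(c / n * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in joint.items())

random.seed(0)
snr = 10 ** (-3 / 10)                 # -3 dB as a linear ratio
pairs = []
for _ in range(100_000):
    x = random.gauss(0, 1)                        # Alice's Gaussian sample
    y = x + random.gauss(0, 1 / math.sqrt(snr))   # Bob's noisy observation
    pairs.append((1 if x >= 0 else 0, 1 if y >= 0 else 0))
mi_msb = mutual_information(pairs)    # MI carried by the sign bit alone
```

Even this crude estimate shows that a single quantization bit at such low SNR carries only a small fraction of a bit of shared information, which is why the joint treatment of MSB and LSB matters.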
Image segmentation using hidden Markov Gauss mixture models.
Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M
2007-07-01
Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameter and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.
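Stripped of the Markov smoothing and the mixture fitting, the MAP classification step reduces to comparing log prior plus log likelihood per class. A minimal single-Gaussian-per-class sketch follows; the class names and parameters are invented for illustration.

```python
import math

def gaussian_logpdf(x, mu, sigma):
    """Log density of a univariate normal distribution."""
    return (-0.5 * math.log(2 * math.pi * sigma * sigma)
            - (x - mu) ** 2 / (2 * sigma * sigma))

def map_classify(x, classes):
    """MAP rule: pick the class maximizing log prior + log likelihood."""
    return max(classes,
               key=lambda c: math.log(c["prior"])
               + gaussian_logpdf(x, c["mu"], c["sigma"]))["name"]

# hypothetical terrain classes with hand-picked intensity statistics
classes = [
    {"name": "water",  "prior": 0.5, "mu": 30.0,  "sigma": 10.0},
    {"name": "forest", "prior": 0.5, "mu": 120.0, "sigma": 25.0},
]
labels = [map_classify(v, classes) for v in (20, 35, 110, 140)]
```

The HMGMM method replaces the single Gaussian with a VQ-fitted mixture per class and couples neighboring decisions through the hidden Markov structure.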
Fault detection of helicopter gearboxes using the multi-valued influence matrix method
NASA Technical Reports Server (NTRS)
Chin, Hsinyung; Danai, Kourosh; Lewicki, David G.
1993-01-01
In this paper we investigate the effectiveness of a pattern classifying fault detection system that is designed to cope with the variability of fault signatures inherent in helicopter gearboxes. For detection, the measurements are monitored on-line and flagged upon the detection of abnormalities, so that they can be attributed to a faulty or normal case. As such, the detection system is composed of two components: a quantization matrix to flag the measurements, and a multi-valued influence matrix (MVIM) that represents the behavior of measurements during normal operation and at fault instances. Both the quantization matrix and the influence matrix are tuned during a training session so as to minimize the error in detection. To demonstrate the effectiveness of this detection system, it was applied to vibration measurements collected from a helicopter gearbox during normal operation and at various fault instances. The results indicate that the MVIM method provides excellent results when the full range of fault effects on the measurements is included in the training set.
Advance in multi-hit detection and quantization in atom probe tomography.
Da Costa, G; Wang, H; Duguay, S; Bostel, A; Blavette, D; Deconihout, B
2012-12-01
The preferential retention of high evaporation field chemical species at the sample surface in atom-probe tomography (e.g., boron in silicon or in metallic alloys) leads to correlated field evaporation and pronounced pile-up effects on the detector. The latter severely affect the reliability of concentration measurements of current 3D atom probes, leading to an under-estimation of the concentrations of the high-field species. The multi-hit capability of the position-sensitive time-resolved detector is shown to play a key role. An innovative method based on Fourier-space processing of the signals supplied by an advanced delay-line position-sensitive detector is shown to drastically improve the time resolving power of the detector and consequently its capability to detect multiple events. Results show that up to 30 ions on the same evaporation pulse can be detected and properly positioned. The major impact of this new method on the quantization of chemical composition in materials, particularly in highly doped Si(B) samples, is highlighted.
Steganalysis based on JPEG compatibility
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Du, Rui
2001-11-01
In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression under a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend avoiding the use of images that have been originally stored in the JPEG format as cover images for spatial-domain steganography.
Optimized universal color palette design for error diffusion
NASA Astrophysics Data System (ADS)
Kolpatzik, Bernd W.; Bouman, Charles A.
1995-04-01
Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
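True SSQ allocates levels in a visually uniform opponent color space; as a simplified illustration of the lookup-table mechanics only, the sketch below quantizes R, G, and B independently (a hypothetical 3+3+2-bit uniform split) and combines the per-channel indices into a 256-entry palette index.

```python
def build_luts(bits=(3, 3, 2)):
    """One lookup table per channel mapping intensity 0-255 to a level index."""
    return [[min(v * (1 << b) // 256, (1 << b) - 1) for v in range(256)]
            for b in bits]

def palette_index(rgb, luts, bits=(3, 3, 2)):
    """Combine per-channel indices into a single palette index in 0-255."""
    r, g, b = (lut[c] for lut, c in zip(luts, rgb))
    return (r << (bits[1] + bits[2])) | (g << bits[2]) | b

luts = build_luts()
white = palette_index((255, 255, 255), luts)   # maps to the last palette entry
black = palette_index((0, 0, 0), luts)         # maps to the first palette entry
```

The appeal of this structure, which SSQ shares, is that quantization is a handful of table lookups and bit shifts per pixel, cheap enough to run alongside error diffusion.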
A new similarity index for nonlinear signal analysis based on local extrema patterns
NASA Astrophysics Data System (ADS)
Niknazar, Hamid; Motie Nasrabadi, Ali; Shamsollahi, Mohammad Bagher
2018-02-01
Common similarity measures for time-domain signals, such as cross-correlation and Symbolic Aggregate approXimation (SAX), are not appropriate for nonlinear signal analysis because of the high sensitivity of nonlinear systems to initial conditions. A similarity measure for nonlinear signal analysis must therefore be invariant to initial points and quantify similarity by considering the main dynamics of the signals. The statistical behavior of local extrema (SBLE) method was previously proposed to address this problem. The SBLE similarity index uses quantized amplitudes of local extrema to quantify the dynamical similarity of signals by considering patterns of sequential local extrema. By adding the time information of local extrema and fuzzifying the quantized values, this work proposes a new similarity index for nonlinear and long-term signal analysis that extends the SBLE method. These new features provide more information about the signals, and the fuzzification reduces noise sensitivity. A number of practical tests on synthetic data demonstrate the ability of the method in nonlinear signal clustering and classification. In addition, epileptic seizure detection based on electroencephalography (EEG) signal processing was performed using the proposed similarity index, demonstrating the method's potential as a real-world application tool.
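The core ingredients, extracting interior local extrema and quantizing their amplitudes into symbols, can be sketched as follows. This is a simplified reading of the SBLE idea, without the fuzzification or time information this paper adds, and the helper names are ours.

```python
def local_extrema(signal):
    """Interior local maxima and minima as (index, value, kind) triples."""
    out = []
    for i in range(1, len(signal) - 1):
        if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]:
            out.append((i, signal[i], "max"))
        elif signal[i] < signal[i - 1] and signal[i] < signal[i + 1]:
            out.append((i, signal[i], "min"))
    return out

def quantize_amplitudes(extrema, n_levels, lo, hi):
    """Map each extremum's amplitude to an integer symbol in [0, n_levels)."""
    step = (hi - lo) / n_levels
    return [min(int((v - lo) / step), n_levels - 1) for _, v, _ in extrema]

sig = [0.0, 1.0, 0.2, 2.0, 0.0, 3.0, 1.0]
ext = local_extrema(sig)                       # alternating max/min sequence
symbols = quantize_amplitudes(ext, 4, 0.0, 3.0)
```

Because the symbol sequence depends only on the pattern of extrema, two trajectories started from different initial points on the same attractor can still produce similar symbol statistics.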
[Discrimination of Rice Syrup Adulterant of Acacia Honey Based on Near-Infrared Spectroscopy].
Zhang, Yan-nan; Chen, Lan-zhen; Xue, Xiao-feng; Wu, Li-ming; Li, Yi; Yang, Juan
2015-09-01
At present, rice syrup, a low-priced sweetener, is often adulterated into acacia honey, and the adulterated honeys are sold in honey markets, yet there is no suitable fast method to identify honey adulterated with rice syrup. In this study, near-infrared (NIR) spectroscopy combined with chemometric methods was used to discriminate the authenticity of honey. Twenty unprocessed acacia honey samples from different honey-producing areas were mixed with different proportions of rice syrup to prepare seven concentration gradients, giving 121 samples in total. An NIR instrument and spectrum-processing software were applied for spectrum scanning and data conversion of the adulterated samples, respectively. The data were then analyzed by principal component analysis (PCA) and canonical discriminant analysis in order to discriminate adulterated honey. The results showed that after PCA the first two principal components accounted for 97.23% of the total variation, but the groups in the score plot of the first two PCs did not separate clearly, so canonical discriminant analysis was used for further discrimination. All samples were discriminated correctly, with the first two of the six canonical discriminant functions accounting for 91.6% of the variation, and the samples at different adulterant concentrations were also discriminated correctly. This illustrates that canonical discriminant analysis combined with NIR spectroscopy is both feasible and practical for rapid and effective discrimination of rice syrup adulteration of acacia honey.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inomata, A.; Junker, G.; Wilson, R.
1993-08-01
The unified treatment of the Dirac monopole, the Schwinger monopole, and the Aharonov-Bohm problem by Barut and Wilson is revisited via a path integral approach. The Kustaanheimo-Stiefel transformation of space and time is utilized to calculate the path integral for a charged particle in the singular vector potential. In the process of dimensional reduction, a topological charge quantization rule is derived, which contains Dirac's quantization condition as a special case.
2014-07-01
establishment of Glioblastoma (GBM) cell lines from GBM patients' tumor samples and quantized cell populations of each of the parental GBM cell lines, we… GBM patients are now well established and form the basis of the molecular characterization of the tumor development and signatures presented by these… analysis of these quantized cell subpopulations and have begun to assemble the protein signatures of GBM tumors underpinned by the comprehensive
Differential calculus on quantized simple lie groups
NASA Astrophysics Data System (ADS)
Jurčo, Branislav
1991-07-01
Differential calculi, generalizations of Woronowicz's four-dimensional calculus on SU_q(2), are introduced for quantized classical simple Lie groups in a constructive way. For this purpose, the approach of Faddeev and his collaborators to quantum groups is used. An equivalence between Woronowicz's enveloping algebra, generated by the dual space to the left-invariant differential forms, and the corresponding quantized universal enveloping algebra is obtained for our differential calculi. Real forms for q ∈ ℝ are also discussed.
Light-hole quantization in the optical response of ultra-wide GaAs/Al(x)Ga(1-x)As quantum wells.
Solovyev, V V; Bunakov, V A; Schmult, S; Kukushkin, I V
2013-01-16
Temperature-dependent reflectivity and photoluminescence spectra are studied for undoped ultra-wide 150 and 250 nm GaAs quantum wells. It is shown that spectral features previously attributed to a size quantization of the exciton motion in the z-direction coincide well with energies of quantized levels for light holes. Furthermore, optical spectra reveal very similar properties at temperatures above the exciton dissociation point.
Deformation quantizations with separation of variables on a Kähler manifold
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
1996-10-01
We give a simple geometric description of all formal differentiable deformation quantizations on a Kähler manifold M such that for each open subset U⊂ M ⋆-multiplication from the left by a holomorphic function and from the right by an antiholomorphic function on U coincides with the pointwise multiplication by these functions. We show that these quantizations are in 1-1 correspondence with the formal deformations of the original Kähler metrics on M.
Extension of loop quantum gravity to f(R) theories.
Zhang, Xiangdong; Ma, Yongge
2011-04-29
The four-dimensional metric f(R) theories of gravity are cast into connection-dynamical formalism with real su(2) connections as configuration variables. Through this formalism, the classical metric f(R) theories are quantized by extending the loop quantization scheme of general relativity. Our results imply that the nonperturbative quantization procedure of loop quantum gravity is valid not only for general relativity but also for a rather general class of four-dimensional metric theories of gravity.
Vector quantizer designs for joint compression and terrain categorization of multispectral imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Lyons, Daniel F.
1994-01-01
Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.
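The baseline MSE codebook design that both quantizers build on is the generalized Lloyd (k-means) algorithm. Below is a minimal scalar-data sketch without the classification constraint of the two-stage design; the deterministic quantile initialization and the function names are our own choices.

```python
import random

def design_codebook(data, k, iters=25):
    """Generalized Lloyd (k-means) MSE codebook design for scalar data."""
    srt = sorted(data)
    # deterministic quantile initialization: one seed per cell
    codebook = [srt[(2 * i + 1) * len(srt) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for x in data:  # nearest-neighbor (minimum squared error) assignment
            nearest = min(range(k), key=lambda j: (x - codebook[j]) ** 2)
            cells[nearest].append(x)
        # centroid update; keep the old code word if a cell empties
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return sorted(codebook)

rng = random.Random(7)
data = ([rng.gauss(0, 1) for _ in range(500)]
        + [rng.gauss(10, 1) for _ in range(500)])
codebook = design_codebook(data, 2)   # converges to the two cluster means
```

The paper's point is that minimizing MSE alone, as here, can place code words poorly for later classification; constraining the design recovers classification accuracy at little MSE cost.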
Uncertainty relations, zero point energy and the linear canonical group
NASA Technical Reports Server (NTRS)
Sudarshan, E. C. G.
1993-01-01
The close relationship between the zero point energy, the uncertainty relations, coherent states, squeezed states, and correlated states for one mode is investigated. This group-theoretic perspective enables the parametrization and identification of their multimode generalization. In particular the generalized Schroedinger-Robertson uncertainty relations are analyzed. An elementary method of determining the canonical structure of the generalized correlated states is presented.
Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Jianyuan; Qin, Hong; Liu, Jian
2015-11-01
Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems; high-order structure-preserving algorithms follow by combination. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large numbers of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified on two physics problems: nonlinear Landau damping and the electron Bernstein wave. (C) 2015 AIP Publishing LLC.
Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G; Minenkov, Yury; Cavallo, Luigi; Neese, Frank
2018-01-07
In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases, and in particular for small-gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that, overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).
Theory of the Quantized Hall Conductance in Periodic Systems: a Topological Analysis.
NASA Astrophysics Data System (ADS)
Czerwinski, Michael Joseph
The integral quantization of the Hall conductance in two-dimensional periodic systems is investigated from a topological point of view. Attention is focused on the contributions from the electronic sub-bands which arise from perturbed Landau levels. After reviewing the theoretical work leading to the identification of the Hall conductance as a topological quantum number, both a determination and interpretation of these quantized values for the sub-band conductances is made. It is shown that the Hall conductance of each sub-band can be regarded as the sum of two terms which will be referred to as classical and nonclassical. Although each of these contributions individually leads to a fractional conductance, the sum of these two contributions does indeed yield an integer. These integral conductances are found to be given by the solution of a simple Diophantine equation which depends on the periodic perturbation. A connection between the quantized value of the Hall conductance and the covering of real space by the zeroes of the sub-band wavefunctions allows for a determination of these conductances under more general potentials. A method is described for obtaining the conductance values from only those states bordering the Brillouin zone, and not the states in its interior. This method is demonstrated to give Hall conductances in agreement with those obtained from the Diophantine equation for the sinusoidal potential case explored earlier. Generalizing a simple gauge invariance argument from real space to k-space, a k-space 'vector potential' is introduced. This allows for an explicit identification of the Hall conductance with the phase winding number of the sub-band wavefunction around the Brillouin zone.
The previously described division of the Hall conductance into classical and nonclassical contributions is in this way made more rigorous; based on periodicity considerations alone, these terms are identified as the winding numbers associated with (i) the basis states and (ii) the coefficients of these basis states, respectively. In this way a general Diophantine equation, independent of the periodic potential, is obtained. Finally, the use of the 'parallel transport' of state vectors in the determination of an overall phase convention for these states is described. This is seen to lead to a simple and straightforward method for determining the Hall conductance. This method is based on the states directly, without reference to the particular component wavefunctions of these states. Mention is made of the generality of calculations of this type, within the context of the geometric (or Berry) phases acquired by systems under an adiabatic modification of their environment.
The wavelet/scalar quantization compression standard for digital fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.
1994-04-01
A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
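The FBI standard uses a particular biorthogonal wavelet and a dead-zone quantizer, but the two building blocks can be illustrated with the simplest possible stand-ins: a one-level Haar transform and a midtread uniform scalar quantizer. Everything below is a toy sketch, not the standard's actual filters or quantization rule.

```python
def haar_step(x):
    """One Haar analysis level: pairwise averages (lowpass) and differences (highpass)."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    dif = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return avg, dif

def uniform_quantize(coeffs, step):
    """Midtread uniform scalar quantizer: index = round(c / step)."""
    return [round(c / step) for c in coeffs]

def dequantize(indices, step):
    """Reconstruct coefficient values from quantizer indices."""
    return [q * step for q in indices]

row = [100.0, 102.0, 98.0, 96.0, 120.0, 124.0, 10.0, 12.0]
low, high = haar_step(row)
q_high = uniform_quantize(high, 2.0)   # coarse step: most detail indices collapse to 0
```

Most of the compression comes from exactly this effect: detail coefficients are small, so a coarse uniform step maps runs of them to zero, which entropy coding then stores almost for free.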
Table look-up estimation of signal and noise parameters from quantized observables
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1986-01-01
A table look-up algorithm for estimating underlying signal and noise parameters from quantized observables is examined. A general mathematical model is developed, and a look-up table designed specifically for estimating parameters from four-bit quantized data is described. Estimator performance is evaluated both analytically and by means of numerical simulation, and an example is provided to illustrate the use of the look-up table for estimating signal-to-noise ratios commonly encountered in Voyager-type data.
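A toy version of such a table look-up estimator, with an assumed 4-bit quantizer and a single statistic rather than the paper's actual Voyager-era table design, might look like this: a table of (statistic, parameter) pairs is precomputed by simulation, and estimation is a nearest-entry lookup on the same statistic computed from quantized observables.

```python
import math
import random

def four_bit_quantize(x, step=0.5):
    """4-bit quantizer: 16 output levels from -8 to 7."""
    return max(-8, min(7, math.floor(x / step)))

def mean_abs_level(samples):
    """A statistic computed from the quantized data only."""
    return sum(abs(four_bit_quantize(x)) for x in samples) / len(samples)

rng = random.Random(1)
candidate_sigmas = [0.5, 1.0, 1.5, 2.0, 2.5]
# precomputed look-up table: statistic -> underlying noise scale
table = [(mean_abs_level([rng.gauss(0, s) for _ in range(20_000)]), s)
         for s in candidate_sigmas]

def estimate_sigma(samples):
    """Nearest-entry table look-up on the quantized-data statistic."""
    stat = mean_abs_level(samples)
    return min(table, key=lambda entry: abs(entry[0] - stat))[1]

observed = [rng.gauss(0, 1.5) for _ in range(20_000)]
sigma_hat = estimate_sigma(observed)
```

The design choice mirrors the abstract: all the expensive modeling goes into building the table offline, while the online estimator is a cheap statistic plus a lookup.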
Digital television system design study
NASA Technical Reports Server (NTRS)
Huth, G. K.
1976-01-01
The use of digital techniques for transmission of pictorial data is discussed for multi-frame images (television). Video signals are processed in a manner which includes quantization and coding such that they are separable from the noise introduced into the channel. The performance of digital television systems is determined by the nature of the processing techniques (i.e., whether the video signal itself or, instead, something related to the video signal is quantized and coded) and by the quantization and coding schemes employed.
Rotating effects on the Landau quantization for an atom with a magnetic quadrupole moment
NASA Astrophysics Data System (ADS)
Fonseca, I. C.; Bakke, K.
2016-01-01
Based on the single particle approximation [Dmitriev et al., Phys. Rev. C 50, 2358 (1994) and C.-C. Chen, Phys. Rev. A 51, 2611 (1995)], the Landau quantization associated with an atom with a magnetic quadrupole moment is introduced, and then, rotating effects on this analogue of the Landau quantization are investigated. It is shown that rotating effects can modify the cyclotron frequency and break the degeneracy of the analogue of the Landau levels.
1991-11-01
Two quantization values were assigned for maximum-likelihood decisions: one value for demodulation values larger than zero and another for demodulation values smaller than zero. Logic 0 was assigned to a positive demodulation value and logic 1 to a negative one.
Kalathil, Shafeer; Lee, Jintae; Cho, Moo Hwan
2013-02-01
Oppan quantized style: By adding a gold precursor at its cathode, a microbial fuel cell (MFC) is demonstrated to form gold nanoparticles that can be used to simultaneously produce bioelectricity and hydrogen. By exploiting the quantized capacitance charging effect, the gold nanoparticles mediate the production of hydrogen without requiring an external power supply, while the MFC produces a stable power density. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Reformulation of the covering and quantizer problems as ground states of interacting particles.
Torquato, S
2010-11-01
It is known that the sphere-packing problem and the number-variance problem (closely related to an optimization problem in number theory) can be posed as energy minimizations associated with an infinite number of point particles in d-dimensional Euclidean space R^d interacting via certain repulsive pair potentials. We reformulate the covering and quantizer problems as the determination of the ground states of interacting particles in R^d that generally involve single-body, two-body, three-body, and higher-body interactions. This is done by linking the covering and quantizer problems to certain optimization problems involving the "void" nearest-neighbor functions that arise in the theory of random media and statistical mechanics. These reformulations, which again exemplify the deep interplay between geometry and physics, allow one now to employ theoretical and numerical optimization techniques to analyze and solve these energy minimization problems. The covering and quantizer problems have relevance in numerous applications, including wireless communication network layouts, the search of high-dimensional data parameter spaces, stereotactic radiation therapy, data compression, digital communications, meshing of space for numerical analysis, and coding and cryptography, among other examples. In the first three space dimensions, the best known solutions of the sphere-packing and number-variance problems (or their "dual" solutions) are directly related to those of the covering and quantizer problems, but such relationships may or may not exist for d≥4, depending on the peculiarities of the dimensions involved. Our reformulation sheds light on the reasons for these similarities and differences. We also show that disordered saturated sphere packings provide relatively thin (economical) coverings and may yield thinner coverings than the best known lattice coverings in sufficiently large dimensions.
In the case of the quantizer problem, we derive improved upper bounds on the quantizer error using sphere-packing solutions, which are generally substantially sharper than an existing upper bound in low to moderately large dimensions. We also demonstrate that disordered saturated sphere packings yield relatively good quantizers. Finally, we remark on possible applications of our results for the detection of gravitational waves.
Topos quantum theory on quantization-induced sheaves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakayama, Kunji, E-mail: nakayama@law.ryukoku.ac.jp
2014-10-15
In this paper, we construct a sheaf-based topos quantum theory. It is well known that a topos quantum theory can be constructed on the topos of presheaves on the category of commutative von Neumann algebras of bounded operators on a Hilbert space. Also, it is already known that quantization naturally induces a Lawvere-Tierney topology on the presheaf topos. We show that a topos quantum theory akin to the presheaf-based one can be constructed on sheaves defined by the quantization-induced Lawvere-Tierney topology. That is, starting from the spectral sheaf as a state space of a given quantum system, we construct sheaf-based expressions of physical propositions and truth objects, and thereby give a method of truth-value assignment to the propositions. Furthermore, we clarify the relationship to the presheaf-based quantum theory. We give translation rules between the sheaf-based ingredients and the corresponding presheaf-based ones. The translation rules have "coarse-graining" effects on the spaces of the presheaf-based ingredients; a lot of different proposition presheaves, truth presheaves, and presheaf-based truth-values are translated to a proposition sheaf, a truth sheaf, and a sheaf-based truth-value, respectively. We examine the extent of the coarse-graining made by translation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skalozub, A.S.; Tsaune, A.Ya.
1994-12-01
A new approach for analyzing the highly excited vibration-rotation (VR) states of nonrigid molecules is suggested. It is based on the separation of the vibrational and rotational terms in the molecular VR Hamiltonian by introducing periodic auxiliary fields. These fields transfer different interactions within a molecule and are treated in terms of the mean-field approximation. As a result, the solution of the stationary Schrödinger equation with the VR Hamiltonian amounts to a quantization of the Berry phase in a problem of the molecular angular-momentum motion in a certain periodic VR field (rotational problem). The quantization procedure takes into account the motion of the collective vibrational variables in the appropriate VR potentials (vibrational problem). The quantization rules, the mean-field configurations of auxiliary interactions, and the solutions to the Schrödinger equations for the vibrational and rotational problems are self-consistently connected with one another. The potentialities of the theory are demonstrated by the bending-rotation interaction modeled by the Bunker-Landsberg potential function in the H₂ molecule. The calculations are compared with both the results of the exact computations and those of other approximate methods. 32 refs., 4 tabs.
Wavelet-based image compression using shuffling and bit plane correlation
NASA Astrophysics Data System (ADS)
Kim, Seungjong; Jeong, Jechang
2000-12-01
In this paper, we propose a wavelet-based image compression method using shuffling and bit plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on quantized coefficients, and (2) choosing the arithmetic coding context according to the maximum correlation direction. The experimental results are comparable or superior to those of existing coders, particularly for some images with low correlation.
Error diffusion concept for multi-level quantization
NASA Astrophysics Data System (ADS)
Broja, Manfred; Michalowski, Kristina; Bryngdahl, Olof
1990-11-01
The error diffusion binarization procedure is adapted to multi-level quantization. The threshold parameters then available have a noticeable influence on the process. Characteristic features of the technique are shown together with experimental results.
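A minimal sketch of the adaptation from binarization to multi-level quantization follows, using the familiar Floyd-Steinberg error weights. The plain midpoint rule used here for level selection is just one choice of the threshold parameters the abstract refers to; everything else (image, level count) is illustrative.

```python
# Multi-level error diffusion sketch (Floyd-Steinberg weights).
# The midpoint quantization rule is one possible threshold choice;
# the abstract notes these thresholds noticeably affect the result.

def quantize_level(value, levels):
    """Snap to the nearest of `levels` equally spaced outputs in [0, 255]."""
    step = 255.0 / (levels - 1)
    return round(value / step) * step

def error_diffuse(image, levels):
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]          # working copy
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = min(255.0, max(0.0, quantize_level(old, levels)))
            out[y][x] = new
            err = old - new                  # diffuse residual error
            if x + 1 < w:                img[y][x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:      img[y + 1][x - 1] += err * 3 / 16
            if y + 1 < h:                img[y + 1][x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w:  img[y + 1][x + 1] += err * 1 / 16
    return out

flat = [[100.0] * 16 for _ in range(16)]
dithered = error_diffuse(flat, levels=4)
```

On a flat gray-level-100 patch the output uses only the four levels {0, 85, 170, 255}, yet the diffused errors make the local average track the input value, which is the point of the technique.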
Natural inflation from polymer quantization
NASA Astrophysics Data System (ADS)
Ali, Masooma; Seahra, Sanjeev S.
2017-11-01
We study the polymer quantization of a homogeneous massive scalar field in the early Universe using a prescription inequivalent to those previously appearing in the literature. Specifically, we assume a Hilbert space for which the scalar field momentum is well defined but its amplitude is not. This is closer in spirit to the quantization scheme of loop quantum gravity, in which no unique configuration operator exists. We show that in the semiclassical approximation, the main effect of this polymer quantization scheme is to compactify the phase space of chaotic inflation in the field amplitude direction. This gives rise to an effective scalar potential closely resembling that of hybrid natural inflation. Unlike polymer schemes in which the scalar field amplitude is well defined, the semiclassical dynamics involves a past cosmological singularity; i.e., this approach does not mitigate the big bang.
Optimal sampling and quantization of synthetic aperture radar signals
NASA Technical Reports Server (NTRS)
Wu, C.
1978-01-01
Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented, including a derived theoretical relationship between the pixel signal-to-noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, the problem of optimally allocating a fixed data bit-volume (for a specified surface area and resolution criterion) between the number of samples and the number of bits per sample may be solved. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.
Effect of temperature degeneracy and Landau quantization on drift solitary waves and double layers
NASA Astrophysics Data System (ADS)
Shan, Shaukat Ali; Haque, Q.
2018-01-01
The linear and nonlinear drift ion acoustic waves have been investigated in an inhomogeneous, magnetized, dense degenerate, and quantized magnetic field plasma. The propagation of linear drift ion acoustic waves, along with nonlinear structures such as double layers and solitary waves, is found to depend strongly on the drift speed, the magnetic field quantization parameter η, and the temperature degeneracy. The graphical illustrations show that the frequency of the linear waves and the amplitude of the solitary waves increase with increasing temperature degeneracy and Landau quantization, while the amplitude of the double layers decreases with increasing η and T. The relevance of the present study is pointed out for the plasma environments of fast ignition inertial confinement fusion, white dwarf stars, and short-pulse petawatt laser technology.
Time-Symmetric Quantization in Spacetimes with Event Horizons
NASA Astrophysics Data System (ADS)
Kobakhidze, Archil; Rodd, Nicholas
2013-08-01
The standard quantization formalism in spacetimes with event horizons implies a non-unitary evolution of quantum states, as initial pure states may evolve into thermal states. This phenomenon is behind the famous black hole information loss paradox which provoked long-standing debates on the compatibility of quantum mechanics and gravity. In this paper we demonstrate that within an alternative time-symmetric quantization formalism thermal radiation is absent and states evolve unitarily in spacetimes with event horizons. We also discuss the theoretical consistency of the proposed formalism. We explicitly demonstrate that the theory preserves the microcausality condition and suggest a "reinterpretation postulate" to resolve other apparent pathologies associated with negative energy states. Accordingly, since a consistent alternative exists, we argue that choosing to use time-asymmetric quantization is a necessary condition for the black hole information loss paradox.
Ao, Wei; Song, Yongdong; Wen, Changyun
2017-05-01
In this paper, we investigate the adaptive control problem for a class of nonlinear uncertain MIMO systems with actuator faults and quantization effects. Under some mild conditions, an adaptive robust fault-tolerant control is developed to compensate for the effects of uncertainties, actuator failures, and quantization errors, and a range of parameters for these quantizers is established. Furthermore, a Lyapunov-like approach is adopted to demonstrate that the controller guarantees an ultimately uniformly bounded output tracking error and that the signals of the closed-loop system remain bounded, even in the presence of at most m − q actuators stuck or in outage. Finally, numerical simulations are provided to verify and illustrate the effectiveness of the proposed adaptive schemes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Dictionary learning-based spatiotemporal regularization for 3D dense speckle tracking
NASA Astrophysics Data System (ADS)
Lu, Allen; Zontak, Maria; Parajuli, Nripesh; Stendahl, John C.; Boutagy, Nabil; Eberle, Melissa; O'Donnell, Matthew; Sinusas, Albert J.; Duncan, James S.
2017-03-01
Speckle tracking is a common method for non-rigid tissue motion analysis in 3D echocardiography, where unique texture patterns are tracked through the cardiac cycle. However, poor tracking often occurs due to inherent ultrasound issues, such as image artifacts and speckle decorrelation; thus regularization is required. Various methods, such as optical flow, elastic registration, and block matching techniques have been proposed to track speckle motion. Such methods typically apply spatial and temporal regularization in a separate manner. In this paper, we propose a joint spatiotemporal regularization method based on an adaptive dictionary representation of the dense 3D+time Lagrangian motion field. Sparse dictionaries have good signal-adaptive and noise-reduction properties; however, they are prone to quantization errors. Our method takes advantage of the desirable noise suppression, while avoiding the undesirable quantization error. The idea is to enforce regularization only on the poorly tracked trajectories. Specifically, our method (1) builds a data-driven four-dimensional dictionary of Lagrangian displacements using sparse learning, (2) automatically identifies poorly tracked trajectories (outliers) based on sparse reconstruction errors, and (3) performs sparse reconstruction of the outliers only. Our approach can be applied to dense Lagrangian motion fields calculated by any method. We demonstrate the effectiveness of our approach on a baseline block matching speckle tracking and evaluate performance of the proposed algorithm using tracking and strain accuracy analysis.
Vector coding of wavelet-transformed images
NASA Astrophysics Data System (ADS)
Zhou, Jun; Zhi, Cheng; Zhou, Yuanhua
1998-09-01
Wavelet analysis, as a brand new tool in signal processing, has gained broad recognition. Using the wavelet transform, we can obtain octave-divided frequency bands with specific orientations, which combine well with the properties of the Human Visual System. In this paper, we discuss a classified vector quantization method for multiresolution-represented images.
Quantization of charged fields in the presence of critical potential steps
NASA Astrophysics Data System (ADS)
Gavrilov, S. P.; Gitman, D. M.
2016-02-01
QED with strong external backgrounds that can create particles from the vacuum is well developed for the so-called t -electric potential steps, which are time-dependent external electric fields that are switched on and off at some time instants. However, there exist many physically interesting situations where external backgrounds do not switch off at the time infinity. For example, these are time-independent nonuniform electric fields that are concentrated in restricted regions of space. The latter backgrounds represent a kind of spatial x -electric potential steps for charged particles. They can also create particles from the vacuum, the Klein paradox being closely related to this process. Approaches elaborated for treating quantum effects in the t -electric potential steps are not directly applicable to the x -electric potential steps, and their generalization to x -electric potential steps was not sufficiently developed. We believe that the present work represents a consistent solution of the latter problem. We have considered a canonical quantization of the Dirac and scalar fields with an x -electric potential step and have found in- and out-creation and annihilation operators that allow one to have a particle interpretation of the physical system under consideration. To identify in- and out-operators we have performed a detailed mathematical and physical analysis of solutions of the relativistic wave equations with an x -electric potential step with subsequent QFT analysis of the correctness of such an identification. We elaborated a nonperturbative (in the external field) technique that allows one to calculate all characteristics of zero-order processes, such as scattering, reflection, and electron-positron pair creation, without radiation corrections, and also to calculate Feynman diagrams that describe all characteristics of processes with interaction between the in-, out-particles and photons.
These diagrams have formally the usual form, but contain special propagators. Expressions for these propagators in terms of in- and out-solutions are presented. We apply the elaborated approach to two popular exactly solvable cases of x -electric potential steps, namely, to the Sauter potential and to the Klein step.
Explorations in fuzzy physics and non-commutative geometry
NASA Astrophysics Data System (ADS)
Kurkcuoglu, Seckin
Fuzzy spaces arise as discrete approximations to continuum manifolds. They are usually obtained through quantizing coadjoint orbits of compact Lie groups and they can be described in terms of finite-dimensional matrix algebras, which for large matrix sizes approximate the algebra of functions of the limiting continuum manifold. Their ability to exactly preserve the symmetries of their parent manifolds is especially appealing for physical applications. Quantum Field Theories are built over them as finite-dimensional matrix models preserving almost all the symmetries of their respective continuum models. In this dissertation, we first focus our attention to the study of fuzzy supersymmetric spaces. In this regard, we obtain the fuzzy supersphere S_F^(2,2) through quantizing the supersphere, and demonstrate that it has exact supersymmetry. We derive a finite series formula for the *-product of functions over S_F^(2,2) and analyze the differential geometric information encoded in this formula. Subsequently, we show that quantum field theories on S_F^(2,2) are realized as finite-dimensional supermatrix models, and in particular we obtain the non-linear sigma model over the fuzzy supersphere by constructing the fuzzy supersymmetric extensions of a certain class of projectors. We show that this model too, is realized as a finite-dimensional supermatrix model with exact supersymmetry. Next, we show that fuzzy spaces have a generalized Hopf algebra structure. By focusing on the fuzzy sphere, we establish that there is a *-homomorphism from the group algebra SU(2)* of SU(2) to the fuzzy sphere. Using this and the canonical Hopf algebra structure of SU(2)* we show that both the fuzzy sphere and their direct sum are Hopf algebras. Using these results, we discuss processes in which a fuzzy sphere with angular momenta J splits into fuzzy spheres with angular momenta K and L. Finally, we study the formulation of Chern-Simons (CS) theory on an infinite strip of the non-commutative plane.
We develop a finite-dimensional matrix model, whose large size limit approximates the CS theory on the infinite strip, and show that there are edge observables in this model obeying a finite-dimensional Lie algebra, that resembles the Kac-Moody algebra.
NASA Astrophysics Data System (ADS)
Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma
2017-08-01
The intra prediction process of the H.264 video coding standard is used to code the first frame of a video, i.e., the intra frame, and achieves good coding efficiency compared with previous video coding standards. A further benefit of intra frame coding is that it reduces spatial pixel redundancy within the current frame, reduces computational complexity, and provides better rate-distortion performance. Intra frames are conventionally coded with the Rate Distortion Optimization (RDO) method, but this method increases computational complexity, increases the bit rate, and reduces picture quality, making it difficult to implement in real-time applications; many researchers have therefore developed fast mode decision algorithms for intra frame coding. Previous work on intra frame coding in H.264 using fast mode decision intra prediction algorithms based on various techniques suffered increased bit rates and degraded picture quality (PSNR) at different quantization parameters. Many earlier fast mode decision approaches achieved only a reduction in computational complexity or a saving in encoding time, at the cost of an increased bit rate and a loss of picture quality. To avoid the increase in bit rate and the loss of picture quality, a better approach was developed. In this paper, a Gaussian pulse is applied to intra frame coding with the diagonal down-left intra prediction mode to achieve higher coding efficiency in terms of PSNR and bit rate. In the proposed method, the Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the 4x4 sub macro blocks of the current frame before the quantization process. Multiplying each 4x4 integer-transformed coefficient block by the Gaussian pulse scales the coefficient information in a reversible manner, rendering the resulting signal abstract.
The frequency samples are made abstract in a known and controllable manner without intermixing of coefficients, which prevents the picture from being badly degraded at higher quantization parameter values. The proposed work was implemented using MATLAB and the JM 18.6 reference software. It measures the performance parameters PSNR, bit rate, and compression of intra frames of YUV video sequences at QCIF resolution under different quantization parameter values, with the Gaussian pulse applied in the diagonal down-left intra prediction mode. The simulation results of the proposed algorithm are tabulated and compared with the previous algorithm of Tian et al. The proposed algorithm reduces the bit rate by 30.98% on average and maintains consistent picture quality for QCIF sequences compared to the method of Tian et al.
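The reversible elementwise scaling step described in the abstract can be sketched as follows. The Gaussian width and the quantizer step used here are invented for illustration and are not taken from the paper or from the JM reference software; the point is only that the scaling is elementwise (no intermixing of coefficients) and invertible up to quantization error.

```python
# Sketch of reversible Gaussian scaling of a 4x4 block of
# transform coefficients before quantization. Sigma and the
# quantizer step are hypothetical parameters, not from the paper.
import math

def gaussian_mask(n=4, sigma=1.5):
    """Elementwise 4x4 Gaussian pulse centered on the DC coefficient."""
    return [[math.exp(-(i * i + j * j) / (2 * sigma ** 2))
             for j in range(n)] for i in range(n)]

def scale(block, mask):
    return [[b * m for b, m in zip(br, mr)] for br, mr in zip(block, mask)]

def unscale(block, mask):
    return [[b / m for b, m in zip(br, mr)] for br, mr in zip(block, mask)]

QP_STEP = 2.5                                  # assumed quantizer step

def quantize(block):
    return [[round(v / QP_STEP) for v in row] for row in block]

def dequantize(block):
    return [[v * QP_STEP for v in row] for row in block]

coeffs = [[52.0, -10.0, 4.0, 1.0],
          [-8.0,  6.0, -2.0, 0.5],
          [ 3.0, -1.5,  1.0, 0.2],
          [ 0.8,  0.4, -0.3, 0.1]]
mask = gaussian_mask()
recon = unscale(dequantize(quantize(scale(coeffs, mask))), mask)
```

Scaling attenuates high-frequency coefficients before quantization and the inverse scaling restores magnitudes afterward; without the quantizer in the middle, the scale/unscale pair is an exact round trip.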
Wu, Xiongwu; Damjanovic, Ana; Brooks, Bernard R.
2013-01-01
This review provides a comprehensive description of the self-guided Langevin dynamics (SGLD) and the self-guided molecular dynamics (SGMD) methods and their applications. Example systems are included to provide guidance on optimal application of these methods in simulation studies. SGMD/SGLD has enhanced ability to overcome energy barriers and accelerate rare events to affordable time scales. It has been demonstrated that with moderate parameters, SGLD can routinely cross energy barriers of 20 kT at a rate that molecular dynamics (MD) or Langevin dynamics (LD) crosses 10 kT barriers. The core of these methods is the use of local averages of forces and momenta in a direct manner that can preserve the canonical ensemble. The use of such local averages results in methods where low frequency motion “borrows” energy from high frequency degrees of freedom when a barrier is approached and then returns that excess energy after a barrier is crossed. This self-guiding effect also results in an accelerated diffusion to enhance conformational sampling efficiency. The resulting ensemble with SGLD deviates in a small way from the canonical ensemble, and that deviation can be corrected with either an on-the-fly or a post processing reweighting procedure that provides an excellent canonical ensemble for systems with a limited number of accelerated degrees of freedom. Since reweighting procedures are generally not size extensive, a newer method, SGLDfp, uses local averages of both momenta and forces to preserve the ensemble without reweighting. The SGLDfp approach is size extensive and can be used to accelerate low frequency motion in large systems, or in systems with explicit solvent where solvent diffusion is also to be enhanced. Since these methods are direct and straightforward, they can be used in conjunction with many other sampling methods or free energy methods by simply replacing the integration of degrees of freedom that are normally sampled by MD or LD. PMID:23913991
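The core mechanism the review describes, local averages of momenta fed back as a guiding force, can be caricatured in one dimension. This is a drastically simplified sketch, not the SGLD/SGMD integrator of the papers: a plain Euler-Maruyama Langevin step on a double-well potential with an exponential moving average of the momentum added back as a guiding term. All parameters are illustrative assumptions.

```python
# Heavily simplified 1-D caricature of the self-guiding idea:
# an exponential moving average of the momentum (a "local average")
# is fed back as a guiding force. NOT the published SGLD integrator;
# parameters (dt, gamma, kT, lam) are arbitrary illustrative values.
import math, random

random.seed(2)

def force(x):
    """Force from the double-well potential U(x) = (x^2 - 1)^2."""
    return -4 * x * (x * x - 1)

def guided_langevin(steps=5000, dt=0.01, gamma=1.0, kT=0.5, lam=0.2):
    x, p, p_avg = -1.0, 0.0, 0.0
    traj = []
    for _ in range(steps):
        noise = math.sqrt(2 * gamma * kT * dt) * random.gauss(0, 1)
        guide = lam * p_avg                    # self-guiding force term
        p += (force(x) - gamma * p + guide) * dt + noise
        p_avg = 0.99 * p_avg + 0.01 * p        # local average of momentum
        x += p * dt
        traj.append(x)
    return traj

traj = guided_langevin()
```

The guiding term reinforces persistent low-frequency drift, which is the qualitative mechanism by which SGLD accelerates barrier crossing; the real methods additionally correct or preserve the canonical ensemble, which this toy does not attempt.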
A low complexity, low spur digital IF conversion circuit for high-fidelity GNSS signal playback
NASA Astrophysics Data System (ADS)
Su, Fei; Ying, Rendong
2016-01-01
A low-complexity, high-efficiency, and low-spur digital intermediate frequency (IF) conversion circuit is discussed in this paper. This circuit is a key element in a high-fidelity GNSS signal playback instrument. We analyze the spur performance of a finite state machine (FSM) based numerically controlled oscillator (NCO); by optimizing the control algorithm, an FSM-based NCO with 3 quantization stages achieves 65 dB SFDR over the range up to the seventh harmonic. Compared with a traditional lookup-table-based NCO design with the same Spurious Free Dynamic Range (SFDR) performance, the logic resources required to implement the NCO are reduced to one third. The proposed design method can be extended to IF conversion systems with good SFDR over higher harmonic components by increasing the number of quantization stages.
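For context, the conventional baseline such FSM-based designs are compared against is the phase-accumulator NCO with a sine look-up table. The sketch below is that baseline only, not the paper's FSM architecture; the accumulator and table widths are assumptions. Truncating the phase to a small table address is the classic source of NCO spurs that SFDR measures.

```python
# Minimal phase-accumulator NCO (the conventional lookup-table
# baseline, NOT the paper's FSM-based design). Phase truncation
# to a small table is the classic source of spurs.
import math

PHASE_BITS = 24          # accumulator width (assumed)
TABLE_BITS = 8           # look-up table address width (assumed)
TABLE = [math.sin(2 * math.pi * k / (1 << TABLE_BITS))
         for k in range(1 << TABLE_BITS)]

def nco(freq_word, n):
    """Generate n samples; freq_word sets the output frequency."""
    acc, out = 0, []
    for _ in range(n):
        acc = (acc + freq_word) & ((1 << PHASE_BITS) - 1)
        out.append(TABLE[acc >> (PHASE_BITS - TABLE_BITS)])
    return out

# Output frequency = freq_word / 2**PHASE_BITS of the sample rate.
samples = nco(freq_word=1 << 16, n=4096)
```

With this frequency word the output completes exactly 16 periods in 4096 samples; for frequency words that are not multiples of the truncated phase step, the periodic truncation error shows up as discrete spurs in the spectrum.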
NASA Astrophysics Data System (ADS)
Mezey, Paul G.
2017-11-01
Two strongly related theorems on non-degenerate ground state electron densities serve as the basis of "Molecular Informatics". The Hohenberg-Kohn theorem is a statement on global molecular information, ensuring that the complete electron density contains the complete molecular information. However, the Holographic Electron Density Theorem states more: the local information present in each and every positive volume density fragment is already complete: the information in the fragment is equivalent to the complete molecular information. In other words, the complete molecular information provided by the Hohenberg-Kohn Theorem is already provided, in full, by any positive volume, otherwise arbitrarily small electron density fragment. In this contribution some of the consequences of the Holographic Electron Density Theorem are discussed within the framework of the "Nuclear Charge Space" and the Universal Molecule Model. In the "Nuclear Charge Space" the nuclear charges are regarded as continuous variables, and in the more general Universal Molecule Model some other quantized parameters are also allowed to become "de-quantized" and then "re-quantized", leading to interrelations among real molecules through abstract molecules. Here the specific role of the Holographic Electron Density Theorem is discussed within the above context.
Zhang, Lu; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Popov, Sergei; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia
2018-04-01
We propose a spectrally efficient digitized radio-over-fiber (D-RoF) system that groups highly correlated neighboring samples of the analog signals into multidimensional vectors, with the k-means clustering algorithm adopted for adaptive quantization. A 30 Gbit/s D-RoF system is experimentally demonstrated to validate the proposed scheme, reporting a carrier aggregation of up to 40 × 100 MHz orthogonal frequency division multiplexing (OFDM) channels with a quadrature amplitude modulation (QAM) order of 4 and an aggregation of 10 × 100 MHz OFDM channels with a QAM order of 16384. Equivalent common public radio interface rates from 37 to 150 Gbit/s are supported. Besides, an error vector magnitude (EVM) of 8% is achieved with 4 quantization bits, and the EVM can be further reduced to 1% by increasing the number of quantization bits to 7. Compared with conventional pulse-code modulation based D-RoF systems, the proposed D-RoF system improves the signal-to-noise ratio by up to ∼9 dB and greatly reduces the EVM, given the same number of quantization bits.
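The grouping-plus-clustering step can be sketched as a pure-Python toy: pair neighboring samples of a correlated waveform into 2-D vectors and quantize them with a k-means codebook. This is not the experimental D-RoF chain; the test waveform, vector dimension, and codebook size are arbitrary choices for illustration.

```python
# Toy sketch of vector quantization via k-means, as in the
# abstract's adaptive quantization step. The waveform, the 2-D
# grouping, and k=16 are illustrative assumptions.
import math, random

random.seed(0)

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(vs):
    return tuple(sum(c) / len(vs) for c in zip(*vs))

def kmeans(vectors, k, iters=30):
    centroids = random.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k), key=lambda c: dist2(v, centroids[c]))
            clusters[i].append(v)
        centroids = [centroid(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# A correlated "analog" waveform, grouped into 2-D vectors.
wave = [math.sin(0.1 * t) + 0.05 * random.gauss(0, 1) for t in range(2000)]
vectors = [(wave[i], wave[i + 1]) for i in range(0, len(wave) - 1, 2)]

codebook = kmeans(vectors, k=16)
quantized = [min(codebook, key=lambda c: dist2(v, c)) for v in vectors]
mse = sum(dist2(v, q) for v, q in zip(vectors, quantized)) / len(vectors)
```

Because neighboring samples are correlated, the vectors concentrate near a low-dimensional curve, so a 16-entry codebook (2 bits per sample) captures them far better than an independent 2-bit scalar quantizer would, which is the intuition behind the scheme's SNR gain.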
Landau quantization effects on hole-acoustic instability in semiconductor plasmas
NASA Astrophysics Data System (ADS)
Sumera, P.; Rasheed, A.; Jamil, M.; Siddique, M.; Areeb, F.
2017-12-01
The growth rate of hole acoustic waves (HAWs) excited in a magnetized semiconductor quantum plasma pumped by an electron beam has been investigated. The instability of the waves incorporates quantum effects including the exchange and correlation potential, the Bohm potential, Fermi-degenerate pressure, and the magnetic quantization of semiconductor plasma species. The effects of various plasma parameters on the growth rate of HAWs have been discussed, including the relative concentration of plasma particles, beam electron temperature, beam speed, plasma temperature (temperature of electrons/holes), and the Landau electron orbital magnetic quantization parameter η. The numerical study of our model of acoustic waves has been applied, as an example, to the GaAs semiconductor exposed to an electron beam in a magnetic field environment. An increase in either the concentration of the semiconductor electrons or the speed of beam electrons, in the presence of magnetic quantization of fermion orbital motion, remarkably enhances the growth rate of the HAWs. Although the growth rate of the waves decreases with rising thermal temperature of the plasma species, at a particular temperature we observe higher instability due to the contribution of magnetic quantization of fermions.
Exact renormalization group in Batalin-Vilkovisky theory
NASA Astrophysics Data System (ADS)
Zucchini, Roberto
2018-03-01
In this paper, inspired by Costello's seminal work [11], we present a general formulation of the exact renormalization group (RG) within the Batalin-Vilkovisky (BV) quantization scheme. In the spirit of effective field theory, the BV bracket and Laplacian structure as well as the BV effective action (EA) depend on an effective energy scale. The BV EA at a certain scale satisfies the BV quantum master equation at that scale. The RG flow of the EA is implemented by BV canonical maps intertwining the BV structures at different scales. Infinitesimally, this generates the BV exact renormalization group equation (RGE). We show that BV RG theory can be extended by augmenting the scale parameter space ℝ to its shifted tangent bundle T[1]ℝ. The extra odd direction in scale space allows for a BV RG supersymmetry that constrains the structure of the BV RGE, bringing it to Polchinski's form [6]. We investigate the implications of BV RG supersymmetry in perturbation theory. Finally, we illustrate our findings by constructing free models of BV RG flow and EA exhibiting RG supersymmetry in the degree -1 symplectic framework and studying their perturbation theory. We find in particular that the odd partner of the effective action describes perturbatively the deviation of the interacting RG flow from its free counterpart.
NASA Astrophysics Data System (ADS)
Faghihi, M. J.; Tavassoly, M. K.; Bagheri Harouni, M.
2014-04-01
In this paper, we study the interaction between a Λ-type three-level atom and two quantized electromagnetic fields which are simultaneously injected into a bichromatic cavity surrounded by a Kerr medium in the presence of field-field interaction (parametric down conversion) and detuning parameters. By applying a canonical transformation, the introduced model is reduced to a well-known form of the generalized Jaynes-Cummings model. Under particular initial conditions which may be prepared for the atom and the field, the time evolution of the state vector of the entire system is analytically evaluated. Then, the dynamics of the atom is studied through the evolution of the atomic population inversion. In addition, two different measures of entanglement in the tripartite system (whose three constituents are two field modes and one atom), i.e., the von Neumann and linear entropies, are investigated. Also, two kinds of entropic uncertainty relations, from which entropy squeezing can be obtained, are discussed. In each case, the influences of the detuning parameters and the Kerr medium on the above nonclassicality features are analyzed in detail via numerical results. It is illustrated that the amount of each of the above-mentioned physical phenomena can be tuned by choosing the involved parameters appropriately.
Asymptotic states and the definition of the S-matrix in quantum gravity
NASA Astrophysics Data System (ADS)
Wiesendanger, C.
2013-04-01
Viewing gravitational energy-momentum $p_G^\mu$ as equal by observation, but different in essence from inertial energy-momentum $p_I^\mu$ naturally leads to the gauge theory of volume-preserving diffeomorphisms of an inner Minkowski space M4. The generalized asymptotic free scalar, Dirac and gauge fields in that theory are canonically quantized, the Fock spaces of stationary states are constructed and the gravitational limit—mapping the gravitational energy-momentum onto the inertial energy-momentum to account for their observed equality—is introduced. Next the S-matrix in quantum gravity is defined as the gravitational limit of the transition amplitudes of asymptotic in- to out-states in the gauge theory of volume-preserving diffeomorphisms. The so-defined S-matrix relates in- and out-states of observable particles carrying gravitational energy-momentum equal to inertial energy-momentum. Finally, generalized Lehmann-Symanzik-Zimmermann reduction formulae for scalar, Dirac and gauge fields are established which allow us to express S-matrix elements as the gravitational limit of truncated Fourier-transformed vacuum expectation values of time-ordered products of field operators of the interacting theory. Together with the generating functional of the latter established in Wiesendanger (2011 arXiv:1103.1012) any transition amplitude can in principle be computed consistently to any order in perturbative quantum gravity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karmakar, Partha; Das, Pradip Kumar; Mondal, Seema Sarkar
2010-10-26
Pb pollution from automobile exhausts around highways is a persistent problem in India. Pb intoxication in the mammalian body is a complex phenomenon which is influenced by agonistic and antagonistic interactions of several other heavy metals and micronutrients. An attempt has been made to study the association between Pb and Zn accumulation in different physiological systems of cattle (n = 200) by application of both canonical correlation and canonical correspondence analyses. Pb was estimated from plasma, liver, bone, muscle, kidney, blood and milk, whereas Zn was measured from all these systems except bone, blood and milk. Both statistical techniques demonstrated that there was a strong association among blood-Pb, liver-Zn, kidney-Zn and muscle-Zn. From these observations, it can be assumed that Zn accumulation in cattle muscle, liver and kidney directs Pb mobilization from those organs, which in turn increases the Pb pool in blood. This indicates an antagonistic activity of Zn toward the accumulation of Pb. Although there were some contradictions between the observations obtained from the two different statistical methods, the overall pattern of Pb accumulation in various organs as influenced by Zn was the same. This is mainly due to the fact that canonical correlation is actually a special type of canonical correspondence analysis in which a linear relationship between the two groups of variables is assumed instead of a Gaussian relationship.
NASA Astrophysics Data System (ADS)
Karmakar, Partha; Das, Pradip Kumar; Mondal, Seema Sarkar; Karmakar, Sougata; Mazumdar, Debasis
2010-10-01
Pb pollution from automobile exhausts around highways is a persistent problem in India. Pb intoxication in the mammalian body is a complex phenomenon which is influenced by agonistic and antagonistic interactions of several other heavy metals and micronutrients. An attempt has been made to study the association between Pb and Zn accumulation in different physiological systems of cattle (n = 200) by application of both canonical correlation and canonical correspondence analyses. Pb was estimated from plasma, liver, bone, muscle, kidney, blood and milk, whereas Zn was measured from all these systems except bone, blood and milk. Both statistical techniques demonstrated that there was a strong association among blood-Pb, liver-Zn, kidney-Zn and muscle-Zn. From these observations, it can be assumed that Zn accumulation in cattle muscle, liver and kidney directs Pb mobilization from those organs, which in turn increases the Pb pool in blood. This indicates an antagonistic activity of Zn toward the accumulation of Pb. Although there were some contradictions between the observations obtained from the two different statistical methods, the overall pattern of Pb accumulation in various organs as influenced by Zn was the same. This is mainly due to the fact that canonical correlation is actually a special type of canonical correspondence analysis in which a linear relationship between the two groups of variables is assumed instead of a Gaussian relationship.
Quantization of Space-like States in Lorentz-Violating Theories
NASA Astrophysics Data System (ADS)
Colladay, Don
2018-01-01
Lorentz violation frequently induces modified dispersion relations that can yield space-like states which impede standard quantization procedures. In certain cases, an extended Hamiltonian formalism can be used to define observer-covariant normalization factors for field expansions and phase space integrals. These factors extend the theory to include non-concordant frames in which there are negative-energy states. This formalism provides a rigorous way to quantize certain theories containing space-like states and allows for the consistent computation of Cherenkov radiation rates in arbitrary frames while avoiding singular expressions.
Correspondence between quantization schemes for two-player nonzero-sum games and CNOT complexity
NASA Astrophysics Data System (ADS)
Vijayakrishnan, V.; Balakrishnan, S.
2018-05-01
The well-known quantization schemes for two-player nonzero-sum games are the Eisert-Wilkens-Lewenstein scheme and the Marinatto-Weber scheme. In this work, we establish the connection between the two schemes from the perspective of quantum circuits. Further, we provide the correspondence between any game quantization scheme and CNOT complexity, where CNOT complexity is defined up to local unitary operations. While CNOT complexity is known to be useful in the analysis of universal quantum circuits, in this work we find its applicability in quantum game theory.
Luminance-model-based DCT quantization for color image compression
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Peterson, Heidi A.
1992-01-01
A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).
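The design idea above—deriving DCT quantization step sizes from a luminance-based visibility-threshold model—can be sketched as follows. A parabola in log spatial frequency is a common shape for such threshold models, but every constant here (t_min, f_min, k, the display resolution) is an invented placeholder, not a value from the paper's measurements.

```python
import numpy as np

def quant_matrix(n=8, pixels_per_degree=32, t_min=0.01, f_min=4.0, k=1.5):
    """Illustrative quantization-matrix design: threshold follows a
    parabola in log spatial frequency; step = twice the threshold.
    All parameter values are made-up placeholders."""
    steps = np.empty((n, n))
    for u in range(n):
        for v in range(n):
            # spatial frequency of DCT basis (u, v) in cycles/degree
            f = pixels_per_degree / (2 * n) * np.hypot(u, v)
            if f == 0:
                f = pixels_per_degree / (2 * n)  # treat DC like the lowest AC term
            log_t = np.log10(t_min) + k * (np.log10(f) - np.log10(f_min)) ** 2
            steps[u, v] = 2 * 10 ** log_t
    return steps

Q = quant_matrix()
# higher-frequency coefficients receive larger steps (coarser quantization)
```

Changing `pixels_per_degree` (i.e., pixel spacing and viewing distance) rescales the frequencies and hence the matrix, which is the adaptability to display conditions the abstract describes.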
Comparison of Penalty Functions for Sparse Canonical Correlation Analysis
Chalise, Prabhakar; Fridley, Brooke L.
2011-01-01
Canonical correlation analysis (CCA) is a widely used multivariate method for assessing the association between two sets of variables. However, when the number of variables far exceeds the number of subjects, such as in the case of large-scale genomic studies, the traditional CCA method is not appropriate. In addition, when the variables are highly correlated, the sample covariance matrices become unstable or undefined. To overcome these two issues, sparse canonical correlation analysis (SCCA) for multiple data sets has been proposed using a Lasso type of penalty. However, these methods do not have direct control over the sparsity of the solution. An additional step that uses the Bayesian Information Criterion (BIC) has also been suggested to further filter out unimportant features. In this paper, a comparison of four penalty functions (Lasso, Elastic-net, SCAD and Hard-threshold) for SCCA, with and without the BIC filtering step, has been carried out using both real and simulated genotypic and mRNA expression data. This study indicates that the SCAD penalty with the BIC filter would be a preferable penalty function for application of SCCA to genomic data. PMID:21984855
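A minimal sketch of the Lasso-penalized flavor of SCCA: alternate power-iteration-style updates of the two canonical weight vectors, soft-thresholding each before renormalizing. This illustrates only the penalization idea, not any of the four algorithms compared in the paper; the penalty value and toy data are invented.

```python
import numpy as np

def soft_threshold(v, t):
    """Lasso-type soft-thresholding: shrink magnitudes by t, clip at zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_cca(X, Y, penalty=5.0, iters=100):
    """Alternating sketch of penalized CCA on the cross-covariance matrix."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    C = Xc.T @ Yc
    v = np.ones(Y.shape[1]) / np.sqrt(Y.shape[1])
    u = np.zeros(X.shape[1])
    for _ in range(iters):
        u = soft_threshold(C @ v, penalty)
        n = np.linalg.norm(u)
        if n > 0:
            u /= n
        v = soft_threshold(C.T @ u, penalty)
        n = np.linalg.norm(v)
        if n > 0:
            v /= n
    return u, v

# toy data: only the first columns of X and Y are correlated
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Y = np.column_stack([X[:, 0] + 0.1 * rng.normal(size=200),
                     rng.normal(size=(200, 3))])
u, v = sparse_cca(X, Y)
```

The soft-thresholding step is what distinguishes this from plain CCA: weights on uninformative variables are shrunk toward zero, giving sparse canonical vectors.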
Application of State Quantization-Based Methods in HEP Particle Transport Simulation
NASA Astrophysics Data System (ADS)
Santi, Lucio; Ponieman, Nicolás; Jun, Soon Yung; Genser, Krzysztof; Elvira, Daniel; Castro, Rodrigo
2017-10-01
Simulation of particle-matter interactions in complex geometries is one of the main tasks in high energy physics (HEP) research. An essential aspect of it is an accurate and efficient particle transportation in a non-uniform magnetic field, which includes the handling of volume crossings within a predefined 3D geometry. Quantized State Systems (QSS) is a family of numerical methods that provides attractive features for particle transportation processes, such as dense output (sequences of polynomial segments changing only according to accuracy-driven discrete events) and lightweight detection and handling of volume crossings (based on simple root-finding of polynomial functions). In this work we present a proof-of-concept performance comparison between a QSS-based standalone numerical solver and an application based on the Geant4 simulation toolkit, with its default Runge-Kutta based adaptive step method. In a case study with a charged particle circulating in a vacuum (with interactions with matter turned off), in a uniform magnetic field, and crossing up to 200 volume boundaries twice per turn, simulation results showed speedups of up to 6 times in favor of QSS, although QSS was 10 times slower in the case with zero volume boundaries.
Optimal Compression Methods for Floating-point Format Images
NASA Technical Reports Server (NTRS)
Pence, W. D.; White, R. L.; Seaman, R.
2009-01-01
We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2 - 1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced, however all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel-values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
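The core quantization step described above—scaling floats to integers with subtractive dithering, so that the dither can be removed again on restore—can be sketched as follows. This is a toy version of the idea only: the FITS tiling and Rice entropy-coding stages are omitted, and the parameter names are illustrative.

```python
import numpy as np

def quantize_with_dither(pixels, q_step, seed=0):
    """Scale float pixels by a quantization step and floor to integers
    after adding a uniform dither; return integers and the dither used."""
    rng = np.random.default_rng(seed)
    dither = rng.random(pixels.shape)              # uniform in [0, 1)
    ints = np.floor(pixels / q_step + dither).astype(np.int32)
    return ints, dither

def dequantize(ints, dither, q_step):
    """Subtract the same dither on restore (subtractive dithering)."""
    return (ints - dither + 0.5) * q_step

pixels = np.random.default_rng(1).normal(100.0, 5.0, (64, 64))
ints, dither = quantize_with_dither(pixels, q_step=0.5)
restored = dequantize(ints, dither, q_step=0.5)
# per-pixel error is bounded by half a quantization step
```

In practice the dither must be reproducible from metadata (e.g., a stored seed) rather than shipped alongside the data; randomizing the quantization in this way decorrelates the rounding error from the signal, which is why photometric precision improves without adding noise.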
JND measurements of the speech formants parameters and its implication in the LPC pole quantization
NASA Astrophysics Data System (ADS)
Orgad, Yaakov
1988-08-01
The inherent sensitivity of auditory perception is explicitly used with the objective of designing an efficient speech encoder. Speech can be modelled by a filter representing the vocal tract shape that is driven by an excitation signal representing glottal air flow. This work concentrates on the filter encoding problem, assuming that excitation signal encoding is optimal. Linear predictive coding (LPC) techniques were used to model a short speech segment by an all-pole filter; each pole was directly related to the speech formants. Measurements were made of the auditory just noticeable difference (JND) corresponding to the natural speech formants, with the LPC filter poles as the best candidates to represent the speech spectral envelope. The JND is the maximum precision required in speech quantization; it was defined on the basis of the shift of one pole parameter of a single frame of a speech segment, necessary to induce subjective perception of the distortion, with 0.75 probability. The average JND in LPC filter poles in natural speech was found to increase with increasing pole bandwidth and, to a lesser extent, frequency. The JND measurements showed a large spread of the residuals around the average values, indicating that inter-formant coupling and, perhaps, other not yet fully understood factors were not taken into account at this stage of the research. A future treatment should consider these factors. The average JNDs obtained in this work were used to design pole quantization tables for speech coding and provided a better bit rate than the standard reflection-coefficient quantizer; a 30-bits-per-frame pole quantizer yielded a speech quality similar to that obtained with a standard 41-bits-per-frame reflection coefficient quantizer. Owing to the complexity of the numerical root extraction system, the practical implementation of the pole quantization approach remains to be proved.
On the Perturbative Equivalence Between the Hamiltonian and Lagrangian Quantizations
NASA Astrophysics Data System (ADS)
Batalin, I. A.; Tyutin, I. V.
The Hamiltonian (BFV) and Lagrangian (BV) quantization schemes are proved to be perturbatively equivalent to each other. It is shown in particular that the quantum master equation being treated perturbatively possesses a local formal solution.
Fill-in binary loop pulse-torque quantizer
NASA Technical Reports Server (NTRS)
Lory, C. B.
1975-01-01
Fill-in binary (FIB) loop provides constant heating of torque generator, an advantage of binary current switching. At the same time, it avoids mode-related dead zone and data delay of binary, an advantage of ternary quantization.
Theory of quantized systems: formal basis for DEVS/HLA distributed simulation environment
NASA Astrophysics Data System (ADS)
Zeigler, Bernard P.; Lee, J. S.
1998-08-01
In the context of a DARPA ASTT project, we are developing an HLA-compliant distributed simulation environment based on the DEVS formalism. This environment will provide a user- friendly, high-level tool-set for developing interoperable discrete and continuous simulation models. One application is the study of contract-based predictive filtering. This paper presents a new approach to predictive filtering based on a process called 'quantization' to reduce state update transmission. Quantization, which generates state updates only at quantum level crossings, abstracts a sender model into a DEVS representation. This affords an alternative, efficient approach to embedding continuous models within distributed discrete event simulations. Applications of quantization to message traffic reduction are discussed. The theory has been validated by DEVSJAVA simulations of test cases. It will be subject to further test in actual distributed simulations using the DEVS/HLA modeling and simulation environment.
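The quantization principle described above—generate a state update only when the state crosses a quantum level—can be illustrated with a toy integrator. This sketch uses a plain fixed-step Euler loop internally, which is not how QSS/DEVS solvers actually integrate; it only demonstrates the update-reduction idea, and all parameter values are invented.

```python
def quantized_updates(f, x0, quantum, t_end, dt=1e-3):
    """Integrate dx/dt = f(x) with a fine-grained Euler loop, but record
    ("transmit") the state only when it has moved a full quantum away
    from the last transmitted value."""
    x, t, last_sent = x0, 0.0, x0
    updates = [(0.0, x0)]
    while t < t_end:
        x += f(x) * dt                     # internal fine-grained step
        t += dt
        if abs(x - last_sent) >= quantum:  # quantum level crossing
            last_sent = x
            updates.append((t, x))
    return updates

# exponential decay: thousands of Euler steps, only a handful of updates
updates = quantized_updates(lambda x: -x, x0=1.0, quantum=0.1, t_end=5.0)
```

The receiver sees a piecewise-constant (or, in full DEVS quantization, piecewise-polynomial) abstraction of the sender, which is exactly the message-traffic reduction discussed in the abstract.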
Locally adaptive vector quantization: Data compression with feature preservation
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Sayano, M.
1992-01-01
A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression, is fully adaptable to any data source, and does not require a priori knowledge of the source statistics. Therefore, LAVQ is a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
Necessary conditions for the optimality of variable rate residual vector quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. This competitive performance results from the joint optimization of variable-rate encoding and RVQ direct-sum code books. In this paper, necessary conditions for the optimality of variable-rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
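The direct-sum structure of RVQ is easy to sketch: each stage quantizes the residual left by the previous stages, and the reproduction is the sum of the selected codewords. The codebooks below are invented toy values; the entropy-constrained design procedure itself is not shown.

```python
import numpy as np

def rvq_encode(x, stage_codebooks):
    """Residual VQ encoding: each stage quantizes what the previous
    stages left over; the reproduction is the direct sum of codewords."""
    x = np.asarray(x, dtype=float)
    reproduction = np.zeros_like(x)
    indices = []
    for cb in stage_codebooks:
        residual = x - reproduction            # what remains to encode
        d = np.linalg.norm(cb - residual, axis=1)
        k = int(d.argmin())                    # nearest codeword this stage
        indices.append(k)
        reproduction = reproduction + cb[k]
    return indices, reproduction

# invented toy codebooks: a coarse first stage, a scaled-down second stage
stage1 = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0], [0.0, -1.0]])
stage2 = 0.25 * stage1
idx, rep = rvq_encode([0.9, 1.2], [stage1, stage2])
```

Each stage needs only a small codebook, so search cost grows additively rather than exponentially with rate; the variable-rate aspect of the paper would then entropy-code the per-stage indices.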
More on quantum groups from the quantization point of view
NASA Astrophysics Data System (ADS)
Jurčo, Branislav
1994-12-01
Star products on the classical double group of a simple Lie group and on corresponding symplectic groupoids are given so that the quantum double and the “quantized tangent bundle” are obtained in the deformation description. “Complex” quantum groups and bicovariant quantum Lie algebras are discussed from this point of view. Further we discuss the quantization of the Poisson structure on the symmetric algebra S(g) leading to the quantized enveloping algebra U h (g) as an example of biquantization in the sense of Turaev. Description of U h (g) in terms of the generators of the bicovariant differential calculus on F(G q ) is very convenient for this purpose. Finally we interpret in the deformation framework some well known properties of compact quantum groups as simple consequences of corresponding properties of classical compact Lie groups. An analogue of the classical Kirillov universal character formula is given for the unitary irreducible representations in the compact case.
Quantization of gauge fields, graph polynomials and graph homology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kreimer, Dirk, E-mail: kreimer@physik.hu-berlin.de; Sars, Matthias; Suijlekom, Walter D. van
2013-09-15
We review the quantization of gauge fields using algebraic properties of 3-regular graphs. We derive the Feynman integrand at n loops for a non-abelian gauge theory quantized in a covariant gauge from scalar integrands for connected 3-regular graphs, obtained from the two Symanzik polynomials. The transition to the full gauge theory amplitude is obtained by the use of a third, new, graph polynomial, the corolla polynomial. This implies effectively a covariant quantization without ghosts, where all the relevant signs of the ghost sector are incorporated in a double complex furnished by the corolla polynomial (which we call cycle homology) and by graph homology. Highlights: We derive gauge theory Feynman rules from scalar field theory with 3-valent vertices. We clarify the role of graph homology and cycle homology. We use parametric renormalization and the new corolla polynomial.
Augmenting Phase Space Quantization to Introduce Additional Physical Effects
NASA Astrophysics Data System (ADS)
Robbins, Matthew P. G.
Quantum mechanics can be done using classical phase space functions and a star product. The state of the system is described by a quasi-probability distribution. A classical system can be quantized in phase space in different ways with different quasi-probability distributions and star products. A transition differential operator relates different phase space quantizations. The objective of this thesis is to introduce additional physical effects into the process of quantization by using the transition operator. As prototypical examples, we first look at the coarse-graining of the Wigner function and the damped simple harmonic oscillator. By generalizing the transition operator and star product to also be functions of the position and momentum, we show that additional physical features beyond damping and coarse-graining can be introduced into a quantum system, including the generalized uncertainty principle of quantum gravity phenomenology, driving forces, and decoherence.
Rakkiyappan, R; Maheswari, K; Velmurugan, G; Park, Ju H
2018-05-17
This paper investigates the H∞ state estimation problem for a class of semi-Markovian jumping discrete-time neural network models with an event-triggered scheme and quantization. First, a new event-triggered communication scheme is introduced to determine whether or not the current sampled sensor data should be broadcast and transmitted to the quantizer, which saves limited communication resources. Second, a novel communication framework is employed by the logarithmic quantizer that quantifies and reduces the data transmission rate in the network, which apparently improves the communication efficiency of networks. Third, a stabilization criterion is derived based on a sufficient condition which guarantees a prescribed H∞ performance level in the estimation error system in terms of linear matrix inequalities. Finally, numerical simulations are given to illustrate the correctness of the proposed scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.
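A logarithmic quantizer of the kind referred to above maps a value to the nearest level of a geometric ladder u0·ρ^i, so quantization is coarse for large magnitudes and fine near zero. The sketch below is a generic textbook form with invented parameters, not the specific quantizer of the paper.

```python
import numpy as np

def log_quantize(x, rho=0.5, u0=1.0):
    """Map x to the nearest level of the geometric ladder u0 * rho**i
    (nearest in log scale), preserving sign; zero maps to zero."""
    if x == 0:
        return 0.0
    i = round(np.log(abs(x) / u0) / np.log(rho))
    return float(np.sign(x) * u0 * rho ** i)
```

Because the levels are geometrically spaced, the relative quantization error is bounded by a constant depending only on ρ, which is what makes such quantizers amenable to the sector-bound (LMI) analysis used in networked estimation.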
Fine structure constant and quantized optical transparency of plasmonic nanoarrays.
Kravets, V G; Schedin, F; Grigorenko, A N
2012-01-24
Optics is renowned for displaying quantum phenomena. Indeed, studies of emission and absorption lines, the photoelectric effect and blackbody radiation helped to build the foundations of quantum mechanics. Nevertheless, it came as a surprise that the visible transparency of suspended graphene is determined solely by the fine structure constant, as this kind of universality had been previously reserved only for quantized resistance and flux quanta in superconductors. Here we describe a plasmonic system in which relative optical transparency is determined solely by the fine structure constant. The system consists of a regular array of gold nanoparticles fabricated on a thin metallic sublayer. We show that its relative transparency can be quantized in the near-infrared, which we attribute to the quantized contact resistance between the nanoparticles and the metallic sublayer. Our results open new possibilities in the exploration of universal dynamic conductance in plasmonic nanooptics.
NASA Astrophysics Data System (ADS)
Inoue, Makoto
2017-12-01
Some new formulae for the canonical correlation functions of the one-dimensional quantum transverse Ising model are found by the ST-transformation method, using Morita's sum rule and its extensions for the two-dimensional classical Ising model. As a consequence we obtain a time-independent term of the dynamical correlation functions. Differences between the quantum and classical versions of these formulae are also discussed.
Image Classification of Ribbed Smoked Sheet using Learning Vector Quantization
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Pulungan, A. F.; Faza, S.; Budiarto, R.
2017-01-01
Natural rubber is an important export commodity in Indonesia, which can be a major contributor to national economic development. One type of rubber exported as raw material is Ribbed Smoked Sheet (RSS). The quantity of RSS exports depends on the quality of the RSS. RSS rubber quality is specified in SNI 06-001-1987 and the International Standards of Quality and Packing for Natural Rubber Grades (The Green Book). The determination of RSS quality is also known as the sorting process. In rubber factories, the sorting process is still done manually, by visually inspecting the level of air bubbles on the surface of the rubber sheet, so the result is subjective and unreliable. Therefore, a method is required to classify RSS rubber automatically and precisely. We propose image processing techniques for pre-processing, a zoning method for feature extraction, and the Learning Vector Quantization (LVQ) method for classifying RSS rubber into two grades, namely RSS1 and RSS3. We used 120 RSS images as the training dataset and 60 RSS images as the testing dataset. The results show that our proposed method gives 89% accuracy, with the best performance reached at the fifteenth epoch.
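The LVQ classification step can be sketched with the classic LVQ1 update rule: the nearest prototype is pulled toward a training sample of its own class and pushed away otherwise. The two-dimensional toy features standing in for RSS1/RSS3 images, the learning rate, and the prototype initialization are all invented for illustration.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=15):
    """LVQ1: move the winning prototype toward a sample of its own class,
    away from a sample of a different class."""
    P = np.array(prototypes, dtype=float)
    for _ in range(epochs):
        for x, label in zip(X, y):
            j = int(np.linalg.norm(P - x, axis=1).argmin())   # winner
            step = lr if proto_labels[j] == label else -lr
            P[j] += step * (x - P[j])
    return P

def lvq1_predict(x, P, proto_labels):
    """Classify by the label of the nearest prototype."""
    return proto_labels[int(np.linalg.norm(P - x, axis=1).argmin())]

# two well-separated toy clusters standing in for RSS1 / RSS3 features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(2.0, 0.3, (30, 2))])
y = [0] * 30 + [1] * 30
P = lvq1_train(X, y, [[0.5, 0.5], [1.5, 1.5]], [0, 1])
```

In the paper's pipeline the feature vectors would come from the zoning step applied to RSS images rather than from a Gaussian toy distribution.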
Bfv Quantization of Relativistic Spinning Particles with a Single Bosonic Constraint
NASA Astrophysics Data System (ADS)
Rabello, Silvio J.; Vaidya, Arvind N.
Using the BFV approach we quantize a pseudoclassical model of the spin-1/2 relativistic particle that contains a single bosonic constraint, contrary to the usual locally supersymmetric models that display first and second class constraints.
Quantized Step-up Model for Evaluation of Internship in Teaching of Prospective Science Teachers.
ERIC Educational Resources Information Center
Sindhu, R. S.
2002-01-01
Describes the quantized step-up model developed for the evaluation purposes of internship in teaching which is an analogous model of the atomic structure. Assesses prospective teachers' abilities in lesson delivery. (YDS)
Minimum uncertainty and squeezing in diffusion processes and stochastic quantization
NASA Technical Reports Server (NTRS)
Demartino, S.; Desiena, S.; Illuminati, Fabrizo; Vitiello, Giuseppe
1994-01-01
We show that uncertainty relations, as well as minimum uncertainty coherent and squeezed states, are structural properties for diffusion processes. Through Nelson stochastic quantization we derive the stochastic image of the quantum mechanical coherent and squeezed states.
A consistent covariant quantization of the Brink-Schwarz superparticle
NASA Astrophysics Data System (ADS)
Eisenberg, Yeshayahu
1992-02-01
We perform the covariant quantization of the ten-dimensional Brink-Schwarz superparticle by reducing it to a system whose constraints are all first class, covariant and have only two levels of reducibility. Research supported by the Rothschild Fellowship.
Fabrication of Subnanometer-Precision Nanopores in Hexagonal Boron Nitride
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, S. Matt; Dunn, Gabriel; Azizi, Amin
Here, we demonstrate the fabrication of individual nanopores in hexagonal boron nitride (h-BN) with atomically precise control of the pore shape and size. Previous methods of pore production in other 2D materials typically create pores with irregular geometry and imprecise diameters. In contrast, other studies have shown that with careful control of electron irradiation, defects in h-BN grow with pristine zig-zag edges at quantized triangular sizes, but they have failed to demonstrate production and control of isolated defects. In this work, we combine these techniques to yield a method in which we can create individual size-quantized triangular nanopores through an h-BN sheet. The pores are created using the electron beam of a conventional transmission electron microscope, which can strip away multiple layers of h-BN exposing single-layer regions, introduce single vacancies, and preferentially grow vacancies only in the single-layer region. We further demonstrate how the geometry of these pores can be altered beyond triangular by changing beam conditions. Precisely size- and geometry-tuned nanopores could find application in molecular sensing, DNA sequencing, water desalination, and molecular separation.
Scale-Resolving simulations (SRS): How much resolution do we really need?
NASA Astrophysics Data System (ADS)
Pereira, Filipe M. S.; Girimaji, Sharath
2017-11-01
Scale-resolving simulations (SRS) are emerging as the computational approach of choice for many engineering flows with coherent structures. The SRS methods seek to resolve only the most important features of the coherent structures and model the remainder of the flow field with canonical closures. With reference to a typical Large-Eddy Simulation (LES), practical SRS methods aim to resolve a considerably narrower range of scales (reduced physical resolution) to achieve an adequate degree of accuracy at reasonable computational effort. While the objective of SRS is well-founded, the criteria for establishing the optimal degree of resolution required to achieve an acceptable level of accuracy are not clear. This study considers the canonical case of the flow around a circular cylinder to address the issue of `optimal' resolution. Two important criteria are developed. The first condition addresses the issue of adequate resolution of the flow field. The second guideline provides an assessment of whether the modeled field is canonical (stochastic) turbulence amenable to closure-based computations.
Toward a perceptual image quality assessment of color quantized images
NASA Astrophysics Data System (ADS)
Frackiewicz, Mariusz; Palus, Henryk
2018-04-01
Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for the assessment of quantized images. These metrics, e.g. DSCSI, MDSIs, MDSIm, and HPSI, achieve the highest correlation coefficients with MOS during tests on six publicly available image databases. Research was limited to images distorted by two types of compression: JPEG and JPEG 2000. Statistical analysis of the correlation coefficients, based on the Friedman test and post-hoc procedures, showed that the differences between the four new perceptual metrics are not statistically significant.
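As an aside for readers unfamiliar with the statistical tool mentioned above, the Friedman test ranks the competing metrics within each test image and compares the column rank sums. A minimal pure-Python sketch with made-up placeholder scores (not the paper's data):

```python
# Illustrative Friedman test over metric scores. Rows are test images,
# columns are the four metrics being compared.

def average_ranks(row):
    """1-based ranks of a row, averaging ties."""
    order = sorted(range(len(row)), key=lambda i: row[i])
    r = [0.0] * len(row)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and row[order[j + 1]] == row[order[i]]:
            j += 1                      # extend over a run of tied values
        avg = (i + j) / 2 + 1           # mean of positions i..j, 1-based
        for t in range(i, j + 1):
            r[order[t]] = avg
        i = j + 1
    return r

def friedman_statistic(scores):
    """Friedman chi-square statistic for n subjects x k treatments."""
    n, k = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for row in scores:
        for j, rk in enumerate(average_ranks(row)):
            rank_sums[j] += rk
    return 12.0 / (n * k * (k + 1)) * sum(R * R for R in rank_sums) - 3.0 * n * (k + 1)

scores = [                      # hypothetical per-image metric scores
    [0.91, 0.89, 0.90, 0.93],
    [0.85, 0.88, 0.84, 0.90],
    [0.78, 0.80, 0.79, 0.86],
    [0.88, 0.86, 0.87, 0.91],
    [0.90, 0.92, 0.89, 0.94],
]
chi2 = friedman_statistic(scores)   # compare against a chi-square(k-1) critical value
```

The statistic is then compared against a chi-square distribution with k-1 degrees of freedom, followed by post-hoc pairwise procedures as in the paper.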
Constraints on operator ordering from third quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohkuwa, Yoshiaki; Faizal, Mir, E-mail: f2mir@uwaterloo.ca; Ezawa, Yasuo
2016-02-15
In this paper, we analyse the Wheeler–DeWitt equation in the third quantized formalism. We will demonstrate that for certain operator ordering, the early stages of the universe are dominated by quantum fluctuations, and the universe becomes classical at later stages during the cosmic expansion. This is physically expected, if the universe is formed from quantum fluctuations in the third quantized formalism. So, we will argue that this physical requirement can be used to constrain the form of the operator ordering chosen. We will explicitly demonstrate this to be the case for two different cosmological models.
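For orientation, in simple minisuperspace models the operator-ordering ambiguity of the Wheeler-DeWitt equation is often parametrized by a single constant; a common illustrative form (a generic textbook parametrization, not necessarily the one used by the authors) is

```latex
\left[ \frac{1}{a^{p}} \frac{\partial}{\partial a}\!\left( a^{p} \frac{\partial}{\partial a} \right) - U(a) \right] \psi(a)
  \;=\; \left[ \frac{\partial^{2}}{\partial a^{2}} + \frac{p}{a}\,\frac{\partial}{\partial a} - U(a) \right] \psi(a) \;=\; 0 ,
```

where $a$ is the scale factor, $U(a)$ collects the potential terms of the cosmological model, and the parameter $p$ labels the ordering choice being constrained.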
Information efficiency in visual communication
NASA Astrophysics Data System (ADS)
Alter-Gartenberg, Rachel; Rahman, Zia-ur
1993-08-01
This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.
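The contrast drawn above between energy and information bit-allocation can be made concrete with the classical variance-driven allocation rule, which assigns each frequency band a bit budget proportional to the log of its energy (variance). This is the energy-style baseline the paper argues against, sketched here for illustration only:

```python
import math

def variance_bit_allocation(band_vars, total_bits):
    """Classic energy-style bit allocation across frequency bands:
    b_k = B/N + 0.5 * log2(var_k / geometric_mean(vars)).
    Negative allocations are left unclamped for clarity."""
    n = len(band_vars)
    gmean = math.exp(sum(math.log(v) for v in band_vars) / n)
    return [total_bits / n + 0.5 * math.log2(v / gmean) for v in band_vars]

# Example: a high-energy band receives more bits than a quiet one.
alloc = variance_bit_allocation([4.0, 1.0], 4.0)   # approx. [2.5, 1.5]
```

An information-based allocation would instead weight each band by its contribution to the channel's information capacity, as the abstract advocates.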
Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Daud, Taher
1998-01-01
In this paper, we reinvestigate the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences never repeat; rather, they lie in a chaotic region. However, past, present, and future data are correlated in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high-order correlation between past and present data and predict future data under limited weight quantization constraints. This helps predict future information in time to provide better estimates for intelligent control systems. In our earlier work, it was shown that CEP can learn the 5-8 bit parity problem with 4 or more bits of weight quantization, and the color segmentation problem with 7 or more bits. In this paper, we demonstrate that chaotic time series can be learned and generalized well with as few as 4 bits of weight quantization using round-off and truncation techniques. The results show that generalization suffers less as more bits of weight quantization become available, and that error surfaces with the round-off technique are more symmetric around zero than those with the truncation technique. This study suggests that CEP is an implementable learning technique for hardware consideration.
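The round-off and truncation weight-quantization schemes compared above can be sketched as a simple fixed-point quantizer. The function below is an illustrative guess at the setup, not the CEP implementation; note how truncation always biases toward zero while rounding errs symmetrically, matching the error-surface observation in the abstract:

```python
import math

def quantize_weight(w, bits, mode="round"):
    """Quantize a weight in [-1, 1) to a signed fixed-point value with
    `bits` total bits (illustrative sketch, not CEP's code)."""
    levels = 2 ** (bits - 1)
    x = w * levels
    q = round(x) if mode == "round" else math.trunc(x)
    q = max(-levels, min(levels - 1, q))      # clamp to representable range
    return q / levels

# Truncation moves toward zero; rounding errs in both directions.
assert quantize_weight(0.37, 4, "round") == 0.375
assert quantize_weight(0.37, 4, "trunc") == 0.25
```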
Group theoretical quantization of isotropic loop cosmology
NASA Astrophysics Data System (ADS)
Livine, Etera R.; Martín-Benito, Mercedes
2012-06-01
We achieve a group theoretical quantization of the flat Friedmann-Robertson-Walker model coupled to a massless scalar field adopting the improved dynamics of loop quantum cosmology. Deparametrizing the system using the scalar field as internal time, we first identify a complete set of phase space observables whose Poisson algebra is isomorphic to the su(1,1) Lie algebra. It is generated by the volume observable and the Hamiltonian. These observables describe faithfully the regularized phase space underlying the loop quantization: they account for the polymerization of the variable conjugate to the volume and for the existence of a kinematical nonvanishing minimum volume. Since the Hamiltonian is an element in the su(1,1) Lie algebra, the dynamics is now implemented as SU(1,1) transformations. At the quantum level, the system is quantized as a timelike irreducible representation of the group SU(1,1). These representations are labeled by a half-integer spin, which gives the minimal volume. They provide superselection sectors without quantization anomalies, and no factor ordering ambiguity arises when representing the Hamiltonian. We then explicitly construct SU(1,1) coherent states to study the quantum evolution. They provide not only semiclassical states but truly dynamical coherent states. Their use further clarifies the nature of the bounce that resolves the big bang singularity.
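For reference, the su(1,1) Lie algebra invoked above is defined by the commutation relations

```latex
[K_{0}, K_{\pm}] = \pm K_{\pm}, \qquad [K_{+}, K_{-}] = -2 K_{0},
```

with Casimir operator $\mathcal{C} = K_{0}^{2} - \tfrac{1}{2}\left(K_{+}K_{-} + K_{-}K_{+}\right)$, whose value in an irreducible representation is fixed by the half-integer spin label mentioned in the abstract.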
Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Sang, Xinzhu; Wang, Kuiru; Wu, Qiang; Yan, Binbin; Li, Feng; Zhou, Xian; Zhong, Kangping; Zhou, Guiyao; Yu, Chongxiu; Farrell, Gerald; Lu, Chao; Yaw Tam, Hwa; Wai, P. K. A.
2016-01-01
High-performance all-optical quantizers based on silicon waveguides are believed to have significant applications in integrated photonic optical communication links, optical interconnection networks, and real-time signal processing systems. In this paper, we propose an integrable all-optical quantizer for on-chip, low-power-consumption all-optical analog-to-digital converters. The quantization is realized by strong cross-phase modulation and interference in a silicon-organic hybrid (SOH) slot-waveguide-based Mach-Zehnder interferometer. By carefully designing the dimensions of the SOH waveguide, large nonlinear coefficients up to 16,000 and 18,069 W⁻¹/m for the pump and probe signals can be obtained, respectively, along with a low pulse walk-off parameter of 66.7 fs/mm and all-normal dispersion in the wavelength regime considered. Simulation results show that the phase shift of the probe signal can reach 8π at a low pump-pulse peak power of 206 mW and a propagation length of 5 mm, such that a 4-bit all-optical quantizer can be realized. The corresponding signal-to-noise ratio is 23.42 dB and the effective number of bits is 3.89. PMID:26777054
Feedback linearization of singularly perturbed systems based on canonical similarity transformations
NASA Astrophysics Data System (ADS)
Kabanov, A. A.
2018-05-01
This paper discusses the problem of feedback linearization of a singularly perturbed system in state-dependent coefficient form. The result is based on the introduction of a canonical similarity transformation. The transformation matrix is constructed from separate blocks for the fast and slow parts of the original singularly perturbed system. The transformed singularly perturbed system has a linear canonical form that significantly simplifies the control design problem. The proposed similarity transformation allows linearization of the system without introducing a virtual output (as is needed for the normal form method), and the transition from the phase coordinates of the transformed system back to the state variables of the original system is simpler. The application of the proposed approach is illustrated through an example.
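To illustrate the target structure only (not the paper's singularly perturbed construction), a 2x2 system x' = Ax + Bu can be brought to controllable (companion) canonical form directly from its characteristic polynomial s² + a₁s + a₀, where a₁ = -trace(A) and a₀ = det(A):

```python
def companion_form_2x2(A):
    """Controllable (companion) canonical form of a 2x2 system x' = A x + B u,
    assuming the pair (A, B) is controllable. Built from the characteristic
    polynomial s^2 + a1*s + a0 with a1 = -trace(A), a0 = det(A)."""
    a1 = -(A[0][0] + A[1][1])                      # -trace(A)
    a0 = A[0][0] * A[1][1] - A[0][1] * A[1][0]     # det(A)
    Ac = [[0.0, 1.0],
          [-a0, -a1]]
    Bc = [[0.0],
          [1.0]]
    return Ac, Bc

Ac, Bc = companion_form_2x2([[1.0, 2.0], [3.0, 4.0]])
# Both A and Ac share the characteristic polynomial s^2 - 5s - 2.
```

In this form the last row of Ac holds the negated polynomial coefficients, which is what makes pole-placement and tracking-control design straightforward.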
Servo-hydraulic actuator in controllable canonical form: Identification and experimental validation
NASA Astrophysics Data System (ADS)
Maghareh, Amin; Silva, Christian E.; Dyke, Shirley J.
2018-02-01
Hydraulic actuators have been widely used to experimentally examine structural behavior at multiple scales. Real-time hybrid simulation (RTHS) is one innovative testing method that largely relies on such servo-hydraulic actuators. In RTHS, interface conditions must be enforced in real time, and controllers are often used to achieve tracking of the desired displacements. Thus, neglecting the dynamics of the hydraulic transfer system may result in either system instability or sub-optimal performance. Herein, we propose a nonlinear dynamical model for a servo-hydraulic actuator (a.k.a. hydraulic transfer system) coupled with a nonlinear physical specimen. The nonlinear dynamical model is transformed into controllable canonical form for further tracking-control design purposes. Through a number of experiments, the controllable canonical model is validated.
An Off-Grid Turbo Channel Estimation Algorithm for Millimeter Wave Communications.
Han, Lingyi; Peng, Yuexing; Wang, Peng; Li, Yonghui
2016-09-22
The bandwidth shortage has motivated the exploration of the millimeter wave (mmWave) frequency spectrum for future communication networks. To compensate for the severe propagation attenuation in the mmWave band, massive antenna arrays can be adopted at both the transmitter and receiver to provide large array gains via directional beamforming. To achieve such array gains, channel estimation (CE) with high resolution and low latency is of great importance for mmWave communications. However, classic super-resolution subspace CE methods such as multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) cannot be applied here due to RF chain constraints. In this paper, an enhanced CE algorithm is developed for the off-grid problem that arises when quantizing the angles of the mmWave channel in the spatial domain: with high probability the angles do not lie on the quantization grid, which results in power leakage and severe degradation of CE performance. A new model is first proposed to formulate the off-grid problem. The model divides the continuously distributed angle into a quantized discrete grid part, referred to as the integral grid angle, and an offset part, termed the fractional off-grid angle. Accordingly, an iterative off-grid turbo CE (IOTCE) algorithm is proposed to renew and upgrade the CE between the integral grid part and the fractional off-grid part under the turbo principle. By fully exploiting the sparse structure of mmWave channels, the integral grid part is estimated by a soft-decoding-based compressed sensing (CS) method called improved turbo compressed channel sensing (ITCCS). It iteratively updates the soft information between the linear minimum mean square error (LMMSE) estimator and the sparsity combiner.
Monte Carlo simulations are presented to evaluate the performance of the proposed method, and the results show that it enhances the angle detection resolution greatly.
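The grid/off-grid decomposition described above can be illustrated in a few lines: an angle is split into the nearest grid index (the integral part) and the residual offset (the fractional part). This sketches the model only, not the IOTCE algorithm:

```python
import math

def split_angle(theta, grid_size):
    """Split an angle into its nearest-grid index ("integral grid angle")
    and residual offset ("fractional off-grid angle") for a uniform grid
    of `grid_size` points over [0, pi). Model illustration only."""
    step = math.pi / grid_size
    idx = round(theta / step)
    offset = theta - idx * step      # the part that leaks power if ignored
    return idx, offset

step = math.pi / 8
idx, offset = split_angle(3 * step + 0.01, grid_size=8)   # idx = 3, offset close to 0.01
```

Estimating `idx` with a sparse-recovery method and then refining `offset` iteratively is, in outline, the turbo alternation the abstract describes.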
Distributed Coding of Compressively Sensed Sources
NASA Astrophysics Data System (ADS)
Goukhshtein, Maxim
In this work we propose a new method for compressing multiple correlated sources with a very low-complexity encoder in the presence of side information. Our approach uses ideas from compressed sensing and distributed source coding. At the encoder, syndromes of the quantized compressively sensed sources are generated and transmitted. The decoder uses side information to predict the compressed sources. The predictions are then used to recover the quantized measurements via a two-stage decoding process consisting of bitplane prediction and syndrome decoding. Finally, guided by the structure of the sources and the side information, the sources are reconstructed from the recovered measurements. As a motivating example, we consider the compression of multispectral images acquired on board satellites, where resources, such as computational power and memory, are scarce. Our experimental results exhibit a significant improvement in the rate-distortion trade-off when compared against approaches with similar encoder complexity.
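The encoder front end described above, compressive measurement followed by quantization, can be sketched as follows. The Gaussian sensing matrix and quantization step are illustrative assumptions; syndrome generation and the two-stage decoding are omitted:

```python
import random

def measure_and_quantize(x, num_meas, step, seed=0):
    """Low-complexity encoder sketch: random Gaussian compressive
    measurements of a source vector, then uniform scalar quantization
    with the given step. Sensing matrix and step are illustrative."""
    rng = random.Random(seed)                     # fixed seed: decoder can regenerate phi
    phi = [[rng.gauss(0.0, 1.0) for _ in x] for _ in range(num_meas)]
    y = [sum(p * v for p, v in zip(row, x)) for row in phi]
    return [round(v / step) for v in y]

q = measure_and_quantize([1.0, 0.0, -1.0], num_meas=4, step=0.5)
```

In the scheme above, only syndromes of these quantized measurements would be transmitted, with the decoder's side-information predictions standing in for the missing bits.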
[Theoretical model study about the application risk of high risk medical equipment].
Shang, Changhao; Yang, Fenghui
2014-11-01
To establish a theoretical model for monitoring the application risk of high-risk medical equipment at the site of use. The application site is regarded as a system composed of sub-systems, each of which consists of several risk-estimating indicators. After each indicator is quantized, the quantized values are multiplied by their corresponding weights and summed, yielding the risk-estimating value of each sub-system. Following the same calculation, the sub-system risk values are multiplied by their corresponding weights and summed. The cumulative sum is the status indicator of the high-risk medical equipment at the application site, and it reflects the equipment's application risk. A theoretical risk-monitoring model for high-risk medical equipment at the application site is thus established. The model can dynamically and specifically monitor the application risk of high-risk medical equipment at the site of use.
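The two-level weighted-sum computation described above can be sketched directly; the weights and quantized indicator values below are illustrative placeholders:

```python
def risk_score(subsystems):
    """Two-level weighted sum: each sub-system aggregates its weighted,
    quantized indicator values; the site-level status indicator then
    aggregates the weighted sub-system scores."""
    total = 0.0
    for sub_weight, indicators in subsystems:
        sub_score = sum(w * v for w, v in indicators)   # sub-system risk value
        total += sub_weight * sub_score                  # roll up to site level
    return total

# Two sub-systems with weighted quantized indicators (made-up numbers):
status = risk_score([
    (0.6, [(0.5, 3.0), (0.5, 1.0)]),   # sub-system 1 score: 2.0
    (0.4, [(1.0, 4.0)]),               # sub-system 2 score: 4.0
])                                     # status close to 2.8
```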
Study of communications data compression methods
NASA Technical Reports Server (NTRS)
Jones, H. W.
1978-01-01
A simple monochrome conditional replenishment system was extended to higher compression and to higher motion levels, by incorporating spatially adaptive quantizers and field repeating. Conditional replenishment combines intraframe and interframe compression, and both areas are investigated. The gain of conditional replenishment depends on the fraction of the image changing, since only changed parts of the image need to be transmitted. If the transmission rate is set so that only one fourth of the image can be transmitted in each field, greater change fractions will overload the system. A computer simulation was prepared which incorporated (1) field repeat of changes, (2) a variable change threshold, (3) frame repeat for high change, and (4) two mode, variable rate Hadamard intraframe quantizers. The field repeat gives 2:1 compression in moving areas without noticeable degradation. Variable change threshold allows some flexibility in dealing with varying change rates, but the threshold variation must be limited for acceptable performance.
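The change-detection logic of conditional replenishment with a fixed transmission budget can be sketched as follows. This is an illustrative per-sample version; the simulation above operates on image fields with Hadamard-coded blocks:

```python
def replenish(prev, curr, threshold, budget):
    """Conditional replenishment sketch: update only entries whose change
    exceeds `threshold`, transmitting at most `budget` of them (largest
    changes first when the change fraction overloads the system)."""
    changed = [i for i, (p, c) in enumerate(zip(prev, curr)) if abs(c - p) > threshold]
    if len(changed) > budget:   # overload: effectively raise the threshold
        changed = sorted(changed, key=lambda i: abs(curr[i] - prev[i]), reverse=True)[:budget]
    out = list(prev)
    for i in changed:
        out[i] = curr[i]        # "transmit" only the selected changes
    return out

# Budget of one update per field: only the largest change is sent.
assert replenish([0, 0, 0, 0], [5, 1, 3, 0], threshold=2, budget=1) == [5, 0, 0, 0]
```

Raising the threshold under load mirrors the variable change threshold in the simulation, while untransmitted changes wait for a later field, as in field repeat.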
Pfeiffer, P.; Egusquiza, I. L.; Di Ventra, M.; ...
2016-07-06
Technology based on memristors, resistors with memory whose resistance depends on the history of the crossing charges, has lately enhanced the classical paradigm of computation with neuromorphic architectures. However, in contrast to the known quantized models of passive circuit elements, such as inductors, capacitors or resistors, the design and realization of a quantum memristor is still missing. Here, we introduce the concept of a quantum memristor as a quantum dissipative device, whose decoherence mechanism is controlled by a continuous-measurement feedback scheme, which accounts for the memory. Indeed, we provide numerical simulations showing that memory effects actually persist in the quantummore » regime. Our quantization method, specifically designed for superconducting circuits, may be extended to other quantum platforms, allowing for memristor-type constructions in different quantum technologies. As a result, the proposed quantum memristor is then a building block for neuromorphic quantum computation and quantum simulations of non-Markovian systems.« less
Godino-Llorente, J I; Gómez-Vilda, P
2004-02-01
It is well known that vocal and voice diseases do not necessarily cause perceptible changes in the acoustic voice signal. Acoustic analysis is a useful tool for diagnosing voice diseases and is a complementary technique to methods based on direct observation of the vocal folds by laryngoscopy. In the present paper, two neural-network-based classification approaches applied to the automatic detection of voice disorders are studied. The structures studied are the multilayer perceptron and learning vector quantization, fed with short-term vectors calculated according to the well-known Mel-frequency cepstral coefficient (MFCC) parameterization. The paper shows that these architectures allow the detection of voice disorders, including glottic cancer, under highly reliable conditions. Within this context, the learning vector quantization methodology proved more reliable than the multilayer perceptron architecture, yielding 96% frame accuracy under similar working conditions.
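For readers unfamiliar with learning vector quantization, a single LVQ1 update moves the nearest prototype toward a correctly labeled sample and away from a mislabeled one. A minimal sketch, not the paper's network:

```python
def lvq1_update(prototypes, labels, x, y, lr):
    """One LVQ1 step: find the prototype nearest to sample x and move it
    toward x if its label matches y, away from x otherwise.
    Returns the index of the updated prototype."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    k = min(range(len(prototypes)), key=lambda i: dist2(prototypes[i], x))
    sign = 1.0 if labels[k] == y else -1.0
    prototypes[k] = [p + sign * lr * (xi - p) for p, xi in zip(prototypes[k], x)]
    return k

protos = [[0.0], [1.0]]
k = lvq1_update(protos, labels=[0, 1], x=[0.2], y=0, lr=0.5)   # moves prototype 0 toward x
```

In the paper's setting, `x` would be an MFCC feature vector and the labels healthy/pathological frames; classification assigns each frame the label of its nearest prototype.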
A family of chaotic pure analog coding schemes based on baker's map function
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun
2015-12-01
This paper considers a family of pure analog coding schemes constructed from dynamic systems which are governed by chaotic functions: the baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, have been developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes provide balanced protection against fold error (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system has graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms the digital ones over a wide signal-to-noise ratio (SNR) range.
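The chaotic encoding idea can be sketched in a few lines: a source value in [0, 1) is spread into an n-sample analog codeword by iterating the baker's map or a mirrored (tent-like) variant. This illustrates the principle only, not the paper's exact code constructions or decoders:

```python
def baker_encode(x, n, mirrored=False):
    """Iterate the 1D baker's map (2x mod 1), or a mirrored tent-like
    variant, to spread a source value x in [0, 1) into an n-sample
    analog codeword. Principle illustration only."""
    seq = [x]
    for _ in range(n - 1):
        x = 2.0 * (1.0 - x) if (mirrored and x >= 0.5) else (2.0 * x) % 1.0
        seq.append(x)
    return seq

assert baker_encode(0.25, 3) == [0.25, 0.5, 0.0]
assert baker_encode(0.75, 2, mirrored=True) == [0.75, 0.5]
```

Because the map is expanding, small differences in the source value grow along the codeword, which is what gives the analog code its error-protection property; ML or MMSE decoding then inverts the dynamics from noisy observations.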