A double commutant theorem for Murray–von Neumann algebras
Liu, Zhe
2012-01-01
Murray–von Neumann algebras are algebras of operators affiliated with finite von Neumann algebras. In this article, we study commutativity and affiliation of self-adjoint operators (possibly unbounded). We show that a maximal abelian self-adjoint subalgebra A of the Murray–von Neumann algebra Af(R) associated with a finite von Neumann algebra R is the Murray–von Neumann algebra Af(A0), where A0 is a maximal abelian self-adjoint subalgebra of R and, in addition, A0 is A ∩ R. We also prove that the Murray–von Neumann algebra Af(C), with C the center of R, is the center of the Murray–von Neumann algebra Af(R). Von Neumann’s celebrated double commutant theorem characterizes von Neumann algebras R as those for which R′′ = R, where R′, the commutant of R, is the set of bounded operators on the Hilbert space that commute with all operators in R. At the end of this article, we present a double commutant theorem for Murray–von Neumann algebras. PMID:22543165
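For reference, the classical double commutant theorem invoked here can be stated as follows (a standard textbook formulation; the symbols R and B(H) are generic, not this abstract's notation):

```latex
% Von Neumann's double commutant theorem: for a unital self-adjoint
% subalgebra R of the bounded operators B(H) on a Hilbert space H,
R'' = R
\quad\Longleftrightarrow\quad
R \ \text{is closed in the weak-operator topology},
\qquad
R' := \{\, T \in B(H) : TS = ST \ \text{for all } S \in R \,\}.
```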
A note on derivations of Murray–von Neumann algebras
Kadison, Richard V.; Liu, Zhe
2014-01-01
A Murray–von Neumann algebra is the algebra of operators affiliated with a finite von Neumann algebra. In this article, we first present a brief introduction to the theory of derivations of operator algebras from both the physical and mathematical points of view. We then describe our recent work on derivations of Murray–von Neumann algebras. We show that the “extended derivations” of a Murray–von Neumann algebra, those that map the associated finite von Neumann algebra into itself, are inner. In particular, we prove that the only derivation that maps a Murray–von Neumann algebra associated with a factor of type II1 into that factor is 0. Those results are extensions of Singer’s seminal result answering a question of Kaplansky, as applied to von Neumann algebras: The algebra may be noncommutative and may even contain unbounded elements. PMID:24469831
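Since the result above turns on derivations being inner, it may help to recall the standard definitions (the notation δ and T is generic, not the paper's): a derivation satisfies the Leibniz rule, and it is inner when implemented by a commutator:

```latex
\delta(AB) = \delta(A)\,B + A\,\delta(B)
\qquad\text{(Leibniz rule for a derivation } \delta\text{)},
```

```latex
\delta \ \text{is inner} \iff \exists\, T:\quad \delta(A) = TA - AT \ \ \text{for all } A.
```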
NASA Astrophysics Data System (ADS)
Lupher, Tracy
2003-12-01
Some people may be surprised to learn that John von Neumann's work on the foundations of quantum physics went far beyond what is contained within the pages of his Mathematical Foundations of Quantum Mechanics (MFQM) (von Neumann, 1955). A narrow focus on that book, however, ignores von Neumann's later work on quantum logic and on what are now called, in his honor, von Neumann algebras. This volume honoring von Neumann's contributions to physics is unique in that, while it contains 12 papers that examine various aspects of von Neumann's work, it also contains two of his previously unpublished papers and some of his previously unpublished correspondence.
Locally Compact Quantum Groups. A von Neumann Algebra Approach
NASA Astrophysics Data System (ADS)
Van Daele, Alfons
2014-08-01
In this paper, we give an alternative approach to the theory of locally compact quantum groups, as developed by Kustermans and Vaes. We start with a von Neumann algebra and a comultiplication on this von Neumann algebra. We assume that there exist faithful left and right Haar weights. Then we develop the theory within this von Neumann algebra setting. In [Math. Scand. 92 (2003), 68-92] locally compact quantum groups are also studied in the von Neumann algebraic context. This approach is independent of the original C^*-algebraic approach in the sense that the earlier results are not used. However, this paper is not really independent because for many proofs, the reader is referred to the original paper where the C^*-version is developed. In this paper, we give a completely self-contained approach. Moreover, at various points, we do things differently. We have a different treatment of the antipode. It is similar to the original treatment in [Ann. Sci. École Norm. Sup. (4) 33 (2000), 837-934]. But together with the fact that we work in the von Neumann algebra framework, it allows us to use an idea from [Rev. Roumaine Math. Pures Appl. 21 (1976), 1411-1449] to obtain the uniqueness of the Haar weights at an early stage. We take advantage of this fact when deriving the other main results in the theory. We also give a slightly different approach to duality. Finally, we collect, in a systematic way, several important formulas. In an appendix, we indicate very briefly how the C^*-approach and the von Neumann algebra approach eventually yield the same objects. The passage from the von Neumann algebra setting to the C^*-algebra setting is more or less standard. For the other direction, we use a new method. It is based on the observation that the Haar weights on the C^*-algebra extend to weights on the double dual with central support and that all these supports are the same.
Of course, we get the von Neumann algebra by cutting down the double dual with this unique support projection in the center. Altogether, we see that there are many advantages to developing the theory of locally compact quantum groups in the von Neumann algebra framework, rather than in the C^*-algebra framework. It is not only simpler; the theory of weights on von Neumann algebras is better known, and one needs very little to go from the C^*-algebras to the von Neumann algebras. Moreover, in many cases when constructing examples, the von Neumann algebra with the coproduct is constructed from the very beginning and the Haar weights are constructed as weights on this von Neumann algebra (using left Hilbert algebra theory). This paper is written in a concise way. In many cases, only indications for the proofs of the results are given. This information should be enough to see that these results are correct. We will give more details in a forthcoming paper, which will be expository, aimed at non-specialists. See also [Bull. Kerala Math. Assoc. (2005), 153-177] for an 'expanded' version of the appendix.
Clarifying the link between von Neumann and thermodynamic entropies
NASA Astrophysics Data System (ADS)
Deville, Alain; Deville, Yannick
2013-01-01
The state of a quantum system being described by a density operator ρ, quantum statistical mechanics calls the quantity -k Tr(ρ ln ρ), introduced by von Neumann, its von Neumann or statistical entropy. A 1999 paper by Shenker initiated a debate about its link with the entropy of phenomenological thermodynamics. Referring to Gibbs's and von Neumann's founding texts, we place von Neumann's 1932 contribution in its historical context, after Gibbs's 1902 treatise and before the creation of the information-entropy concept, which sets boundaries on the debate. Reexamining von Neumann's reasoning, we stress that the part of his reasoning at issue in the debate mainly uses thermodynamics, not quantum mechanics, and we identify two implicit postulates. We thoroughly examine Shenker's and the ensuing papers, insisting upon the presence of open thermodynamical subsystems, which requires the use of the chemical-potential concept. We briefly mention Landau's approach to the quantum entropy. On the whole, it is shown that von Neumann's viewpoint is right, and why Shenker's claim that von Neumann entropy "is not the quantum-mechanical correlate of thermodynamic entropy" cannot be upheld.
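As a small numerical companion to the definition of -k Tr(ρ ln ρ), the sketch below (our own illustration, with k = 1 by default) evaluates the entropy from the eigenvalues of a density matrix:

```python
import numpy as np

def von_neumann_entropy(rho, k=1.0):
    """S = -k Tr(rho ln rho), evaluated from the eigenvalues of rho.
    Zero eigenvalues are dropped, using the convention 0 ln 0 = 0."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-k * np.sum(evals * np.log(evals)))

pure = np.diag([1.0, 0.0])   # pure state: entropy 0
mixed = np.eye(2) / 2        # maximally mixed qubit: entropy k ln 2
print(von_neumann_entropy(pure), von_neumann_entropy(mixed))
```

The pure state gives 0 and the maximally mixed qubit gives ln 2 ≈ 0.693, matching the usual extremes of the quantity.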
Von Neumann's impossibility proof: Mathematics in the service of rhetorics
NASA Astrophysics Data System (ADS)
Dieks, Dennis
2017-11-01
According to what has become a standard history of quantum mechanics, in 1932 von Neumann persuaded the physics community that hidden variables are impossible as a matter of principle, after which leading proponents of the Copenhagen interpretation put the situation to good use by arguing that the completeness of quantum mechanics was undeniable. This state of affairs lasted, so the story continues, until Bell in 1966 exposed von Neumann's proof as obviously wrong. The realization that von Neumann's proof was fallacious then rehabilitated hidden variables and made serious foundational research possible again. It is often added in recent accounts that von Neumann's error had been spotted almost immediately by Grete Hermann, but that her discovery was of no effect due to the dominant Copenhagen Zeitgeist. We shall attempt to tell a story that is more historically accurate and less ideologically charged. Most importantly, von Neumann never claimed to have shown the impossibility of hidden variables tout court, but argued that hidden-variable theories must possess a structure that deviates fundamentally from that of quantum mechanics. Both Hermann and Bell appear to have missed this point; moreover, both raised unjustified technical objections to the proof. Von Neumann's argument was basically that hidden-variables schemes must violate the "quantum principle" that physical quantities are to be represented by operators in a Hilbert space. As a consequence, hidden-variables schemes, though possible in principle, necessarily exhibit a certain kind of contextuality. As we shall illustrate, early reactions to Bohm's theory are in agreement with this account. Leading physicists pointed out that Bohm's theory has the strange feature that pre-existing particle properties do not generally reveal themselves in measurements, in accordance with von Neumann's result. They did not conclude that the "impossible was done" and that von Neumann had been shown wrong.
Optimal projection method determination by Logdet Divergence and perturbed von-Neumann Divergence.
Jiang, Hao; Ching, Wai-Ki; Qiu, Yushan; Cheng, Xiao-Qing
2017-12-14
Positive semi-definiteness is a critical property of kernel methods for Support Vector Machines (SVMs), by which efficient solutions can be guaranteed through convex quadratic programming. However, many similarity functions used in applications do not produce positive semi-definite kernels. We propose a projection method that constructs a projection matrix on indefinite kernels. As a generalization of the spectrum methods (the denoising method and the flipping method), the projection method shows better or comparable performance relative to the corresponding indefinite kernel methods on a number of real-world data sets. Under Bregman matrix divergence theory, a suggested optimal λ for the projection method can be found using unconstrained optimization in kernel learning. In this paper we focus on determining the optimal λ, in pursuit of a precise determination method within the unconstrained optimization framework. We develop a perturbed von Neumann divergence to measure kernel relationships, and we compare optimal λ determination using the Logdet divergence and the perturbed von Neumann divergence, aiming to find a better λ for the projection method. Results on a number of real-world data sets show that the projection method with the optimal λ given by the Logdet divergence demonstrates near-optimal performance, and that the perturbed von Neumann divergence helps determine a comparatively better optimal projection method. The projection method is easy to use for dealing with indefinite kernels, and the parameter embedded in the method can be determined through unconstrained optimization under Bregman matrix divergence theory. This may provide a new way forward in kernel SVMs for varied objectives.
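The spectrum-modification idea that the projection method generalizes can be sketched as follows. Note that `projected_kernel` is a hypothetical λ-parameterized blend of the denoising and flipping fixes, shown only to illustrate where a tunable λ enters; the paper's actual projection-matrix construction may differ:

```python
import numpy as np

def spectrum_clip(K):
    """Denoising: zero out the negative eigenvalues of a symmetric kernel."""
    w, V = np.linalg.eigh(K)
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T

def spectrum_flip(K):
    """Flipping: replace every eigenvalue by its absolute value."""
    w, V = np.linalg.eigh(K)
    return V @ np.diag(np.abs(w)) @ V.T

def projected_kernel(K, lam):
    """Hypothetical lambda-parameterized interpolation between the two
    spectrum fixes; any value in [0, 1] yields a PSD kernel."""
    return (1.0 - lam) * spectrum_clip(K) + lam * spectrum_flip(K)

K = np.array([[1.0, 0.9], [0.9, -0.2]])        # an indefinite similarity matrix
print(np.linalg.eigvalsh(K).min())             # negative: K is not PSD
print(np.linalg.eigvalsh(projected_kernel(K, 0.5)).min())  # nonnegative: PSD
```

Because both fixes reuse the same eigenvectors of K, any convex combination of them is again positive semi-definite, which is the property SVM solvers need.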
Von Neumann was not a Quantum Bayesian.
Stacey, Blake C
2016-05-28
Wikipedia has claimed for over 3 years now that John von Neumann was the 'first quantum Bayesian'. In context, this reads as stating that von Neumann inaugurated QBism, the approach to quantum theory promoted by Fuchs, Mermin and Schack. This essay explores how such a claim is, historically speaking, unsupported. © 2016 The Author(s).
Von Neumann entropy in a Rashba-Dresselhaus nanodot; dynamical electronic spin-orbit entanglement
NASA Astrophysics Data System (ADS)
Safaiee, Rosa; Golshan, Mohammad Mehdi
2017-06-01
The main purpose of the present article is to report the characteristics of von Neumann entropy, and thereby the electronic hybrid entanglement, in the heterojunction of two semiconductors, with due attention to the Rashba and Dresselhaus spin-orbit interactions. To this end, we cast the von Neumann entropy in terms of spin polarization and compute its time evolution, a quantity with a vast span of applications. It is assumed that gate potentials are applied to the heterojunction, providing a two-dimensional parabolic confining potential (forming an isotropic nanodot at the junction), as well as means of controlling the spin-orbit couplings. The spin degeneracy is also removed, even at zero electronic momentum, by the presence of an external magnetic field which, in turn, leads to the appearance of Landau states. We then proceed by computing the time evolution of the corresponding von Neumann entropy from a separable (spin-polarized) initial state. The von Neumann entropy, as we show, indicates that electronic hybrid entanglement does occur between spin and two-dimensional Landau levels. Our results also show that the von Neumann entropy, as well as the degree of spin-orbit entanglement, periodically collapses and revives. The characteristics of this behavior (period, amplitude, etc.) are shown to be determined by the controllable external agents. Moreover, it is demonstrated that the phenomenon of collapse-revival in the behavior of the von Neumann entropy, equivalently, the electronic hybrid entanglement, is accompanied by plateaus (of great importance in quantum computation schemes) whose durations are, again, controlled by the external elements. Along these lines, we also make a comparison between the effects of the two spin-orbit couplings on the entanglement (von Neumann entropy) characteristics. The finer details of the electronic hybrid entanglement, which may be easily verified through spin polarization measurements, are also examined and discussed. The novel results of the present article, with potent applications in the field of quantum information processing, provide a deeper understanding of the electronic von Neumann entropy and hybrid entanglement that occurs in two-dimensional nanodots.
Valence bond and von Neumann entanglement entropy in Heisenberg ladders.
Kallin, Ann B; González, Iván; Hastings, Matthew B; Melko, Roger G
2009-09-11
We present a direct comparison of the recently proposed valence bond entanglement entropy and the von Neumann entanglement entropy on spin-1/2 Heisenberg systems using quantum Monte Carlo and density-matrix renormalization group simulations. For one-dimensional chains we show that the valence bond entropy can be either less or greater than the von Neumann entropy; hence, it cannot provide a bound on the latter. On ladder geometries, simulations with up to seven legs are sufficient to indicate that the von Neumann entropy in two dimensions obeys an area law, even though the valence bond entanglement entropy has a multiplicative logarithmic correction.
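A minimal worked instance of the quantity being compared: for the two-site spin-1/2 Heisenberg antiferromagnet, the ground state is the singlet, and the von Neumann entanglement entropy of one spin is exactly ln 2, one unit per valence bond crossing the cut (our own illustrative code, not the QMC/DMRG machinery of the paper):

```python
import numpy as np

# Ground state of the two-site spin-1/2 Heisenberg antiferromagnet:
# the singlet (|01> - |10>)/sqrt(2), in the basis |00>, |01>, |10>, |11>.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

# Reduced density matrix of the first spin: partial trace over the second.
rho_A = np.outer(psi, psi).reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

evals = np.linalg.eigvalsh(rho_A)
S = float(-np.sum(evals * np.log(evals)))
print(S)  # ln 2: exactly one valence bond crosses the cut
```

Here the reduced state is maximally mixed (I/2), so the von Neumann and valence bond entropies agree; the paper's point is that on larger chains and ladders they need not.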
Molecular quantum control landscapes in von Neumann time-frequency phase space
NASA Astrophysics Data System (ADS)
Ruetzel, Stefan; Stolzenberger, Christoph; Fechner, Susanne; Dimler, Frank; Brixner, Tobias; Tannor, David J.
2010-10-01
Recently we introduced the von Neumann representation as a joint time-frequency description for femtosecond laser pulses and suggested its use as a basis for pulse shaping experiments. Here we use the von Neumann basis to represent multidimensional molecular control landscapes, providing insight into the molecular dynamics. We present three kinds of time-frequency phase space scanning procedures based on the von Neumann formalism: variation of intensity, time-frequency phase space position, and/or the relative phase of single subpulses. The shaped pulses produced are characterized via Fourier-transform spectral interferometry. Quantum control is demonstrated on the laser dye IR140 elucidating a time-frequency pump-dump mechanism.
(Never) Mind your p's and q's: Von Neumann versus Jordan on the foundations of quantum theory
NASA Astrophysics Data System (ADS)
Duncan, A.; Janssen, M.
2013-03-01
In 1927, in two papers entitled "On a new foundation [Neue Begründung] of quantum mechanics," Pascual Jordan presented his version of what came to be known as the Dirac-Jordan statistical transformation theory. Jordan and Paul Dirac arrived at essentially the same theory independently of one another at around the same time. Later in 1927, partly in response to Jordan and Dirac and avoiding the mathematical difficulties facing their approach, John von Neumann developed the modern Hilbert space formalism of quantum mechanics. We focus on Jordan and von Neumann. Central to the formalisms of both are expressions for conditional probabilities of finding some value for one quantity given the value of another. Beyond that Jordan and von Neumann had very different views about the appropriate formulation of problems in quantum mechanics. For Jordan, unable to let go of the analogy to classical mechanics, the solution of such problems required the identification of sets of canonically conjugate variables, i.e., p's and q's. For von Neumann, not constrained by the analogy to classical mechanics, it required only the identification of a maximal set of commuting operators with simultaneous eigenstates. He had no need for p's and q's. Jordan and von Neumann also stated the characteristic new rules for probabilities in quantum mechanics somewhat differently. Jordan and Dirac were the first to state those rules in full generality. Von Neumann rephrased them and, in a paper published a few months later, sought to derive them from more basic considerations. In this paper we reconstruct the central arguments of these 1927 papers by Jordan and von Neumann and of a paper on Jordan's approach by Hilbert, von Neumann, and Nordheim. We highlight those elements in these papers that bring out the gradual loosening of the ties between the new quantum formalism and classical mechanics. 
This paper was written as part of a joint project in the history of quantum physics of the Max Planck Institut für Wissenschaftsgeschichte and the Fritz-Haber-Institut in Berlin.
Cheng, Szu-Cheng; Jheng, Shih-Da
2016-08-22
This paper reports a novel type of vortex lattice, referred to as a bubble crystal, which was discovered in rapidly rotating Bose gases with long-range interactions. Bubble crystals differ from vortex lattices, which possess a single quantum of flux per unit cell: atoms in bubble crystals are clustered periodically and surrounded by vortices. No existing model is able to describe the vortex structure of bubble crystals; however, we identified a mathematical lattice, which is a subset of coherent states and exists periodically in the physical space. This lattice is called a von Neumann lattice, and when it possesses a single vortex per unit cell, it presents the same geometrical structure as an Abrikosov lattice. In this report, we extend the von Neumann lattice to one with an integral number of flux quanta per unit cell and demonstrate that von Neumann lattices well reproduce the translational properties of bubble crystals. Numerical simulations confirm that, as a generalized vortex lattice, a von Neumann lattice can be physically realized using vortex lattices in rapidly rotating Bose gases with dipole interatomic interactions.
Measurements in Quantum Mechanics and von Neumann's Model
NASA Astrophysics Data System (ADS)
Mello, Pier A.; Johansen, Lars M.
2010-12-01
Many textbooks on Quantum Mechanics are not very precise as to the meaning of making a measurement; as a consequence, they frequently make assertions which are not based on a dynamical description of the measurement process. A model proposed by von Neumann allows a dynamical description of measurement in Quantum Mechanics, including the measuring instrument in the formalism. In this article we apply von Neumann's model to illustrate the measurement of an observable by means of a measuring instrument and show how various results, which are sometimes postulated without a dynamical basis, actually emerge. We also investigate the more complex, intriguing and fundamental problem of two successive measurements in Quantum Mechanics, extending von Neumann's model to two measuring instruments. We present a description which allows one to obtain, in a unified way, various results that have been given in the literature.
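The model in question can be summarized by the standard impulsive coupling between the measured observable and the pointer momentum (a textbook sketch in generic notation, not necessarily the article's):

```latex
H_{\mathrm{int}} = g(t)\,\hat{A}\otimes\hat{P},
\qquad
e^{-\frac{i}{\hbar}\bar{g}\,\hat{A}\otimes\hat{P}}\,\bigl(|a_n\rangle\otimes\phi(x)\bigr)
= |a_n\rangle\otimes\phi\bigl(x-\bar{g}\,a_n\bigr),
\qquad
\bar{g} = \int g(t)\,dt,
```

so the pointer coordinate x is shifted by an amount proportional to the eigenvalue a_n, which is what "reading off" the result means dynamically.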
A learnable parallel processing architecture towards unity of memory and computing
NASA Astrophysics Data System (ADS)
Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.
2015-08-01
Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement of von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve speed by 76.8% and power dissipation by 60.3%, together with an aggressive 700-fold reduction in circuit area.
NASA Technical Reports Server (NTRS)
Noever, David A.
2000-01-01
The effect of gravity in influencing the theoretical limit for bubble lattice coarsening and aging behavior, otherwise called von Neumann's law, is examined theoretically and experimentally. Preliminary microgravity results will be discussed.
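For context, the two-dimensional law in question takes the standard von Neumann-Mullins form (with M the boundary mobility and γ the surface tension; a textbook statement, not this abstract's notation):

```latex
\frac{dA_n}{dt} \;=\; -\,2\pi M\gamma\left(1-\frac{n}{6}\right),
```

so a cell (bubble) with n sides grows or shrinks at a rate set only by n: fewer than six sides means shrinkage, more than six means growth, independently of the cell's size.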
The smooth entropy formalism for von Neumann algebras
NASA Astrophysics Data System (ADS)
Berta, Mario; Furrer, Fabian; Scholz, Volkher B.
2016-01-01
We discuss information-theoretic concepts on infinite-dimensional quantum systems. In particular, we lift the smooth entropy formalism as introduced by Renner and collaborators for finite-dimensional systems to von Neumann algebras. For the smooth conditional min- and max-entropy, we recover similar characterizing properties and information-theoretic operational interpretations as in the finite-dimensional case. We generalize the entropic uncertainty relation with quantum side information of Tomamichel and Renner and discuss applications to quantum cryptography. In particular, we prove the possibility to perform privacy amplification and classical data compression with quantum side information modeled by a von Neumann algebra.
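For orientation, in the finite-dimensional setting that the formalism starts from, the (non-smoothed) conditional min- and max-entropies are commonly defined as follows (standard definitions from the finite-dimensional literature, reproduced for context rather than quoted from this paper):

```latex
H_{\min}(A|B)_\rho
 \;=\; \max_{\sigma_B}\,\sup\bigl\{\lambda\in\mathbb{R} :
   \rho_{AB} \le 2^{-\lambda}\,\mathbb{1}_A\otimes\sigma_B\bigr\},
\qquad
H_{\max}(A|B)_\rho
 \;=\; \max_{\sigma_B}\, 2\log F\bigl(\rho_{AB},\,\mathbb{1}_A\otimes\sigma_B\bigr),
```

where F denotes the fidelity and the optimization runs over states σ_B; the ε-smooth versions then optimize these quantities over states ε-close to ρ_AB.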
Universal Local Symmetries and Nonsuperposition in Classical Mechanics
NASA Astrophysics Data System (ADS)
Gozzi, Ennio; Pagani, Carlo
2010-10-01
In the Hilbert space formulation of classical mechanics, pioneered by Koopman and von Neumann, there are potentially more observables than in the standard approach to classical mechanics. In this Letter, we show that actually many of those extra observables are not invariant under a set of universal local symmetries which appear once the Koopman and von Neumann formulation is extended to include the evolution of differential forms. Because of their noninvariance, those extra observables have to be removed. This removal makes the superposition of states in the Koopman and von Neumann formulation, and as a consequence also in classical mechanics, impossible.
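For context, the Koopman-von Neumann formulation places classical mechanics on the Hilbert space of square-integrable functions ψ(q, p) on phase space, with unitary evolution generated by the Liouvillian (a standard sketch; sign conventions vary across the literature):

```latex
i\,\frac{\partial \psi}{\partial t} \;=\; \hat{L}\,\psi,
\qquad
\hat{L} \;=\; -\,i\left(\frac{\partial H}{\partial p}\frac{\partial}{\partial q}
 \;-\; \frac{\partial H}{\partial q}\frac{\partial}{\partial p}\right),
```

so |ψ|² evolves exactly as a classical Liouville probability density.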
Generalization of von Neumann analysis for a model of two discrete half-spaces: The acoustic case
Haney, M.M.
2007-01-01
Evaluating the performance of finite-difference algorithms typically uses a technique known as von Neumann analysis. For a given algorithm, application of the technique yields both a dispersion relation valid for the discrete time-space grid and a mathematical condition for stability. In practice, a major shortcoming of conventional von Neumann analysis is that it can be applied only to an idealized numerical model - that of an infinite, homogeneous whole space. Experience has shown that numerical instabilities often arise in finite-difference simulations of wave propagation at interfaces with strong material contrasts. These interface instabilities occur even though the conventional von Neumann stability criterion may be satisfied at each point of the numerical model. To address this issue, I generalize von Neumann analysis for a model of two half-spaces. I perform the analysis for the case of acoustic wave propagation using a standard staggered-grid finite-difference numerical scheme. By deriving expressions for the discrete reflection and transmission coefficients, I study under what conditions these coefficients become unbounded. I find that instabilities encountered in numerical modeling near interfaces with strong material contrasts are linked to these cases, and I develop a modified stability criterion that takes the resulting instabilities into account. I test and verify the stability criterion by executing a finite-difference algorithm under conditions predicted to be stable and unstable. © 2007 Society of Exploration Geophysicists.
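As a minimal illustration of the conventional whole-space analysis being generalized, the sketch below computes the von Neumann amplification factor G(theta) = 1 - c(1 - e^{-i theta}) for first-order upwind differencing of the 1D advection equation (our own example scheme, not the staggered-grid acoustic scheme of the paper):

```python
import numpy as np

def max_amplification(c, n_modes=2001):
    """Largest |G(theta)| over wavenumbers theta in [-pi, pi] for
    first-order upwind advection,
        u_j^{n+1} = u_j^n - c (u_j^n - u_{j-1}^n),
    where c = a dt / dx is the Courant number.  Von Neumann stability
    requires |G| <= 1 for every theta, which holds iff 0 <= c <= 1."""
    theta = np.linspace(-np.pi, np.pi, n_modes)
    G = 1.0 - c * (1.0 - np.exp(-1j * theta))
    return float(np.max(np.abs(G)))

print(max_amplification(0.5))  # <= 1: stable (CFL satisfied)
print(max_amplification(1.5))  # > 1: unstable (CFL violated)
```

The same substitution of a single Fourier mode per grid point is what breaks down at a material interface, which is the gap this paper's two-half-space analysis fills.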
Toward an Extension of Decision Analysis to Competitive Situations.
1985-12-01
order to deal with competition may ease the use of non-von Neumann-Morgenstern utility. This leads to our secondary goal of questioning expected...While von Winterfeldt [1980] attempted a more detailed analysis using three separate decision trees, one for each side in the dispute, he felt that...rationality generally used in game theory derives from the same roots as the calculated rationality of Decision Analysis, von Neumann and
Construction and Analysis of Multi-Rate Partitioned Runge-Kutta Methods
2012-06-01
Master's thesis by Patrick R. Mugg, June 2012. Thesis Advisor: Francis Giraldo; Second Reader: Hong...The most widely known and used procedure for analyzing stability is the von Neumann method, such that von Neumann's stability analysis looks at
Introduction to Digital Logic Systems for Energy Monitoring and Control Systems.
1985-05-01
computer were first set down by Charles Babbage in 1830. An additional criterion was proposed by von Neumann in 1947. These criteria state: (1) An input means...criteria requirements as set down by Babbage and von Neumann. The computer equipment ("hardware") and internal operating system ("software"
Fuglede–Kadison determinant: theme and variations
de la Harpe, Pierre
2013-01-01
We review the definition of determinants for finite von Neumann algebras, due to Fuglede and Kadison [Fuglede B, Kadison R (1952) Ann Math 55:520–530], and a generalization for appropriate groups of invertible elements in Banach algebras, from a paper by Skandalis and the author (1984). After some discussion of K-theory and Whitehead torsion, we indicate the relevance of these determinants to the study of L²-torsion in topology. Contents are as follows: 1. The classical setting. 2. On von Neumann algebras and traces. 3. Fuglede–Kadison determinant for finite von Neumann algebras. 4. Motivating question. 5. Brief reminder of K0, K1, and Bott periodicity. 6. Revisiting the Fuglede–Kadison and other determinants. 7. On Whitehead torsion. 8. A few lines on L²-torsion. PMID:24082099
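As a concrete toy case: for the matrix algebra M_n(C) equipped with its normalized trace tr_n, the Fuglede–Kadison determinant of an invertible A reduces to exp(tr_n log|A|) = |det A|^(1/n). The sketch below (our own illustrative code, assuming this finite-dimensional specialization) computes it from singular values:

```python
import numpy as np

def fk_determinant(A):
    """Fuglede-Kadison determinant Delta(A) = exp(tr_n log|A|) for the
    finite von Neumann algebra M_n(C) with normalized trace tr_n.
    The singular values of A are the spectrum of |A| = (A* A)^(1/2),
    so Delta(A) is their geometric mean, i.e. |det A|^(1/n)."""
    s = np.linalg.svd(A, compute_uv=False)
    return float(np.exp(np.log(s).mean()))

A = np.array([[2.0, 0.0], [0.0, 8.0]])
print(fk_determinant(A))  # ≈ 4.0 = |det A|^(1/2), the geometric mean of 2 and 8
```

The normalization by 1/n is what makes the definition pass to infinite-dimensional finite von Neumann algebras, where an ordinary determinant no longer exists.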
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.; Käppeli, Roger
2017-05-01
In this paper we focus on the numerical solution of the induction equation using Runge-Kutta Discontinuous Galerkin (RKDG)-like schemes that are globally divergence-free. The induction equation plays a role in numerical MHD and other systems like it. It ensures that the magnetic field evolves in a divergence-free fashion, and that same property is shared by the numerical schemes presented here. The algorithms presented here are based on a novel DG-like method as it applies to the magnetic field components in the faces of a mesh. (I.e., this is not a conventional DG algorithm for conservation laws.) The other two novel building blocks of the method include divergence-free reconstruction of the magnetic field and multidimensional Riemann solvers; both of which have been developed in recent years by the first author. Since the method is linear, a von Neumann stability analysis is carried out in two dimensions to understand its stability properties. The von Neumann stability analysis that we develop in this paper relies on transcribing from a modal to a nodal DG formulation in order to develop discrete evolutionary equations for the nodal values. These are then coupled to a suitable Runge-Kutta timestepping strategy so that one can analyze the stability of the entire scheme which is suitably high order in space and time. We show that our scheme permits CFL numbers that are comparable to those of traditional RKDG schemes. We also analyze the wave propagation characteristics of the method and show that with increasing order of accuracy the wave propagation becomes more isotropic and free of dissipation for a larger range of long wavelength modes. This makes a strong case for investing in higher order methods. We also use the von Neumann stability analysis to show that the divergence-free reconstruction and multidimensional Riemann solvers are essential algorithmic ingredients of a globally divergence-free RKDG-like scheme.
Numerical accuracy analyses of the RKDG-like schemes are presented and compared with the accuracy of PNPM schemes. It is found that PNPM retrieve much of the accuracy of the RKDG-like schemes while permitting a larger CFL number.
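The kind of semi-discrete von Neumann analysis described above, in which the Fourier symbol of a spatial discretization is coupled to a Runge-Kutta timestepping strategy, can be sketched in a much simplified setting. This is an illustrative stand-in, not the paper's RKDG analysis: it uses first-order upwind advection instead of a DG discretization, the classical third-order Runge-Kutta stability polynomial, and hypothetical helper names `rk3_R` and `max_amp`.

```python
import numpy as np

# Semi-discrete von Neumann analysis: couple the Fourier symbol of first-order
# upwind advection, z(theta) = -(1 - exp(-i*theta)), scaled by the CFL number,
# to the classical third-order Runge-Kutta stability polynomial R(z). The full
# scheme is stable when |R(cfl * z(theta))| <= 1 for every phase angle theta.
def rk3_R(z):
    """Stability polynomial shared by all three-stage, third-order RK schemes."""
    return 1 + z + z**2 / 2 + z**3 / 6

def max_amp(cfl, n=400):
    """Largest amplification |R(cfl * z(theta))| over all phase angles."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n)
    z = -(1.0 - np.exp(-1j * thetas))
    return np.max(np.abs(rk3_R(cfl * z)))

print(max_amp(1.0) <= 1.0 + 1e-10)  # True: stable at CFL = 1
print(max_amp(1.5) > 1.0)           # True: unstable beyond the CFL limit
```

Scanning `max_amp` over a range of CFL numbers locates the stability limit of the fully discrete scheme, which is the same question the paper answers for its RKDG-like discretizations.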
Networked Workstations and Parallel Processing Utilizing Functional Languages
1993-03-01
program. This frees the programmer to concentrate on what the program is to do, not how the program is...traditional 'von Neumann' architecture uses a timer-based (e.g., the program counter), sequentially programmed, single processor approach to problem...traditional 'von Neumann' architecture uses a timer-based (e.g., the program counter), sequentially programmed, single processor approach to
Quantum leaps in philosophy of mind: Reply to Bourget's critique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stapp, Henry P.
2004-07-26
David Bourget has raised some conceptual and technical objections to my development of von Neumann's treatment of the Copenhagen idea that the purely physical process described by the Schrödinger equation must be supplemented by a psychophysical process, called "the choice of the experiment" by Bohr and "Process 1" by von Neumann. I answer here each of Bourget's objections.
Nonlinear reflection of shock shear waves in soft elastic media.
Pinton, Gianmarco; Coulouvrat, François; Gennisson, Jean-Luc; Tanter, Mickaël
2010-02-01
For fluids, the theoretical investigation of shock wave reflection agrees well with experiments when the incident shock Mach number is large. But when it is small, theory predicts that Mach reflections are physically unrealistic, which contradicts experimental evidence. This von Neumann paradox is investigated for shear shock waves in soft elastic solids with theory and simulations. The nonlinear elastic wave equation is approximated by a paraxial wave equation with a cubic nonlinear term. This equation is solved numerically with finite differences and the Godunov scheme. Three reflection regimes are observed. Theory is developed for shock propagation by applying the Rankine-Hugoniot relations and entropic constraints. A characteristic parameter relating diffraction and nonlinearity is introduced, and its theoretical values are shown to match numerical observations. The numerical solution is then applied to von Neumann reflection, where curved reflected and Mach shocks are observed. Finally, the case of weak von Neumann reflection, where there is no reflected shock, is examined. The smooth but non-monotonic transition between these three reflection regimes, from linear Snell-Descartes to the perfect grazing case, provides a solution to the acoustical von Neumann paradox for the shear wave equation. This transition is similar to that observed for quadratic nonlinearity in fluids.
Gregory Bateson and the mathematicians: from interdisciplinary interaction to societal functions.
Heims, S P
1977-04-01
An instance of fruitful cross-disciplinary contacts is examined in detail. The ideas involved include (1) the double-bind hypothesis for schizophrenia, (2) the critique of game theory from the viewpoint of anthropology and psychiatry, and (3) the application of concepts of communication theory and theory of logical types to an interpretation of psychoanalytic practice. The protagonists of the interchange are Gregory Bateson and the two mathematicians Norbert Wiener and John von Neumann; the date, March 1946. This interchange and its sequels are described. While the interchanges between Bateson and Wiener were fruitful, those between Bateson and von Neumann were much less so. The latter two held conflicting premises concerning what is significant in science; Bateson's and Wiener's were compatible. In 1946, Wiener suggested that information and communication might be appropriate central concepts for psychoanalytic theory--a vague general idea which Bateson (with Ruesch) related to contemporary clinical practice. For Bateson, Wiener, and von Neumann, the cross-disciplinary interactions foreshadowed a shift in activities and new roles in society, to which the post World War II period was conducive. Von Neumann became a high-level government advisor; Wiener, an interpreter of science and technology for the general public; and Bateson, a counter-culture figure.
Interpolatability distinguishes LOCC from separable von Neumann measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childs, Andrew M.; Leung, Debbie; Mančinska, Laura
2013-11-15
Local operations with classical communication (LOCC) and separable operations are two classes of quantum operations that play key roles in the study of quantum entanglement. Separable operations are strictly more powerful than LOCC, but no simple explanation of this phenomenon is known. We show that, in the case of von Neumann measurements, the ability to interpolate measurements is an operational principle that sets apart LOCC and separable operations.
ERIC Educational Resources Information Center
Reggini, Horacio C.
The first article, "LOGO and von Neumann Ideas," deals with the creation of new procedures based on procedures defined and stored in memory as LOGO lists of lists. This representation, which enables LOGO procedures to construct, modify, and run other LOGO procedures, is compared with basic computer concepts first formulated by John von…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loubenets, Elena R.
We prove the existence, for each Hilbert space, of two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures and all the quantum product expectations via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations, but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dim H ≥ 3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper point also to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].
Entropy Production and Non-Equilibrium Steady States
NASA Astrophysics Data System (ADS)
Suzuki, Masuo
2013-01-01
The long-term issue of entropy production in transport phenomena is solved by separating the symmetry of the non-equilibrium density matrix ρ(t) in the von Neumann equation, as ρ(t) = ρ_s(t) + ρ_a(t), with symmetric part ρ_s(t) and antisymmetric part ρ_a(t). The irreversible entropy production (dS/dt)_irr is given in M. Suzuki, Physica A 390 (2011) 1904 by (dS/dt)_irr = Tr(H (dρ_s(t)/dt))/T for the Hamiltonian H of the relevant system. The general formulation of the extended von Neumann equation with energy supply and heat extraction is reviewed from the author's paper (M. S., Physica A 391 (2012) 1074). Keywords: irreversibility; entropy production; transport phenomena; electric conduction; thermal conduction; linear response; Kubo formula; steady state; non-equilibrium density matrix; energy supply; symmetry-separated von Neumann equation; unboundedness.
Quantitative conditions for time evolution in terms of the von Neumann equation
NASA Astrophysics Data System (ADS)
Wang, WenHua; Cao, HuaiXin; Chen, ZhengLi; Wang, Lie
2018-07-01
The adiabatic theorem describes the time evolution of the pure state and gives an adiabatic approximate solution to the Schrödinger equation by choosing a single eigenstate of the Hamiltonian as the initial state. In quantum systems, states are divided into pure states (unit vectors) and mixed states (density matrices, i.e., positive operators with trace one). Accordingly, mixed states have their own corresponding time evolution, which is described by the von Neumann equation. In this paper, we discuss the quantitative conditions for the time evolution of mixed states in terms of the von Neumann equation. First, we introduce the definitions for uniformly slowly evolving and δ-uniformly slowly evolving with respect to mixed states, then we present a necessary and sufficient condition for the Hamiltonian of the system to be uniformly slowly evolving and we obtain some upper bounds for the adiabatic approximate error. Lastly, we illustrate our results with an example.
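The von Neumann equation mentioned above, dρ/dt = -i[H, ρ] (with ħ = 1), can be illustrated with a small numerical sketch. This is my own example, not from the paper: a mixed state is evolved by the exact unitary U = exp(-iHt), and trace, positivity, and purity are checked to be preserved.

```python
import numpy as np

# Solve d(rho)/dt = -i [H, rho] exactly via rho(t) = U rho(0) U†, U = exp(-i H t),
# using the eigendecomposition of the Hermitian Hamiltonian H.
def evolve(rho0, H, t):
    evals, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T
    return U @ rho0 @ U.conj().T

H = np.array([[1.0, 0.3], [0.3, -1.0]])    # a Hermitian Hamiltonian
rho0 = np.array([[0.7, 0.2], [0.2, 0.3]])  # a mixed state: positive, trace one
rho_t = evolve(rho0, H, t=2.5)

print(np.isclose(np.trace(rho_t).real, 1.0))                # True: trace preserved
print(bool(np.all(np.linalg.eigvalsh(rho_t) >= -1e-10)))    # True: positivity preserved
```

Unitary evolution also leaves the purity Tr(ρ²) unchanged, so a mixed state stays exactly as mixed; this is why the adiabatic question for mixed states is about tracking the whole density matrix, not a single eigenstate.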
Structure and Reversibility of 2D von Neumann Cellular Automata Over Triangular Lattice
NASA Astrophysics Data System (ADS)
Uguz, Selman; Redjepov, Shovkat; Acar, Ecem; Akin, Hasan
2017-06-01
Even though the fundamental structure of cellular automata (CA) is a discrete special model, the global behavior over many iterations and on large scales can approach a nearly continuous model system. CA theory is a rich and useful dynamical model that focuses on local information being relayed to neighboring cells to produce global CA behavior. The mathematical formulation of the basic model makes the structure of CA amenable to computation. After modeling the CA structure, an important problem is to be able to move forwards and backwards on a CA to understand its behavior in more elegant ways. One such case is when the CA is reversible. In this paper, we investigate the structure and reversibility of two-dimensional (2D) finite, linear, triangular von Neumann CA with null boundary, considered over the ternary field ℤ3 (i.e., 3-state). We obtain their transition rule matrices for each special case. For given special triangular information (transition) rule matrices, we prove which triangular linear 2D von Neumann CA are reversible or not. Reversibility of 2D CA is in general a challenging problem; in the present study, the reversibility problem of 2D triangular, linear von Neumann CA with null boundary is resolved completely over the ternary field. As far as we know, there is no study of the structure and reversibility of 2D linear von Neumann CA on a triangular lattice in the literature. Since the basic CA structures are simple enough to investigate mathematically, yet complex enough to exhibit chaotic behavior, it is believed that the present construction can be applied to many areas related to these CA using other transition rules.
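The invertibility criterion behind such reversibility results can be illustrated with a toy sketch. This is not the paper's triangular construction: it is a 1D linear CA over ℤ3 with null boundary, where reversibility reduces to invertibility of the transition rule matrix over the field.

```python
import numpy as np

# Toy sketch: a linear CA over the ternary field Z_3 on n cells with null
# boundary evolves as x' = T x (mod 3). Such a CA is reversible iff its
# transition rule matrix T is invertible over Z_3, i.e. iff det(T) mod 3 != 0.
def is_reversible(T, p=3):
    det = int(round(np.linalg.det(T))) % p
    return det != 0

def rule_matrix(n):
    """Null-boundary rule x'_i = x_{i-1} + x_i + x_{i+1} (mod 3)."""
    T = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in (i - 1, i, i + 1):
            if 0 <= j < n:
                T[i, j] = 1
    return T

print(is_reversible(rule_matrix(4)))  # True: det = -1, i.e. 2 (mod 3)
print(is_reversible(rule_matrix(2)))  # False: det = 0 (mod 3)
```

When T is invertible, the inverse matrix over ℤ3 is itself the rule matrix of the backward CA, which is what "moving backwards" means for linear CA.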
NASA Astrophysics Data System (ADS)
Kawamori, Eiichirou
2017-09-01
A transition from Langmuir wave turbulence (LWT) to coherent Langmuir wave supercontinuum (LWSC) is identified in one-dimensional particle-in-cell simulations as the emergence of a broad frequency band showing significant temporal coherence of a wave field accompanied by a decrease in the von Neumann entropy of classical wave fields. The concept of the von Neumann entropy is utilized to evaluate the degree of phase randomization of the classical wave fields, together with the introduction of the density matrix of the wave fields. The transition from LWT to LWSC takes place when the energy per plasmon (one wave quantum) exceeds a certain threshold. The coherent nature, which Langmuir wave systems acquire through the transition, is created by four-wave mixing of the plasmons. The emergence of temporal coherence and the decrease in phase randomization are considered as the development of long-range order and spontaneous symmetry breaking, respectively, indicating that the LWT-LWSC transition is a second-order phase transition phenomenon.
NASA Astrophysics Data System (ADS)
Yao, K. L.; Li, Y. C.; Sun, X. Z.; Liu, Q. M.; Qin, Y.; Fu, H. H.; Gao, G. Y.
2005-10-01
By using the density matrix renormalization group (DMRG) method for the one-dimensional (1D) Hubbard model, we have studied the von Neumann entropy of a quantum system, which describes the entanglement of the system block and the rest of the chain. It is found that there is a close relation between the entanglement entropy and properties of the system. Hole-doping can alter the charge-charge and spin-spin interactions, resulting in charge polarization along the chain. By comparing the results before and after the doping, we find that doping favors an increase of the von Neumann entropy and thus also favors the exchange of information along the chain. Furthermore, we calculated the spin and entropy distribution in an external magnetic field. It is confirmed that both the charge-charge and the spin-spin interactions affect the exchange of information along the chain, making the entanglement entropy redistribute.
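The block entanglement entropy used above, S = -Tr(ρ log ρ) of the reduced density matrix, can be illustrated on a two-qubit stand-in. This is a minimal sketch, not a DMRG calculation: the "system block" is one qubit of a Bell pair.

```python
import numpy as np

# Von Neumann entropy S = -Tr(rho log2 rho) of a block, from the reduced
# density matrix obtained by tracing out the rest of the system.
def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]            # drop numerically zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))  # in bits

# Bell state (|00> + |11>)/sqrt(2): block and rest are maximally entangled.
psi = np.zeros(4)
psi[0] = psi[3] = 1.0 / np.sqrt(2.0)
rho = np.outer(psi, psi)
rho_block = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out qubit 2
print(von_neumann_entropy(rho_block))  # ~1.0 bit: maximally entangled pair
```

For a product state the same computation gives zero entropy, so the entropy of the block directly quantifies the entanglement between the block and the rest of the chain.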
Koopman-von Neumann formulation of classical Yang-Mills theories: I
NASA Astrophysics Data System (ADS)
Carta, P.; Gozzi, E.; Mauro, D.
2006-03-01
In this paper we present the Koopman-von Neumann (KvN) formulation of classical non-Abelian gauge field theories. In particular we shall explore the functional (or classical path integral) counterpart of the KvN method. In the quantum path integral quantization of Yang-Mills theories concepts like gauge-fixing and Faddeev-Popov determinant appear in a quite natural way. We will prove that these same objects are needed also in this classical path integral formulation for Yang-Mills theories. We shall also explore the classical path integral counterpart of the BFV formalism and build all the associated universal and gauge charges. These last are quite different from the analog quantum ones and we shall show the relation between the two. This paper lays the foundation of this formalism which, due to the many auxiliary fields present, is rather heavy. Applications to specific topics outlined in the paper will appear in later publications.
Spin torque oscillator neuroanalog of von Neumann's microwave computer.
Hoppensteadt, Frank
2015-10-01
Frequency and phase of neural activity play important roles in the behaving brain. The emerging understanding of these roles has been informed by the design of analog devices that have been important to neuroscience, among them the neuroanalog computer developed by O. Schmitt and A. Hodgkin in the 1930s. Later J. von Neumann, in a search for high performance computing using microwaves, invented a logic machine based on crystal diodes that can perform logic functions including binary arithmetic. Described here is an embodiment of his machine using nano-magnetics. Electrical currents through point contacts on a ferromagnetic thin film can create oscillations in the magnetization of the film. Under natural conditions these properties of a ferromagnetic thin film may be described by a nonlinear Schrödinger equation for the film's magnetization. Radiating solutions of this system are referred to as spin waves, and communication within the film may be by spin waves or by directed graphs of electrical connections. It is shown here how to formulate an STO logic machine, and by computer simulation how this machine can perform several computations simultaneously using multiplexing of inputs, that this system can evaluate iterated logic functions, and that spin waves may communicate frequency, phase and binary information. Neural tissue and the Schmitt-Hodgkin, von Neumann and STO devices share a common bifurcation structure, although these systems operate on vastly different space and time scales; namely, all may exhibit Andronov-Hopf bifurcations. This suggests that neural circuits may be capable of the computational functionality described by von Neumann. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Time dependence of Hawking radiation entropy
NASA Astrophysics Data System (ADS)
Page, Don N.
2013-09-01
If a black hole starts in a pure quantum state and evaporates completely by a unitary process, the von Neumann entropy of the Hawking radiation initially increases and then decreases back to zero when the black hole has disappeared. Here numerical results are given for an approximation to the time dependence of the radiation entropy under an assumption of fast scrambling, for large nonrotating black holes that emit essentially only photons and gravitons. The maximum of the von Neumann entropy then occurs after about 53.81% of the evaporation time, when the black hole has lost about 40.25% of its original Bekenstein-Hawking (BH) entropy (an upper bound for its von Neumann entropy) and then has a BH entropy that equals the entropy in the radiation, which is about 59.75% of the original BH entropy 4πM₀², or about 7.509M₀² ≈ 6.268 × 10⁷⁶(M₀/M_solar)², using my 1976 calculations that the photon and graviton emission process into empty space gives about 1.4847 times the BH entropy loss of the black hole. Results are also given for black holes in initially impure states. If the black hole starts in a maximally mixed state, the von Neumann entropy of the Hawking radiation increases from zero up to a maximum of about 119.51% of the original BH entropy, or about 15.018M₀² ≈ 1.254 × 10⁷⁷(M₀/M_solar)², and then decreases back down to 4πM₀² = 1.049 × 10⁷⁷(M₀/M_solar)².
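The quoted percentages follow from a one-line entropy balance: if the radiation gains 1.4847 units of entropy per unit of BH entropy lost, the radiation entropy 1.4847·x crosses the remaining BH entropy 1 - x at x = 1/2.4847 of the original BH entropy.

```python
# Consistency check of the figures quoted above: radiation entropy grows as
# factor * x while the remaining BH entropy is 1 - x, where x is the fraction
# of the original BH entropy already lost; the two are equal at the maximum.
factor = 1.4847
x = 1.0 / (1.0 + factor)        # fraction of BH entropy lost at the crossover
print(round(x * 100, 2))        # ~40.25, % of BH entropy lost
print(round((1 - x) * 100, 2))  # ~59.75, % remaining = radiation entropy share
```

This reproduces the 40.25% / 59.75% split quoted in the abstract; the 53.81% figure for the evaporation time additionally requires the M³ lifetime scaling and is not recovered by this balance alone.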
Korobov, A
2009-03-01
Discrete random tessellations appear not infrequently in describing nucleation and growth transformations. Generally, several non-Euclidean metrics are possible in this case. Previously [A. Korobov, Phys. Rev. B 76, 085430 (2007)] continual analogs of such tessellations have been studied. Here one of the simplest discrete varieties of the Kolmogorov-Johnson-Mehl-Avrami model, namely, the model with von Neumann neighborhoods, has been examined per se, i.e., without continualization. The tessellation is uniform in the sense that domain boundaries consist of tiles. Similarities and distinctions between discrete and continual models are discussed.
Contiguity and Entire Separability of States on von Neumann Algebras
NASA Astrophysics Data System (ADS)
Haliullin, Samigulla
2017-12-01
We introduce the notions of contiguity and entire separability for two sequences of states on von Neumann algebras. The ultraproducts technique allows us to reduce the study of contiguity to an investigation of the equivalence of two states. Here we apply the Ocneanu ultraproduct and the Groh-Raynaud ultraproduct (see Ocneanu (1985), Groh (J. Operator Theory, 11, 2, 395-404, 1984), Raynaud (J. Operator Theory, 48, 1, 41-68, 2002), Ando and Haagerup (J. Funct. Anal., 266, 12, 6842-6913, 2014)), as well as the technique developed in Mushtari and Haliullin (Lobachevskii J. Math., 35, 2, 138-146, 2014).
Measurement theory in local quantum physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okamura, Kazuya, E-mail: okamura@math.cm.is.nagoya-u.ac.jp; Ozawa, Masanao, E-mail: ozawa@is.nagoya-u.ac.jp
In this paper, we aim to establish foundations of measurement theory in local quantum physics. For this purpose, we discuss a representation theory of completely positive (CP) instruments on arbitrary von Neumann algebras. We introduce a condition called the normal extension property (NEP) and establish a one-to-one correspondence between CP instruments with the NEP and statistical equivalence classes of measuring processes. We show that every CP instrument on an atomic von Neumann algebra has the NEP, extending the well-known result for type I factors. Moreover, we show that every CP instrument on an injective von Neumann algebra is approximated by CP instruments with the NEP. The concept of posterior states is also discussed to show that the NEP is equivalent to the existence of a strongly measurable family of posterior states for every normal state. Two examples of CP instruments without the NEP are obtained from this result. It is thus concluded that in local quantum physics not every CP instrument represents a measuring process, but in most physically relevant cases every CP instrument can be realized by a measuring process within arbitrary error limits, as every approximately finite dimensional von Neumann algebra on a separable Hilbert space is injective. To conclude the paper, the concept of local measurement in algebraic quantum field theory is examined in our framework. In the setting of the Doplicher-Haag-Roberts and Doplicher-Roberts theory describing local excitations, we show that an instrument on a local algebra can be extended to a local instrument on the global algebra if and only if it is a CP instrument with the NEP, provided that the split property holds for the net of local algebras.
NASA Astrophysics Data System (ADS)
Niestegge, Gerd
2010-12-01
In the quantum mechanical Hilbert space formalism, the probabilistic interpretation is a later ad hoc add-on, more or less enforced by the experimental evidence, but not motivated by the mathematical model itself. A model involving a clear probabilistic interpretation from the very beginning is provided by the quantum logics with unique conditional probabilities. It includes the projection lattices in von Neumann algebras, and here probability conditionalization becomes identical with the state transition of the Lüders-von Neumann measurement process. This motivates the definition of a hierarchy of five compatibility and comeasurability levels in the abstract setting of the quantum logics with unique conditional probabilities. Their meanings are: the absence of quantum interference or influence, the existence of a joint distribution, simultaneous measurability, and the independence of the final state after two successive measurements from the sequential order of these two measurements. A further level means that two elements of the quantum logic (events) belong to the same Boolean subalgebra. In the general case, the five compatibility and comeasurability levels appear to differ, but they all coincide in the common Hilbert space formalism of quantum mechanics, in von Neumann algebras, and in some other cases.
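The Lüders-von Neumann state transition identified above with probability conditionalization can be sketched for a projection P: the outcome has probability Tr(Pρ), and the post-measurement state is PρP/Tr(PρP). The example state and projection below are arbitrary choices of mine.

```python
import numpy as np

# Lüders-von Neumann conditionalization: given a state rho and a projection P,
# return the post-measurement state P rho P / Tr(P rho P) and the probability.
def luders(rho, P):
    p = np.trace(P @ rho @ P).real
    return (P @ rho @ P) / p, p

rho = np.array([[0.6, 0.2], [0.2, 0.4]])  # a mixed qubit state
P = np.array([[1.0, 0.0], [0.0, 0.0]])    # projection onto |0>
rho_post, prob = luders(rho, P)
print(round(prob, 2))                                 # 0.6: outcome probability
print(np.allclose(rho_post, [[1.0, 0.0], [0.0, 0.0]]))  # True: conditioned state
```

In the quantum-logic reading, this map is exactly the conditional probability: the conditioned state assigns to any further event Q the probability Tr(Q·PρP)/Tr(PρP).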
Gravity Effects in Diffusive Coarsening of Bubble Lattices: von Neumann's Law
NASA Technical Reports Server (NTRS)
Noever, David A.
2000-01-01
von Neumann modelled the evolution of two-dimensional soap froths as a purely diffusive phenomenon; the area growth of a given cell was found to depend only on the geometry of the bubble lattice. In the model, hexagons are stable, pentagons shrink and heptagons grow. The simplest equivalent of the area growth law is ⟨A⟩ ∼ t^β. The result depends on assuming (1) an incompressible gas; (2) bubble walls which meet at 120 deg; and (3) constant wall thickness and curvature. Each assumption is borne out in experiments except the last one: bubble wall thickness between connecting cells varies in unit gravity because of gravity drainage. The bottom part of the soap membrane is thickened, the top part is thinned, such that gas diffusion across the membrane shows a complex dependence on gravity. As a result, experimental tests of von Neumann's law have been influenced by effects of gravity; fluid behavior along cell borders can give non-uniform wall thicknesses and thus alter the effective area and gas diffusion rates between adjacent bubbles. For area plotted as a function of time, Glazier (J.A. Glazier, S.P. Gross, and I. Stavans, Phys. Rev. A 36, 306 (1987); J. Stavans, J.A. Glazier, Phys. Rev. Lett. 62, 1318 (1989)) suggest that in some cases their failure to observe von Neumann's predicted growth exponent (β_theor = 1; β_exp = 0.70 ± 0.10) may have been the result of such "fluid drainage onto the lower glass plate". Additional experiments which varied plate spacing gave different β exponents in a fashion consistent with this suggestion.
During preliminary long duration experiments (approximately 100 h) aboard Spacelab-J, a low-gravity test of froth coarsening has examined (1) power law scaling of von Neumann's law (beta values) in the appropriate diffusive limits; (2) new bubble lattice dynamics such as greater fluid wetting behavior on froth membranes in low gravity; and (3) explicit relations for the gravity dependence of the second moment (or disorder parameter) governing the geometric spread in cell-sidedness around the mean of perfect hexagonal filling. By reducing the gravity-induced distortion in lattice wall thickness, the diffusion-limited regime of bubble coarsening becomes available for performing critical tests of network dynamics.
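von Neumann's law itself, dA/dt = k(n - 6) for an n-sided cell, can be stated as a one-line sketch; the rate constant k and initial area below are arbitrary illustrative values.

```python
# Von Neumann's area law for 2D froths: dA/dt = k (n - 6), so hexagons are
# stable, pentagons shrink, and heptagons grow. k is an arbitrary constant here.
def area_after(n_sides, a0, k, t):
    """Area of an n-sided cell after time t under von Neumann's law."""
    return a0 + k * (n_sides - 6) * t

k, a0, t = 0.1, 1.0, 2.0
print(area_after(6, a0, k, t))  # 1.0: hexagon unchanged
print(area_after(5, a0, k, t))  # 0.8: pentagon shrinks
print(area_after(7, a0, k, t))  # 1.2: heptagon grows
```

The gravity effects described above enter through k: drainage makes the effective wall thickness, and hence the diffusion constant, non-uniform, which is what the low-gravity experiments were designed to remove.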
Zelovich, Tamar; Hansen, Thorsten; Liu, Zhen-Fei; ...
2017-03-02
A parameter-free version of the recently developed driven Liouville-von Neumann equation [T. Zelovich et al., J. Chem. Theory Comput. 10(8), 2927-2941 (2014)] for electronic transport calculations in molecular junctions is presented. The single driving rate, appearing as a fitting parameter in the original methodology, is replaced by a set of state-dependent broadening factors applied to the different single-particle lead levels. These broadening factors are extracted explicitly from the self-energy of the corresponding electronic reservoir and are fully transferable to any junction incorporating the same lead model. Furthermore, the performance of the method is demonstrated via tight-binding and extended Hückel calculations of simple junction models. Our analytic considerations and numerical results indicate that the developed methodology constitutes a rigorous framework for the design of "black-box" algorithms to simulate electron dynamics in open quantum systems out of equilibrium.
A novel method for the measurement of the von Neumann spike in detonating high explosives
NASA Astrophysics Data System (ADS)
Sollier, A.; Bouyer, V.; Hébert, P.; Doucet, M.
2016-06-01
We present detonation wave profiles measured in T2 (97 wt. % TATB) and TX1 (52 wt. % TATB and 45 wt. % HMX) high explosives. The experiments consisted of initiating a detonation wave in a 15 mm diameter cylinder of explosive using an explosive wire detonator and an explosive booster. Free surface velocity wave profiles were measured at the explosive/air interface using a Photon Doppler Velocimetry system. We demonstrate that a comparison of these free surface wave profiles with those measured at explosive/window interfaces in similar conditions allows one to bracket the von Neumann spike in a narrow range. For T2, our measurements show that the spike pressure lies between 35.9 and 40.1 GPa, whereas for TX1, it lies between 42.3 and 47.0 GPa. The numerical simulations performed in support of these measurements show that they can be used to calibrate reactive burn models and also to check the accuracy of the detonation products equation of state at low pressure.
Experimental Measurements of the Chemical Reaction Zone of Detonating Liquid Explosives
NASA Astrophysics Data System (ADS)
Bouyer, Viviane; Sheffield, Stephen A.; Dattelbaum, Dana M.; Gustavsen, Richard L.; Stahl, David B.; Doucet, Michel
2009-06-01
We have a joint project between CEA-DAM Le Ripault and Los Alamos National Laboratory (LANL) to study the chemical reaction zone in detonating high explosives using several different laser velocimetry techniques. The short temporal duration of the features (von Neumann spike and sonic locus) of the reaction zone makes these measurements difficult. Here, we report results obtained from using and PDV (photon Doppler velocimetry) methods to measure the particle velocity history at a detonating HE (nitromethane)/PMMA interface. Experiments done at CEA were high-explosive-plane-wave initiated and those at LANL were gas-gun-projectile initiated, with a detonation run of about 6 charge diameters in all experiments, in either glass or brass confinement. Excellent agreement of the interface particle velocity measurements at both laboratories was obtained even though the initiation systems and the velocimetry systems were different. Some differences were observed in the von Neumann spike height because of the approximately 2 nanosecond time resolution of the techniques; in some or all cases the spike top was truncated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelovich, Tamar; Hansen, Thorsten; Liu, Zhen-Fei
A parameter-free version of the recently developed driven Liouville-von Neumann equation [T. Zelovich et al., J. Chem. Theory Comput. 10(8), 2927-2941 (2014)] for electronic transport calculations in molecular junctions is presented. The single driving rate, appearing as a fitting parameter in the original methodology, is replaced by a set of state-dependent broadening factors applied to the different single-particle lead levels. These broadening factors are extracted explicitly from the self-energy of the corresponding electronic reservoir and are fully transferable to any junction incorporating the same lead model. Furthermore, the performance of the method is demonstrated via tight-binding and extended Hückel calculations of simple junction models. Our analytic considerations and numerical results indicate that the developed methodology constitutes a rigorous framework for the design of "black-box" algorithms to simulate electron dynamics in open quantum systems out of equilibrium.
The second law of thermodynamics under unitary evolution and external operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ikeda, Tatsuhiko N., E-mail: ikeda@cat.phys.s.u-tokyo.ac.jp; Physics Department, Boston University, Boston, MA 02215; Sakumichi, Naoyuki
The von Neumann entropy cannot represent the thermodynamic entropy of equilibrium pure states in isolated quantum systems. The diagonal entropy, which is the Shannon entropy in the energy eigenbasis at each instant of time, is a natural generalization of the von Neumann entropy and applicable to equilibrium pure states. We show that the diagonal entropy is consistent with the second law of thermodynamics upon arbitrary external unitary operations. In terms of the diagonal entropy, thermodynamic irreversibility follows from the facts that quantum trajectories under unitary evolution are restricted by the Hamiltonian dynamics and that the external operation is performed without reference to the microscopic state of the system.
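The distinction above between the von Neumann entropy and the diagonal entropy can be made concrete numerically. The following is a minimal sketch (my own illustration, not the authors' code): for a pure superposition of energy eigenstates, the von Neumann entropy vanishes while the diagonal entropy does not.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S_vN = -Tr(rho log rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

def diagonal_entropy(rho, H):
    """Shannon entropy of the populations in the energy eigenbasis of H."""
    _, V = np.linalg.eigh(H)
    p = np.real(np.diag(V.conj().T @ rho @ V))
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

# A pure state (S_vN = 0) can still carry nonzero diagonal entropy
H = np.array([[1.0, 0.0], [0.0, -1.0]])      # sigma_z Hamiltonian
psi = np.array([1.0, 1.0]) / np.sqrt(2)      # equal superposition of the two eigenstates
rho = np.outer(psi, psi.conj())

print(von_neumann_entropy(rho))   # ~0: the state is pure
print(diagonal_entropy(rho, H))   # ~ln 2: energy populations are (1/2, 1/2)
```

This is exactly the sense in which the diagonal entropy "generalizes" the von Neumann entropy to equilibrium pure states: it sees the spread over energy eigenstates that S_vN ignores.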
NASA Astrophysics Data System (ADS)
Królak, Ilona
We analyze a certain class of von Neumann algebras generated by self-adjoint elements satisfying general commutation relations.
On-chip phase-change photonic memory and computing
NASA Astrophysics Data System (ADS)
Cheng, Zengguang; Ríos, Carlos; Youngblood, Nathan; Wright, C. David; Pernice, Wolfram H. P.; Bhaskaran, Harish
2017-08-01
The use of photonics in computing is a topic of intense interest, driven by the need for ever-increasing speed along with reduced power consumption. In existing computing architectures, photonic data storage would dramatically improve performance by reducing the latencies associated with electrical memories. At the same time, the rise of `big data' and `deep learning' is driving the quest for non-von Neumann and brain-inspired computing paradigms. To address both aspects, we have demonstrated a non-volatile multi-level photonic memory that avoids the von Neumann bottleneck of the existing computing paradigm, and a photonic synapse resembling biological synapses for brain-inspired computing, both using phase-change materials (Ge2Sb2Te5).
Experimental Measurements of the Chemical Reaction Zone of Detonating Liquid Explosives
NASA Astrophysics Data System (ADS)
Bouyer, Viviane; Sheffield, Stephen A.; Dattelbaum, Dana M.; Gustavsen, Richard L.; Stahl, David B.; Doucet, Michel; Decaris, Lionel
2009-12-01
We have a joint project between CEA-DAM Le Ripault and Los Alamos National Laboratory (LANL) to study the chemical reaction zone in detonating high explosives using several different laser velocimetry techniques. The short temporal duration of the von Neumann spike and early part of the reaction zone makes these measurements difficult. Here, we report results obtained from detonation experiments using VISAR (velocity interferometer system for any reflector) and PDV (photon Doppler velocimetry) methods to measure the particle velocity history at a detonating nitromethane/PMMA interface. Experiments done at CEA were high-explosive-plane-wave initiated and those at LANL were gas-gun-projectile initiated, with a detonation run of about 6 charge diameters in all experiments. The experiments had either glass or brass confinement. Excellent agreement of the interface particle velocity measurements at both Laboratories was obtained even though the initiation methods and the velocimetry systems were somewhat different. Some differences were observed in the peak particle velocity because of the ~2 ns time resolution of the techniques -- in all cases the peak was lower than the expected von Neumann spike. This is thought to be because the measurements did not have high enough time resolution to resolve the spike.
NASA Astrophysics Data System (ADS)
Kadowaki, Tadashi
2018-02-01
We propose a method to interpolate dynamics of von Neumann and classical master equations with an arbitrary mixing parameter to investigate the thermal effects in quantum dynamics. The two dynamics are mixed by intervening to continuously modify their solutions, thus coupling them indirectly instead of directly introducing a coupling term. This maintains the quantum system in a pure state even after the introduction of thermal effects and obtains not only a density matrix but also a state vector representation. Further, we demonstrate that the dynamics of a two-level system can be rewritten as a set of standard differential equations, resulting in quantum dynamics that includes thermal relaxation. These equations are equivalent to the optical Bloch equations at the weak coupling and asymptotic limits, implying that the dynamics cause thermal effects naturally. Numerical simulations of ferromagnetic and frustrated systems support this idea. Finally, we use this method to study thermal effects in quantum annealing, revealing nontrivial performance improvements for a spin glass model over a certain range of annealing time. This result may enable us to optimize the annealing time of real annealing machines.
Complexion of forces in an anisotropic self-gravitating system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kandrup, H.E.
Chandrasekhar and von Neumann developed a completely stochastic formalism to analyze the complexion of forces acting upon a test star situated in an infinite, homogeneous distribution of field stars. This formalism is generalized here to allow for more realistic inhomogeneous and anisotropic systems. It is demonstrated that the forces acting upon a test star decompose "naturally" into the incoherent sum of a mean force associated with the average spatial inhomogeneity and a fluctuating force associated with stochastic deviations from these mean conditions. Moreover, as in the special case considered by Chandrasekhar and von Neumann, one can apparently associate the fluctuating forces with the effects of particularly proximate field stars, thereby motivating the "nearest neighbor" interpretation first introduced by Chandrasekhar.
All quantum observables in a hidden-variable model must commute simultaneously
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malley, James D.
Under a standard set of assumptions for a hidden-variable model for quantum events we show that all observables must commute simultaneously. This seems to be an ultimate statement about the inapplicability of the usual hidden-variable model for quantum events. And, despite Bell's complaint that a key condition of von Neumann's was quite unrealistic, we show that these conditions, under which von Neumann produced the first no-go proof, are entirely equivalent to those introduced by Bell and by Kochen and Specker. As these conditions are also equivalent to those under which the Bell-Clauser-Horne inequalities are derived, we see that the experimental violations of the inequalities demonstrate only that quantum observables do not commute.
NASA Astrophysics Data System (ADS)
Houdayer, Cyril; Isono, Yusuke
2016-12-01
We investigate the asymptotic structure of (possibly type III) crossed product von Neumann algebras M = B ⋊ Γ arising from arbitrary actions Γ ↷ B of bi-exact discrete groups (e.g. free groups) on amenable von Neumann algebras. We prove a spectral gap rigidity result for the central sequence algebra N′ ∩ M^ω of any nonamenable von Neumann subalgebra with normal expectation N ⊂ M. We use this result to show that for any strongly ergodic essentially free nonsingular action Γ ↷ (X, μ) of any bi-exact countable discrete group on a standard probability space, the corresponding group measure space factor L^∞(X) ⋊ Γ has no nontrivial central sequence. Using recent results of Boutonnet et al. (Local spectral gap in simple Lie groups and applications, 2015), we construct, for every 0 < λ ≤ 1, a type III_λ strongly ergodic essentially free nonsingular action F_∞ ↷ (X_λ, μ_λ) of the free group F_∞ on a standard probability space so that, by our main result, the corresponding group measure space type III_λ factor L^∞(X_λ, μ_λ) ⋊ F_∞ has no nontrivial central sequence. In particular, we obtain the first examples of group measure space type III factors with no nontrivial central sequence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Huanqiang; School of Physical Sciences, University of Queensland, Brisbane, Queensland 4072; Barthel, Thomas
We investigate boundary critical phenomena from a quantum-information perspective. Bipartite entanglement in the ground state of one-dimensional quantum systems is quantified using the Rényi entropy S_α, which includes the von Neumann entropy (α → 1) and the single-copy entanglement (α → ∞) as special cases. We identify the contribution of the boundaries to the Rényi entropy, and show that there is an entanglement loss along boundary renormalization group (RG) flows. This property, which is intimately related to the Affleck-Ludwig g theorem, is a consequence of majorization relations between the spectra of the reduced density matrix along the boundary RG flows. We also point out that the bulk contribution to the single-copy entanglement is half of that to the von Neumann entropy, whereas the boundary contribution is the same.
The Analytic Methods of Operations Research
1977-01-01
stock market behavior (Fama, 1970), but few other applications... 12. QUEUEING THEORY The study of congestion in service... Behavior," by J. von Neumann and O. Morgenstern, and an esoteric paperback by Charnes, Cooper, and Henderson on the optimal mixing of peanuts and... 2nd-order conditions, then X is also globally optimal. This enables one to use local exploration to lead to the global
Analyzing Von Neumann machines using decentralized symmetries
NASA Astrophysics Data System (ADS)
Fang, Jie
2013-10-01
The artificial intelligence method to e-business is defined not only by the study of fiber-optic cables, but also by the unproven need for vacuum tubes. Given the current status of virtual archetypes, theorists clearly desire the exploration of semaphores, which embodies the compelling principles of cryptoanalysis. We present an algorithm for probabilistic theory (Buck), which we use to disprove that write-back caches can be made decentralized, lossless, and reliable.
Complex absorbing potential based Lorentzian fitting scheme and time dependent quantum transport.
Xie, Hang; Kwok, Yanho; Jiang, Feng; Zheng, Xiao; Chen, GuanHua
2014-10-28
Based on the complex absorbing potential (CAP) method, a Lorentzian expansion scheme is developed to express the self-energy. The CAP-based Lorentzian expansion of self-energy is employed to solve efficiently the Liouville-von Neumann equation of one-electron density matrix. The resulting method is applicable for both tight-binding and first-principles models and is used to simulate the transient currents through graphene nanoribbons and a benzene molecule sandwiched between two carbon-atom chains.
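For reference, the Liouville-von Neumann equation that such schemes propagate is dρ/dt = −i[H, ρ] (with ħ = 1). A toy fixed-step RK4 integrator for a two-site tight-binding model (an illustrative sketch of the equation itself, not the CAP-based Lorentzian method of the paper) looks like:

```python
import numpy as np

def liouville_rhs(rho, H):
    """Right-hand side of the Liouville-von Neumann equation: drho/dt = -i [H, rho]."""
    return -1j * (H @ rho - rho @ H)

def propagate(rho, H, dt, steps):
    """Fourth-order Runge-Kutta integration of the density matrix."""
    for _ in range(steps):
        k1 = liouville_rhs(rho, H)
        k2 = liouville_rhs(rho + 0.5 * dt * k1, H)
        k3 = liouville_rhs(rho + 0.5 * dt * k2, H)
        k4 = liouville_rhs(rho + dt * k3, H)
        rho = rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

# Two-site tight-binding chain with hopping 1; the electron starts on site 0
H = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
rho0 = np.diag([1.0, 0.0]).astype(complex)

rho_t = propagate(rho0, H, dt=0.001, steps=int(np.pi / 2 / 0.001))
print(np.real(rho_t[1, 1]))   # ~1: population has hopped to site 1 near t = pi/2
```

Production schemes like the one above replace the closed commutator with open-system terms (complex absorbing potentials, self-energies), but the propagated object is the same one-electron density matrix.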
John von Neumann and Klaus Fuchs: an Unlikely Collaboration
NASA Astrophysics Data System (ADS)
Bernstein, Jeremy
2010-03-01
I discuss the origin of the idea of making a fusion (hydrogen) bomb and the physics involved in it, and then turn to the design proposed for one by the unlikely collaborators John von Neumann and Klaus Fuchs in a patent application they filed at Los Alamos in May 1946, which Fuchs passed on to the Russians in March 1948, and which, with substantial modifications, was tested on the island of Eberiru on the Eniwetok atoll in the South Pacific on May 8, 1951. This test showed that the fusion of deuterium and tritium nuclei could be ignited, but that the ignition would not propagate because the heat produced was rapidly radiated away. Meanwhile, Stanislaw Ulam and C.J. Everett had shown that Edward Teller's Classical Super could not work, and at the end of December 1950, Ulam had conceived the idea of super compression, using the energy of a fission bomb to compress the fusion fuel to such a high density that it would be opaque to the radiation produced. Once Teller understood this, he invented a greatly improved, new method of compression using radiation, which then became the heart of the Ulam-Teller bomb design, which was tested, also in the South Pacific, on November 1, 1952. The Russians have freely acknowledged that Fuchs gave them the fission bomb, but they have insisted that no one gave them the fusion bomb, which grew out of a design involving a fission bomb surrounded by alternating layers of fusion and fission fuels, and which they tested on November 22, 1955. Part of the irony of this story is that neither the American nor the Russian hydrogen-bomb programs made any use of the brilliant design that von Neumann and Fuchs had conceived as early as 1946, which could have changed the entire course of development of both programs.
Programmable hardware for reconfigurable computing systems
NASA Astrophysics Data System (ADS)
Smith, Stephen
1996-10-01
In 1945 the work of J. von Neumann and H. Goldstine created the principal architecture for electronic computation that has now lasted fifty years. Nevertheless, alternative architectures have been created that have computational capability, for special tasks, far beyond that feasible with von Neumann machines. The emergence of high capacity programmable logic devices has made the realization of these architectures practical. The original ENIAC and EDVAC machines were conceived to solve special mathematical problems that were far from today's concept of 'killer applications.' In a similar vein, programmable hardware computation is being used today to solve unique mathematical problems. Our programmable hardware activity is focused on the research and development of novel computational systems based upon the reconfigurability of our programmable logic devices. We explore our programmable logic architectures and their implications for programmable hardware. One programmable hardware board implementation is detailed.
Quantum Discord for d⊗2 Systems
Ma, Zhihao; Chen, Zhihua; Fanchini, Felipe Fernandes; Fei, Shao-Ming
2015-01-01
We present an analytical solution for classical correlation, defined in terms of linear entropy, in an arbitrary system when the second subsystem is measured. We show that the optimal measurements used in the maximization of the classical correlation in terms of linear entropy, when used to calculate the quantum discord in terms of von Neumann entropy, result in a tight upper bound for arbitrary systems. This bound agrees with all known analytical results about quantum discord in terms of von Neumann entropy and, when comparing it with the numerical results for 10^6 two-qubit random density matrices, we obtain an average deviation of order 10^−4. Furthermore, our results give a way to calculate the quantum discord for arbitrary n-qubit GHZ and W states evolving under the action of the amplitude damping noisy channel. PMID:26036771
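Quantum discord calculations like the one summarized above ultimately rest on von Neumann entropies of global and reduced states. As a minimal illustration (my own sketch under standard conventions, not the authors' method), one can compute S(ρ_A) of a Bell state by partial trace:

```python
import numpy as np

def partial_trace_B(rho):
    """Trace out the second qubit of a 4x4 two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)          # indices: (a, b, a', b')
    return np.einsum('abcb->ac', r)      # set b' = b and sum over it

def entropy(rho):
    """von Neumann entropy -Tr(rho log2 rho), in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

# Bell state |phi+> = (|00> + |11>)/sqrt(2): globally pure, locally maximally mixed
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_AB = np.outer(phi, phi)

print(entropy(rho_AB))                    # 0 bits: the global state is pure
print(entropy(partial_trace_B(rho_AB)))   # 1 bit: the reduced qubit is maximally mixed
```

The gap between the global and reduced entropies is what entanglement-based correlation measures, discord included, are built from.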
NASA Astrophysics Data System (ADS)
Blanchard, Philippe; Hellmich, Mario; Ługiewicz, Piotr; Olkiewicz, Robert
Quantum mechanics is the greatest revision of our conception of the character of the physical world since Newton. Consequently, David Hilbert was very interested in quantum mechanics. He and John von Neumann discussed it frequently during von Neumann's residence in Göttingen. In 1932 von Neumann published his book Mathematical Foundations of Quantum Mechanics, which in Hilbert's opinion was the first exposition of quantum mechanics in a mathematically rigorous way. The pioneers of quantum mechanics, Heisenberg and Dirac, neither had use for rigorous mathematics nor much interest in it. Conceptually, quantum theory as developed by Bohr and Heisenberg is based on the positivism of Mach, as it describes only observable quantities. It first emerged as a result of experimental data in the form of statistical observations of quantum noise, the basic concept of quantum probability.
The von Neumann model of measurement in quantum mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mello, Pier A.
2014-01-08
We describe how to obtain information on a quantum-mechanical system by coupling it to a probe and detecting some property of the latter, using a model introduced by von Neumann, which describes the interaction of the system proper with the probe in a dynamical way. We first discuss single measurements, where the system proper is coupled to one probe with arbitrary coupling strength. The goal is to obtain information on the system detecting the probe position. We find the reduced density operator of the system, and show how Lüders rule emerges as the limiting case of strong coupling. The von Neumann model is then generalized to two probes that interact successively with the system proper. Now we find information on the system by detecting the position-position and momentum-position correlations of the two probes. The so-called 'Wigner's formula' emerges in the strong-coupling limit, while 'Kirkwood's quasi-probability distribution' is found as the weak-coupling limit of the above formalism. We show that successive measurements can be used to develop a state-reconstruction scheme. Finally, we find a generalized transform of the state and the observables based on the notion of successive measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levay, Peter; Nagy, Szilvia; Pipek, Janos
An elementary formula for the von Neumann and Renyi entropies describing quantum correlations in two-fermionic systems having four single-particle states is presented. An interesting geometric structure of fermionic entanglement is revealed. A connection with the generalized Pauli principle is established.
Accuracy of topological entanglement entropy on finite cylinders.
Jiang, Hong-Chen; Singh, Rajiv R P; Balents, Leon
2013-09-06
Topological phases are unique states of matter that support nonlocal excitations which behave as particles with fractional statistics. A universal characterization of gapped topological phases is provided by the topological entanglement entropy (TEE). We study the finite-size corrections to the TEE by focusing on systems with a Z2 topologically ordered state, using the density-matrix renormalization group and perturbative series expansions. We find that extrapolations of the TEE based on the Rényi entropies with Rényi index n≥2 suffer from much larger finite-size corrections than extrapolations based on the von Neumann entropy. In particular, when the circumference of the cylinder is about ten times the correlation length, the TEE obtained using the von Neumann entropy has an error of order 10^−3, while for the Rényi entropies it can exceed 40%. We discuss the relevance of these findings to previous and future searches for topologically ordered phases, including quantum spin liquids.
Bellomo, Guido; Bosyk, Gustavo M; Holik, Federico; Zozor, Steeve
2017-11-07
Based on the problem of lossless quantum data compression, we present here an operational interpretation for the family of quantum Rényi entropies. To do so, we appeal to a very general quantum encoding scheme that satisfies a quantum version of the Kraft-McMillan inequality. In the standard situation, where one aims to minimize the usual average length of the quantum codewords, we recover the known result that the von Neumann entropy of the source bounds the average length of the optimal codes. Otherwise, we show that by invoking an exponential average length, corresponding to an exponential penalization of long codewords, the quantum Rényi entropies arise as the natural quantities relating the optimal encoding schemes to the source description, playing a role analogous to that of the von Neumann entropy.
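The α → 1 limit in which the Rényi entropy recovers the von Neumann (Shannon) entropy is easy to check numerically on a spectrum of eigenvalues. A small sketch of my own, using the natural logarithm:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy S_alpha = log(sum_i p_i^alpha) / (1 - alpha) of a spectrum p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(alpha - 1.0) < 1e-12:
        # alpha -> 1 limit: the Shannon / von Neumann entropy
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

spectrum = [0.5, 0.25, 0.25]              # eigenvalues of some density matrix
print(renyi_entropy(spectrum, 1.0))       # von Neumann value: 1.5 * ln 2
print(renyi_entropy(spectrum, 1.000001))  # approaches the von Neumann value
print(renyi_entropy(spectrum, 2.0))       # collision entropy, strictly smaller here
print(-np.log(max(spectrum)))             # alpha -> infinity: min-entropy
```

S_α is non-increasing in α, which is why different penalizations of long codewords single out different members of the family.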
Theory and Modeling of Liquid Explosive Detonation
NASA Astrophysics Data System (ADS)
Tarver, Craig M.; Urtiew, Paul A.
2010-10-01
The current understanding of the detonation reaction zones of liquid explosives is discussed in this article. The physical and chemical processes that precede and follow exothermic chemical reaction within the detonation reaction zone are discussed within the framework of the nonequilibrium Zeldovich-von Neumann-Doring (NEZND) theory of self-sustaining detonation. Nonequilibrium chemical and physical processes cause finite time duration induction zones before exothermic chemical energy release occurs. This separation between the leading shock wave front and the chemical energy release needed to sustain it results in shock wave amplification and the subsequent formation of complex three-dimensional cellular structures in all liquid detonation waves. To develop a practical Zeldovich-von Neumann-Doring (ZND) reactive flow model for liquid detonation, experimental data on reaction zone structure, confined failure diameter, unconfined failure diameter, and failure wave velocity in the Dremin-Trofimov test for detonating nitromethane are calculated using the ignition and growth reactive flow model.
Infinite index extensions of local nets and defects
NASA Astrophysics Data System (ADS)
Del Vecchio, Simone; Giorgetti, Luca
The subfactor theory provides a tool to analyze and construct extensions of Quantum Field Theories, once the latter are formulated as local nets of von Neumann algebras. We generalize some of the results of [62] to the case of extensions with infinite Jones index. This case naturally arises in physics, the canonical examples are given by global gauge theories with respect to a compact (non-finite) group of internal symmetries. Building on the works of Izumi-Longo-Popa [44] and Fidaleo-Isola [30], we consider generalized Q-systems (of intertwiners) for a semidiscrete inclusion of properly infinite von Neumann algebras, which generalize ordinary Q-systems introduced by Longo [58] to the infinite index case. We characterize inclusions which admit generalized Q-systems of intertwiners and define a braided product among the latter, hence we construct examples of QFTs with defects (phase boundaries) of infinite index, extending the family of boundaries in the grasp of [7].
Measurement-induced randomness and state-merging
NASA Astrophysics Data System (ADS)
Chakrabarty, Indranil; Deshpande, Abhishek; Chatterjee, Sourav
In this work we introduce randomness that is truly quantum mechanical in nature, arising from the act of measurement. For a composite classical system, the joint entropy quantifies the randomness present in the total system, and it equals the sum of the entropy of one subsystem and the conditional entropy of the other subsystem given the first. The same analogy carries over to the quantum setting by replacing the Shannon entropy with the von Neumann entropy. However, if we replace the conditional von Neumann entropy by the average conditional entropy due to measurement, we find that it differs from the joint entropy of the system. We call this difference Measurement Induced Randomness (MIR) and argue that it is unique to quantum mechanical systems, with no classical counterpart. In other words, the joint von Neumann entropy gives only the randomness that arises from the heterogeneity of the mixture, and we show that this is not the total randomness that can be generated in the composite system. We generalize this quantity to N-qubit systems and show that it reduces to quantum discord for two-qubit systems. Further, we show that it is exactly equal to the change in the cost of quantum state merging that arises because of the measurement. We argue that for quantum information processing tasks like state merging, the change in cost resulting from discarding prior information can also be viewed as randomness generated by measurement.
Robra, Bernt-Peter
2018-02-19
The Salomon-Neumann Medal 2017 of the German Society for Social Medicine and Prevention (DGSMP) was awarded to Bernt-Peter Robra, Institute for Social Medicine and Health Economics (ISMG) of the Otto von Guericke University Magdeburg. The person and scientific merits of Manfred Pflanz are honored, and topics of the Masterplan 2020 process that offer opportunities for developments in medicine and public health are highlighted. © Georg Thieme Verlag KG Stuttgart · New York.
Diagonal couplings of quantum Markov chains
NASA Astrophysics Data System (ADS)
Kümmerer, Burkhard; Schwieger, Kay
2016-05-01
In this paper we extend the coupling method from classical probability theory to quantum Markov chains on atomic von Neumann algebras. In particular, we establish a coupling inequality, which allows us to estimate convergence rates by analyzing couplings. For a given tensor dilation we construct a self-coupling of a Markov operator. It turns out that the coupling is a dual version of the extended dual transition operator studied by Gohm et al. We deduce that this coupling is successful if and only if the dilation is asymptotically complete.
Weather Forecasting From Woolly Art to Solid Science
NASA Astrophysics Data System (ADS)
Lynch, P.
THE PREHISTORY OF SCIENTIFIC FORECASTING: Vilhelm Bjerknes; Lewis Fry Richardson; Richardson's Forecast. THE BEGINNING OF MODERN NUMERICAL WEATHER PREDICTION: John von Neumann and the Meteorology Project; The ENIAC Integrations; The Barotropic Model; Primitive Equation Models. NUMERICAL WEATHER PREDICTION TODAY: ECMWF; HIRLAM. CONCLUSIONS. REFERENCES.
PIMS: Memristor-Based Processing-in-Memory-and-Storage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Jeanine
Continued progress in computing has augmented the quest for higher performance with a new quest for higher energy efficiency. This has led to the re-emergence of Processing-In-Memory (PIM) architectures that offer higher density and performance with some boost in energy efficiency. Past PIM work either integrated a standard CPU with a conventional DRAM to improve the CPU-memory link, or used a bit-level processor with Single Instruction Multiple Data (SIMD) control, but neither matched the energy consumption of the memory to the computation. We originally proposed to develop a new architecture derived from PIM that more effectively addressed energy efficiency for high performance scientific, data analytics, and neuromorphic applications. We also originally planned to implement a von Neumann architecture with arithmetic/logic units (ALUs) that matched the power consumption of an advanced storage array to maximize energy efficiency. Implementing this architecture in storage was our original idea, since by augmenting storage (instead of memory), the system could address both in-memory computation and applications that accessed larger data sets directly from storage, hence Processing-in-Memory-and-Storage (PIMS). However, as our research matured, we discovered several things that changed our original direction, the most important being that a PIM that implements a standard von Neumann-type architecture results in significant energy efficiency improvement, but only about a O(10) performance improvement. In addition to this, the emergence of new memory technologies moved us to proposing a non-von Neumann architecture, called Superstrider, implemented not in storage, but in a new DRAM technology called High Bandwidth Memory (HBM). HBM is a stacked DRAM technology that includes a logic layer where an architecture such as Superstrider could potentially be implemented.
Cruikshank, Benjamin; Jacobs, Kurt
2017-07-21
von Neumann's classic "multiplexing" method is unique in achieving high-threshold fault-tolerant classical computation (FTCC), but has several significant barriers to implementation: (i) the extremely complex circuits required by randomized connections, (ii) the difficulty of calculating its performance in practical regimes of both code size and logical error rate, and (iii) the (perceived) need for large code sizes. Here we present numerical results indicating that the third assertion is false, and introduce a novel scheme that eliminates the two remaining problems while retaining a threshold very close to von Neumann's ideal of 1/6. We present a simple, highly ordered wiring structure that vastly reduces the circuit complexity, demonstrates that randomization is unnecessary, and provides a feasible method to calculate the performance. This in turn allows us to show that the scheme requires only moderate code sizes, vastly outperforms concatenation schemes, and under a standard error model a unitary implementation realizes universal FTCC with an accuracy threshold of p<5.5%, in which p is the error probability for 3-qubit gates. FTCC is a key component in realizing measurement-free protocols for quantum information processing. In view of this, we use our scheme to show that all-unitary quantum circuits can reproduce any measurement-based feedback process in which the asymptotic error probabilities for the measurement and feedback are (32/63)p≈0.51p and 1.51p, respectively.
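von Neumann's multiplexing idea can be caricatured by iterating the error-probability map of a noisy majority-of-3 restoring organ. The map below is a simplified sketch of my own (not the bundle-of-wires scheme of the paper): with gate failure rate eps, an input error probability p maps to eps + (1 − 2·eps)(3p² − 2p³), and below threshold the iteration contracts to a small fixed point.

```python
def majority_map(p, eps):
    """One round of noisy majority-of-3 restoration on error probability p."""
    clean = 3 * p**2 - 2 * p**3         # majority of three inputs, each wrong with prob. p
    return eps + (1 - 2 * eps) * clean  # the gate itself flips its output with prob. eps

def restored_error(p0, eps, rounds=50):
    """Error probability after repeatedly applying the restoring organ."""
    p = p0
    for _ in range(rounds):
        p = majority_map(p, eps)
    return p

print(restored_error(0.10, eps=0.005))  # below threshold: settles near a small fixed point
print(restored_error(0.10, eps=0.20))   # above threshold: drifts toward 1/2 (pure noise)
```

The qualitative picture, a sharp threshold in the gate error rate separating suppressible from unsuppressible errors, is the classical ancestor of the fault-tolerance thresholds discussed in the abstract.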
NASA Astrophysics Data System (ADS)
Xie, Hang; Jiang, Feng; Tian, Heng; Zheng, Xiao; Kwok, Yanho; Chen, Shuguang; Yam, ChiYung; Yan, YiJing; Chen, Guanhua
2012-07-01
Based on our hierarchical equations of motion for time-dependent quantum transport [X. Zheng, G. H. Chen, Y. Mo, S. K. Koo, H. Tian, C. Y. Yam, and Y. J. Yan, J. Chem. Phys. 133, 114101 (2010), 10.1063/1.3475566], we develop an efficient and accurate numerical algorithm to solve the Liouville-von Neumann equation. We solve the real-time evolution of the reduced single-electron density matrix at the tight-binding level. Calculations are carried out to simulate the transient current through a linear chain of atoms, with each represented by a single orbital. The self-energy matrix is expanded in terms of multiple Lorentzian functions, and the Fermi distribution function is evaluated via the Padé spectrum decomposition. This Lorentzian-Padé decomposition scheme is employed to simulate the transient current. With sufficient Lorentzian functions used to fit the self-energy matrices, we show that the lead spectral function and the dynamic response can be treated accurately. Compared to conventional master equation approaches, our method is much more efficient, as the computational time scales cubically with the system size and linearly with the simulation time. As a result, simulations of the transient currents through systems containing up to one hundred atoms have been carried out. As density functional theory is also an effective one-particle theory, the Lorentzian-Padé decomposition scheme developed here can be generalized for first-principles simulation of realistic systems.
Driven-dissipative quantum Monte Carlo method for open quantum systems
NASA Astrophysics Data System (ADS)
Nagy, Alexandra; Savona, Vincenzo
2018-05-01
We develop a real-time full configuration-interaction quantum Monte Carlo approach to model driven-dissipative open quantum systems with Markovian system-bath coupling. The method enables stochastic sampling of the Liouville-von Neumann time evolution of the density matrix thanks to a massively parallel algorithm, thus providing estimates of observables on the nonequilibrium steady state. We present the underlying theory and introduce an initiator technique and importance sampling to reduce the statistical error. Finally, we demonstrate the efficiency of our approach by applying it to the driven-dissipative two-dimensional X Y Z spin-1/2 model on a lattice.
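For orientation, the Liouville-von Neumann evolution being sampled has the Lindblad form dρ/dt = -i[H, ρ] + LρL† − ½{L†L, ρ}. A deterministic toy version for a single driven, decaying spin (a stand-in for the lattice model, with illustrative parameters; this is not the Monte Carlo method itself) finds the nonequilibrium steady state as the null vector of the vectorized Liouvillian:

```python
import numpy as np

# Driven, decaying two-level system: H = (Omega/2) sigma_x, jump operator
# L = sqrt(gamma) sigma_minus. Parameters are illustrative.
Omega, gamma = 1.0, 0.5
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma_minus = |g><e|
H = 0.5 * Omega * sx
L = np.sqrt(gamma) * sm
I2 = np.eye(2)

# Vectorized Liouvillian on vec(rho) (column stacking): vec(A rho B) = (B^T kron A) vec(rho).
LdL = L.conj().T @ L
Liou = (-1j * (np.kron(I2, H) - np.kron(H.T, I2))
        + np.kron(L.conj(), L)
        - 0.5 * np.kron(I2, LdL)
        - 0.5 * np.kron(LdL.T, I2))

# Steady state: the null eigenvector of the Liouvillian, renormalized to unit trace.
w, v = np.linalg.eig(Liou)
rho_ss = v[:, np.argmin(np.abs(w))].reshape(2, 2, order='F')
rho_ss /= np.trace(rho_ss)
print(np.real(np.diag(rho_ss)))  # stationary populations
```

The Monte Carlo method in the abstract stochastically samples exactly this fixed point in spaces far too large for dense linear algebra.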
Ergodic theorem, ergodic theory, and statistical mechanics
Moore, Calvin C.
2015-01-01
This perspective highlights the mean ergodic theorem established by John von Neumann and the pointwise ergodic theorem established by George Birkhoff, proofs of which were published nearly simultaneously in PNAS in 1931 and 1932. These theorems were of great significance both in mathematics and in statistical mechanics. In statistical mechanics they provided a key insight into a 60-y-old fundamental problem of the subject—namely, the rationale for the hypothesis that time averages can be set equal to phase averages. The evolution of this problem is traced from the origins of statistical mechanics and Boltzmann's ergodic hypothesis to the Ehrenfests' quasi-ergodic hypothesis, and then to the ergodic theorems. We discuss communications between von Neumann and Birkhoff in the Fall of 1931 leading up to the publication of these papers and related issues of priority. These ergodic theorems initiated a new field of mathematical research called ergodic theory that has thrived ever since, and we discuss some recent developments in ergodic theory that are relevant for statistical mechanics. PMID:25691697
Weak values, 'negative probability', and the uncertainty principle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokolovski, D.
2007-10-15
A quantum transition can be seen as a result of interference between various pathways (e.g., Feynman paths), which can be labeled by a variable f. An attempt to determine the value of f without destroying the coherence between the pathways produces a weak value of f. We show f to be an average obtained with an amplitude distribution which can, in general, take negative values, which, in accordance with the uncertainty principle, need not contain information about the actual range of f which contributes to the transition. It is also demonstrated that the moments of such alternating distributions have a number of unusual properties which may lead to a misinterpretation of the weak-measurement results. We provide a detailed analysis of weak measurements with and without post-selection. Examples include the double-slit diffraction experiment, weak von Neumann and von Neumann-like measurements, traversal time for an elastic collision, phase time, and local angular momentum.
Entanglement entropy between virtual and real excitations in quantum electrodynamics
NASA Astrophysics Data System (ADS)
Ardenghi, Juan Sebastián
2018-05-01
The aim of this work is to introduce the entanglement entropy of real and virtual excitations of fermion and photon fields. By rewriting the generating functional of quantum electrodynamics theory as an inner product between quantum operators, it is possible to obtain quantum density operators representing the propagation of real and virtual particles. These operators are partial traces, where the degrees of freedom traced out are unobserved excitations. Then the von Neumann definition of entropy can be applied to these quantum operators and in particular, for the partial traces taken over by the internal or external degrees of freedom. A universal behavior is obtained for the entanglement entropy for different quantum fields at zeroth order in the coupling constant. In order to obtain numerical results at different orders in the perturbation expansion, the Bloch-Nordsieck model is considered, where it is shown that for some particular values of the electric charge, the von Neumann entropy increases or decreases with respect to the noninteracting case.
Generalized Entanglement Entropy and Holography
NASA Astrophysics Data System (ADS)
Obregón, O.
2018-04-01
A nonextensive statistical mechanics entropy that depends only on the probability distribution is proposed in the framework of superstatistics. It is based on a Γ(χ²) distribution that depends on β and also on p_l. The corresponding modified von Neumann entropy is constructed; it is shown that it can also be obtained from a generalized replica trick. We address the question of whether the generalized entanglement entropy can play a role in the gauge/gravity duality. We pay attention to 2d CFTs and their gravity duals. The correction terms to the von Neumann entropy turn out to be more relevant than the usual UV ones (for c = 1) and also than those due to the area-dependent AdS₃ entropy, which are comparable to the UV ones. The correction terms due to the new entropy would therefore modify the Ryu-Takayanagi identification between the CFT entanglement entropy and the AdS entropy in a different manner than the UV ones or the corrections to the area-dependent AdS₃ entropy.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
1987-01-01
The validity of the modified equation stability analysis introduced by Warming and Hyett was investigated. It is shown that the procedure used in the derivation of the modified equation is flawed and generally leads to invalid results. Moreover, the interpretation of the modified equation as the exact partial differential equation solved by a finite-difference method generally cannot be justified even if spatial periodicity is assumed. For a two-level scheme, due to a series of mathematical quirks, the connection between the modified equation approach and the von Neumann method established by Warming and Hyett turns out to be correct despite its questionable original derivation. However, this connection is only partially valid for a scheme involving more than two time levels. In the von Neumann analysis, the complex error multiplication factor associated with a wave number generally has (L-1) roots for an L-level scheme. It is shown that the modified equation provides information about only one of these roots.
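To make the amplification-factor language concrete, here is a standard von Neumann analysis of a two-level scheme (first-order upwind for the linear advection equation; the scheme and parameters are a textbook example, not taken from the report). For a two-level scheme, g(θ) has the single root used below; a three-level scheme such as leapfrog would have two.

```python
import numpy as np

# Von Neumann stability of first-order upwind for u_t + a u_x = 0:
# substituting u_j^n = g^n exp(i j theta) gives the amplification factor
#   g(theta) = 1 - nu * (1 - exp(-i theta)),   nu = a*dt/dx (Courant number).
theta = np.linspace(0, 2 * np.pi, 1001)

def max_amplification(nu):
    g = 1 - nu * (1 - np.exp(-1j * theta))
    return np.abs(g).max()

print(max_amplification(0.8))  # <= 1 for 0 <= nu <= 1: stable
print(max_amplification(1.2))  # > 1: unstable
```

Plotting |g(θ)| over θ for several ν recovers the familiar stability condition 0 ≤ ν ≤ 1 for this scheme.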
A Planar Calculus for Infinite Index Subfactors
NASA Astrophysics Data System (ADS)
Penneys, David
2013-05-01
We develop an analog of Jones' planar calculus for II 1-factor bimodules with arbitrary left and right von Neumann dimension. We generalize to bimodules Burns' results on rotations and extremality for infinite index subfactors. These results are obtained without Jones' basic construction and the resulting Jones projections.
Towards a wave theory of charged beam transport: A collection of thoughts
NASA Technical Reports Server (NTRS)
Dattoli, G.; Mari, C.; Torre, A.
1992-01-01
We formulate in a rigorous way a wave theory of charged beam linear transport. The Wigner distribution function is introduced and provides the link with classical mechanics. Finally, the von Neumann equation is shown to coincide with the Liouville equation for the nonlinear transport.
Two Fundamental Issues in Multiprocessing.
1987-10-01
Structural Model of a Multiprocessor; Figure 5: Operational Model of a Multiprocessor; Figure 6: The von Neumann Processor (from Gajski and Peir [20]). Reference: Gajski, D. D. & Peir, J-K. "Essential Issues in Multiprocessor Systems." Computer 18, 6 (June 1985), 9-27.
NASA Astrophysics Data System (ADS)
Romanchuk, V. A.; Lukashenko, V. V.
2018-05-01
A technique for operating a control system based on a computing cluster of neurocomputers is proposed. Particular attention is paid to the method of choosing the structure of the computing cluster, since existing methods are ineffective for the specialized hardware base: neurocomputers, which are highly parallel computing devices with an architecture different from the von Neumann architecture. A developed algorithm for choosing the computational structure of a cloud cluster is described, starting from the direction of data transfer in the flow-control graph of the program and its adjacency matrix.
The Martians of Science - Five Physicists Who Changed the Twentieth Century
NASA Astrophysics Data System (ADS)
Hargittai, István
2006-07-01
If science has the equivalent of a Bloomsbury group, it is the five men born at the turn of the twentieth century in Budapest: Theodore von Kármán, Leo Szilard, Eugene Wigner, John von Neumann, and Edward Teller. From Hungary to Germany to the United States, they remained friends and continued to work together and influence each other throughout their lives. As a result, their work was integral to some of the most important scientific and political developments of the twentieth century. They were an extraordinary group of talents: Wigner won a Nobel Prize in theoretical physics; Szilard was the first to see that a chain reaction based on neutrons was possible, initiated the Manhattan Project, but left physics to try to restrict nuclear arms; von Neumann could solve difficult problems in his head and developed the modern computer for more complex problems; von Kármán became the first director of NASA's Jet Propulsion Laboratory, providing the scientific basis for the U.S. Air Force; and Teller was the father of the hydrogen bomb, whose name is now synonymous with the controversial "Star Wars" initiative of the 1980s. Each was fiercely opinionated, politically active, and fought against all forms of totalitarianism. István Hargittai, as a young Hungarian physical chemist, was able to get to know some of these great men in their later years, and the depth of information and human interest in The Martians of Science is the result of his personal relationships with the subjects, their families, and their contemporaries.
Accessible Information Without Disturbing Partially Known Quantum States on a von Neumann Algebra
NASA Astrophysics Data System (ADS)
Kuramochi, Yui
2018-04-01
This paper addresses the problem of how much information we can extract without disturbing a statistical experiment, which is a family of partially known normal states on a von Neumann algebra. We define the classical part of a statistical experiment as the restriction of the equivalent minimal sufficient statistical experiment to the center of the outcome space, which, in the case of density operators on a Hilbert space, corresponds to the classical probability distributions appearing in the maximal decomposition by Koashi and Imoto (Phys. Rev. A 66, 022318 (2002)). We show that we can access, by a Schwarz or completely positive channel, at most the classical part of a statistical experiment if we do not disturb the states. We apply this result to the broadcasting problem of a statistical experiment. We also show that the classical part of the direct product of statistical experiments is the direct product of the classical parts of the statistical experiments. The proof of the latter result is based on the theorem that the direct product of minimal sufficient statistical experiments is also minimal sufficient.
Entropy for quantum pure states and quantum H theorem
NASA Astrophysics Data System (ADS)
Han, Xizhi; Wu, Biao
2015-06-01
We construct a complete set of Wannier functions that are localized at both given positions and momenta. This allows us to introduce the quantum phase space, onto which a quantum pure state can be mapped unitarily. Using its probability distribution in quantum phase space, we define an entropy for a quantum pure state. We prove an inequality regarding the long-time behavior of our entropy's fluctuation. For a typical initial state, this inequality indicates that our entropy can relax dynamically to a maximized value and stay there most of the time with small fluctuations. This result echoes the quantum H theorem proved by von Neumann [Zeitschrift für Physik 57, 30 (1929), 10.1007/BF01339852]. Our entropy is different from the standard von Neumann entropy, which is always zero for quantum pure states. According to our definition, a system always has bigger entropy than its subsystem even when the system is described by a pure state. As the construction of the Wannier basis can be implemented numerically, the dynamical evolution of our entropy is illustrated with an example.
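The contrast with the standard von Neumann entropy can be checked directly: S(ρ) = -Tr ρ ln ρ vanishes for every pure state, yet a subsystem of an entangled pure state has positive entropy. A small sketch with a Bell state:

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho ln rho), computed from the eigenvalues of rho.
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

# A pure two-qubit Bell state: the global von Neumann entropy is zero ...
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
S_total = von_neumann_entropy(rho)
print(S_total)  # ~0

# ... while the reduced state of one qubit is maximally mixed, S = ln 2.
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
S_A = von_neumann_entropy(rho_A)
print(S_A)  # ~ln 2 ≈ 0.693
```

This is exactly the "subsystem has more entropy than the whole" situation the phase-space entropy of the abstract is designed to avoid.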
Laguerre-polynomial-weighted squeezed vacuum: generation and its properties of entanglement
NASA Astrophysics Data System (ADS)
Ye, Wei; Zhang, Kuizheng; Zhang, Haoliang; Xu, Xuexiang; Hu, Liyun
2018-02-01
We theoretically prepare a kind of two-mode entangled non-Gaussian state generated by applying quantum catalysis and a parametric amplifier to the two-mode squeezed vacuum state. We then investigate the entanglement properties by examining the von Neumann entropy, EPR correlation, squeezing effect, and the fidelity of teleportation. It is shown that only the von Neumann entropy can be enhanced by both single- and two-mode catalysis in a small squeezing region, while the other properties can be enhanced only by two-mode catalysis, including symmetrical and asymmetrical cases. A comparison among these properties shows that the squeezing and the EPR correlation definitely lead to the improvement of both the entanglement and the fidelity, and the region of enhanced fidelity can be seen as a sub-region of the enhanced entanglement, which indicates that the entanglement is not always beneficial for the fidelity. In addition, the effect of photon loss after catalysis on the fidelity is considered, and symmetrical two-photon catalysis may behave better against decoherence than the symmetrical single-photon case in a certain region.
Spatial patterns and scale freedom in Prisoner's Dilemma cellular automata with Pavlovian strategies
NASA Astrophysics Data System (ADS)
Fort, H.; Viola, S.
2005-01-01
A cellular automaton in which cells represent agents playing the Prisoner's Dilemma (PD) game following the simple 'win-stay, lose-shift' strategy is studied. Individuals with binary behaviour, such that they can either cooperate (C) or defect (D), play repeatedly with their neighbours (von Neumann's and Moore's neighbourhoods). Their utilities in each round of the game are given by a rescaled pay-off matrix described by a single parameter τ, which measures the ratio of temptation to defect to reward for cooperation. Depending on the region of the parameter space τ, the system self-organizes, after a transient, into dynamical equilibrium states characterized by different definite fractions c̄_∞ of C agents (two states for the von Neumann neighbourhood and four for the Moore neighbourhood). For some ranges of τ the cluster size distributions, the power spectra P(f) and the perimeter-area curves follow power law scalings. Percolation below threshold is also found for D agent clusters. We also analyse the asynchronous dynamics version of this model and compare results.
QmeQ 1.0: An open-source Python package for calculations of transport through quantum dot devices
NASA Astrophysics Data System (ADS)
Kiršanskas, Gediminas; Pedersen, Jonas Nyvold; Karlström, Olov; Leijnse, Martin; Wacker, Andreas
2017-12-01
QmeQ is an open-source Python package for numerical modeling of transport through quantum dot devices with strong electron-electron interactions using various approximate master equation approaches. The package provides a framework for calculating stationary particle or energy currents driven by differences in chemical potentials or temperatures between the leads which are tunnel coupled to the quantum dots. The electronic structures of the quantum dots are described by their single-particle states and the Coulomb matrix elements between the states. When transport is treated perturbatively to lowest order in the tunneling couplings, the possible approaches are Pauli (classical), first-order Redfield, and first-order von Neumann master equations, and a particular form of the Lindblad equation. When all processes involving two-particle excitations in the leads are of interest, the second-order von Neumann approach can be applied. All these approaches are implemented in QmeQ. We here give an overview of the basic structure of the package, give examples of transport calculations, and outline the range of applicability of the different approximate approaches.
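The simplest case such packages handle, a single spinless level tunnel-coupled to two leads, can be solved by hand with a Pauli master equation. The sketch below uses made-up parameters and generic code, not QmeQ's API:

```python
import numpy as np

# Pauli (classical) master equation for one spinless level between two leads.
# Rates in: Gamma * f(eps); rates out: Gamma * (1 - f).  (k_B = e = hbar = 1.)
def fermi(e, mu, T):
    return 1.0 / (np.exp((e - mu) / T) + 1.0)

eps, T = 0.0, 0.1            # level energy, temperature (illustrative)
GL, GR = 0.05, 0.05          # lead couplings
muL, muR = 0.2, -0.2         # bias window around the level

fL, fR = fermi(eps, muL, T), fermi(eps, muR, T)
rate_in = GL * fL + GR * fR
rate_out = GL * (1 - fL) + GR * (1 - fR)

# Stationary occupation from balancing in- and out-rates:
p1 = rate_in / (rate_in + rate_out)

# Stationary current through the left barrier; algebraically this reduces to
# the familiar I = GL*GR*(fL - fR)/(GL + GR).
I = GL * (fL * (1 - p1) - (1 - fL) * p1)
print(p1, I)
```

The Redfield, first- and second-order von Neumann, and Lindblad approaches named in the abstract refine exactly this picture by keeping coherences and higher-order tunneling processes that the Pauli equation discards.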
Atomic switch: atom/ion movement controlled devices for beyond von-neumann computers.
Hasegawa, Tsuyoshi; Terabe, Kazuya; Tsuruoka, Tohru; Aono, Masakazu
2012-01-10
An atomic switch is a nanoionic device that controls the diffusion of metal ions/atoms and their reduction/oxidation processes in the switching operation to form/annihilate a conductive path. Since metal atoms can provide a highly conductive channel even if their cluster size is in the nanometer scale, atomic switches may enable downscaling to smaller than the 11 nm technology node, which is a great challenge for semiconductor devices. Atomic switches also possess novel characteristics, such as high on/off ratios, very low power consumption and non-volatility. The unique operating mechanisms of these devices have enabled the development of various types of atomic switch, such as gap-type and gapless-type two-terminal atomic switches and three-terminal atomic switches. Novel functions, such as selective volatile/nonvolatile, synaptic, memristive, and photo-assisted operations have been demonstrated. Such atomic switch characteristics can not only improve the performance of present-day electronic systems, but also enable development of new types of electronic systems, such as beyond-von Neumann computers. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The Photon Shell Game and the Quantum von Neumann Architecture with Superconducting Circuits
NASA Astrophysics Data System (ADS)
Mariantoni, Matteo
2012-02-01
Superconducting quantum circuits have made significant advances over the past decade, allowing more complex and integrated circuits that perform with good fidelity. We have recently implemented a machine comprising seven quantum channels, with three superconducting resonators, two phase qubits, and two zeroing registers. I will explain the design and operation of this machine, first showing how a single microwave photon |1> can be prepared in one resonator and coherently transferred between the three resonators. I will also show how more exotic states such as double photon states |2> and superposition states |0> + |1> can be shuffled among the resonators as well [1]. I will then demonstrate how this machine can be used as the quantum-mechanical analog of the von Neumann computer architecture, which for a classical computer comprises a central processing unit and a memory holding both instructions and data. The quantum version comprises a quantum central processing unit (quCPU) that exchanges data with a quantum random-access memory (quRAM) integrated on one chip, with instructions stored on a classical computer. I will also present a proof-of-concept demonstration of a code that involves all seven quantum elements: (1) preparing an entangled state in the quCPU, (2) writing it to the quRAM, (3) preparing a second state in the quCPU, (4) zeroing it, and (5) reading out the first state stored in the quRAM [2]. Finally, I will demonstrate that the quantum von Neumann machine provides one unit cell of a two-dimensional qubit-resonator array that can be used for surface code quantum computing. This will allow the realization of a scalable, fault-tolerant quantum processor with the most forgiving error rates to date. [1] M. Mariantoni et al., Nature Physics 7, 287-293 (2011). [2] M. Mariantoni et al., Science 334, 61-65 (2011).
Non-Markovian optimal sideband cooling
NASA Astrophysics Data System (ADS)
Triana, Johan F.; Pachon, Leonardo A.
2018-04-01
Optimal control theory is applied to sideband cooling of nano-mechanical resonators. The formulation described here makes use of exact results derived by means of the path-integral approach of quantum dynamics, so that no approximation is invoked. It is demonstrated that the intricate interplay between time-dependent fields and a structured thermal bath may improve the results of sideband cooling by an order of magnitude. Cooling is quantified by means of the mean number of phonons of the mechanical modes as well as by the von Neumann entropy. Potential extension to non-linear systems, by means of semiclassical methods, is briefly discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhu, R.; Usha Devi, A. R.; Inspire Institute Inc., McLean, Virginia 22101
2007-10-15
We employ conditional Tsallis q entropies to study the separability of symmetric one parameter W and GHZ multiqubit mixed states. The strongest limitation on separability is realized in the limit q → ∞, and is found to be much superior to the condition obtained using the von Neumann conditional entropy (the q = 1 case). Except for the example of two qubit and three qubit symmetric states of the GHZ family, the q-conditional entropy method leads to sufficient, but not necessary, conditions on separability.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2005-01-01
As part of the continuous development of the space-time conservation element and solution element (CE-SE) method, a set of so-called "Courant number insensitive schemes" has recently been proposed. The key advantage of these new schemes is that the numerical dissipation associated with them generally does not increase as the Courant number decreases. As such, they can be applied to problems with large Courant number disparities (such as what commonly occurs in Navier-Stokes problems) without incurring excessive numerical dissipation.
Extracting joint weak values with local, single-particle measurements.
Resch, K J; Steinberg, A M
2004-04-02
Weak measurement is a new technique which allows one to describe the evolution of postselected quantum systems. It appears to be useful for resolving a variety of thorny quantum paradoxes, particularly when used to study properties of pairs of particles. Unfortunately, such nonlocal or joint observables often prove difficult to measure directly in practice (for instance, in optics, a common testing ground for this technique, strong photon-photon interactions would be needed to implement an appropriate von Neumann interaction). Here we derive a general, experimentally feasible, method for extracting these joint weak values from correlations between single-particle observables.
Entropy-driven phase transitions of entanglement
NASA Astrophysics Data System (ADS)
Facchi, Paolo; Florio, Giuseppe; Parisi, Giorgio; Pascazio, Saverio; Yuasa, Kazuya
2013-05-01
We study the behavior of bipartite entanglement at fixed von Neumann entropy. We look at the distribution of the entanglement spectrum, that is, the eigenvalues of the reduced density matrix of a quantum system in a pure state. We report the presence of two continuous phase transitions, characterized by different entanglement spectra, which are deformations of classical eigenvalue distributions.
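The object under study, the entanglement spectrum at a bipartite cut, is easy to generate numerically. A minimal sketch for a Haar-random pure state (illustrative dimensions; this samples the unconstrained ensemble, not the fixed-entropy ensemble of the paper):

```python
import numpy as np
rng = np.random.default_rng(1)

# Entanglement spectrum of a random pure state on C^dA (x) C^dB:
# the eigenvalues of the reduced density matrix rho_A = Tr_B |psi><psi|.
dA, dB = 8, 64
psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
psi /= np.linalg.norm(psi)

rho_A = psi @ psi.conj().T                 # partial trace over B
spec = np.sort(np.linalg.eigvalsh(rho_A))[::-1]
S = -(spec * np.log(spec)).sum()           # von Neumann entropy of the cut

print(spec.sum())  # 1.0: the spectrum is a probability distribution
print(S)           # typically close to Page's average ln(dA) - dA/(2*dB)
```

Conditioning such samples on a fixed value of S and watching the shape of `spec` change is, in essence, the setting in which the abstract's two phase transitions appear.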
ERIC Educational Resources Information Center
von Davier, Matthias
2018-01-01
This article critically reviews how diagnostic models have been conceptualized and how they compare to other approaches used in educational measurement. In particular, certain assumptions that have been taken for granted and used as defining characteristics of diagnostic models are reviewed and it is questioned whether these assumptions are the…
Schwarzschild, Martin (1912-97)
NASA Astrophysics Data System (ADS)
Murdin, P.
2000-11-01
Astrophysicist, born in Potsdam, Germany, the son of Karl Schwarzschild, left Germany, became professor at Princeton University. Working with John von Neumann, Schwarzschild used the powers of the newly developed electronic digital computers to work on the theory of stellar structure and evolution. He uncovered phenomena in red giant stars, including how they evolve off the main sequence in the H...
On the Restricted Toda and c-KdV Flows of Neumann Type
NASA Astrophysics Data System (ADS)
Zhou, RuGuang; Qiao, ZhiJun
2000-09-01
It is proven that on a symplectic submanifold the restricted c-KdV flow is just the interpolating Hamiltonian flow of an invariant of the restricted Toda flow, which is an integrable symplectic map of Neumann type. They share the common Lax matrix, dynamical r-matrix and system of involutive conserved integrals. Furthermore, the procedure of separation of variables is considered for the restricted c-KdV flow of Neumann type. The project was supported by the Chinese National Basic Research Project "Nonlinear Science" and the Doctoral Programme Foundation of Institution of High Education of China. The first author also thanks the National Natural Science Foundation of China (19801031) and the "Qinglan Project" of Jiangsu Province of China; the second author also thanks the Alexander von Humboldt Fellowships, Deutschland, the Special Grant of Excellent Ph.D. Thesis of China, the Science & Technology Foundation (Youth Talent Foundation) and the Science Research Foundation of Education Committee of Liaoning Province of China.
Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelouche, Doron; Pozo-Nuñez, Francisco; Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il
A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, which is based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann’s mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size–luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.
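The core of the von Neumann estimator is compact: shift one light curve by a trial lag, merge the two series in time order, and minimize the mean-square successive difference. The sketch below uses synthetic data with an injected lag of 5.0; it is a bare-bones illustration, not the authors' optimized scheme:

```python
import numpy as np
rng = np.random.default_rng(2)

# Von Neumann mean-square successive-difference estimator for a trial lag tau:
# F(tau) = mean of (f_i - f_{i+1})^2 over the combined, time-sorted fluxes.
def von_neumann(t1, f1, t2, f2, tau):
    t = np.concatenate([t1, t2 - tau])
    f = np.concatenate([f1, f2])
    f = f[np.argsort(t)]
    return np.mean(np.diff(f) ** 2)

# Synthetic example: a smooth signal observed twice on irregular grids,
# the second observation delayed by 5.0 and noisy.
signal = lambda t: np.sin(0.3 * t) + 0.5 * np.sin(0.11 * t)
t1 = np.sort(rng.uniform(0, 200, 300))
t2 = np.sort(rng.uniform(0, 200, 300))
f1 = signal(t1) + 0.02 * rng.normal(size=t1.size)
f2 = signal(t2 - 5.0) + 0.02 * rng.normal(size=t2.size)

lags = np.arange(-20, 20.1, 0.5)
best = lags[np.argmin([von_neumann(t1, f1, t2, f2, tau) for tau in lags])]
print(best)  # recovers a lag near 5.0
```

Note that nothing here interpolates, bins, or models the light curves: at the correct lag the merged series is simply smoothest, which is the randomness-measure idea of the abstract.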
Trotter's limit formula for the Schrödinger equation with singular potential
NASA Astrophysics Data System (ADS)
Nathanson, Ekaterina S.; Jørgensen, Palle E. T.
2017-12-01
We discuss the Schrödinger equation with singular potentials. Our focus is non-relativistic Schrödinger operators H with scalar potentials V defined on R^d, hence covering such quantum systems as atoms, molecules, and subatomic particles whether free, bound, or localized. By a "singular potential" V, we refer to the case when the corresponding Schrödinger operators H, with their natural minimal domain in L^2(R^d), are not essentially self-adjoint. Since V is assumed real-valued, the corresponding Hermitian symmetric operator H commutes with the conjugation in L^2(R^d), and so (by von Neumann's theorem), H has deficiency indices (n, n). The case of singular potentials V refers to when n > 0. Hence, by von Neumann's theory, we know the full variety of all the self-adjoint extensions. Since the Trotter formula is restricted to the case when n = 0, and here n > 0, two questions arise: (i) existence of the Trotter limit and (ii) the nature of this limit. We answer (i) affirmatively. Our answer to (ii) is that when n > 0, the Trotter limit is a strongly continuous contraction semigroup; so it is not time-reversible.
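For bounded operators the Trotter limit in question can be checked numerically. The sketch below uses random finite-dimensional Hermitian generators (so n = 0 and the classical formula applies; the singular-potential case discussed above has no such finite model):

```python
import numpy as np

def expm_skew(A):
    # exp(A) for A = -i H with H Hermitian, via the spectral theorem.
    w, U = np.linalg.eigh(1j * A)
    return (U * np.exp(-1j * w)) @ U.conj().T

# Trotter product formula: exp(A + B) = lim_{n->inf} (exp(A/n) exp(B/n))^n.
# Random Hermitian generators stand in for kinetic and potential parts.
rng = np.random.default_rng(3)
M = rng.normal(size=(4, 4)); A = -0.5j * (M + M.T)
M = rng.normal(size=(4, 4)); B = -0.5j * (M + M.T)

exact = expm_skew(A + B)
errs = []
for n in (1, 10, 100, 1000):
    step = expm_skew(A / n) @ expm_skew(B / n)
    errs.append(np.linalg.norm(np.linalg.matrix_power(step, n) - exact))
    print(n, errs[-1])  # error shrinks roughly like 1/n
```

The 1/n decay visible here is the bounded-operator behavior; the paper's point is what survives of this limit when H is only symmetric with deficiency indices (n, n), n > 0.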
Border-Crossing Model for the Diffusive Coarsening of Wet Foams
NASA Astrophysics Data System (ADS)
Durian, Douglas; Schimming, Cody
For dry foams, the transport of gas from small high-pressure bubbles to large low-pressure bubbles is dominated by diffusion across the thin soap films separating neighboring bubbles. For wetter foams, the film areas become smaller as the Plateau borders and vertices inflate with liquid. So-called "border-blocking" models can explain some features of wet-foam coarsening based on the presumption that the inflated borders totally block the gas flux; however, this approximation dramatically fails in the wet/unjamming limit where the bubbles become close-packed spheres. Here, we account for the ever-present border-crossing flux by a new length scale defined by the average gradient of gas concentration inside the borders. We argue that it is proportional to the geometric average of film and border thicknesses, and we verify this scaling and the numerical prefactor by numerical solution of the diffusion equation. Then we show how the von Neumann law dA/dt = K0(n - 6) is modified by the appearance of terms that depend on bubble size and shape as well as the concentration gradient length scale. Finally, we use the modified von Neumann law to compute the growth rate of the average bubble, which is not constant.
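The classical dry-foam law referred to here can be demonstrated in a few lines: with dA/dt = K0(n - 6), bubbles with fewer than six sides shrink, those with more grow, and the total area is conserved exactly when the mean side number is six. The ensemble and parameters below are illustrative:

```python
import numpy as np

# Classical von Neumann law for a dry 2D foam: dA_i/dt = K0 * (n_i - 6),
# where n_i is the number of sides of bubble i. Toy ensemble with mean(n) = 6.
K0, dt, steps = 0.1, 0.01, 1000
n = np.array([5, 5, 6, 6, 7, 7])
A = np.array([1.0, 2.0, 1.5, 2.5, 3.0, 4.0])

total0 = A.sum()
for _ in range(steps):
    A = A + dt * K0 * (n - 6)

print(A)                  # 5-sided bubbles shrink, 7-sided grow, 6-sided frozen
print(A.sum() - total0)   # ~0: total area conserved since mean(n) = 6
```

The modified law of the abstract adds size- and shape-dependent border-crossing terms to the right-hand side, so the growth rate is no longer set by topology alone.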
Nonideal detonation regimes in low density explosives
NASA Astrophysics Data System (ADS)
Ershov, A. P.; Kashkarov, A. O.; Pruuel, E. R.; Satonkina, N. P.; Sil'vestrov, V. V.; Yunoshev, A. S.; Plastinin, A. V.
2016-02-01
Measurements using Velocity Interferometer System for Any Reflector (VISAR) were performed for three high explosives at densities slightly above the natural loose-packed densities. The velocity histories at the explosive/window interface demonstrate that the grain size of the explosives plays an important role. Fine-grained materials produced rather smooth records with reduced von Neumann spike amplitudes. For commercial coarse-grained specimens, the chemical spike (if detectable) was more pronounced. This difference can be explained as a manifestation of partial burn up. In fine-grained explosives, which are more sensitive, the reaction can proceed partly within the compression front, which leads to a lower initial shock amplitude. The reaction zone was shorter in fine-grained materials because of higher density of hot spots. The noise level was generally higher for the coarse-grained explosives, which is a natural stochastic effect of the highly non-uniform flow of the heterogeneous medium. These results correlate with our previous data of electrical conductivity diagnostics. Instead of the classical Zel'dovich-von Neumann-Döring profiles, violent oscillations around the Chapman-Jouguet level were observed in about half of the shots using coarse-grained materials. We suggest that these unusual records may point to a different detonation wave propagation mechanism.
Structure and coarsening at the surface of a dry three-dimensional aqueous foam.
Roth, A E; Chen, B G; Durian, D J
2013-12-01
We utilize total-internal reflection to isolate the two-dimensional surface foam formed at the planar boundary of a three-dimensional sample. The resulting images of surface Plateau borders are consistent with Plateau's laws for a truly two-dimensional foam. Samples are allowed to coarsen into a self-similar scaling state where statistical distributions appear independent of time, except for an overall scale factor. There we find that statistical measures of side number distributions, size-topology correlations, and bubble shapes are all very similar to those for two-dimensional foams. However, the size distribution is slightly broader, and the shapes are slightly more elongated. A more obvious difference is that T2 processes now include the creation of surface bubbles, due to rearrangement in the bulk, and von Neumann's law is dramatically violated for individual bubbles. But nevertheless, our most striking finding is that von Neumann's law appears to hold on average, namely, the average rate of area change for surface bubbles appears to be proportional to the number of sides minus six, but with individual bubbles showing a wide distribution of deviations from this average behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potok, Thomas; Schuman, Catherine; Patton, Robert
The White House and Department of Energy have been instrumental in driving the development of a neuromorphic computing program to help the United States continue its lead in basic research into (1) Beyond Exascale—high performance computing beyond Moore's Law and von Neumann architectures, (2) Scientific Discovery—new paradigms for understanding increasingly large and complex scientific data, and (3) Emerging Architectures—assessing the potential of neuromorphic and quantum architectures. Neuromorphic computing spans a broad range of scientific disciplines from materials science to devices, to computer science, to neuroscience, all of which are required to solve the neuromorphic computing grand challenge. In our workshop we focus on the computer science aspects, specifically from a neuromorphic device through an application. Neuromorphic devices present a very different paradigm to the computer science community from traditional von Neumann architectures, which raises six major questions about building a neuromorphic application from the device level. We used these fundamental questions to organize the workshop program and to direct the workshop panels and discussions. From the white papers, presentations, panels, and discussions, there emerged several recommendations on how to proceed.
NASA Astrophysics Data System (ADS)
Röpke, G.
2018-01-01
One of the fundamental problems in physics that are not yet rigorously solved is the statistical mechanics of nonequilibrium processes. An important contribution to describing irreversible behavior starting from reversible Hamiltonian dynamics was given by D. N. Zubarev, who invented the method of the nonequilibrium statistical operator. We discuss this approach, in particular, the extended von Neumann equation, and as an example consider the electrical conductivity of a system of charged particles. We consider the selection of the set of relevant observables. We show the relation between kinetic theory and linear response theory. Using thermodynamic Green's functions, we present a systematic treatment of correlation functions, but the convergence needs investigation. We compare different expressions for the conductivity and list open questions.
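The extended von Neumann equation discussed here can be written in the form usually attributed to Zubarev (a sketch from standard references, not taken from this abstract): the reversible Liouville-von Neumann evolution is supplemented by an infinitesimal source term that selects the retarded, irreversible solution,

```latex
i\hbar\,\frac{\partial \rho_\varepsilon}{\partial t}
  - \left[H, \rho_\varepsilon\right]
  = -\,i\hbar\,\varepsilon\left(\rho_\varepsilon - \rho_{\mathrm{rel}}\right),
\qquad \varepsilon \to +0,
```

where \(\rho_{\mathrm{rel}}\) is the relevant statistical operator built from the chosen set of relevant observables; the source term breaks time-reversal symmetry and is removed only after the thermodynamic limit.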
Classical and quantum entropy of parton distributions
NASA Astrophysics Data System (ADS)
Hagiwara, Yoshikazu; Hatta, Yoshitaka; Xiao, Bo-Wen; Yuan, Feng
2018-05-01
We introduce the semiclassical Wehrl entropy for the nucleon as a measure of complexity of the multiparton configuration in phase space. This gives a new perspective on the nucleon tomography. We evaluate the entropy in the small-x region and compare with the quantum von Neumann entropy. We also argue that the growth of entropy at small x is eventually slowed down due to the Pomeron loop effect.
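The quantum von Neumann entropy used for comparison is S = -Tr ρ ln ρ. A small numerical sketch (the density matrices below are illustrative, not taken from the paper):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), evaluated from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]   # 0 * log(0) contributes nothing
    return float(-np.sum(evals * np.log(evals)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # pure state: entropy 0
mixed = np.eye(2) / 2.0                     # maximally mixed qubit: entropy ln 2
```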
Metrics for Uncertainty in Organizational Decision-Making
2006-06-01
measurement and computational agents. Computational Economics : A Perspective from Computational Intelligence book. S.- H. Chen, Jain, Lakhmi, & Tai...change and development." Annual Review of Psychology 50: 361-386. Von Neumann, J., and Morgenstern, O. (1953). Theory of games and economic ...2006 Interviews versus Field data MI MPU Hanford/HAB (CR: cooperation) Savannah River Site/SAB (MR: competition) ER ER about 7.1% in 2002 ER
Quantum entanglement of a harmonic oscillator with an electromagnetic field.
Makarov, Dmitry N
2018-05-29
At present, there are many methods for obtaining quantum entanglement of particles with an electromagnetic field. Most have a low probability of producing entanglement and lack an exact theoretical treatment, relying on approximate solutions of the Schrödinger equation. There is therefore a need for new methods of obtaining quantum-entangled particles and for mathematically exact studies of such methods. In this paper, a quantum harmonic oscillator (for example, an electron in a magnetic field) interacting with a quantized electromagnetic field is considered. Based on the exact solution of the Schrödinger equation for this system, it is shown that for certain parameters there can be large quantum entanglement between the electron and the electromagnetic field. The entanglement is analyzed on the basis of mathematically exact expressions for the Schmidt modes and the von Neumann entropy.
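The Schmidt modes and von Neumann entropy mentioned at the end can be obtained numerically from any bipartite pure state via a singular value decomposition; a sketch (the Bell-like test state is our own illustration, not the paper's oscillator-field state):

```python
import numpy as np

def schmidt_modes_and_entropy(psi, dim_a, dim_b):
    """Schmidt coefficients of a bipartite pure state |psi> and the
    resulting von Neumann entanglement entropy S = -sum p ln p."""
    coeffs = np.linalg.svd(np.reshape(psi, (dim_a, dim_b)), compute_uv=False)
    p = coeffs**2              # squared Schmidt coefficients = mode weights
    p = p[p > 1e-12]
    return p, float(-np.sum(p * np.log(p)))

# (|00> + |11>)/sqrt(2): two equal Schmidt modes, entropy ln 2
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
probs, entropy = schmidt_modes_and_entropy(bell, 2, 2)
```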
Realisation of all 16 Boolean logic functions in a single magnetoresistance memory cell
NASA Astrophysics Data System (ADS)
Gao, Shuang; Yang, Guang; Cui, Bin; Wang, Shouguo; Zeng, Fei; Song, Cheng; Pan, Feng
2016-06-01
Stateful logic circuits based on next-generation nonvolatile memories, such as magnetoresistance random access memory (MRAM), promise to break the long-standing von Neumann bottleneck in state-of-the-art data processing devices. For the successful commercialisation of stateful logic circuits, a critical step is realizing the best use of a single memory cell to perform logic functions. In this work, we propose a method for implementing all 16 Boolean logic functions in a single MRAM cell, namely a magnetoresistance (MR) unit. Based on our experimental results, we conclude that this method is applicable to any MR unit with a double-hump-like hysteresis loop, especially pseudo-spin-valve magnetic tunnel junctions with a high MR ratio. Moreover, after simply reversing the correspondence between voltage signals and output logic values, this method could also be applicable to any MR unit with a double-pit-like hysteresis loop. These results may provide a helpful solution for the final commercialisation of MRAM-based stateful logic circuits in the near future. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr03169b
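For reference, the 16 two-input Boolean logic functions realised in the MR cell are exactly the 16 possible four-row truth tables; a quick enumeration (the indexing convention is our own, not the paper's):

```python
from itertools import product

def boolean_function(index):
    """Two-input Boolean function whose truth table over inputs
    (a, b) = 00, 01, 10, 11 is given by bits 0..3 of `index` (0..15)."""
    table = [(index >> k) & 1 for k in range(4)]
    return lambda a, b: table[(a << 1) | b]

# With this convention, index 8 (0b1000) is AND and index 14 (0b1110) is OR.
tables = {
    idx: tuple(boolean_function(idx)(a, b) for a, b in product((0, 1), repeat=2))
    for idx in range(16)
}
```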
Critical behavior of dissipative two-dimensional spin lattices
NASA Astrophysics Data System (ADS)
Rota, R.; Storme, F.; Bartolo, N.; Fazio, R.; Ciuti, C.
2017-04-01
We explore critical properties of two-dimensional lattices of spins interacting via an anisotropic Heisenberg Hamiltonian that are subject to incoherent spin flips. We determine the steady-state solution of the master equation for the density matrix via the corner-space renormalization method. We investigate the finite-size scaling and critical exponent of the magnetic linear susceptibility associated with a dissipative ferromagnetic transition. We show that the von Neumann entropy increases across the critical point, revealing a strongly mixed character of the ferromagnetic phase. Entanglement is witnessed by the quantum Fisher information, which exhibits a critical behavior at the transition point, showing that quantum correlations play a crucial role in the transition.
NASA Astrophysics Data System (ADS)
Komachi, Mamoru; Kudo, Taku; Shimbo, Masashi; Matsumoto, Yuji
Bootstrapping has a tendency, called semantic drift, to select instances unrelated to the seed instances as the iteration proceeds. We demonstrate that the semantic drift of Espresso-style bootstrapping has the same root as the topic drift of Kleinberg's HITS, using a simplified graph-based reformulation of bootstrapping. We confirm that two graph-based algorithms, the von Neumann kernels and the regularized Laplacian, can reduce the effect of semantic drift in the task of word sense disambiguation (WSD) on the Senseval-3 English Lexical Sample Task. The proposed algorithms achieve superior performance to Espresso and previous graph-based WSD methods, even though they have fewer parameters and are easier to calibrate.
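Both graph kernels can be written as matrix power series over the graph's adjacency matrix A (or its Laplacian). A sketch in one standard formulation (definitions vary slightly across the literature; the toy path graph is ours):

```python
import numpy as np

def von_neumann_kernel(A, alpha):
    """Von Neumann kernel: sum_{n>=1} alpha^(n-1) A^n = A (I - alpha A)^{-1}.
    The series converges for alpha below 1 / spectral-radius(A)."""
    return A @ np.linalg.inv(np.eye(A.shape[0]) - alpha * A)

def regularized_laplacian(A, alpha):
    """Regularized Laplacian kernel: (I + alpha L)^{-1} with L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.inv(np.eye(A.shape[0]) + alpha * L)

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])        # 3-node path graph (illustrative)
K = von_neumann_kernel(A, 0.1)
R = regularized_laplacian(A, 0.5)
```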
Memristor-Based Synapse Design and Training Scheme for Neuromorphic Computing Architecture
2012-06-01
[Report fragment; standard report-documentation form fields omitted (contract FA8750-11-2-0046, program element 62788F).] …system level built upon the conventional von Neumann computer architecture [2][3]. Developing the neuromorphic architecture at chip level by… …creation of memristor-based neuromorphic computing architecture. Rather than the existing crossbar-based neuron network designs, we focus on memristor…
Software Techniques for Non-Von Neumann Architectures
1990-01-01
[Report fragment; hardware-specification table residue omitted (programmable Beneš network; hypercubic lattice for QCD; centralized control; static assignment; shared memory; 566 processor boards, each with 4 floating-point units and 2 multipliers; 32-bit floating-point chips; 11.4 Gflops; market: quantum chromodynamics (QCD)).] …functions there should exist a capability to define hierarchies and lattices of complex objects. A complex object can be made up of a set of simple objects.
Quantification of correlations in quantum many-particle systems.
Byczuk, Krzysztof; Kuneš, Jan; Hofstetter, Walter; Vollhardt, Dieter
2012-02-24
We introduce a well-defined and unbiased measure of the strength of correlations in quantum many-particle systems which is based on the relative von Neumann entropy computed from the density operator of correlated and uncorrelated states. The usefulness of this general concept is demonstrated by quantifying correlations of interacting electrons in the Hubbard model and in a series of transition-metal oxides using dynamical mean-field theory.
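The correlation measure described is the quantum relative entropy S(ρ‖σ) = Tr ρ(ln ρ − ln σ) between the correlated state ρ and the uncorrelated reference σ. A numerical sketch for finite-dimensional Hermitian matrices (the example states are illustrative, not Hubbard-model results):

```python
import numpy as np

def relative_entropy(rho, sigma):
    """Quantum relative entropy S(rho||sigma) = Tr rho (ln rho - ln sigma),
    assuming Hermitian density matrices and full-rank sigma."""
    def log_herm(m):
        w, v = np.linalg.eigh(m)
        return v @ np.diag(np.log(np.clip(w, 1e-15, None))) @ v.conj().T
    return float(np.real(np.trace(rho @ (log_herm(rho) - log_herm(sigma)))))

rho = np.diag([0.7, 0.3])        # "correlated" state (illustrative)
sigma = np.eye(2) / 2.0          # uncorrelated reference state
```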
Nonvolatile Memory Materials for Neuromorphic Intelligent Machines.
Jeong, Doo Seok; Hwang, Cheol Seong
2018-04-18
Recent progress in deep learning extends the capability of artificial intelligence to various practical tasks, making the deep neural network (DNN) an extremely versatile hypothesis. While such DNNs are realized virtually on contemporary data centers of the von Neumann architecture, a (partly) physical DNN of non-von Neumann architecture, also known as neuromorphic computing, can remarkably improve learning and inference efficiency. Particularly, resistance-based nonvolatile random access memory (NVRAM) highlights its handy and efficient application to the multiply-accumulate (MAC) operation in an analog manner. Here, an overview is given of the available types of resistance-based NVRAMs and their technological maturity from the material and device points of view. Examples within the strategy are subsequently addressed in comparison with their benchmarks (virtual DNNs in deep learning). A spiking neural network (SNN) is another type of neural network that is more biologically plausible than the DNN. The successful incorporation of resistance-based NVRAM in SNN-based neuromorphic computing offers an efficient solution to the MAC operation and spike timing-based learning in nature. This strategy is exemplified from a material perspective. Intelligent machines are categorized according to their architecture and learning type. Also, the functionality and usefulness of NVRAM-based neuromorphic computing are addressed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
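The analog MAC operation that resistive NVRAM arrays perform follows directly from Ohm's and Kirchhoff's laws: with weights stored as conductances G_ij, applying a read-voltage vector yields column currents I_j = Σ_i G_ij V_i in a single step. A sketch (conductance and voltage values are arbitrary):

```python
import numpy as np

def crossbar_mac(G, V):
    """One analog read of a resistive crossbar: currents I_j = sum_i G_ij V_i,
    i.e. a full matrix-vector multiply-accumulate in a single step."""
    return G.T @ V

G = np.array([[1e-6, 2e-6],       # conductances (siemens); row = input line
              [3e-6, 4e-6]])
V = np.array([0.2, 0.1])          # read voltages (volts)
I = crossbar_mac(G, V)            # column output currents (amperes)
```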
From Quantum Fields to Local Von Neumann Algebras
NASA Astrophysics Data System (ADS)
Borchers, H. J.; Yngvason, Jakob
The subject of the paper is an old problem of the general theory of quantized fields: When can the unbounded operators of a Wightman field theory be associated with local algebras of bounded operators in the sense of Haag? The paper reviews and extends previous work on this question, stressing its connections with a noncommutive generalization of the classical Hamburger moment problem. Necessary and sufficient conditions for the existence of a local net of von Neumann algebras corresponding to a given Wightman field are formulated in terms of strengthened versions of the usual positivity property of Wightman functionals. The possibility that the local net has to be defined in an enlarged Hilbert space cannot be ruled out in general. Under additional hypotheses, e.g., if the field operators obey certain energy bounds, such an extension of the Hilbert space is not necessary, however. In these cases a fairly simple condition for the existence of a local net can be given involving the concept of “central positivity” introduced by Powers. The analysis presented here applies to translationally covariant fields with an arbitrary number of components, whereas Lorentz covariance is not needed. The paper contains also a brief discussion of an approach to noncommutative moment problems due to Dubois-Violette, and concludes with some remarks on modular theory for algebras of unbounded operators.
Quench action and Rényi entropies in integrable systems
NASA Astrophysics Data System (ADS)
Alba, Vincenzo; Calabrese, Pasquale
2017-09-01
Entropy is a fundamental concept in equilibrium statistical mechanics, yet its origin in the nonequilibrium dynamics of isolated quantum systems is not fully understood. A strong consensus is emerging around the idea that the stationary thermodynamic entropy is the von Neumann entanglement entropy of a large subsystem embedded in an infinite system. Also motivated by cold-atom experiments, here we consider the generalization to Rényi entropies. We develop a new technique to calculate the diagonal Rényi entropy in the quench action formalism. In the spirit of the replica treatment for the entanglement entropy, the diagonal Rényi entropies are generalized free energies evaluated over a thermodynamic macrostate which depends on the Rényi index and, in particular, is not the same state that describes the von Neumann entropy. The technical reason for this perhaps surprising result is that the evaluation of the moments of the diagonal density matrix shifts the saddle point of the quench action. An interesting consequence is that different Rényi entropies encode information about different regions of the spectrum of the postquench Hamiltonian. Our approach provides a very simple proof of the long-standing result that, for integrable systems, the diagonal entropy is half of the thermodynamic one, and it allows us to generalize this result to the case of arbitrary Rényi entropy.
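For concreteness, the Rényi entropies in question are S_n = (1 − n)^(-1) ln Tr ρ^n, which recover the von Neumann entropy in the limit n → 1. A numerical sketch with an illustrative density matrix:

```python
import numpy as np

def renyi_entropy(rho, n):
    """Renyi entropy S_n = ln(Tr rho^n) / (1 - n); the n -> 1 limit is
    the von Neumann entropy -Tr rho ln rho."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    if abs(n - 1.0) < 1e-9:
        return float(-np.sum(p * np.log(p)))   # von Neumann limit
    return float(np.log(np.sum(p**n)) / (1.0 - n))

rho = np.diag([0.9, 0.1])   # illustrative mixed state
```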
Thermodynamic analogies in economics and finance: instability of markets
NASA Astrophysics Data System (ADS)
McCauley, Joseph L.
2003-11-01
Interest in thermodynamic analogies in economics is older than the idea of von Neumann to look for market entropy in liquidity, advice that was not taken in any thermodynamic analogy presented so far in the literature. In this paper, we go further and use a standard strategy from trading theory to pinpoint why thermodynamic analogies necessarily fail to describe financial markets, in spite of the presence of liquidity as the underlying basis for market entropy. Market liquidity of frequently traded assets does play the role of the 'heat bath', as anticipated by von Neumann, but we are able to identify the no-arbitrage condition geometrically as an assumption of translational and rotational invariance rather than (as finance theorists would claim) an equilibrium condition. We then use the empirical market distribution to introduce an asset's entropy and discuss the underlying reason why real financial markets cannot behave thermodynamically: financial markets are unstable, they do not approach statistical equilibrium, nor are there any available topological invariants on which to base a purely formal statistical mechanics. After discussing financial markets, we finally generalize our result by proposing that the idea of Adam Smith's Invisible Hand is a falsifiable proposition: we suggest how to test nonfinancial markets empirically for the stabilizing action of The Invisible Hand.
Ultrafast shock compression of an oxygen-balanced mixture of nitromethane and hydrogen peroxide.
Armstrong, Michael R; Zaug, Joseph M; Grant, Christian D; Crowhurst, Jonathan C; Bastea, Sorin
2014-08-14
We apply ultrafast optical interferometry to measure the Hugoniot of an oxygen-balanced mixture of nitromethane and hydrogen peroxide (NM/HP) and compare with Hugoniot data for pure nitromethane (NM) and a 90% hydrogen peroxide/water mixture (HP), as well as theoretical predictions. We observe a 2.1% mean pairwise difference between the measured shockwave speed (at the measured piston speed) in unreacted NM/HP and the corresponding "universal" liquid Hugoniot, which is larger than the average standard deviation of our data, 1.4%. Unlike the Hugoniots of both HP and NM, in which measured shock speeds deviate to values greater than the unreacted Hugoniot for piston speeds larger than the respective reaction thresholds, in the NM/HP mixture we observe shock speed deviations to values lower than the unreacted Hugoniot well below the von Neumann pressure (≈28 GPa). Although the trend should reverse for high enough piston speeds, the initial behavior is unexpected. Possible explanations range from mixing effects to a complex index of refraction in the reacted solution. If this is indeed a signature of chemical initiation, it would suggest that the process may not be kinetically limited (on a ~100 ps time scale) between the initiation threshold and the von Neumann pressure.
In-situ, In-Memory Stateful Vector Logic Operations based on Voltage Controlled Magnetic Anisotropy.
Jaiswal, Akhilesh; Agrawal, Amogh; Roy, Kaushik
2018-04-10
Recently, the exponential increase in compute requirements demanded by emerging applications like artificial intelligence, the Internet of things, etc. has rendered state-of-the-art von Neumann machines inefficient in terms of energy and throughput owing to the well-known von Neumann bottleneck. A promising approach to mitigating the bottleneck is to do computations as close to the memory units as possible. One extreme possibility is to do in-situ Boolean logic computations by using stateful devices. Stateful devices are those that can act both as a compute engine and a storage device, simultaneously. We propose such stateful, vector, in-memory operations using the voltage controlled magnetic anisotropy (VCMA) effect in magnetic tunnel junctions (MTJs). Our proposal is based on the well-known manufacturable 1-transistor-1-MTJ bit-cell and does not require any modifications to the bit-cell circuit or the magnetic device. Instead, we leverage the very physics of the VCMA effect to enable stateful computations. Specifically, we exploit the voltage asymmetry of the VCMA effect to construct a stateful IMP (implication) gate, and use the precessional switching dynamics of the VCMA devices to propose a massively parallel NOT operation. Further, we show that other gates like AND, OR, NAND, NOR, and NIMP (complement of implication) can be implemented using multi-cycle operations.
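The stateful IMP gate is functionally complete once combined with a FALSE (clear) operation: NOT and NAND follow from one and two IMP steps, respectively. A truth-table sketch (purely logical, abstracting away the VCMA device physics):

```python
def imp(p, q):
    """Material implication: IMP(p, q) = (NOT p) OR q."""
    return int((not p) or q)

def not_(p):
    # Implication into a cleared (0) cell acts as NOT: IMP(p, 0) = NOT p.
    return imp(p, 0)

def nand(p, q):
    # Two IMP steps give NAND, which is universal: IMP(q, NOT p) = NOT(p AND q).
    return imp(q, not_(p))
```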
Applications of quantum entropy to statistics
NASA Astrophysics Data System (ADS)
Silver, R. N.; Martz, H. F.
This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity, and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes' rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles, such as conservation of information and smoothness, are proposed to set statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods.
Siegmund-Schultze, Reinhard
2008-01-01
The paper discusses several still unsettled and not systematically investigated questions concerning the situation of Jewish scientists, among them mathematicians, in the Weimar Republic. Contemporary statements by the well-known leftist and liberal journalists Carl von Ossietzky (1932) and Rudolf Olden (1934) are used to describe the general political situation. A widespread feeling of social and political crisis, together with changes and perturbations in international scientific communication, provides explanatory background for the conditions within academia in the 1920s. A comparison of appointments of Jewish mathematicians to full professorships before and after World War I does not show significant differences. Attitudes of Jewish mathematicians such as Felix Bernstein, Richard Courant, Emil Julius Gumbel, Edmund Landau, Richard von Mises, Johann von Neumann and Adolf A. Fraenkel, but also of non-Jewish mathematicians such as Felix Klein, Walther von Dyck and Theodor Vahlen, will be discussed, drawing on some unpublished material. One statement by Felix Klein (1920), which shows his undecided stance with respect to the problem of anti-Semitism, and an excerpt from Richard von Mises' diary (1933), where he reflects on his status as a Jewish mathematician and as a refugee, are particularly valuable as points of reference for necessary further research.
Numerical simulation of KdV equation by finite difference method
NASA Astrophysics Data System (ADS)
Yokus, A.; Bulut, H.
2018-05-01
In this study, numerical solutions to the KdV equation with dual power nonlinearity are obtained using the finite difference method (FDM). The discretized equation is presented in the form of finite difference operators. The numerical solutions are validated against the analytical solution to the KdV equation with dual power nonlinearity available in the literature. Through the Fourier-von Neumann technique applied to the linearized scheme, we show that the FDM is stable. Accuracy of the method is analyzed via the L2 and L∞ norm errors. The numerical and exact approximations and the absolute error are presented in tables. We compare the numerical solutions with the exact solutions, and this comparison is supported with graphic plots. Under suitable choices of parameter values, the 2D and 3D surfaces for the analytical solution used are plotted.
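The Fourier-von Neumann technique substitutes a single Fourier mode u_j^m = g^m e^(i k j Δx) into the linearized scheme and requires the amplification factor |g(k)| ≤ 1 for all wavenumbers. The abstract does not give the scheme's coefficients, so here is the method applied to a simpler stand-in (first-order upwind for linear advection):

```python
import numpy as np

def upwind_amplification(c, theta):
    """Amplification factor of first-order upwind for u_t + a u_x = 0:
        g(theta) = 1 - c * (1 - exp(-i theta)),  theta = k * dx,
    with Courant number c = a*dt/dx; von Neumann stability requires
    |g| <= 1 for every theta."""
    return 1.0 - c * (1.0 - np.exp(-1j * theta))

theta = np.linspace(0.0, 2.0 * np.pi, 401)
stable = bool(np.all(np.abs(upwind_amplification(0.8, theta)) <= 1 + 1e-12))
unstable = bool(np.any(np.abs(upwind_amplification(1.5, theta)) > 1))
```

With c = 0.8 the factor stays inside the unit circle for all modes; with c = 1.5 the θ = π mode grows, so the scheme is unstable.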
Assigning values to intermediate health states for cost-utility analysis: theory and practice.
Cohen, B J
1996-01-01
Cost-utility analysis (CUA) was developed to guide the allocation of health care resources under a budget constraint. As the generally stated goal of CUA is to maximize aggregate health benefits, the philosophical underpinning of this method is classic utilitarianism. Utilitarianism has been criticized as a basis for social choice because of its emphasis on the net sum of benefits without regard to the distribution of benefits. For example, it has been argued that absolute priority should be given to the worst off when making social choices affecting basic needs. Application of classic utilitarianism requires use of strength-of-preference utilities, assessed under conditions of certainty, to assign quality-adjustment factors to intermediate health states. The two methods commonly used to measure strength-of-preference utility, categorical scaling and time tradeoff, produce rankings that systematically give priority to those who are better off. Alternatively, von Neumann-Morgenstern utilities, assessed under conditions of uncertainty, could be used to assign values to intermediate health states. The theoretical basis for this would be Harsanyi's proposal that social choice be made under the hypothetical assumption that one had an equal chance of being anyone in society. If this proposal is accepted, as well as the expected-utility axioms applied to both individual choice and social choice, the preferred societal arrangement is that with the highest expected von Neumann-Morgenstern utility. In the presence of risk aversion, this will give some priority to the worst-off relative to classic utilitarianism. Another approach is to raise the values obtained by time-tradeoff assessments to a power a between 0 and 1. This would explicitly give priority to the worst off, with the degree of priority increasing as a decreases. Results could be presented over a range of a. 
The results of CUA would then provide useful information to those holding a range of philosophical points of view.
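The two valuation rules contrasted above can be sketched side by side: a von Neumann-Morgenstern expected utility of a lottery over health states, and the power transform v ↦ v^a of time-tradeoff values that gives explicit priority to the worst off (the numbers below are illustrative only):

```python
import numpy as np

def expected_utility(probs, utilities):
    """Von Neumann-Morgenstern expected utility of a lottery."""
    return float(np.dot(probs, utilities))

def priority_transform(values, a):
    """Raise time-tradeoff values (in [0, 1]) to a power 0 < a <= 1;
    smaller `a` boosts low values relatively more, prioritising
    the worst-off health states."""
    return np.asarray(values, dtype=float) ** a

tt_values = np.array([0.25, 0.81])   # illustrative time-tradeoff values
```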
Design of Arithmetic Circuits for Complex Binary Number System
NASA Astrophysics Data System (ADS)
Jamil, Tariq
2011-08-01
Complex numbers play an important role in various engineering applications. To represent these numbers efficiently for storage and manipulation, a (-1+j)-base complex binary number system (CBNS) has been proposed in the literature. In this paper, designs of nibble-size arithmetic circuits (adder, subtractor, multiplier, divider) are presented. These circuits can be incorporated within von Neumann and associative dataflow processors to achieve higher performance in both sequential and parallel computing paradigms.
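In the (-1+j)-base CBNS, a digit string b_(n-1)…b_0 over {0, 1} represents Σ_k b_k(-1+j)^k, so each complex number is stored as a single bit string. A conversion sketch (the digit strings below are illustrative):

```python
def cbns_to_complex(bits):
    """Value of a (-1+1j)-base digit string (most significant digit first,
    digits restricted to 0 and 1), evaluated via Horner's rule."""
    value = 0 + 0j
    for b in bits:
        value = value * (-1 + 1j) + int(b)
    return value
```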
NASA Astrophysics Data System (ADS)
Hamhalter, Jan; Turilova, Ekaterina
2014-10-01
It is shown that any order isomorphism between the structures of unital associative JB subalgebras of JB algebras is given naturally by a partially linear Jordan isomorphism. The same holds for nonunital subalgebras and order isomorphisms preserving the unital subalgebra. Finally, we recover usual action of time evolution group on a von Neumann factor from group of automorphisms of the structure of Abelian subalgebras.
The Modeling, Simulation and Comparison of Interconnection Networks for Parallel Processing.
1987-12-01
…performs better at a lower hardware cost than do the single-stage cube and mesh networks. As a result, the designer of a parallel processing system is… …attempted, and in most cases succeeded, in designing and implementing faster, more powerful systems. Due to design innovations and technological advances… …largely to the computational complexity of the algorithms executed. In the von Neumann machine, instructions must be executed in a sequential manner. Design…
A Reconstructed Discontinuous Galerkin Method for the Euler Equations on Arbitrary Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong Luo; Luqing Luo; Robert Nourgaliev
2012-11-01
A reconstruction-based discontinuous Galerkin (RDG(P1P2)) method, a variant of the P1P2 method, is presented for the solution of the compressible Euler equations on arbitrary grids. In this method, an in-cell reconstruction, designed to enhance the accuracy of the discontinuous Galerkin method, is used to obtain a quadratic polynomial solution (P2) from the underlying linear polynomial (P1) discontinuous Galerkin solution using a least-squares method. The stencils used in the reconstruction involve only the von Neumann neighborhood (face-neighboring cells) and are compact and consistent with the underlying DG method. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG(P1P2) method is third-order accurate, and outperforms the third-order DG method (DG(P2)) in terms of both computing costs and storage requirements.
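The von Neumann neighborhood used as the reconstruction stencil is simply the set of face-neighboring cells; a sketch in 2D structured index space (a simplification of the unstructured-grid case actually treated):

```python
def von_neumann_neighbors(i, j, shape=None):
    """Face neighbors of cell (i, j) on a 2D grid: the von Neumann
    neighborhood. If `shape` is given, out-of-bounds cells are dropped."""
    candidates = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    if shape is None:
        return candidates
    ni, nj = shape
    return [(a, b) for a, b in candidates if 0 <= a < ni and 0 <= b < nj]
```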
Self-adjoint elliptic operators with boundary conditions on not closed hypersurfaces
NASA Astrophysics Data System (ADS)
Mantile, Andrea; Posilicano, Andrea; Sini, Mourad
2016-07-01
The theory of self-adjoint extensions of symmetric operators is used to construct self-adjoint realizations of a second-order elliptic differential operator on Rn with linear boundary conditions on (a relatively open part of) a compact hypersurface. Our approach allows us to obtain Kreĭn-like resolvent formulae where the reference operator coincides with the "free" operator with domain H2(Rn); this provides a useful tool for the scattering problem from a hypersurface. Concrete examples of this construction are developed in connection with the standard boundary conditions, Dirichlet, Neumann, Robin, δ and δ′-type, assigned either on an (n − 1)-dimensional compact boundary Γ = ∂Ω or on a relatively open part Σ ⊂ Γ. Schatten-von Neumann estimates for the difference of the powers of resolvents of the free and the perturbed operators are also proven; these give existence and completeness of the wave operators of the associated scattering systems.
An efficient method for quantum transport simulations in the time domain
NASA Astrophysics Data System (ADS)
Wang, Y.; Yam, C.-Y.; Frauenheim, Th.; Chen, G. H.; Niehaus, T. A.
2011-11-01
An approximate method based on adiabatic time dependent density functional theory (TDDFT) is presented, that allows for the description of the electron dynamics in nanoscale junctions under arbitrary time dependent external potentials. The density matrix of the device region is propagated according to the Liouville-von Neumann equation. The semi-infinite leads give rise to dissipative terms in the equation of motion which are calculated from first principles in the wide band limit. In contrast to earlier ab initio implementations of this formalism, the Hamiltonian is here approximated in the spirit of the density functional based tight-binding (DFTB) method. Results are presented for two prototypical molecular devices and compared to full TDDFT calculations. The temporal profile of the current traces is qualitatively well captured by the DFTB scheme. Steady state currents show considerable variations, both in comparison of approximate and full TDDFT, but also among TDDFT calculations with different basis sets.
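For a closed system with time-independent H (i.e. without the dissipative lead terms described above), the Liouville-von Neumann equation iħ ∂ρ/∂t = [H, ρ] has the unitary solution ρ(t) = U ρ(0) U†; a minimal sketch with a toy two-level Hamiltonian:

```python
import numpy as np

def propagate(rho0, H, t, hbar=1.0):
    """Solve i*hbar d(rho)/dt = [H, rho] for constant Hermitian H:
    rho(t) = U rho0 U^dagger, U = exp(-i H t / hbar) via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    U = v @ np.diag(np.exp(-1j * w * t / hbar)) @ v.conj().T
    return U @ rho0 @ U.conj().T

H = np.array([[0.0, 1.0], [1.0, 0.0]])        # toy two-level Hamiltonian
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])     # initial state |0><0|
rho_half = propagate(rho0, H, np.pi / 2)      # population fully transferred
```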
Supersymmetric symplectic quantum mechanics
NASA Astrophysics Data System (ADS)
de Menezes, Miralvo B.; Fernandes, M. C. B.; Martins, Maria das Graças R.; Santana, A. E.; Vianna, J. D. M.
2018-02-01
Symplectic Quantum Mechanics SQM considers a non-commutative algebra of functions on a phase space Γ and an associated Hilbert space HΓ to construct a unitary representation for the Galilei group. From this unitary representation the Schrödinger equation is rewritten in phase space variables and the Wigner function can be derived without the use of the Liouville-von Neumann equation. In this article we extend the methods of supersymmetric quantum mechanics SUSYQM to SQM. With the purpose of applications in quantum systems, the factorization method of the quantum mechanical formalism is then set within supersymmetric SQM. A hierarchy of simpler hamiltonians is generated leading to new computation tools for solving the eigenvalue problem in SQM. We illustrate the results by computing the states and spectra of the problem of a charged particle in a homogeneous magnetic field as well as the corresponding Wigner function.
Ultimate computing. Biomolecular consciousness and nano Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hameroff, S.R.
1987-01-01
The book advances the premise that the cytoskeleton is the cell's nervous system, the biological controller/computer. If indeed cytoskeletal dynamics at the nanoscale (billionth of a meter, billionth of a second) are the texture of intracellular information processing, emerging "NanoTechnologies" (scanning tunneling microscopy, Feynman machines, von Neumann replicators, etc.) should enable direct monitoring, decoding and interfacing between biological and technological information devices. This in turn could result in important biomedical applications and perhaps a merger of mind and machine: Ultimate Computing.
Private algebras in quantum information and infinite-dimensional complementarity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crann, Jason, E-mail: jason-crann@carleton.ca; Laboratoire de Mathématiques Paul Painlevé–UMR CNRS 8524, UFR de Mathématiques, Université Lille 1–Sciences et Technologies, 59655 Villeneuve d’Ascq Cédex; Kribs, David W., E-mail: dkribs@uoguelph.ca
We introduce a generalized framework for private quantum codes using von Neumann algebras and the structure of commutants. This leads naturally to a more general notion of complementary channel, which we use to establish a generalized complementarity theorem between private and correctable subalgebras that applies to both the finite and infinite-dimensional settings. Linear bosonic channels are considered and specific examples of Gaussian quantum channels are given to illustrate the new framework together with the complementarity theorem.
Poincaré resonances and the limits of trajectory dynamics.
Petrosky, T; Prigogine, I
1993-01-01
In previous papers we have shown that the elimination of the resonance divergences in large Poincaré systems leads to complex irreducible spectral representations for the Liouville-von Neumann operator. Complex means that time symmetry is broken and irreducibility means that this representation is implementable only by statistical ensembles and not by trajectories. We consider in this paper classical potential scattering. Our theory applies to persistent scattering. Numerical simulations show quantitative agreement with our predictions. PMID:11607428
Entangled Dynamics in Macroscopic Quantum Tunneling of Bose-Einstein Condensates
NASA Astrophysics Data System (ADS)
Alcala, Diego A.; Glick, Joseph A.; Carr, Lincoln D.
2017-05-01
Tunneling of a quasibound state is a nonsmooth process in the entangled many-body case. Using time-evolving block decimation, we show that repulsive (attractive) interactions speed up (slow down) tunneling. While the escape time scales exponentially with small interactions, the maximization time of the von Neumann entanglement entropy between the remaining quasibound and escaped atoms scales quadratically. Stronger interactions require higher-order corrections. Entanglement entropy is maximized when about half the atoms have escaped.
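The von Neumann entanglement entropy tracked in this work is obtained from the reduced density matrix of a bipartite pure state. A minimal sketch, using an illustrative two-mode state (not the many-body wavefunction of the paper), where p plays the role of the remaining quasibound population:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), keeping only nonzero eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

# Toy bipartite pure state |psi> = sqrt(p)|0>_A|0>_B + sqrt(1-p)|1>_A|1>_B,
# written as a coefficient matrix C so that rho_A = C C^dagger.
p = 0.5
C = np.diag([np.sqrt(p), np.sqrt(1 - p)])
rho_A = C @ C.conj().T  # reduced density matrix of subsystem A
print(von_neumann_entropy(rho_A))  # ln 2 ≈ 0.693 at p = 0.5 (maximal entanglement)
```

The entropy vanishes at p = 0 or 1 and peaks at p = 1/2, which is the qualitative behavior behind an entropy-maximization time during escape.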
Quantum incompatibility of channels with general outcome operator algebras
NASA Astrophysics Data System (ADS)
Kuramochi, Yui
2018-04-01
A pair of quantum channels is said to be incompatible if they cannot be realized as marginals of a single channel. This paper addresses the general structure of the incompatibility of completely positive channels with a fixed quantum input space and with general outcome operator algebras. We define a compatibility relation for such channels by identifying the composite outcome space as the maximal (projective) C*-tensor product of outcome algebras. We show theorems that characterize this compatibility relation in terms of the concatenation and conjugation of channels, generalizing the recent result for channels with quantum outcome spaces. These results are applied to the positive operator valued measures (POVMs) by identifying each of them with the corresponding quantum-classical (QC) channel. We also give a characterization of the maximality of a POVM with respect to the post-processing preorder in terms of the conjugate channel of the QC channel. We consider another definition of compatibility of normal channels by identifying the composite outcome space with the normal tensor product of the outcome von Neumann algebras. We prove that for a given normal channel, the class of normally compatible channels is upper bounded by a special class of channels called tensor conjugate channels. We show the inequivalence of the C*- and normal compatibility relations for QC channels, which originates from the possibility and impossibility of copying operations for commutative von Neumann algebras in C*- and normal compatibility relations, respectively.
Dynamical Correspondence in a Generalized Quantum Theory
NASA Astrophysics Data System (ADS)
Niestegge, Gerd
2015-05-01
In order to figure out why quantum physics needs the complex Hilbert space, many attempts have been made to distinguish the C*-algebras and von Neumann algebras in more general classes of abstractly defined Jordan algebras (JB- and JBW-algebras). One particularly important distinguishing property was identified by Alfsen and Shultz and is the existence of a dynamical correspondence. It reproduces the dual role of the selfadjoint operators as observables and generators of dynamical groups in quantum mechanics. In the paper, this concept is extended to another class of nonassociative algebras, arising from recent studies of the quantum logics with a conditional probability calculus and particularly of those that rule out third-order interference. The conditional probability calculus is a mathematical model of the Lüders-von Neumann quantum measurement process, and third-order interference is a property of the conditional probabilities which was discovered by Sorkin (Mod Phys Lett A 9:3119-3127, 1994) and which is ruled out by quantum mechanics. It is shown then that the postulates that a dynamical correspondence exists and that the square of any algebra element is positive still characterize, in the class considered, those algebras that emerge from the selfadjoint parts of C*-algebras equipped with the Jordan product. Within this class, the two postulates thus result in ordinary quantum mechanics using the complex Hilbert space or, vice versa, a genuine generalization of quantum theory must omit at least one of them.
Causal holographic information does not satisfy the linearized quantum focusing condition
NASA Astrophysics Data System (ADS)
Fu, Zicao; Marolf, Donald; Qi, Marvin
2018-04-01
The Hubeny-Rangamani causal holographic information (CHI) defined by a region R of a holographic quantum field theory (QFT) is a modern version of the idea that the area of event horizons might be related to an entropy. Here the event horizon lives in a dual gravitational bulk theory with Newton's constant G_bulk, and the relation involves a factor of 4 G_bulk. The fact that CHI is bounded below by the von Neumann entropy S suggests that CHI is coarse-grained. Its properties could thus differ markedly from those of S. In particular, recent results imply that when d ≤ 4 holographic QFTs are perturbatively coupled to d-dimensional gravity, the combined system satisfies the so-called quantum focusing condition (QFC) at leading order in the new gravitational coupling G_d when the QFT entropy is taken to be that of von Neumann. However, by studying states dual to spherical bulk (anti-de Sitter) Schwarzschild black holes in the conformal frame for which the boundary is a (2 + 1)-dimensional de Sitter space, we find the QFC defined by CHI is violated even when perturbing about a Killing horizon and using a single null congruence. Since it is known that a generalized second law (GSL) holds in this context, our work demonstrates that the QFC is not required in order for an entropy, or an entropy-like quantity, to satisfy such a GSL.
Concepts and Relations in Neurally Inspired In Situ Concept-Based Computing
van der Velde, Frank
2016-01-01
In situ concept-based computing is based on the notion that conceptual representations in the human brain are “in situ.” In this way, they are grounded in perception and action. Examples are neuronal assemblies, whose connection structures develop over time and are distributed over different brain areas. In situ concept representations cannot be copied or duplicated because that will disrupt their connection structure, and thus the meaning of these concepts. Higher-level cognitive processes, as found in language and reasoning, can be performed with in situ concepts by embedding them in specialized neurally inspired “blackboards.” The interactions between the in situ concepts and the blackboards form the basis for in situ concept computing architectures. In these architectures, memory (concepts) and processing are interwoven, in contrast with the separation between memory and processing found in Von Neumann architectures. Because the further development of Von Neumann computing (more, faster, yet power limited) is questionable, in situ concept computing might be an alternative for concept-based computing. In situ concept computing will be illustrated with a recently developed bAbI reasoning task. Neurorobotics can play an important role in the development of in situ concept computing because of the development of in situ concept representations derived in scenarios as needed for reasoning tasks. Neurorobotics would also benefit from power limited and in situ concept computing. PMID:27242504
Maximum and minimum entropy states yielding local continuity bounds
NASA Astrophysics Data System (ADS)
Hanson, Eric P.; Datta, Nilanjana
2018-04-01
Given an arbitrary quantum state σ, we obtain an explicit construction of a state ρ*_ε(σ) [respectively, ρ_{*,ε}(σ)] which has the maximum (respectively, minimum) entropy among all states which lie in a specified neighborhood (ε-ball) of σ. Computing the entropy of these states leads to a local strengthening of the continuity bound of the von Neumann entropy, i.e., the Audenaert-Fannes inequality. Our bound is local in the sense that it depends on the spectrum of σ. The states ρ*_ε(σ) and ρ_{*,ε}(σ) depend only on the geometry of the ε-ball and are in fact optimizers for a larger class of entropies. These include the Rényi entropy and the minimum- and maximum-entropies, providing explicit formulas for certain smoothed quantities. This allows us to obtain local continuity bounds for these quantities as well. In obtaining this bound, we first derive a more general result which may be of independent interest, namely, a necessary and sufficient condition under which a state maximizes a concave and Gâteaux-differentiable function in an ε-ball around a given state σ. Examples of such a function include the von Neumann entropy and the conditional entropy of bipartite states. Our proofs employ tools from the theory of convex optimization under non-differentiable constraints, in particular Fermat's rule, and majorization theory.
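The Audenaert-Fannes inequality that this work strengthens bounds |S(ρ) - S(σ)| by ε ln(d - 1) + h(ε), where ε is the trace distance, h the binary entropy, and ε ≤ 1 - 1/d. A numerical sanity check of the (non-local) bound on random states; the dimension, sample count, and state ensemble below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(rho):
    """Von Neumann entropy with natural logarithm."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def random_state(d):
    """Random density matrix from a complex Wishart sample."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def binary_entropy(eps):
    return 0.0 if eps in (0.0, 1.0) else float(-eps * np.log(eps) - (1 - eps) * np.log(1 - eps))

d = 4
for _ in range(200):
    rho, sigma = random_state(d), random_state(d)
    eps = 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()  # trace distance
    if eps <= 1 - 1 / d:  # range where the Audenaert-Fannes form applies
        bound = eps * np.log(d - 1) + binary_entropy(eps)
        assert abs(entropy(rho) - entropy(sigma)) <= bound + 1e-9
print("Audenaert-Fannes bound verified on random state pairs")
```

The paper's local bound replaces the dimension-only right-hand side with a spectrum-dependent one, which this sketch does not attempt to reproduce.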
NASA Astrophysics Data System (ADS)
Schimming, C. D.; Durian, D. J.
2017-09-01
For dry foams, the transport of gas from small high-pressure bubbles to large low-pressure bubbles is dominated by diffusion across the thin soap films separating neighboring bubbles. For wetter foams, the film areas become smaller as the Plateau borders and vertices inflate with liquid. So-called "border-blocking" models can explain some features of wet-foam coarsening based on the presumption that the inflated borders totally block the gas flux; however, this approximation dramatically fails in the wet or unjamming limit where the bubbles become close-packed spheres and coarsening proceeds even though there are no films. Here, we account for the ever-present border-crossing flux by a new length scale defined by the average gradient of gas concentration inside the borders. We compute that it is proportional to the geometric average of film and border thicknesses, and we verify this scaling by numerical solution of the diffusion equation. We similarly consider transport across inflated vertices and surface Plateau borders in quasi-two-dimensional foams. And we show how the dA/dt = K_0(n - 6) von Neumann law is modified by the appearance of terms that depend on bubble size and shape as well as the concentration-gradient length scales. Finally, we use the modified von Neumann law to compute the growth rate of the average bubble area, which is not constant.
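For reference, the ideal dry-foam von Neumann law dA/dt = K_0(n - 6) that the paper modifies can be integrated directly; the rate constant and bubble data below are illustrative, not taken from the paper:

```python
import numpy as np

K0 = 1.0  # effective diffusive rate constant (arbitrary units)

def evolve_areas(areas, sides, dt, steps):
    """Integrate the ideal dry-foam von Neumann law dA/dt = K0 * (n - 6)."""
    A = np.asarray(areas, dtype=float).copy()
    n = np.asarray(sides)
    for _ in range(steps):
        A += dt * K0 * (n - 6)
        A = np.clip(A, 0.0, None)  # few-sided bubbles shrink and vanish
    return A

final = evolve_areas([1.0, 1.0, 1.0], sides=[5, 6, 7], dt=0.01, steps=50)
print(final)  # ≈ [0.5, 1.0, 1.5]: only the six-sided bubble is stationary
```

The striking feature of the dry law, that growth depends only on the side count n and not on bubble size, is precisely what the paper's wet-foam correction terms break.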
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alchorn, A L
Thank you for your interest in the activities of the Lawrence Livermore National Laboratory Computation Directorate. This collection of articles from the Laboratory's Science & Technology Review highlights the most significant computational projects, achievements, and contributions during 2002. In 2002, LLNL marked the 50th anniversary of its founding. Scientific advancement in support of our national security mission has always been the core of the Laboratory. So that researchers could better understand and predict complex physical phenomena, the Laboratory has pushed the limits of the largest, fastest, most powerful computers in the world. In the late 1950's, Edward Teller--one of the LLNL founders--proposed that the Laboratory commission a Livermore Advanced Research Computer (LARC) built to Livermore's specifications. He tells the story of being in Washington, DC, when John Von Neumann asked to talk about the LARC. Von Neumann thought Teller wanted too much memory in the machine. (The specifications called for 20-30,000 words.) Teller was too smart to argue with him. Later Teller invited Von Neumann to the Laboratory and showed him one of the design codes being prepared for the LARC. He asked Von Neumann for suggestions on fitting the code into 10,000 words of memory, and flattered him about ''Labbies'' not being smart enough to figure it out. Von Neumann dropped his objections, and the LARC arrived with 30,000 words of memory. Memory, and how close memory is to the processor, is still of interest to us today. Livermore's first supercomputer was the Remington-Rand Univac-1. It had 5600 vacuum tubes and was 2 meters wide by 4 meters long. This machine was commonly referred to as a 1 KFlop machine [E+3]. Skip ahead 50 years. The ASCI White machine at the Laboratory today, produced by IBM, is rated at a peak performance of 12.3 TFlops or E+13.
We've improved computer processing power by 10 orders of magnitude in 50 years, and I do not believe there's any reason to think we won't improve another 10 orders of magnitude in the next 50 years. For years I have heard talk of hitting the physical limits of Moore's Law, but new technologies will take us into the next phase of computer processing power such as 3-D chips, molecular computing, quantum computing, and more. Big computers are icons or symbols of the culture and larger infrastructure that exists at LLNL to guide scientific discovery and engineering development. We have dealt with balance issues for 50 years and will continue to do so in our quest for a digital proxy of the properties of matter at extremely high temperatures and pressures. I believe that the next big computational win will be the merger of high-performance computing with information management. We already create terabytes--soon to be petabytes--of data. Efficiently storing, finding, visualizing and extracting data and turning that into knowledge which aids decision-making and scientific discovery is an exciting challenge. In the meantime, please enjoy this retrospective on computational physics, computer science, advanced software technologies, and applied mathematics performed by programs and researchers at LLNL during 2002. It offers a glimpse into the stimulating world of computational science in support of the national missions and homeland defense.
The Voronoi Implicit Interface Method for computing multiphase physics.
Saye, Robert I; Sethian, James A
2011-12-06
We introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. We test the method's accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann's law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces.
Design and Testing of an H2/O2 Predetonator for a Simulated Rotating Detonation Engine Channel
2013-03-01
Acronyms: PDE, Pulse Detonation Engine; RDE, Rotating Detonation Engine; WPAFB, Wright-Patterson Air Force Base; ZND, Zeldovich-von Neumann-Döring. Thesis by Stephen J. Miller, 2Lt, USAF (AFIT-ENY-13-M-23); approved for public release, distribution unlimited.
Entanglement and purity of two-mode Gaussian states in noisy channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serafini, Alessio; Illuminati, Fabrizio; De Siena, Silvio
2004-02-01
We study the evolution of purity, entanglement, and total correlations of general two-mode continuous variable Gaussian states in arbitrary uncorrelated Gaussian environments. The time evolution of purity, von Neumann entropy, logarithmic negativity, and mutual information is analyzed for a wide range of initial conditions. In general, we find that a local squeezing of the bath leads to a faster degradation of purity and entanglement, while it can help to preserve the mutual information between the modes.
Branciard, Cyril; Gisin, Nicolas
2011-07-08
The simulation of quantum correlations with finite nonlocal resources, such as classical communication, gives a natural way to quantify their nonlocality. While multipartite nonlocal correlations appear to be useful resources, very little is known on how to simulate multipartite quantum correlations. We present a protocol that reproduces tripartite Greenberger-Horne-Zeilinger correlations with bounded communication: 3 bits in total turn out to be sufficient to simulate all equatorial von Neumann measurements on the tripartite Greenberger-Horne-Zeilinger state.
Detonation Reaction Zones in Condensed Explosives
NASA Astrophysics Data System (ADS)
Tarver, Craig M.
2006-07-01
Experimental measurements using nanosecond time-resolved embedded gauges and laser interferometric techniques, combined with Non-Equilibrium Zeldovich-von Neumann-Döring (NEZND) theory and Ignition and Growth reactive flow hydrodynamic modeling, have revealed the average pressure/particle velocity states attained in reaction zones of self-sustaining detonation waves in several solid and liquid explosives. The time durations of these reaction zone processes are discussed for explosives based on pentaerythritol tetranitrate (PETN), nitromethane, octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX), triaminotrinitrobenzene (TATB) and trinitrotoluene (TNT).
Software Deficiency Issues Confronting the Utilization of ’Non-von Neumann’ Architectures
1989-01-01
upon work done by Charles Babbage nearly 100 years before. Hence, the "Babbage" machine that was designed in the 1820's and 1830's is generally...functionality. For example, consider the world's first computer designer, Charles Babbage, who primarily designed in the 1820's what is considered to be the...primarily considered as direct descendants of ideas that were devised in the 1930's, these ideas were basically rediscoveries of what Charles Babbage
NASA Technical Reports Server (NTRS)
Pedley, M. D.; Bishop, C. V.; Benz, F. J.; Bennett, C. A.; Mcclenagan, R. D.
1988-01-01
The detonation velocity and cell widths for hydrazine decomposition were measured over a wide range of temperatures and pressures. The detonation velocity in pure hydrazine was within 5 percent of the calculated C-J velocity. The detonation cell width measurements were interpreted using the Zeldovich-von Neumann-Döring model with a detailed reaction mechanism for hydrazine decomposition. Excellent agreement with experimental data for pure hydrazine was obtained using the empirical relation that detonation cell width was equal to 29 times the kinetically calculated reaction zone length.
Effect of slip-area scaling on the earthquake frequency-magnitude relationship
NASA Astrophysics Data System (ADS)
Senatorski, Piotr
2017-06-01
The earthquake frequency-magnitude relationship is considered in the maximum entropy principle (MEP) perspective. The MEP suggests sampling with constraints as a simple stochastic model of seismicity. The model is based on von Neumann's acceptance-rejection method, with the b-value as the parameter that breaks symmetry between small and large earthquakes. The Gutenberg-Richter law's b-value forms a link between earthquake statistics and physics. A dependence between the b-value and the rupture-area vs. slip scaling exponent is derived. The relationship enables us to explain observed ranges of b-values for different types of earthquakes. Specifically, the different b-value ranges for tectonic and induced (hydraulic-fracturing) seismicity are explained in terms of their different triggering mechanisms: the applied stress increase and fault strength reduction, respectively.
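Von Neumann's acceptance-rejection method, which underlies the model, can be sketched for sampling a Gutenberg-Richter magnitude distribution; the b-value, magnitude range, and the unbounded-law estimator for b below are illustrative assumptions, not the paper's constrained-sampling scheme:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_magnitudes(b, m_min, m_max, size):
    """von Neumann acceptance-rejection sampling of the Gutenberg-Richter
    density p(m) proportional to 10**(-b (m - m_min)) on [m_min, m_max]."""
    out = []
    while len(out) < size:
        m = rng.uniform(m_min, m_max)                   # propose from a flat envelope
        if rng.uniform() < 10.0 ** (-b * (m - m_min)):  # accept or reject
            out.append(m)
    return np.array(out)

mags = sample_magnitudes(b=1.0, m_min=2.0, m_max=8.0, size=5000)
# For an effectively unbounded GR law, b ≈ log10(e) / (mean magnitude - m_min)
b_hat = np.log10(np.e) / (mags.mean() - 2.0)
print(b_hat)  # close to the input b = 1.0
```

The acceptance test is exactly the symmetry-breaking step: proposals for large magnitudes are rejected far more often than small ones, at a rate controlled by b.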
Experimental Detection of Quantum Channel Capacities.
Cuevas, Álvaro; Proietti, Massimiliano; Ciampini, Mario Arnolfo; Duranti, Stefano; Mataloni, Paolo; Sacchi, Massimiliano F; Macchiavello, Chiara
2017-09-08
We present an efficient experimental procedure that certifies nonvanishing quantum capacities for qubit noisy channels. Our method is based on the use of a fixed bipartite entangled state, where the system qubit is sent to the channel input. A particular set of local measurements is performed at the channel output and the ancilla qubit mode, obtaining lower bounds to the quantum capacities for any unknown channel with no need of quantum process tomography. The entangled qubits have a Bell state configuration and are encoded in photon polarization. The lower bounds are found by estimating the Shannon and von Neumann entropies at the output using an optimized basis, whose statistics is obtained by measuring only the three observables σ_{x}⊗σ_{x}, σ_{y}⊗σ_{y}, and σ_{z}⊗σ_{z}.
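The three correlators measured in the experiment take simple ideal values on a Bell state; the sketch below computes the noiseless expectation values (the experimental entropy-estimation procedure itself is not reproduced):

```python
import numpy as np

# Pauli matrices and the polarization-encoded Bell state |Phi+> = (|00> + |11>)/sqrt(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def correlator(a, b, psi):
    """<psi| a (x) b |psi> for a two-qubit pure state psi."""
    return float(np.real(psi.conj() @ np.kron(a, b) @ psi))

for label, s in [("xx", sx), ("yy", sy), ("zz", sz)]:
    print(label, correlator(s, s, phi_plus))
# ideal values: +1, -1, +1 (up to floating-point rounding)
```

Departures of the measured correlators from these ideal values are what carry the information about the unknown channel in the lower-bound estimates.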
Nonvolatile reconfigurable sequential logic in a HfO2 resistive random access memory array.
Zhou, Ya-Xiong; Li, Yi; Su, Yu-Ting; Wang, Zhuo-Rui; Shih, Ling-Yi; Chang, Ting-Chang; Chang, Kuan-Chang; Long, Shi-Bing; Sze, Simon M; Miao, Xiang-Shui
2017-05-25
Resistive random access memory (RRAM) based reconfigurable logic provides a temporal programmable dimension to realize Boolean logic functions and is regarded as a promising route to build non-von Neumann computing architecture. In this work, a reconfigurable operation method is proposed to perform nonvolatile sequential logic in a HfO2-based RRAM array. Eight kinds of Boolean logic functions can be implemented within the same hardware fabrics. During the logic computing processes, the RRAM devices in an array are flexibly configured in a bipolar or complementary structure. The validity was demonstrated by experimentally implemented NAND and XOR logic functions and a theoretically designed 1-bit full adder. With the trade-off between temporal and spatial computing complexity, our method makes better use of limited computing resources, thus provides an attractive scheme for the construction of logic-in-memory systems.
Data Structures in Natural Computing: Databases as Weak or Strong Anticipatory Systems
NASA Astrophysics Data System (ADS)
Rossiter, B. N.; Heather, M. A.
2004-08-01
Information systems anticipate the real world. Classical databases store, organise and search collections of data of that real world but only as weak anticipatory information systems. This is because of the reductionism and normalisation needed to map the structuralism of natural data on to idealised machines with von Neumann architectures consisting of fixed instructions. Category theory developed as a formalism to explore the theoretical concept of naturality shows that methods like sketches arising from graph theory as only non-natural models of naturality cannot capture real-world structures for strong anticipatory information systems. Databases need a schema of the natural world. Natural computing databases need the schema itself to be also natural. Natural computing methods including neural computers, evolutionary automata, molecular and nanocomputing and quantum computation have the potential to be strong. At present they are mainly at the stage of weak anticipatory systems.
A Novel Method for Modeling Neumann and Robin Boundary Conditions in Smoothed Particle Hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, Emily M.; Tartakovsky, Alexandre M.; Amon, Cristina
2010-08-26
In this paper we present an improved method for handling Neumann or Robin boundary conditions in smoothed particle hydrodynamics. The Neumann and Robin boundary conditions are common to many physical problems (such as heat/mass transfer) and can prove challenging to model in volumetric modeling techniques such as smoothed particle hydrodynamics (SPH). A new SPH method for diffusion-type equations subject to Neumann or Robin boundary conditions is proposed. The new method is based on the continuum surface force model [1] and allows an efficient implementation of the Neumann and Robin boundary conditions in the SPH method for geometrically complex boundaries. The paper discusses the details of the method and the criteria needed to apply the model. The model is used to simulate diffusion and surface reactions, and its accuracy is demonstrated through test cases for boundary conditions describing different surface reactions.
Interpreting quantum coherence through a quantum measurement process
NASA Astrophysics Data System (ADS)
Yao, Yao; Dong, G. H.; Xiao, Xing; Li, Mo; Sun, C. P.
2017-11-01
Recently, there has been a renewed interest in the quantification of coherence or other coherencelike concepts within the framework of quantum resource theory. However, rigorously defined or not, the notion of coherence or decoherence has already been used by the community for decades since the advent of quantum theory. Intuitively, the definitions of coherence and decoherence should be two sides of the same coin. Therefore, a natural question is raised: How can the conventional decoherence processes, such as the von Neumann-Lüders (projective) measurement postulate or partially dephasing channels, fit into the bigger picture of the recently established theoretical framework? Here we show that the state collapse rules of the von Neumann or Lüders-type measurements, as special cases of genuinely incoherent operations (GIOs), are consistent with the resource theories of quantum coherence. New hierarchical measures of coherence are proposed for the Lüders-type measurement and their relationship with measurement-dependent discord is addressed. Moreover, utilizing the fixed-point theory for C*-algebras, we prove that GIOs indeed represent a particular type of partially dephasing (phase-damping) channels which have a matrix representation based on the Schur product. By virtue of the Stinespring dilation theorem, the physical realizations of incoherent operations are investigated in detail and we find that GIOs in fact constitute the core of strictly incoherent operations and generally incoherent operations and the unspeakable notion of coherence induced by GIOs can be transferred to the theories of speakable coherence by the corresponding permutation or relabeling operators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furukawa, Shunsuke; Kim, Yong Baek; School of Physics, Korea Institute for Advanced Study, Seoul 130-722
We consider a system of two coupled Tomonaga-Luttinger liquids (TLLs) on parallel chains and study the Renyi entanglement entropy S_n between the two chains. Here the entanglement cut is introduced between the chains, not along the perpendicular direction, as has been done in previous studies of one-dimensional systems. The limit n → 1 corresponds to the von Neumann entanglement entropy. The system is effectively described by two-component bosonic field theory with different TLL parameters in the symmetric and antisymmetric channels as far as the coupled system remains in a gapless phase. We argue that in this system, S_n is a linear function of the length of the chains (boundary law) followed by a universal subleading constant γ_n determined by the ratio of the two TLL parameters. The formulas of γ_n for integer n ≥ 2 are derived using (a) ground-state wave functionals of TLLs and (b) boundary conformal field theory, which lead to the same result. These predictions are checked in a numerical diagonalization analysis of a hard-core bosonic model on a ladder. Although the analytic continuation of γ_n to n → 1 turns out to be a difficult problem, our numerical result suggests that the subleading constant in the von Neumann entropy is also universal. Our results may provide useful characterization of inherently anisotropic quantum phases such as the sliding Luttinger liquid phase via qualitatively different behaviors of the entanglement entropy with the entanglement partitions along different directions.
NASA Astrophysics Data System (ADS)
Mushtaq, A.; Mustafa, M.
In this paper, the classical von Kármán problem of an infinite disk is extended to the case in which an electrically conducting nanofluid fills the space above the rotating disk, which also stretches uniformly in the radial direction. The Buongiorno model is considered in order to incorporate the novel Brownian motion and thermophoresis effects. The heat transport mechanism is modeled through more practically feasible convective conditions, while a Neumann-type condition for nanoparticle concentration is adopted. Modified von Kármán transformations are utilized to obtain a self-similar differential system which is treated through a numerical method. The stretching phenomenon yields an additional parameter c which compares the stretch rate with the swirl rate. The effect of parameter c is to reduce the temperature and nanoparticle concentration profiles. The torque required to maintain steady rotation of the disk increases for increasing values of c, while an improvement in cooling rate is anticipated in the case of radial stretching, which is important in engineering processes. Brownian diffusion does not influence the heat flux from the stretching wall. Moreover, the wall heat flux has its maximum value in the situation in which the thermophoretic force is absent.
Entanglement entropy in Fermi gases and Anderson's orthogonality catastrophe.
Ossipov, A
2014-09-26
We study the ground-state entanglement entropy of a finite subsystem of size L of an infinite system of noninteracting fermions scattered by a potential of finite range a. We derive a general relation between the scattering matrix and the overlap matrix and use it to prove that for a one-dimensional symmetric potential the von Neumann entropy, the Rényi entropies, and the full counting statistics are robust against potential scattering, provided that L/a≫1. The results of numerical calculations support the validity of this conclusion for a generic potential.
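For noninteracting fermions such as those considered above, the von Neumann entropy of a subsystem can be computed from the eigenvalues of the correlation matrix restricted to that subsystem (a standard free-fermion technique). A minimal sketch, assuming a scattering-free, half-filled infinite chain rather than the paper's potential-scattering setup:

```python
import numpy as np

def correlation_matrix(L, kF=np.pi / 2):
    """Correlation matrix C_jk = <c_j^dag c_k> for noninteracting fermions
    on an infinite line at Fermi momentum kF, restricted to L sites."""
    d = np.subtract.outer(np.arange(L), np.arange(L)).astype(float)
    with np.errstate(divide="ignore", invalid="ignore"):
        C = np.sin(kF * d) / (np.pi * d)
    C[np.arange(L), np.arange(L)] = kF / np.pi  # diagonal (d -> 0) limit
    return C

def entanglement_entropy(C):
    """Von Neumann entropy from correlation-matrix eigenvalues l_k:
    S = -sum_k [l_k ln l_k + (1 - l_k) ln(1 - l_k)]."""
    lam = np.clip(np.linalg.eigvalsh(C), 1e-12, 1 - 1e-12)
    return float(-np.sum(lam * np.log(lam) + (1 - lam) * np.log(1 - lam)))

s20 = entanglement_entropy(correlation_matrix(20))
s40 = entanglement_entropy(correlation_matrix(40))
```

Doubling L should increase S by roughly (1/3) ln 2, the well-known logarithmic scaling for a critical free-fermion chain.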
An Alternate Approach to Axiomatizations of the Von Neumann/Morgenstern Characteristic Function.
1987-03-01
A. Lewis et al., Technical Report TR-569, Institute for Mathematical Studies in the Social Sciences, Stanford University, March 1987 (National Science Foundation Grant DMS-84-10456). The characteristic function of a game - that gives us an intuitive idea of the value of a coalition - is of central importance in the theory of N-person games.
Quantum States and Generalized Observables: A Simple Proof of Gleason's Theorem
NASA Astrophysics Data System (ADS)
Busch, P.
2003-09-01
A quantum state can be understood in a loose sense as a map that assigns a value to every observable. Formalizing this characterization of states in terms of generalized probability distributions on the set of effects, we obtain a simple proof of the result, analogous to Gleason's theorem, that any quantum state is given by a density operator. As a corollary we obtain a von Neumann-type argument against noncontextual hidden variables. It follows that on an individual interpretation of quantum mechanics the values of effects are appropriately understood as propensities.
NASA Astrophysics Data System (ADS)
Poszwa, A.
2018-05-01
We investigate quantum decoherence of spin states caused by Rashba spin-orbit (SO) coupling for an electron confined to a planar quantum dot (QD) in the presence of a magnetic field (B). The Schrödinger equation has been solved in a frame of second-order perturbation theory. The relationship between the von Neumann (vN) entropy and the spin polarization is obtained. The relation is explicitly demonstrated for the InSb semiconductor QD.
Entropic characterization of separability in Gaussian states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudha; Devi, A. R. Usha; Inspire Institute Inc., McLean, Virginia 22101
2010-02-15
We explore separability of bipartite divisions of mixed Gaussian states based on the positivity of the Abe-Rajagopal (AR) q-conditional entropy. The AR q-conditional entropic characterization provides more stringent restrictions on separability (in the limit q → ∞) than those obtained from the corresponding von Neumann conditional entropy (the q = 1 case), similar to the situation in finite-dimensional states. The effectiveness of this approach, in relation to the results obtained by the partial transpose criterion, is explicitly analyzed in three illustrative examples of two-mode Gaussian states of physical significance.
Entropic inequalities for a class of quantum secret-sharing states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarvepalli, Pradeep
It is well known that von Neumann entropy is nonmonotonic, unlike Shannon entropy (which is monotonically nondecreasing). Consequently, it is difficult to relate the entropies of the subsystems of a given quantum state. In this paper, we show that if we consider quantum secret-sharing states arising from a class of monotone span programs, then we can partially recover the monotonicity of entropy for the so-called unauthorized sets. Furthermore, we can show for these quantum states that the entropy of the authorized sets is monotonically nonincreasing.
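The nonmonotonicity mentioned here is already visible for a two-qubit Bell state, whose joint entropy is zero while either subsystem has entropy ln 2. A minimal numerical check (illustrative only, not the paper's secret-sharing construction):

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): pure overall, maximally mixed locally.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)                                  # two-qubit density matrix

# Partial trace over the second qubit: reshape to (a, b, a', b') and trace b = b'.
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

def von_neumann_entropy(r):
    lam = np.linalg.eigvalsh(r)
    lam = lam[lam > 1e-12]                                # drop zero eigenvalues
    return float(-np.sum(lam * np.log(lam)))

S_whole, S_part = von_neumann_entropy(rho), von_neumann_entropy(rho_A)
```

Here S_part = ln 2 > S_whole = 0, so the entropy of a subsystem exceeds that of the whole, which cannot happen for Shannon entropy.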
Two-Step Deterministic Remote Preparation of an Arbitrary Quantum State
NASA Astrophysics Data System (ADS)
Wang, Mei-Yu; Yan, Feng-Li
2010-11-01
We present a two-step deterministic remote state preparation protocol for an arbitrary qubit with the aid of a three-particle Greenberger-Horne-Zeilinger state. Generalization of this protocol for higher-dimensional Hilbert space systems among three parties is also given. We show that only single-particle von Neumann measurements, local operations, and classical communication are necessary. Moreover, since the overall information of the quantum state can be divided into two different pieces, which may be at different locations, this protocol may be useful in the quantum information field.
Schur Complement Inequalities for Covariance Matrices and Monogamy of Quantum Correlations
NASA Astrophysics Data System (ADS)
Lami, Ludovico; Hirche, Christoph; Adesso, Gerardo; Winter, Andreas
2016-11-01
We derive fundamental constraints for the Schur complement of positive matrices, which provide an operator strengthening to recently established information inequalities for quantum covariance matrices, including strong subadditivity. This allows us to prove general results on the monogamy of entanglement and steering quantifiers in continuous variable systems with an arbitrary number of modes per party. A powerful hierarchical relation for correlation measures based on the log-determinant of covariance matrices is further established for all Gaussian states, which has no counterpart among quantities based on the conventional von Neumann entropy.
Schur Complement Inequalities for Covariance Matrices and Monogamy of Quantum Correlations.
Lami, Ludovico; Hirche, Christoph; Adesso, Gerardo; Winter, Andreas
2016-11-25
We derive fundamental constraints for the Schur complement of positive matrices, which provide an operator strengthening to recently established information inequalities for quantum covariance matrices, including strong subadditivity. This allows us to prove general results on the monogamy of entanglement and steering quantifiers in continuous variable systems with an arbitrary number of modes per party. A powerful hierarchical relation for correlation measures based on the log-determinant of covariance matrices is further established for all Gaussian states, which has no counterpart among quantities based on the conventional von Neumann entropy.
Realisation of all 16 Boolean logic functions in a single magnetoresistance memory cell.
Gao, Shuang; Yang, Guang; Cui, Bin; Wang, Shouguo; Zeng, Fei; Song, Cheng; Pan, Feng
2016-07-07
Stateful logic circuits based on next-generation nonvolatile memories, such as magnetoresistance random access memory (MRAM), promise to break the long-standing von Neumann bottleneck in state-of-the-art data processing devices. For the successful commercialisation of stateful logic circuits, a critical step is realizing the best use of a single memory cell to perform logic functions. In this work, we propose a method for implementing all 16 Boolean logic functions in a single MRAM cell, namely a magnetoresistance (MR) unit. Based on our experimental results, we conclude that this method is applicable to any MR unit with a double-hump-like hysteresis loop, especially pseudo-spin-valve magnetic tunnel junctions with a high MR ratio. Moreover, after simply reversing the correspondence between voltage signals and output logic values, this method could also be applicable to any MR unit with a double-pit-like hysteresis loop. These results may provide a helpful solution for the final commercialisation of MRAM-based stateful logic circuits in the near future.
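Independent of the device physics, the 16 two-input Boolean functions are just the 16 possible 4-row truth tables. A quick enumeration (illustrative only; it sketches the function space, not the MR-cell programming sequence):

```python
# Each integer n in 0..15 encodes one two-input Boolean function:
# bit (2p + q) of n is the output for inputs (p, q).
def truth_table(n):
    return [(n >> (2 * p + q)) & 1 for p in (0, 1) for q in (0, 1)]

tables = [truth_table(n) for n in range(16)]

# Familiar functions fall out at fixed indices:
AND, OR, XOR = tables[8], tables[14], tables[6]
```

All 16 tables are distinct, so a cell that can be programmed to realize any n in 0..15 covers the complete two-input Boolean function set.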
NASA Astrophysics Data System (ADS)
Li, Shenmin; Guo, Hua
2002-09-01
The scattering dynamics of vibrationally excited NO from a metal surface is investigated theoretically using a dissipative model that includes both the neutral and negative ion states. The Liouville-von Neumann equation is solved numerically by a Monte Carlo wave packet method, in which the wave packet is allowed to "jump" between the neutral and negative ion states in a stochastic fashion. It is shown that the temporary population of the negative ion state results in significant changes in vibrational dynamics, which eventually lead to vibrationally inelastic scattering of NO. Reasonable agreement with experiment is obtained with empirical potential energy surfaces. In particular, the experimentally observed facile multiquantum relaxation of the vibrationally highly excited NO is reproduced. The simulation also provides interesting insight into the scattering dynamics.
Large time-step stability of explicit one-dimensional advection schemes
NASA Technical Reports Server (NTRS)
Leonard, B. P.
1993-01-01
There is a wide-spread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = uΔt/Δx, must be less than or equal to one, for stability in the von Neumann sense. This puts severe limitations on the time-step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability, but are more expensive per time-step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c ≤ 1, with a complex amplitude ratio, G(c), can be easily extended to arbitrarily large c. The complex amplitude ratio is then given by exp(-iNθ) G(Δc), where N is the integer part of c, and Δc = c - N (less than 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all, but, rather, a 'range restriction' on the 'pieces' in a piece-wise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of (large Δt) total-variation-diminishing (TVD) constraints.
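The extension described above is easy to check numerically for a concrete scheme. Taking first-order upwind as an illustrative example (its c ≤ 1 amplification factor is 1 - c + c e^{-iθ}; the choice of upwind is an assumption, not dictated by the abstract):

```python
import numpy as np

def G_upwind(c, theta):
    # Von Neumann amplification factor of first-order upwind for c <= 1.
    return 1.0 - c + c * np.exp(-1j * theta)

def G_extended(c, theta):
    # Large-time-step extension: shift by the integer part N of c, then
    # apply the c <= 1 factor to the fractional remainder dc = c - N.
    N = np.floor(c)
    dc = c - N
    return np.exp(-1j * N * theta) * G_upwind(dc, theta)

theta = np.linspace(-np.pi, np.pi, 721)
max_mod = np.max(np.abs(G_extended(2.7, theta)))
```

Since |exp(-iNθ)| = 1, the extended scheme inherits |G| ≤ 1 from the fractional-Courant factor, so a Courant number of 2.7 is von Neumann stable.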
A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potok, Thomas E; Schuman, Catherine D; Young, Steven R
Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.
Kundla, Enn
2007-04-01
The evolution of the magnetic polarization of an ensemble of paired spin-1/2 nuclei in an MAS NMR (nuclear magnetic resonance) experiment and the induced spectrum are described theoretically by means of a Liouville-von Neumann equation representation in a wobbling rotating frame in combination with the averaged Hamiltonian theory. In this method, the effect of a high-intensity external static magnetic field and the effects of the leftover interaction components of the Hamiltonian that commute with the approximate Hamiltonian are taken into account simultaneously and equivalently. This method reproduces details that really exist in the recorded spectra, caused by secular terms in the Hamiltonian, which might otherwise be smoothed out owing to the approximate treatment of the effects of the secular terms. Complete analytical expressions, which describe the whole NMR spectrum including the rotational sideband sets, and which consider all the relevant intermolecular interactions, are obtained.
What's wrong with hazard-ranking systems? An expository note.
Cox, Louis Anthony Tony
2009-07-01
Two commonly recommended principles for allocating risk management resources to remediate uncertain hazards are: (1) select a subset to maximize risk-reduction benefits (e.g., maximize the von Neumann-Morgenstern expected utility of the selected risk-reducing activities), and (2) assign priorities to risk-reducing opportunities and then select activities from the top of the priority list down until no more can be afforded. When different activities create uncertain but correlated risk reductions, as is often the case in practice, then these principles are inconsistent: priority scoring and ranking fails to maximize risk-reduction benefits. Real-world risk priority scoring systems used in homeland security and terrorism risk assessment, environmental risk management, information system vulnerability rating, business risk matrices, and many other important applications do not exploit correlations among risk-reducing opportunities or optimally diversify risk-reducing investments. As a result, they generally make suboptimal risk management recommendations. Applying portfolio optimization methods instead of risk prioritization ranking, rating, or scoring methods can achieve greater risk-reduction value for resources spent.
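The failure mode described above is easy to exhibit in a toy problem where two projects mitigate the same correlated hazard, so their benefits do not add. All numbers below are hypothetical, chosen only to illustrate why priority ranking and portfolio optimization diverge:

```python
from itertools import combinations

# Hypothetical benefits of remediation portfolios; A and B overlap (their
# joint benefit, 12, is far below 10 + 9), while C is independent.
benefit = {
    frozenset(): 0, frozenset("A"): 10, frozenset("B"): 9, frozenset("C"): 6,
    frozenset("AB"): 12, frozenset("AC"): 16, frozenset("BC"): 15,
    frozenset("ABC"): 18,
}

# Principle (2): rank by individual score, fund the top two -> {A, B}.
ranked = sorted("ABC", key=lambda p: benefit[frozenset(p)], reverse=True)[:2]

# Principle (1): brute-force the best two-project portfolio -> {A, C}.
best = max(combinations("ABC", 2), key=lambda s: benefit[frozenset(s)])
```

Ranking selects {A, B} with benefit 12, while the optimal budget-2 portfolio {A, C} achieves 16, mirroring the article's point that scoring systems ignore correlations among risk-reducing opportunities.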
Quantum thermodynamics and quantum entanglement entropies in an expanding universe
NASA Astrophysics Data System (ADS)
Farahmand, Mehrnoosh; Mohammadzadeh, Hosein; Mehri-Dehnavi, Hossein
2017-05-01
We investigate an asymptotically spatially flat Robertson-Walker space-time from two different perspectives. First, using von Neumann entropy, we evaluate the entanglement generation due to the encoded information in space-time. Then, we work out the entropy of particle creation based on the quantum thermodynamics of the scalar field on the underlying space-time. We show that the general behavior of both entropies is the same. Therefore, the entanglement can be applied to the customary quantum thermodynamics of the universe. Also, using these entropies, we can recover some information about the parameters of space-time.
Brain architecture: a design for natural computation.
Kaiser, Marcus
2007-12-15
Fifty years ago, John von Neumann compared the architecture of the brain with that of the computers he invented and which are still in use today. In those days, the organization of computers was based on concepts of brain organization. Here, we give an update on current results on the global organization of neural systems. For neural systems, we outline how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing and balanced network activation. Finally, we discuss mechanisms of self-organization for such architectures. After all, the organization of the brain might again inspire computer architecture.
Quantum Logic Networks for Probabilistic and Controlled Teleportation of Unknown Quantum States
NASA Astrophysics Data System (ADS)
Gao, Ting
2004-08-01
We present simplification schemes for probabilistic and controlled teleportation of the unknown quantum states of both one particle and two particles and construct efficient quantum logic networks for implementing the new schemes by means of the primitive operations consisting of single-qubit gates, two-qubit controlled-not gates, von Neumann measurement, and classically controlled operations. In these schemes the teleportation is not always successful, but succeeds with a certain probability. The project was supported by the National Natural Science Foundation of China under Grant No. 10271081 and the Natural Science Foundation of Hebei Province of China under Grant No. A2004000141.
Hybrid normed ideal perturbations of n-tuples of operators I
NASA Astrophysics Data System (ADS)
Voiculescu, Dan-Virgil
2018-06-01
In hybrid normed ideal perturbations of n-tuples of operators, the normed ideal is allowed to vary with the component operators. We begin extending to this setting the machinery we developed for normed ideal perturbations based on the modulus of quasicentral approximation and an adaptation of our non-commutative generalization of the Weyl-von Neumann theorem. For commuting n-tuples of hermitian operators, the modulus of quasicentral approximation remains essentially the same when C_n^- is replaced by a hybrid n-tuple (C_{p_1}^-, ..., C_{p_n}^-), with 1/p_1 + ... + 1/p_n = 1. The proof involves singular integrals of mixed homogeneity.
Scattering General Analysis; ANALISIS GENERAL DE LA DISPERSION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tixaire, A.G.
1962-01-01
A definition of scattering states is given. It is shown that such states must belong to the absolutely continuous part of the spectrum of the total hamiltonian whenever scattering systems are considered. Such embedding may be proper unless the quantum system is physically admissible. The Møller wave operators are analyzed using Abel- and Cesàro-limit theoretical arguments. Von Neumann's ergodic theorem is partially generalized. A rigorous derivation of the Gell-Mann and Goldberger and Lippmann and Schwinger equations is obtained by making use of results on spectral theory, wave functions, and eigendifferential concepts.
One-way unlocalizable quantum discord
NASA Astrophysics Data System (ADS)
Xi, Zhengjun; Fan, Heng; Li, Yongming
2012-05-01
In this paper, we present the concept of the one-way unlocalizable quantum discord and investigate its properties. We provide a polygamy inequality for it in a tripartite pure quantum system of arbitrary dimension. Several tradeoff relations between the one-way unlocalizable quantum discord and other correlations are given. If the von Neumann measurement is made on a part of the system, we give two expressions of the one-way unlocalizable quantum discord in terms of partial distillable entanglement and quantum disturbance. Finally, we also provide a lower bound for bipartite shareability of quantum correlation beyond entanglement in a tripartite system.
Jacobsen, J L; Saleur, H
2008-02-29
We determine exactly the probability distribution of the number N_c of valence bonds connecting a subsystem of length L > 1 to the rest of the system in the ground state of the XXX antiferromagnetic spin chain. This provides, in particular, the asymptotic behavior of the valence-bond entanglement entropy, S_VB = N_c ln 2 = (4 ln 2/π²) ln L, disproving a recent conjecture that this should be related to the von Neumann entropy, and thus equal to (1/3) ln L. Our results generalize to the Q-state Potts model.
Quantum-state reconstruction by maximizing likelihood and entropy.
Teo, Yong Siah; Zhu, Huangjun; Englert, Berthold-Georg; Řeháček, Jaroslav; Hradil, Zdeněk
2011-07-08
Quantum-state reconstruction on a finite number of copies of a quantum system with informationally incomplete measurements, as a rule, does not yield a unique result. We derive a reconstruction scheme where both the likelihood and the von Neumann entropy functionals are maximized in order to systematically select the most-likely estimator with the largest entropy, that is, the least-bias estimator, consistent with a given set of measurement data. This is equivalent to the joint consideration of our partial knowledge and ignorance about the ensemble to reconstruct its identity. An interesting structure of such estimators will also be explored.
Strong subadditivity for log-determinant of covariance matrices and its applications
NASA Astrophysics Data System (ADS)
Adesso, Gerardo; Simon, R.
2016-08-01
We prove that the log-determinant of the covariance matrix obeys the strong subadditivity inequality for arbitrary tripartite states of multimode continuous variable quantum systems. This establishes general limitations on the distribution of information encoded in the second moments of canonically conjugate operators. The inequality is shown to be stronger than the conventional strong subadditivity inequality for von Neumann entropy in a class of pure tripartite Gaussian states. We finally show that such an inequality implies a strict monogamy-type constraint for joint Einstein-Podolsky-Rosen steerability of single modes by Gaussian measurements performed on multiple groups of modes.
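The determinant inequality underlying this statement, det V_ABC · det V_B ≤ det V_AB · det V_BC, is an instance of the classical Hadamard-Fischer inequality and holds for any positive definite matrix, which makes a random numerical check straightforward. A sketch (the partition into two variables per party is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def logdet(M):
    # slogdet is numerically safer than log(det(M)) for larger matrices.
    return np.linalg.slogdet(M)[1]

# Random positive-definite "covariance" on 6 variables, partitioned A|B|C.
X = rng.standard_normal((6, 12))
V = X @ X.T / 12 + np.eye(6)          # positive definite by construction
A, B, C = [0, 1], [2, 3], [4, 5]

# Strong subadditivity for the log-determinant:
# logdet V_AB + logdet V_BC >= logdet V_ABC + logdet V_B.
lhs = logdet(V[np.ix_(A + B, A + B)]) + logdet(V[np.ix_(B + C, B + C)])
rhs = logdet(V) + logdet(V[np.ix_(B, B)])
```

For a physical Gaussian state, V would additionally satisfy the uncertainty relation, but the log-det inequality itself needs only positive definiteness.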
Detonability of hydrocarbon fuels in air
NASA Technical Reports Server (NTRS)
Beeson, H. D.; Mcclenagan, R. D.; Bishop, C. V.; Benz, F. J.; Pitz, W. J.; Westbrook, C. K.; Lee, J. H. S.
1991-01-01
Studies were conducted of the detonation of gas-phase mixtures of n-hexane and JP-4, with oxidizers as varied as air and pure oxygen, measuring detonation velocities and cell sizes as a function of stoichiometry and diluent concentration. The induction length of a one-dimensional Zeldovich-von Neumann-Doering detonation was calculated on the basis of a theoretical model that employed the reaction kinetics of the hydrocarbon fuels used. Critical energy and critical tube diameter are compared for a relative measure of the heavy hydrocarbon fuels studied; detonation sensitivity appears to increase slightly with increasing carbon number.
Modeling arson - An exercise in qualitative model building
NASA Technical Reports Server (NTRS)
Heineke, J. M.
1975-01-01
A detailed example is given of the role of von Neumann and Morgenstern's 1944 'expected utility theorem' (in the theory of games and economic behavior) in qualitative model building. Specifically, an arsonist's decision as to the amount of time to allocate to arson and related activities is modeled, and the responsiveness of this time allocation to changes in various policy parameters is examined. Both the activity modeled and the method of presentation are intended to provide an introduction to the scope and power of the expected utility theorem in modeling situations of 'choice under uncertainty'. The robustness of such a model is shown to vary inversely with the number of preference restrictions used in the analysis. The fewer the restrictions, the wider is the class of agents to which the model is applicable, and accordingly more confidence is put in the derived results. A methodological discussion on modeling human behavior is included.
Determination of detonation parameters for liquid High Explosives
NASA Astrophysics Data System (ADS)
Mochalova, Valentina; Utkin, Alexander
2011-06-01
The experimental investigation of detonation parameters and reaction zone structure in liquid HE (bis-(2-fluoro-2,2-dinitroethyl)formal (FEFO), tetranitromethane (TNM), nitromethane (NM)) was conducted. Detonation front in TNM and NM was stable while the instability of detonation in FEFO was observed. Von Neumann spike was recorded for these HE and its parameters were determined. The different methods for C-J point determination were used for each HE. For FEFO reaction time τ was found from experiments with different charge diameters (τ is approximately equal to 300 ns); for TNM - at fixed diameter and different lengths of charges (τ ~ 200 ns); for NM - at fixed diameter and length of charges, but detonation initiation was carried out by different explosive charges (τ ~ 50 ns). It was found that in TNM the detonation velocity depends on charge diameter. Maximum value of reaction rate in investigated liquid HE was observed after shock jump and induction time was not recorded.
Determination of detonation parameters for liquid high explosives
NASA Astrophysics Data System (ADS)
Mochalova, Valentina; Utkin, Alexander
2012-03-01
The experimental investigation of detonation parameters and reaction zone structure in liquid HE (bis-(2-fluoro-2,2-dinitroethyl)formal (FEFO), tetranitromethane (TNM), nitromethane (NM)) was conducted by means of the laser interferometer VISAR. The detonation front in TNM and NM was stable, while instability of detonation in FEFO was observed. The parameters of the Von Neumann spike were determined for these HE. Different methods for C-J point determination were used for each HE. For FEFO the reaction time τ was found from experiments with different charge diameters (τ ≈ 300 ns); for TNM - at fixed diameter and different lengths of charges (τ ≈ 200 ns); for NM - at fixed diameter and length of charges, but detonation initiation was carried out by different explosive charges (τ ≈ 50 ns). It was found that in TNM the detonation velocity depends on charge diameter. The maximum value of the reaction rate in the investigated liquid HE was observed after the shock jump.
Multihop teleportation of two-qubit state via the composite GHZ-Bell channel
NASA Astrophysics Data System (ADS)
Zou, Zhen-Zhen; Yu, Xu-Tao; Gong, Yan-Xiao; Zhang, Zai-Chen
2017-01-01
A multihop teleportation protocol in a quantum communication network is introduced to teleport an arbitrary two-qubit state between two nodes without directly sharing entanglement pairs. Quantum channels are built among neighbor nodes based on a five-qubit entangled system composed of GHZ and Bell pairs. The von Neumann measurements in all intermediate nodes and the source node are implemented, and then the measurement outcomes are sent to the destination node independently. After collecting all the measurement outcomes at the destination node, an efficient method is proposed to calculate the unitary operations for transforming the receiver's states to the state teleported. Therefore, only by adopting the proper unitary operations at the destination node, the desired quantum state can be recovered perfectly. The transmission flexibility and efficiency of a quantum network with the composite GHZ-Bell channel are improved by transmitting the measurement outcomes of all nodes in parallel and reducing the hop-by-hop teleportation delay.
The fuzzy cube and causal efficacy: representation of concomitant mechanisms in stroke.
Jobe, Thomas H.; Helgason, Cathy M.
1998-04-01
Twentieth century medical science has embraced nineteenth century Boolean probability theory based upon two-valued Aristotelian logic. With the later addition of bit-based, von Neumann structured computational architectures, an epistemology based on randomness has led to a bivalent epidemiological methodology that dominates medical decision making. In contrast, fuzzy logic, based on twentieth century multi-valued logic, and computational structures that are content addressed and adaptively modified, has advanced a new scientific paradigm for the twenty-first century. Diseases such as stroke involve multiple concomitant causal factors that are difficult to represent using conventional statistical methods. We tested which paradigm best represented this complex multi-causal clinical phenomenon: stroke. We show that the fuzzy logic paradigm better represented clinical complexity in cerebrovascular disease than current probability theory based methodology. We believe this finding is generalizable to all of clinical science since multiple concomitant causal factors are involved in nearly all known pathological processes.
NASA Astrophysics Data System (ADS)
Saiki, Toshiharu
2016-09-01
Control of localized surface plasmon resonance (LSPR) excited on metal nanostructures has drawn attention for applications in dynamic switching of plasmonic devices. As a reversible active medium for LSPR control, chalcogenide phase-change materials (PCMs) such as GeSbTe (GST) are promising for high-contrast robust plasmonic switching. Owing to the plasticity and the threshold behavior during both amorphization and crystallization of PCMs, PCM-based LSPR switching elements possess a dual functionality of memory and processing. Integration of LSPR switching elements so that they interact with each other will allow us to build non-von-Neumann computing devices. As a specific demonstration, we discuss the implementation of a cellular automata (CA) algorithm into interacting LSPR switching elements. In the model we propose, PCM cells, which can be in one of two states (amorphous and crystalline), interact with each other by being linked by an AuNR, whose LSPR peak wavelength is determined by the phase of the PCM cells on both sides. The CA program proceeds by irradiating with a light pulse train. The local rule set is defined by the temperature rise in the PCM cells induced by the LSPR of the AuNR, which is subject to the intensity and wavelength of the irradiating pulse. We also investigate the possibility of solving a problem analogous to the spin-glass problem by using a coupled dipole system, in which the individual coupling strengths can be modified to optimize the system so that the exact solution can be easily reached. For this algorithm, we propose an implementation based on the idea that coupled plasmon particles can create long-range spatial correlations, and the interaction of this with a phase-change material allows the coupling strength to be modified.
AHaH computing-from metastable switches to attractors to machine learning.
Nugent, Michael Alexander; Molter, Timothy Wesley
2014-01-01
Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures-all key capabilities of biological nervous systems and modern machine learning algorithms with real world application.
AHaH Computing–From Metastable Switches to Attractors to Machine Learning
Nugent, Michael Alexander; Molter, Timothy Wesley
2014-01-01
Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures–all key capabilities of biological nervous systems and modern machine learning algorithms with real world application. PMID:24520315
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1993-01-01
The convergence characteristics of various approximate factorizations for the 3D Euler and Navier-Stokes equations are examined using the von Neumann stability analysis method. Three upwind-difference based factorizations and several central-difference based factorizations are considered for the Euler equations. In the upwind factorizations, both the flux-vector splitting methods of Steger and Warming and of van Leer are considered. Analysis of the Navier-Stokes equations is performed only on the Beam and Warming central-difference scheme. The range of CFL numbers over which each factorization is stable is presented for one-, two-, and three-dimensional flow. Also presented for each factorization is the CFL number at which the maximum eigenvalue is minimized, for all Fourier components, as well as for the high-frequency range only. The latter is useful for predicting the effectiveness of multigrid procedures with these schemes as smoothers. Further, local mode analysis is performed to test the suitability of using a uniform flow field in the stability analysis. Some inconsistencies in the results from previous analyses are resolved.
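As a minimal illustration of the von Neumann stability analysis used in the abstract above (a generic textbook sketch, not taken from the paper), one can compute the amplification factor of the first-order upwind scheme for the 1D advection equation and check over which CFL numbers its magnitude stays at or below one:

```python
import numpy as np

def upwind_amplification(nu, thetas):
    """Amplification factor G(theta) of the first-order upwind scheme
    for u_t + a*u_x = 0 at CFL number nu = a*dt/dx.

    Substituting the Fourier mode u_j^n = G^n * exp(i*j*theta) into the
    scheme gives G(theta) = 1 - nu*(1 - exp(-i*theta))."""
    return 1.0 - nu * (1.0 - np.exp(-1j * thetas))

thetas = np.linspace(0.0, 2.0 * np.pi, 721)  # sweep all Fourier angles
for nu in (0.5, 1.0, 1.2):
    g_max = np.abs(upwind_amplification(nu, thetas)).max()
    verdict = "stable" if g_max <= 1.0 + 1e-12 else "unstable"
    print(f"CFL={nu}: max|G|={g_max:.3f} -> {verdict}")
```

The scheme is stable exactly for 0 ≤ CFL ≤ 1; the factorizations analyzed in the paper follow the same recipe with far larger amplification matrices.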
Experimental Investigation of Spectra of Dynamical Maps and their Relation to non-Markovianity
NASA Astrophysics Data System (ADS)
Yu, Shang; Wang, Yi-Tao; Ke, Zhi-Jin; Liu, Wei; Meng, Yu; Li, Zhi-Peng; Zhang, Wen-Hao; Chen, Geng; Tang, Jian-Shun; Li, Chuan-Feng; Guo, Guang-Can
2018-02-01
The spectral theorem of von Neumann has been widely applied in various areas, such as the characteristic spectral lines of atoms. It has been recently proposed that dynamical evolution also possesses spectral lines. As the most intrinsic property of evolution, the behavior of these spectra can, in principle, exhibit almost every feature of this evolution, among which the most attractive topic is non-Markovianity, i.e., the memory effects during evolution. Here, we develop a method to detect these spectra, and moreover, we experimentally examine the relation between the spectral behavior and non-Markovianity by engineering the environment to prepare dynamical maps with different non-Markovian properties and then detecting the dynamical behavior of the spectral values. These spectra will lead to a witness for essential non-Markovianity. We also experimentally verify another simplified witness method for essential non-Markovianity. Interestingly, in both cases, we observe the sudden transition from essential non-Markovianity to something else. Our work shows the role of the spectra of evolution in the studies of non-Markovianity and provides alternative methods to characterize non-Markovian behavior.
Reprogrammable logic in memristive crossbar for in-memory computing
NASA Astrophysics Data System (ADS)
Cheng, Long; Zhang, Mei-Yun; Li, Yi; Zhou, Ya-Xiong; Wang, Zhuo-Rui; Hu, Si-Yu; Long, Shi-Bing; Liu, Ming; Miao, Xiang-Shui
2017-12-01
Memristive stateful logic has emerged as a promising next-generation in-memory computing paradigm to address escalating computing-performance pressures in traditional von Neumann architecture. Here, we present a nonvolatile reprogrammable logic method that can process data between different rows and columns in a memristive crossbar array based on material implication (IMP) logic. Arbitrary Boolean logic can be executed with a reprogrammable cell containing four memristors in a crossbar array. In the fabricated Ti/HfO2/W memristive array, some fundamental functions, such as universal NAND logic and data transfer, were experimentally implemented. Moreover, using eight memristors in a 2 × 4 array, a one-bit full adder was theoretically designed and verified by simulation to exhibit the feasibility of our method to accomplish complex computing tasks. In addition, some critical logic-related performances were further discussed, such as the flexibility of data processing, cascading problem and bit error rate. Such a method could be a step forward in developing IMP-based memristive nonvolatile logic for large-scale in-memory computing architecture.
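The material-implication (IMP) logic underlying the crossbar method above can be sketched abstractly (a hedged toy model, not the paper's circuit): IMP(p, q) = (NOT p) OR q is computationally universal when combined with a cell cleared to FALSE, which is how NAND is composed from two IMP steps:

```python
def imp(p, q):
    """Material implication: p IMP q == (NOT p) OR q.
    In stateful memristive logic this is one conditional-switching step."""
    return (not p) or q

def nand(p, q):
    """NAND built from two IMP operations and a work cell cleared to False,
    mirroring how stateful logic cascades operations in a crossbar."""
    work = imp(p, False)   # work = NOT p
    return imp(q, work)    # (NOT q) OR (NOT p) == NAND(p, q)

# NAND is universal, so arbitrary Boolean logic follows
for p in (False, True):
    for q in (False, True):
        assert nand(p, q) == (not (p and q))
```

Since NAND is functionally complete, any Boolean function (including the full adder mentioned in the abstract) can in principle be reduced to sequences of IMP steps on stored device states.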
Heisenberg (and Schrödinger, and Pauli) on hidden variables
NASA Astrophysics Data System (ADS)
Bacciagaluppi, Guido; Crull, Elise
In this paper, we discuss various aspects of Heisenberg's thought on hidden variables in the period 1927-1935. We also compare Heisenberg's approach to others current at the time, specifically that embodied by von Neumann's impossibility proof, but also views expressed mainly in correspondence by Pauli and by Schrödinger. We shall base ourselves mostly on published and unpublished materials that are known but little-studied, among others Heisenberg's own draft response to the EPR paper. Our aim will be not only to clarify Heisenberg's thought on the hidden-variables question, but in part also to clarify how this question was understood more generally at the time.
Functors of White Noise Associated to Characters of the Infinite Symmetric Group
NASA Astrophysics Data System (ADS)
Bożejko, Marek; Guţă, Mădălin
The characters of the infinite symmetric group are extended to multiplicative positive definite functions on pair partitions by using an explicit representation due to Veršik and Kerov. The von Neumann algebra generated by the fields, with f in an infinite-dimensional real Hilbert space, is infinite and the vacuum vector is not separating. For a family depending on an integer N < -1, an "exclusion principle" is found allowing at most a bounded number of identical particles on the same state.
Entanglement of two blocks of spins in the critical Ising model
NASA Astrophysics Data System (ADS)
Facchi, P.; Florio, G.; Invernizzi, C.; Pascazio, S.
2008-11-01
We compute the entropy of entanglement of two blocks of L spins at a distance d in the ground state of an Ising chain in an external transverse magnetic field. We numerically study the von Neumann entropy for different values of the transverse field. At the critical point we obtain analytical results for blocks of size L=1 and 2. In the general case, the critical entropy is shown to be additive when d→∞ . Finally, based on simple arguments, we derive an expression for the entropy at the critical point as a function of both L and d . This formula is in excellent agreement with numerical results.
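The block entanglement entropy studied above reduces, for a pure state, to the von Neumann entropy of the block's reduced density matrix. A minimal generic sketch (illustrative only, not the paper's Ising computation) for a bipartite pure state:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]      # drop zero eigenvalues (0 log 0 = 0)
    return float(-(evals * np.log(evals)).sum())

def block_entropy(psi, dim_block):
    """Entanglement entropy of the first block of a bipartite pure state.
    Reshaping psi and contracting gives the partial trace over the rest."""
    m = psi.reshape(dim_block, -1)
    rho_block = m @ m.conj().T
    return von_neumann_entropy(rho_block)

# Maximally entangled two-qubit state: block entropy equals ln 2
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
print(block_entropy(bell, 2))  # ≈ 0.693 (= ln 2)
```

For two disjoint blocks at distance d, as in the abstract, the same recipe applies with the reduced density matrix of both blocks together.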
Super-quantum correlation for SU(2) invariant state in 4⊗ 2 system
NASA Astrophysics Data System (ADS)
Li, Lin-Song; Tao, Yuan-Hong; Nan, Hua; Xu, Hui
2018-04-01
We analytically evaluate the weak one-way deficit and super-quantum discord for a system composed of spin-3/2 and spin-1/2 subsystems possessing SU(2) symmetry. We also make a comparative study of the relationships among the quantum discord, one-way deficit, weak one-way deficit, and super-quantum discord for the SU(2) invariant state. It is shown that the super-quantum discord via weak measurements is greater than that via von Neumann measurements, but the weak one-way deficit is less than the one-way deficit. As a result, weak measurements do not always reveal more quantumness.
Quantum entangled dark solitons formed by ultracold atoms in optical lattices.
Mishmash, R V; Carr, L D
2009-10-02
Inspired by experiments on Bose-Einstein condensates in optical lattices, we study the quantum evolution of dark soliton initial conditions in the context of the Bose-Hubbard Hamiltonian. An extensive set of quantum measures is utilized in our analysis, including von Neumann and generalized quantum entropies, quantum depletion, and the pair correlation function. We find that quantum effects cause the soliton to fill in. Moreover, soliton-soliton collisions become inelastic, in strong contrast to the predictions of mean-field theory. These features show that the lifetime and collision properties of dark solitons in optical lattices provide clear signals of quantum effects.
Spatial correlation in matter-wave interference as a measure of decoherence, dephasing, and entropy
NASA Astrophysics Data System (ADS)
Chen, Zilin; Beierle, Peter; Batelaan, Herman
2018-04-01
The loss of contrast in double-slit electron diffraction due to dephasing and decoherence processes is studied. It is shown that the spatial intensity correlation function of diffraction patterns can be used to distinguish between dephasing and decoherence. This establishes a measure of time reversibility that does not require the determination of coherence terms of the density matrix, while von Neumann entropy, another measure of time reversibility, does require coherence terms. This technique is exciting in view of the need to understand and control the detrimental experimental effect of contrast loss and for fundamental studies on the transition from the classical to the quantum regime.
NASA Astrophysics Data System (ADS)
Benini, Luca
2017-06-01
The "internet of everything" envisions trillions of connected objects loaded with high-bandwidth sensors requiring massive amounts of local signal processing, fusion, pattern extraction and classification. From the computational viewpoint, the challenge is formidable and can be addressed only by pushing computing fabrics toward massive parallelism and brain-like energy efficiency levels. CMOS technology can still take us a long way toward this goal, but technology scaling is losing steam. Energy efficiency improvement will increasingly hinge on architecture, circuits, design techniques such as heterogeneous 3D integration, mixed-signal preprocessing, event-based approximate computing and non-Von-Neumann architectures for scalable acceleration.
Spectral density of mixtures of random density matrices for qubits
NASA Astrophysics Data System (ADS)
Zhang, Lin; Wang, Jiamei; Chen, Zhihua
2018-06-01
We derive the spectral density of the equiprobable mixture of two random density matrices of a two-level quantum system. We also work out the spectral density of mixture under the so-called quantum addition rule. We use the spectral densities to calculate the average entropy of mixtures of random density matrices, and show that the average entropy of the arithmetic-mean-state of n qubit density matrices randomly chosen from the Hilbert-Schmidt ensemble is never decreasing with the number n. We also get the exact value of the average squared fidelity. Some conjectures and open problems related to von Neumann entropy are also proposed.
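The monotonicity result above rests on the concavity of the von Neumann entropy: mixing density matrices never decreases entropy on average. A small numerical check (a generic sketch under the stated Hilbert-Schmidt sampling, not the paper's derivation):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(d=2):
    """Hilbert-Schmidt random density matrix: rho = G G† / Tr(G G†)
    with G a complex Ginibre matrix."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = g @ g.conj().T
    return m / np.trace(m).real

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

# Concavity of S: the entropy of the equal mixture is at least
# the mean of the individual entropies.
r1, r2 = random_density_matrix(), random_density_matrix()
assert entropy(0.5 * (r1 + r2)) >= 0.5 * (entropy(r1) + entropy(r2))
```

Iterating this argument over n samples is what makes the average entropy of the arithmetic-mean state non-decreasing in n.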
Avetissian, H K; Ghazaryan, A G; Matevosyan, H H; Mkrtchian, G F
2015-10-01
The microscopic quantum theory of the nonlinear interaction of plasma with coherent shortwave electromagnetic radiation of arbitrary intensity is developed. The Liouville-von Neumann equation for the density matrix is solved analytically, treating the wave field exactly and the scattering potential of the plasma ions as a perturbation. With the help of this solution we calculate the nonlinear inverse-bremsstrahlung absorption rate for a grand canonical ensemble of electrons. The latter is studied in Maxwellian as well as in degenerate quantum plasmas for x-ray lasers at superhigh intensities, and it is shown that an efficient absorption coefficient can be achieved in these cases.
Forward marching procedure for separated boundary-layer flows
NASA Technical Reports Server (NTRS)
Carter, J. E.; Wornom, S. F.
1975-01-01
A forward-marching procedure for separated boundary-layer flows which permits the rapid and accurate solution of flows of limited extent is presented. The streamwise convection of vorticity in the reversed flow region is neglected, and this approximation is incorporated into a previously developed (Carter, 1974) inverse boundary-layer procedure. The equations are solved by the Crank-Nicolson finite-difference scheme in which column iteration is carried out at each streamwise station. Instabilities encountered in the column iterations are removed by introducing timelike terms in the finite-difference equations. This provides both unconditional diagonal dominance and a column iterative scheme, found to be stable using the von Neumann stability analysis.
Out-of-equilibrium protocol for Rényi entropies via the Jarzynski equality.
Alba, Vincenzo
2017-06-01
In recent years entanglement measures, such as the von Neumann and the Rényi entropies, provided a unique opportunity to access elusive features of quantum many-body systems. However, extracting entanglement properties analytically, experimentally, or in numerical simulations can be a formidable task. Here, by combining the replica trick and the Jarzynski equality we devise an alternative effective out-of-equilibrium protocol for measuring the equilibrium Rényi entropies. The key idea is to perform a quench in the geometry of the replicas. The Rényi entropies are obtained as the exponential average of the work performed during the quench. We illustrate an application of the method in classical Monte Carlo simulations, although it could be useful in different contexts, such as in quantum Monte Carlo, or experimentally in cold-atom systems. The method is most effective in the quasistatic regime, i.e., for a slow quench. As a benchmark, we compute the Rényi entropies in the Ising universality class in 1+1 dimensions. We find perfect agreement with the well-known conformal field theory predictions.
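For reference, the Rényi entropies measured by the protocol above are defined on the spectrum of a density matrix as S_n = log(Σ p_i^n)/(1-n), and recover the von Neumann entropy in the limit n → 1. A small illustrative sketch (standard definitions, not the paper's Monte Carlo method):

```python
import numpy as np

def renyi_entropy(probs, n):
    """Rényi entropy S_n = log(sum_i p_i^n) / (1 - n), n != 1,
    for the eigenvalue spectrum of a density matrix."""
    probs = np.asarray(probs, dtype=float)
    return float(np.log((probs ** n).sum()) / (1.0 - n))

def von_neumann(probs):
    """S = -sum_i p_i log p_i, the n -> 1 limit of S_n."""
    probs = np.asarray(probs, dtype=float)
    p = probs[probs > 0]
    return float(-(p * np.log(p)).sum())

spec = [0.5, 0.3, 0.2]                 # an example spectrum
print(renyi_entropy(spec, 2))          # second Rényi entropy
print(renyi_entropy(spec, 1.000001))   # approaches the von Neumann value
print(von_neumann(spec))
```

Note that S_n is non-increasing in n, so the replica index of the quench protocol controls which part of the spectrum dominates.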
Biomolecular computers with multiple restriction enzymes.
Sakowski, Sebastian; Krasinski, Tadeusz; Waldmajer, Jacek; Sarnik, Joanna; Blasiak, Janusz; Poplawski, Tomasz
2017-01-01
The development of conventional, silicon-based computers has several limitations, including some related to the Heisenberg uncertainty principle and the von Neumann "bottleneck". Biomolecular computers based on DNA and proteins are largely free of these disadvantages and, along with quantum computers, are reasonable alternatives to their conventional counterparts in some applications. The idea of a DNA computer proposed by Ehud Shapiro's group at the Weizmann Institute of Science was developed using one restriction enzyme as hardware and DNA fragments (the transition molecules) as software and input/output signals. This computer represented a two-state two-symbol finite automaton that was subsequently extended by using two restriction enzymes. In this paper, we propose the idea of a multistate biomolecular computer with multiple commercially available restriction enzymes as hardware. Additionally, an algorithmic method for the construction of transition molecules in the DNA computer based on the use of multiple restriction enzymes is presented. We use this method to construct multistate, biomolecular, nondeterministic finite automata with four commercially available restriction enzymes as hardware. We also describe an experimental application of this theoretical model to a biomolecular finite automaton made of four endonucleases.
Block entropy and quantum phase transition in the anisotropic Kondo necklace model
NASA Astrophysics Data System (ADS)
Mendoza-Arenas, J. J.; Franco, R.; Silva-Valencia, J.
2010-06-01
We study the von Neumann block entropy in the Kondo necklace model for different anisotropies η in the XY interaction between conduction spins using the density matrix renormalization group method. It was found that the block entropy presents a maximum for each η considered, and, comparing it with the results of the quantum criticality of the model based on the behavior of the energy gap, we observe that the maximum block entropy occurs at the quantum critical point between an antiferromagnetic and a Kondo singlet state, so this measure of entanglement is useful for giving information about where a quantum phase transition occurs in this model. We observe that the block entropy also presents a maximum at the quantum critical points that are obtained when an anisotropy Δ is included in the Kondo exchange between localized and conduction spins; when Δ diminishes for a fixed value of η, the critical point increases, favoring the antiferromagnetic phase.
NASA Astrophysics Data System (ADS)
Stöltzner, Michael
In response to the double-faced influence of string theory on mathematical practice and rigour, the mathematical physicists Arthur Jaffe and Frank Quinn have contemplated the idea that there exists a `theoretical' mathematics (alongside `theoretical' physics) whose basic structures and results still require independent corroboration by mathematical proof. In this paper, I shall take the Jaffe-Quinn debate mainly as a problem of mathematical ontology and analyse it against the backdrop of two philosophical views that are appreciative towards informal mathematical development and conjectural results: Lakatos's methodology of proofs and refutations and John von Neumann's opportunistic reading of Hilbert's axiomatic method. The comparison of both approaches shows that mitigating Lakatos's falsificationism makes his insights about mathematical quasi-ontology more relevant to 20th century mathematics in which new structures are introduced by axiomatisation and not necessarily motivated by informal ancestors. The final section discusses the consequences of string theorists' claim to finality for the theory's mathematical make-up. I argue that ontological reductionism as advocated by particle physicists and the quest for mathematically deeper axioms do not necessarily lead to identical results.
NASA Technical Reports Server (NTRS)
Rogers, David
1988-01-01
The advent of the Connection Machine profoundly changes the world of supercomputers. The highly nontraditional architecture makes possible the exploration of algorithms that were impractical for standard Von Neumann architectures. Sparse distributed memory (SDM) is an example of such an algorithm. Sparse distributed memory is a particularly simple and elegant formulation for an associative memory. The foundations for sparse distributed memory are described, and some simple examples of using the memory are presented. The relationship of sparse distributed memory to three important computational systems is shown: random-access memory, neural networks, and the cerebellum of the brain. Finally, the implementation of the algorithm for sparse distributed memory on the Connection Machine is discussed.
Nonvolatile “AND,” “OR,” and “NOT” Boolean logic gates based on phase-change memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y.; Zhong, Y. P.; Deng, Y. F.
2013-12-21
Electronic devices or circuits that can implement both logic and memory functions are regarded as the building blocks for future massive parallel computing beyond von Neumann architecture. Here we proposed phase-change memory (PCM)-based nonvolatile logic gates capable of AND, OR, and NOT Boolean logic operations verified in SPICE simulations and circuit experiments. The logic operations are parallel computing and results can be stored directly in the states of the logic gates, facilitating the combination of computing and memory in the same circuit. These results are encouraging for ultralow-power and high-speed nonvolatile logic circuit design based on novel memory devices.
Emulating short-term synaptic dynamics with memristive devices
NASA Astrophysics Data System (ADS)
Berdan, Radu; Vasilaki, Eleni; Khiat, Ali; Indiveri, Giacomo; Serb, Alexandru; Prodromakis, Themistoklis
2016-01-01
Neuromorphic architectures offer great promise for achieving computation capacities beyond conventional Von Neumann machines. The essential elements for achieving this vision are highly scalable synaptic mimics that do not undermine biological fidelity. Here we demonstrate that single solid-state TiO2 memristors can exhibit non-associative plasticity phenomena observed in biological synapses, supported by their metastable memory state transition properties. We show that, contrary to conventional uses of solid-state memory, the existence of rate-limiting volatility is a key feature for capturing short-term synaptic dynamics. We also show how the temporal dynamics of our prototypes can be exploited to implement spatio-temporal computation, demonstrating the memristors' full potential for building biophysically realistic neural processing systems.
An Algebraic Formulation of Level One Wess-Zumino Models
NASA Astrophysics Data System (ADS)
Böckenhauer, Jens
The highest weight modules of the chiral algebra of orthogonal WZW models at level one possess a realization in fermionic representation spaces; the Kac-Moody and Virasoro generators are represented as unbounded limits of even CAR algebras. It is shown that the representation theory of the underlying even CAR algebras reproduces precisely the sectors of the chiral algebra. This fact allows one to develop a theory of local von Neumann algebras on the punctured circle, fitting nicely into the Doplicher-Haag-Roberts framework. The relevant localized endomorphisms which generate the charged sectors are explicitly constructed by means of Bogoliubov transformations. Using CAR theory, the fusion rules in terms of sector equivalence classes are proven.
NASA Astrophysics Data System (ADS)
McCaul, G. M. G.; Lorenz, C. D.; Kantorovich, L.
2017-03-01
We present a partition-free approach to the evolution of density matrices for open quantum systems coupled to a harmonic environment. The influence functional formalism combined with a two-time Hubbard-Stratonovich transformation allows us to derive a set of exact differential equations for the reduced density matrix of an open system, termed the extended stochastic Liouville-von Neumann equation. Our approach generalizes previous work based on Caldeira-Leggett models and a partitioned initial density matrix. This provides a simple, yet exact, closed-form description for the evolution of open systems from equilibriated initial conditions. The applicability of this model and the potential for numerical implementations are also discussed.
Early History of BELL'S Theorem Theory and Experiment
NASA Astrophysics Data System (ADS)
Clauser, John F.
Before 1980 it was unfashionable for a physicist to admit that he either did not understand and/or doubted the Truth and/or Orthodoxy of Quantum Mechanics (QM). Contemporary wisdom deemed it impossible that it may lead to incorrect predictions. Thus, it was foolish to suggest that it warranted further testing. Said wisdom proclaimed that nothing would ever be gained by any such pursuit. Bohr had won his debates with Einstein. Von Neumann had proven all other interpretations wrong. That was the end to it! Only an iconoclast dared think otherwise. Here I provide a brief history of some of my encounters with a few fellow iconoclasts, past denizens of a QM doubter's subculture.
Subsystem eigenstate thermalization hypothesis
NASA Astrophysics Data System (ADS)
Dymarsky, Anatoly; Lashkari, Nima; Liu, Hong
2018-01-01
Motivated by the qualitative picture of canonical typicality, we propose a refined formulation of the eigenstate thermalization hypothesis (ETH) for chaotic quantum systems. This formulation, which we refer to as subsystem ETH, is in terms of the reduced density matrix of subsystems. This strong form of ETH outlines the set of observables defined within the subsystem for which it guarantees eigenstate thermalization. We discuss the limits when the size of the subsystem is small or comparable to its complement. In the latter case we outline the way to calculate the leading volume-proportional contribution to the von Neumann and Rényi entanglement entropies. Finally, we provide numerical evidence for the proposal in the case of a one-dimensional Ising spin chain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alcaraz, Francisco Castilho; Ibanez Berganza, Miguel; Sierra, German
In a quantum critical chain, the scaling regime of the energy and momentum of the ground state and low-lying excitations is described by conformal field theory (CFT). The same holds true for the von Neumann and Rényi entropies of the ground state, which display a universal logarithmic behavior depending on the central charge. In this Letter we generalize this result to those excited states of the chain that correspond to primary fields in CFT. It is shown that the nth Rényi entropy is related to a 2n-point correlator of primary fields. We verify this statement for the critical XX and XXZ chains. This result uncovers a new link between quantum information theory and CFT.
NASA Astrophysics Data System (ADS)
Rashvand, Taghi
2016-11-01
We present a new scheme for quantum teleportation in which one can teleport an unknown state with certainty via a non-maximally entangled channel, using an auxiliary system. In this scheme, depending on the state of the auxiliary system, one can find a class of orthogonal vector sets as bases such that, by performing a von Neumann measurement in each element of this class, Alice can teleport an unknown state with unit fidelity and unit probability. A comparison of our scheme with some previous schemes is given, and we show that our scheme has advantages that the others do not.
Conditional steering under the von Neumann scenario
NASA Astrophysics Data System (ADS)
Mukherjee, Kaushiki; Paul, Biswajit; Karmakar, Sumana; Sarkar, Debasis; Mukherjee, Amit; Bhattacharya, Some Sankar; Roy, Arup
2017-08-01
In Phys. Lett. A 166, 293 (1992), 10.1016/0375-9601(92)90711-T, Popescu and Rohrlich characterized nonlocality of pure n-partite entangled systems by studying bipartite violation of local realism when n-2 of the parties perform projective measurements on their particles. A pertinent question in this scenario is whether a similar characterization is possible for n-partite mixed entangled states also. In the present work we have followed an analogous approach so as to explore whether, given a tripartite mixed entangled state, the conditional bipartite states obtained by performing projective measurement on the third party demonstrate a weaker form of nonlocality, quantum steering. We also compare this phenomenon of conditional steering with existing notions of tripartite correlations.
Faithful nonclassicality indicators and extremal quantum correlations in two-qubit states
NASA Astrophysics Data System (ADS)
Girolami, Davide; Paternostro, Mauro; Adesso, Gerardo
2011-09-01
The state disturbance induced by locally measuring a quantum system yields a signature of nonclassical correlations beyond entanglement. Here, we present a detailed study of such correlations for two-qubit mixed states. To overcome the asymmetry of quantum discord and the unfaithfulness of measurement-induced disturbance (severely overestimating quantum correlations), we propose an ameliorated measurement-induced disturbance as nonclassicality indicator, optimized over joint local measurements, and we derive its closed expression for relevant two-qubit states. We study its analytical relation with discord, and characterize the maximally quantum-correlated mixed states, that simultaneously extremize both quantifiers at given von Neumann entropy: among all two-qubit states, these states possess the most robust quantum correlations against noise.
Horizon Entropy from Quantum Gravity Condensates.
Oriti, Daniele; Pranzetti, Daniele; Sindoni, Lorenzo
2016-05-27
We construct condensate states encoding the continuum spherically symmetric quantum geometry of a horizon in full quantum gravity, i.e., without any classical symmetry reduction, in the group field theory formalism. Tracing over the bulk degrees of freedom, we show how the resulting reduced density matrix manifestly exhibits a holographic behavior. We derive a complete orthonormal basis of eigenstates for the reduced density matrix of the horizon and use it to compute the horizon entanglement entropy. By imposing consistency with the horizon boundary conditions and semiclassical thermodynamical properties, we recover the Bekenstein-Hawking entropy formula for any value of the Immirzi parameter. Our analysis supports the equivalence between the von Neumann (entanglement) entropy interpretation and the Boltzmann (statistical) one.
Bare Quantum Null Energy Condition.
Fu, Zicao; Marolf, Donald
2018-02-16
The quantum null energy condition (QNEC) is a conjectured relation between a null version of quantum field theory energy and derivatives of quantum field theory von Neumann entropy. In some cases, divergences cancel between these two terms and the QNEC is intrinsically finite. We study the more general case here where they do not and argue that a QNEC can still hold for bare (unrenormalized) quantities. While the original QNEC applied only to locally stationary null congruences in backgrounds that solve semiclassical theories of quantum gravity, at least in the formal perturbation theory at a small Planck length, the quantum focusing conjecture can be viewed as the special case of our bare QNEC for which the metric is on shell.
A non-linear model of economic production processes
NASA Astrophysics Data System (ADS)
Ponzi, A.; Yasutomi, A.; Kaneko, K.
2003-06-01
We present a new two phase model of economic production processes which is a non-linear dynamical version of von Neumann's neoclassical model of production, including a market price-setting phase as well as a production phase. The rate of an economic production process is observed, for the first time, to depend on the minimum of its input supplies. This creates highly non-linear supply and demand dynamics. By numerical simulation, production networks are shown to become unstable when the ratio of different products to total processes increases. This provides some insight into observed stability of competitive capitalist economies in comparison to monopolistic economies. Capitalist economies are also shown to have low unemployment.
Calculating with light using a chip-scale all-optical abacus.
Feldmann, J; Stegmaier, M; Gruhler, N; Ríos, C; Bhaskaran, H; Wright, C D; Pernice, W H P
2017-11-02
Machines that simultaneously process and store multistate data at one and the same location can provide a new class of fast, powerful and efficient general-purpose computers. We demonstrate the central element of an all-optical calculator, a photonic abacus, which provides multistate compute-and-store operation by integrating functional phase-change materials with nanophotonic chips. With picosecond optical pulses we perform the fundamental arithmetic operations of addition, subtraction, multiplication, and division, including a carryover into multiple cells. This basic processing unit is embedded into a scalable phase-change photonic network and addressed optically through a two-pulse random access scheme. Our framework provides first steps towards light-based non-von Neumann arithmetic.
Self-adjoint realisations of the Dirac-Coulomb Hamiltonian for heavy nuclei
NASA Astrophysics Data System (ADS)
Gallone, Matteo; Michelangeli, Alessandro
2018-02-01
We derive a classification of the self-adjoint extensions of the three-dimensional Dirac-Coulomb operator in the critical regime of the Coulomb coupling. Our approach is based solely on the Kreĭn-Višik-Birman extension scheme, or equivalently on Grubb's universal classification theory, as opposed to previous works within the standard von Neumann framework. This lets the boundary condition of self-adjointness emerge, neatly and intrinsically, as a multiplicative constraint between the regular and singular parts of the functions in the domain of the extension, the multiplicative constant also giving immediate information on the invertibility properties and on the resolvent and spectral gap of the extension.
NASA Astrophysics Data System (ADS)
Bykov, N. V.
2014-12-01
Numerical modelling of a ballistic setup with a tapered adapter and plastic piston is considered. The processes in the firing chamber are described within the framework of quasi-one-dimensional gas dynamics and a geometrical law of propellant burn by means of Lagrangian mass coordinates. The deformable piston is treated as an ideal liquid with specific equations of state. The numerical solution is obtained by means of a modified explicit von Neumann scheme. The calculation results show that the ballistic setup with a tapered adapter and plastic piston increases shell muzzle velocity by a factor of 1.5-2.
Relating different quantum generalizations of the conditional Rényi entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomamichel, Marco; School of Physics, The University of Sydney, Sydney 2006; Berta, Mario
2014-08-15
Recently a new quantum generalization of the Rényi divergence and the corresponding conditional Rényi entropies was proposed. Here, we report on a surprising relation between conditional Rényi entropies based on this new generalization and conditional Rényi entropies based on the quantum relative Rényi entropy that was used in previous literature. Our result generalizes the well-known duality relation H(A|B) + H(A|C) = 0 of the conditional von Neumann entropy for tripartite pure states to Rényi entropies of two different kinds. As a direct application, we prove a collection of inequalities that relate different conditional Rényi entropies and derive a new entropic uncertainty relation.
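The duality relation H(A|B) + H(A|C) = 0 is easy to check numerically in the von Neumann case. The sketch below (our own construction, not from the paper) draws a random pure three-qubit state, forms the reduced density matrices by partial trace, and verifies that the two conditional entropies cancel.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(rho):
    """Von Neumann entropy in bits."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

# Random pure state on A ⊗ B ⊗ C, each subsystem a qubit.
psi = rng.normal(size=(2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2))
psi /= np.linalg.norm(psi)

# Reduced states by tracing out the remaining subsystems.
rho_AB = np.tensordot(psi, psi.conj(), axes=([2], [2])).reshape(4, 4)
rho_AC = np.tensordot(psi, psi.conj(), axes=([1], [1])).reshape(4, 4)
rho_B = np.einsum('abc,adc->bd', psi, psi.conj())
rho_C = np.einsum('abc,abd->cd', psi, psi.conj())

H_A_given_B = entropy(rho_AB) - entropy(rho_B)
H_A_given_C = entropy(rho_AC) - entropy(rho_C)
print(H_A_given_B + H_A_given_C)  # ~0 for any tripartite pure state
```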
Bare Quantum Null Energy Condition
NASA Astrophysics Data System (ADS)
Fu, Zicao; Marolf, Donald
2018-02-01
The quantum null energy condition (QNEC) is a conjectured relation between a null version of quantum field theory energy and derivatives of quantum field theory von Neumann entropy. In some cases, divergences cancel between these two terms and the QNEC is intrinsically finite. We study the more general case here where they do not and argue that a QNEC can still hold for bare (unrenormalized) quantities. While the original QNEC applied only to locally stationary null congruences in backgrounds that solve semiclassical theories of quantum gravity, at least in the formal perturbation theory at a small Planck length, the quantum focusing conjecture can be viewed as the special case of our bare QNEC for which the metric is on shell.
Beller Lecture: The Roots of Leo Szilard and his Interdisciplinarity
NASA Astrophysics Data System (ADS)
Marx, George
1998-04-01
"A Central European among the whites" was said of Leo Szilard, who came from a polycultural family. In the early 20th century he grew up in Hungary, at the crossroads of history, where political regimes, national borders, ideological doctrines, and ``final truths'' changed in a dizzying cavalcade. Instead of conservative dogmatism, this social environment required critical thinking in order to survive. World War I was the school of Theodore von Kármán, John von Neumann, Eugene P. Wigner and Leo Szilard; each of them learned to cross political and disciplinary boundaries without inhibition. Their sensitivity to trends was utilized by the United States when war efforts and high tech required orientation toward new horizons. Szilard's interest ranged from statistical physics through information theory to biological evolution, from life phenomena through hot atoms to nuclear strategy. His intellectual adventures might look like crazy jumps to specialists. But looking back at the political and technological history of the 20th century, one can see that it was the consistent progress of a future-sensitive mind.
Theory of Mach reflection of detonation at glancing incidence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bdzil, John Bohdan; Short, Mark
2016-12-06
In this paper, we present a theory for Mach reflection of a detonation undergoing glancing incidence reflection off of a rigid wall. Our focus is on condensed-phase explosives, which we describe with a constant adiabatic gamma equation of state and an irreversible and either state-independent or weakly state-dependent reaction rate. We consider two detonation models: (1) the instantaneous reaction heat-release Chapman–Jouguet (CJ) limit and (2) the spatially resolved reaction heat-release Zeldovich–von Neumann–Döring (ZND) limit, where here we only consider that a small fraction of the detonation energy release is spatially resolved (the SRHR limit). We observe a three-shock reflection in the CJ limit case, with a Mach shock that is curved. In addition, we develop an analytical expression for the triple-point track angle as a function of the angle of incidence. For the SRHR model, we observe a smooth lead shock, akin to von Neumann reflection, with no reflected shock in the reaction zone. Only at larger angles of incidence is a three-shock Mach reflection observed.
NASA Astrophysics Data System (ADS)
Liou, Meng-Sing
2013-11-01
The development of computational fluid dynamics over the last few decades has yielded enormous successes and capabilities that are being routinely employed today; however, there remain some open problems to be properly resolved. One example is the so-called overheating problem, which can arise in two very different scenarios, from either colliding or receding streams. Common to both is a localized, numerically over-predicted temperature. Von Neumann reported the former, a compressive overheating, nearly 70 years ago and numerically smeared the temperature peak by introducing artificial diffusion. The latter, however, is unphysical in an expansive (rarefying) situation; it still dogs every method known to the author. We will present a study aiming at resolving this overheating problem and we find that: (1) the entropy increase is linked one-to-one to the temperature rise and (2) the overheating is inevitable in the current computational fluid dynamics framework in practice. Finally we will show a simple hybrid method that fundamentally cures the overheating problem in a rarefying flow while retaining the property of accurate shock capturing. Moreover, this remedy (an enhancement of current numerical methods) can be included easily in present Eulerian codes. This work is performed under NASA's Fundamental Aeronautics Program.
Monte Carlo: in the beginning and some great expectations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metropolis, N.
1985-01-01
The central theme will be on the historical setting and origins of the Monte Carlo Method. The scene was post-war Los Alamos Scientific Laboratory. There was an inevitability about the Monte Carlo Event: the ENIAC had recently enjoyed its meteoric rise (on a classified Los Alamos problem); Stan Ulam had returned to Los Alamos; John von Neumann was a frequent visitor. Techniques, algorithms, and applications developed rapidly at Los Alamos. Soon, the fascination of the Method reached wider horizons. The first paper was submitted for publication in the spring of 1949. In the summer of 1949, the first open conference was held at the University of California at Los Angeles. Of some interest perhaps is an account of Fermi's earlier, independent application in neutron moderation studies while at the University of Rome. The quantum leap expected with the advent of massively parallel processors will provide stimuli for very ambitious applications of the Monte Carlo Method in disciplines ranging from field theories to cosmology, including more realistic models in the neurosciences. A structure of multi-instruction sets for parallel processing is ideally suited for the Monte Carlo approach. One may even hope for a modest hardening of the soft sciences.
Liu, Yan; Cheng, H D; Huang, Jianhua; Zhang, Yingtao; Tang, Xianglong
2012-10-01
In this paper, a novel method for lesion segmentation in breast ultrasound (BUS) images based on the cellular automata principle is proposed. Its energy transition function is formulated based on global image information difference and local image information difference using different energy transfer strategies. First, an energy decrease strategy is used for modeling the spatial relation information of pixels. For modeling global image information difference, a seed information comparison function is developed using an energy preserve strategy. Then, a texture information comparison function is proposed for considering local image difference in different regions, which is helpful for handling blurry boundaries. Moreover, two neighborhood systems (von Neumann and Moore neighborhood systems) are integrated as the evolution environment, and a similarity-based criterion is used for suppressing noise and reducing computation complexity. The proposed method was applied to 205 clinical BUS images for studying its characteristics and functionality, and several overlapping area error metrics and statistical evaluation methods are utilized for evaluating its performance. The experimental results demonstrate that the proposed method can handle BUS images with blurry boundaries and low contrast well and can segment breast lesions accurately and effectively.
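The two neighborhood systems mentioned above differ only in connectivity: the von Neumann neighborhood uses the 4 edge-adjacent cells, the Moore neighborhood all 8 surrounding cells. A minimal illustrative sketch (a toy growth rule, not the authors' energy-transition model):

```python
import numpy as np

VON_NEUMANN = [(-1, 0), (1, 0), (0, -1), (0, 1)]            # 4-connected
MOORE = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
         if (di, dj) != (0, 0)]                              # 8-connected

def grow_step(seed_mask, neighborhood):
    """One cellular-automaton step: a cell turns on if any neighbor is on."""
    out = seed_mask.copy()
    rows, cols = seed_mask.shape
    for i in range(rows):
        for j in range(cols):
            for di, dj in neighborhood:
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols and seed_mask[ni, nj]:
                    out[i, j] = True
    return out

seed = np.zeros((5, 5), dtype=bool)
seed[2, 2] = True
print(grow_step(seed, VON_NEUMANN).sum())  # 5: plus-shaped front
print(grow_step(seed, MOORE).sum())        # 9: square front
```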
On Bi-Grid Local Mode Analysis of Solution Techniques for 3-D Euler and Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Ibraheem, S. O.; Demuren, A. O.
1994-01-01
A procedure is presented for utilizing a bi-grid stability analysis as a practical tool for predicting multigrid performance in a range of numerical methods for solving Euler and Navier-Stokes equations. Model problems based on the convection, diffusion and Burgers' equations are used to illustrate the superiority of the bi-grid analysis as a predictive tool for multigrid performance in comparison to the smoothing factor derived from conventional von Neumann analysis. For the Euler equations, bi-grid analysis is presented for three upwind difference based factorizations, namely Spatial, Eigenvalue and Combination splits, and two central difference based factorizations, namely LU and ADI methods. In the former, both the Steger-Warming and van Leer flux-vector splitting methods are considered. For the Navier-Stokes equations, only the Beam-Warming (ADI) central difference scheme is considered. In each case, estimates of multigrid convergence rates from the bi-grid analysis are compared to smoothing factors obtained from single-grid stability analysis. Effects of grid aspect ratio and flow skewness are examined. Both predictions are compared with practical multigrid convergence rates for 2-D Euler and Navier-Stokes solutions based on the Beam-Warming central scheme.
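The von Neumann smoothing factor mentioned above can be illustrated on a model problem. The sketch below (an illustrative toy, not the paper's factorizations) evaluates the amplification factor of the first-order upwind scheme for linear advection and reports the smoothing factor as the maximum amplification over the high-frequency modes:

```python
import numpy as np

def upwind_amplification(nu, theta):
    """Von Neumann symbol of first-order upwind for u_t + a u_x = 0,
    with CFL number nu = a*dt/dx and Fourier angle theta."""
    return 1.0 - nu * (1.0 - np.exp(-1j * theta))

theta = np.linspace(1e-6, np.pi, 2001)
for nu in (0.5, 0.9, 1.1):
    g = np.abs(upwind_amplification(nu, theta))
    high = theta >= np.pi / 2   # high-frequency (rough) modes
    print(f"nu={nu}: max|g|={g.max():.3f}, smoothing factor={g[high].max():.3f}")
```

For nu in (0, 1] all modes satisfy |g| <= 1 (stable), while nu = 1.1 amplifies the highest frequencies, the classic von Neumann instability signature.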
Rényi squashed entanglement, discord, and relative entropy differences
NASA Astrophysics Data System (ADS)
Seshadreesan, Kaushik P.; Berta, Mario; Wilde, Mark M.
2015-10-01
The squashed entanglement quantifies the amount of entanglement in a bipartite quantum state, and it satisfies all of the axioms desired for an entanglement measure. The quantum discord is a measure of quantum correlations that are different from those due to entanglement. What these two measures have in common is that they are both based upon the conditional quantum mutual information. In Berta et al (2015 J. Math. Phys. 56 022205), we recently proposed Rényi generalizations of the conditional quantum mutual information of a tripartite state on ABC (with C being the conditioning system), which were shown to satisfy some properties that hold for the original quantity, such as non-negativity, duality, and monotonicity with respect to local operations on the system B (with it being left open to show that the Rényi quantity is monotone with respect to local operations on system A). Here we define a Rényi squashed entanglement and a Rényi quantum discord based on a Rényi conditional quantum mutual information and investigate these quantities in detail. Taking as a conjecture that the Rényi conditional quantum mutual information is monotone with respect to local operations on both systems A and B, we prove that the Rényi squashed entanglement and the Rényi quantum discord satisfy many of the properties of the respective original von Neumann entropy based quantities. In our prior work (Berta et al 2015 Phys. Rev. A 91 022333), we also detailed a procedure to obtain Rényi generalizations of any quantum information measure that is equal to a linear combination of von Neumann entropies with coefficients chosen from the set {-1, 0, 1}. Here, we extend this procedure to include differences of relative entropies.
Using the extended procedure and a conjectured monotonicity of the Rényi generalizations in the Rényi parameter, we discuss potential remainder terms for well known inequalities such as monotonicity of the relative entropy, joint convexity of the relative entropy, and the Holevo bound.
Process, System, Causality, and Quantum Mechanics: A Psychoanalysis of Animal Faith
NASA Astrophysics Data System (ADS)
Etter, Tom; Noyes, H. Pierre
We shall argue in this paper that a central piece of modern physics does not really belong to physics at all but to elementary probability theory. Given a joint probability distribution J on a set of random variables containing x and y, define a link between x and y to be the condition x=y on J. Define the state D of a link x=y as the joint probability distribution matrix on x and y without the link. The two core laws of quantum mechanics are the Born probability rule, and the unitary dynamical law whose best known form is Schrödinger's equation. Von Neumann formulated these two laws in the language of Hilbert space as prob(P) = trace(PD) and D'T = TD respectively, where P is a projection, D and D' are (von Neumann) density matrices, and T is a unitary transformation. We'll see that if we regard link states as density matrices, the algebraic forms of these two core laws occur as completely general theorems about links. When we extend probability theory by allowing cases to count negatively, we find that the Hilbert space framework of quantum mechanics proper emerges from the assumption that all D's are symmetrical in rows and columns. On the other hand, Markovian systems emerge when we assume that one of every linked variable pair has a uniform probability distribution. By representing quantum and Markovian structure in this way, we see clearly both how they differ, and also how they can coexist in natural harmony with each other, as they must in quantum measurement, which we'll examine in some detail. Looking beyond quantum mechanics, we see how both structures have their special places in a much larger continuum of formal systems that we have yet to look for in nature.
NASA Astrophysics Data System (ADS)
Gromov, V. A.; Sharygin, G. S.; Mironov, M. V.
2012-08-01
An interval method of radar signal detection and selection based on a non-energetic polarization parameter, the ellipticity angle, is suggested. The examined method is optimal by the Neyman-Pearson criterion. The probability of correct detection for a preset probability of false alarm is calculated for different signal/noise ratios. Recommendations for optimization of the given method are provided.
Harmonic oscillator representation in the theory of scattering and nuclear reactions
NASA Technical Reports Server (NTRS)
Smirnov, Yuri F.; Shirokov, A. M.; Lurie, Yuri A.; Zaitsev, S. A.
1995-01-01
The following questions, concerning the application of the harmonic oscillator representation (HOR) in the theory of scattering and reactions, are discussed: the formulation of the scattering theory in HOR; exact solutions of the free motion Schroedinger equation in HOR; separable expansion of the short range potentials and the calculation of the phase shifts; 'isolated states' as generalization of the Wigner-von Neumann bound states embedded in continuum; a nuclear coupled channel problem in HOR; and the description of true three body scattering in HOR. As an illustration the soft dipole mode in the (11)Li nucleus is considered in a frame of the (9)Li+n+n cluster model taking into account of three body continuum effects.
Partial Measurements and the Realization of Quantum-Mechanical Counterfactuals
NASA Astrophysics Data System (ADS)
Paraoanu, G. S.
2011-07-01
We propose partial measurements as a conceptual tool to understand how to operate with counterfactual claims in quantum physics. Indeed, unlike standard von Neumann measurements, partial measurements can be reversed probabilistically. We first analyze the consequences of this rather unusual feature for the principle of superposition, for the complementarity principle, and for the issue of hidden variables. Then we move on to exploring non-local contexts, by reformulating the EPR paradox, the quantum teleportation experiment, and the entanglement-swapping protocol for the situation in which one uses partial measurements followed by their stochastic reversal. This leads to a number of counter-intuitive results, which are shown to be resolved if we give up the idea of attributing reality to the wavefunction of a single quantum system.
A history of the Allais paradox.
Heukelom, Floris
2015-03-01
This article documents the history of the Allais paradox, and shows that underneath the many discussions of the various protagonists lay different, irreconcilable epistemological positions. Savage, like his mentor von Neumann and similar to economist Friedman, worked from an epistemology of generalized characterizations. Allais, on the other hand, like economists Samuelson and Baumol, started from an epistemology of exact descriptions in which every axiom was an empirical claim that could be refuted directly by observations. As a result, the two sides failed to find a common ground. Only a few decades later was the now so-called Allais paradox rediscovered as an important precursor when a new behavioural economic subdiscipline started to adopt the epistemology of exact descriptions and its accompanying falsifications of rational choice theory.
Connes' embedding problem and Tsirelson's problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Junge, M.; Palazuelos, C.; Navascues, M.
2011-01-15
We show that Tsirelson's problem concerning the set of quantum correlations and Connes' embedding problem on finite approximations in von Neumann algebras (known to be equivalent to Kirchberg's QWEP conjecture) are essentially equivalent. Specifically, Tsirelson's problem asks whether the set of bipartite quantum correlations generated between tensor product separated systems is the same as the set of correlations between commuting C*-algebras. Connes' embedding problem asks whether any separable II_1 factor is a subfactor of the ultrapower of the hyperfinite II_1 factor. We show that an affirmative answer to Connes' question implies a positive answer to Tsirelson's. Conversely, a positive answer to a matrix valued version of Tsirelson's problem implies a positive one to Connes' problem.
On the reduced dynamics of a subset of interacting bosonic particles
NASA Astrophysics Data System (ADS)
Gessner, Manuel; Buchleitner, Andreas
2018-03-01
The quantum dynamics of a subset of interacting bosons in a subspace of fixed particle number is described in terms of symmetrized many-particle states. A suitable partial trace operation over the von Neumann equation of an N-particle system produces a hierarchical expansion for the subdynamics of M ≤ N particles. Truncating this hierarchy with a pure product state ansatz yields the general, nonlinear coherent mean-field equation of motion. In the special case of a contact interaction potential, this reproduces the Gross-Pitaevskii equation. To account for incoherent effects on top of the mean-field evolution, we discuss possible extensions towards a second-order perturbation theory that accounts for interaction-induced decoherence in form of a nonlinear Lindblad-type master equation.
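The partial trace operation underlying the reduced-subset description can be illustrated in the simplest bipartite case. The helper below is a generic sketch, not the authors' symmetrized-bosonic construction; tracing one particle out of a Bell state leaves the maximally mixed state.

```python
import numpy as np

def partial_trace_B(rho, dA, dB):
    """Reduced state rho_A = Tr_B[rho] for rho on H_A ⊗ H_B."""
    # Reshape to (a, i, b, j) and sum the B indices on the diagonal i = j.
    return np.einsum('aibi->ab', rho.reshape(dA, dB, dA, dB))

# Bell state (|00> + |11>)/sqrt(2): its reduced single-particle state is I/2.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(bell, bell.conj())
rho_A = partial_trace_B(rho, 2, 2)
print(rho_A)
```

Iterating this kind of trace over the remaining particles is what produces the hierarchical expansion for the M-particle subdynamics described above.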
Deterministic quantum teleportation and information splitting via a peculiar W-class state
NASA Astrophysics Data System (ADS)
Mei, Feng; Yu, Ya-Fei; Zhang, Zhi-Ming
2010-02-01
In the paper (Phys. Rev. A 74, 062320 (2006)) Agrawal et al. introduced a kind of W-class state which can be used for the quantum teleportation of a single-particle state via a three-particle von Neumann measurement, and they thought that the state could not be used to teleport an unknown state by making two-particle and one-particle measurements. Here we reconsider the features of the W-class state and the quantum teleportation process via the W-class state. We show that, by introducing a unitary operation, the quantum teleportation can be achieved deterministically by making two-particle and one-particle measurements. In addition, our protocol is extended to the process of teleporting a two-particle state and splitting information.
Open Quantum Systems and Classical Trajectories
NASA Astrophysics Data System (ADS)
Rebolledo, Rolando
2004-09-01
A Quantum Markov Semigroup consists of a family 𝒯 = (𝒯_t)_{t ∈ ℝ₊} of normal, ω*-continuous, completely positive maps on a von Neumann algebra 𝔐 which preserve the unit and satisfy the semigroup property. This class of semigroups has been extensively used to represent open quantum systems. This article is aimed at studying the existence of a 𝒯-invariant abelian subalgebra 𝔄 of 𝔐. When this happens, the restriction of 𝒯_t to 𝔄 defines a classical Markov semigroup T = (T_t)
Criteria for equality in two entropic inequalities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirokov, M. E., E-mail: msh@mi.ras.ru
2014-07-31
We obtain a simple criterion for local equality between the constrained Holevo capacity and the quantum mutual information of a quantum channel. This shows that the set of all states for which this equality holds is determined by the kernel of the channel (as a linear map). Applications to Bosonic Gaussian channels are considered. It is shown that for a Gaussian channel having no completely depolarizing components the above characteristics may coincide only at non-Gaussian mixed states and a criterion for the existence of such states is given. All the obtained results may be reformulated as conditions for equality between the constrained Holevo capacity of a quantum channel and the input von Neumann entropy. Bibliography: 20 titles. (paper)
Pulse-coupled neural network implementation in FPGA
NASA Astrophysics Data System (ADS)
Waldemark, Joakim T. A.; Lindblad, Thomas; Lindsey, Clark S.; Waldemark, Karina E.; Oberg, Johnny; Millberg, Mikael
1998-03-01
Pulse Coupled Neural Networks (PCNN) are biologically inspired neural networks, mainly based on studies of the visual cortex of small mammals. The PCNN is very well suited as a pre-processor for image processing, particularly in connection with object isolation, edge detection and segmentation. Several implementations of PCNN on von Neumann computers, as well as on special parallel processing hardware devices (e.g. SIMD), exist. However, these implementations are not as flexible as required for many applications. Here we present an implementation in Field Programmable Gate Arrays (FPGA) together with a performance analysis. The FPGA hardware implementation may be considered a platform for further, extended implementations and easily expanded into various applications. The latter may include advanced on-line image analysis with close to real-time performance.
Sharpening the second law of thermodynamics with the quantum Bayes theorem.
Gharibyan, Hrant; Tegmark, Max
2014-09-01
We prove a generalization of the classic Groenewold-Lindblad entropy inequality, combining decoherence and the quantum Bayes theorem into a simple unified picture where decoherence increases entropy while observation decreases it. This provides a rigorous quantum-mechanical version of the second law of thermodynamics, governing how the entropy of a system (the entropy of its density matrix, partial-traced over the environment and conditioned on what is known) evolves under general decoherence and observation. The powerful tool of spectral majorization enables both simple alternative proofs of the classic Lindblad and Holevo inequalities without using strong subadditivity, and also novel inequalities for decoherence and observation that hold not only for von Neumann entropy, but also for arbitrary concave entropies.
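The claim that decoherence increases entropy while observation decreases it can be checked in the smallest example. The sketch below (illustrative, not the authors' derivation) fully dephases a pure qubit in the measurement basis and compares von Neumann entropies; conditioning on a definite measurement outcome would instead return a pure state of zero entropy.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Von Neumann entropy in bits: -Tr(rho log2 rho)."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

# Pure qubit state |+><+|.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)

# Decoherence: kill the off-diagonals in the measurement basis (full dephasing).
rho_dec = np.diag(np.diag(rho))

print(von_neumann_entropy(rho))      # pure state: entropy 0
print(von_neumann_entropy(rho_dec))  # maximally mixed: entropy 1 bit
```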
Weak values of spin and momentum in atomic systems.
NASA Astrophysics Data System (ADS)
Flack, Robert; Hiley, Basil; Barker, Peter; Monachello, Vincenzo; Morley, Joel
2017-04-01
Weak values have a long history and were first considered by Landau and London in connection with superfluids. Hirschfelder called them sub-observables, and Dirac anticipated them when discussing non-commutative geometry in quantum mechanics. The idea of a weak value has returned to prominence due to Aharonov, Albert and Vaidman showing how they can be measured. They are not eigenvalues of the system and cannot be measured by a collapse of the wave function with the traditional von Neumann (strong) measurement, which is a single stage process. In contrast, the weak measurement process has three stages: preselection, a weak stage and finally a post selection. Although weak values have been observed using photons and neutrons, we are building two experiments to observe weak values of spin and momentum in atomic systems. For spin we are following the method outlined by Duck et al, which is a variant on the original Stern-Gerlach experiment using a metastable, 2³S₁, form of helium. For momentum we are using a method similar to that used by Kocsis with excited argon atoms in the ³P₂ state, passing through a 2-slit interferometer. The design, simulation and re John Fetzer Memorial Trust.
Biomolecular computers with multiple restriction enzymes
Sakowski, Sebastian; Krasinski, Tadeusz; Waldmajer, Jacek; Sarnik, Joanna; Blasiak, Janusz; Poplawski, Tomasz
2017-01-01
Abstract The development of conventional, silicon-based computers has several limitations, including some related to the Heisenberg uncertainty principle and the von Neumann “bottleneck”. Biomolecular computers based on DNA and proteins are largely free of these disadvantages and, along with quantum computers, are reasonable alternatives to their conventional counterparts in some applications. The idea of a DNA computer proposed by Ehud Shapiro’s group at the Weizmann Institute of Science was developed using one restriction enzyme as hardware and DNA fragments (the transition molecules) as software and input/output signals. This computer represented a two-state two-symbol finite automaton that was subsequently extended by using two restriction enzymes. In this paper, we propose the idea of a multistate biomolecular computer with multiple commercially available restriction enzymes as hardware. Additionally, an algorithmic method for the construction of transition molecules in the DNA computer based on the use of multiple restriction enzymes is presented. We use this method to construct multistate, biomolecular, nondeterministic finite automata with four commercially available restriction enzymes as hardware. We also describe an experimental application of this theoretical model to a biomolecular finite automaton made of four endonucleases. PMID:29064510
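The abstract machine these molecular computers realize, a nondeterministic finite automaton, can be simulated in software by tracking the set of reachable states. The transition table below is a toy assumption for illustration, not the paper's enzyme encoding.

```python
def nfa_accepts(transitions, start, accept, word):
    """Simulate an NFA by subset tracking.
    transitions: dict mapping (state, symbol) -> set of next states."""
    current = {start}
    for symbol in word:
        # Union of all successor sets of the currently reachable states.
        current = set().union(*[transitions.get((s, symbol), set())
                                for s in current])
    return bool(current & accept)

# Toy two-symbol NFA accepting words over {a, b} that end in "ab".
T = {
    ('S0', 'a'): {'S0', 'S1'},
    ('S0', 'b'): {'S0'},
    ('S1', 'b'): {'S2'},
}
print(nfa_accepts(T, 'S0', {'S2'}, 'abab'))  # True
print(nfa_accepts(T, 'S0', {'S2'}, 'abba'))  # False
```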
Population Control of Self-Replicating Systems: Option C
NASA Technical Reports Server (NTRS)
Mccord, R. L.
1983-01-01
From the conception and development of the theory of self-replicating automata by John von Neumann, others have expanded on his theories. In 1980, Georg von Tiesenhausen and Wesley A. Darbro developed a report which is a "first" in presenting the theories in a conceptualized engineering setting. In that report several options involving self-replicating systems are presented. One of the options allows each primary to generate n replicas, one in each sequential time frame after its own generation. Each replica is limited to a maximum of m ancestors. This study involves determining the state vector of the replicas in an efficient manner. The problem is cast in matrix notation, where F = (f_ij) is a non-diagonalizable matrix. Any element f_ij represents the number of elements of type j = (c,d) in time frame k+1 generated from type i = (a,b) in time frame k. It is then shown that the state vector is F̄(k) = F̄(0) × F^k = F̄(0) × M × J^k × M^(-1), where J is a matrix in Jordan form having the same eigenvalues as F and M is a matrix composed of the eigenvectors and the generalized eigenvectors of F.
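The state-vector formula can be exercised numerically. Since computing a Jordan decomposition in floating point is ill-conditioned, the sketch below applies the equivalent form F̄(k) = F̄(0) F^k directly, using an illustrative (assumed) nondiagonalizable replication matrix rather than the report's actual F.

```python
import numpy as np

# Toy 3-type replication matrix: F[i, j] = number of type-j units produced
# per type-i unit in one time frame. This F is a single Jordan block with
# eigenvalue 1, hence nondiagonalizable, as in the report.
F = np.array([
    [1, 1, 0],   # a primary persists and emits one first-generation replica
    [0, 1, 1],   # a first-generation replica persists and emits one more
    [0, 0, 1],   # final-generation replicas only persist (ancestor limit)
])

f0 = np.array([1, 0, 0])  # start with a single primary

# State vector after k frames: f(k) = f(0) F^k
for k in range(4):
    print(k, f0 @ np.linalg.matrix_power(F, k))
```

The populations grow polynomially (binomially) rather than exponentially, which is exactly the signature of a repeated eigenvalue with a nontrivial Jordan block.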
NASA Astrophysics Data System (ADS)
Xiong, W.; Li, J.; Zhu, Y.; Luo, X.
2018-07-01
The transition between regular reflection (RR) and Mach reflection (MR) of a Type V shock-shock interaction on a double-wedge geometry with non-equilibrium high-temperature gas effects is investigated theoretically and numerically. A modified shock polar method that involves thermochemical non-equilibrium processes is applied to calculate the theoretical critical angles of transition based on the detachment criterion and the von Neumann criterion. Two-dimensional inviscid numerical simulations are performed correspondingly to reveal the interactive wave patterns, the transition processes, and the critical transition angles. The theoretical and numerical results of the critical transition angles are compared, which shows evident disagreement, indicating that the transition mechanism between RR and MR of a Type V shock interaction is beyond the admissible scope of the classical theory. Numerical results show that the collisions of triple points of the Type V interaction cause the transition instead. Compared with the frozen counterpart, it is found that the high-temperature gas effects lead to a larger critical transition angle and a larger hysteresis interval.
Complex-network description of thermal quantum states in the Ising spin chain
NASA Astrophysics Data System (ADS)
Sundar, Bhuvanesh; Valdez, Marc Andrew; Carr, Lincoln D.; Hazzard, Kaden R. A.
2018-05-01
We use network analysis to describe and characterize an archetypal quantum system—an Ising spin chain in a transverse magnetic field. We analyze weighted networks for this quantum system, with link weights given by various measures of spin-spin correlations such as the von Neumann and Rényi mutual information, concurrence, and negativity. We analytically calculate the spin-spin correlations in the system at an arbitrary temperature by mapping the Ising spin chain to fermions, as well as numerically calculate the correlations in the ground state using matrix product state methods, and then analyze the resulting networks using a variety of network measures. We demonstrate that the network measures show some traits of complex networks already in this spin chain, arguably the simplest quantum many-body system. The network measures give insight into the phase diagram not easily captured by more typical quantities, such as the order parameter or correlation length. For example, the network structure varies with transverse field and temperature, and the structure in the quantum critical fan is different from the ordered and disordered phases.
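For a concrete picture of such correlation networks, the following minimal sketch builds the von Neumann mutual-information weight matrix for a short transverse-field Ising chain by exact diagonalization. The paper itself uses analytic fermionization and matrix product states; the chain length, field value, and function names here are illustrative assumptions.

```python
import numpy as np
from functools import reduce

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def op(single, site, n):
    """Embed a single-site operator at `site` in an n-spin chain."""
    mats = [I2] * n
    mats[site] = single
    return reduce(np.kron, mats)

def ising_ground_state(n, h, J=1.0):
    """Ground state of H = -J sum_i sz_i sz_{i+1} - h sum_i sx_i (open chain)."""
    H = sum(-J * op(sz, i, n) @ op(sz, i + 1, n) for i in range(n - 1))
    H = H - h * sum(op(sx, i, n) for i in range(n))
    w, v = np.linalg.eigh(H)
    return v[:, 0]

def entropy(rho):
    """Von Neumann entropy -Tr(rho ln rho) from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def reduced(psi, keep, n):
    """Reduced density matrix of the sites in `keep` (sorted tuple)."""
    psi = psi.reshape([2] * n)
    rest = [i for i in range(n) if i not in keep]
    psi = np.transpose(psi, list(keep) + rest).reshape(2 ** len(keep), -1)
    return psi @ psi.conj().T

# Link weights: von Neumann mutual information I_ij = S_i + S_j - S_ij.
n, h = 4, 1.0
psi = ising_ground_state(n, h)
W = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        mi = (entropy(reduced(psi, (i,), n)) + entropy(reduced(psi, (j,), n))
              - entropy(reduced(psi, (i, j), n)))
        W[i, j] = W[j, i] = mi
```

The symmetric matrix W is the weighted adjacency matrix of the spin-spin correlation network; standard network measures (clustering, disparity, path lengths) can then be evaluated on it.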
Entanglement dynamics in short- and long-range harmonic oscillators
NASA Astrophysics Data System (ADS)
Nezhadhaghighi, M. Ghasemi; Rajabpour, M. A.
2014-11-01
We study the time evolution of the entanglement entropy in the short- and long-range-coupled harmonic oscillators that have well-defined continuum limit field theories. We first introduce a method to calculate the entanglement evolution in generic coupled harmonic oscillators after quantum quench. Then we study the entanglement evolution after quantum quench in harmonic systems in which the couplings decay effectively as 1/r^(d+α) with the distance r. After quenching the mass from a nonzero value to zero we calculate numerically the time evolution of von Neumann and Rényi entropies. We show that for 1 < α < 2 we have a linear growth of entanglement and then saturation, independent of the initial state. For 0 < α < 1, depending on the initial state, we can have logarithmic growth or just fluctuation of entanglement. We also calculate the mutual information dynamics of two separated individual harmonic oscillators. Our findings suggest that in our system there is no particular connection between having a linear growth of entanglement after quantum quench and having a maximum group velocity or generalized Lieb-Robinson bound.
Artificial viscosity in Godunov-type schemes to cure the carbuncle phenomenon
NASA Astrophysics Data System (ADS)
Rodionov, Alexander V.
2017-09-01
This work presents a new approach for curing the carbuncle instability. The idea underlying the approach is to introduce some dissipation in the form of right-hand sides of the Navier-Stokes equations into the basic method of solving Euler equations; in so doing, we replace the molecular viscosity coefficient by the artificial viscosity coefficient and calculate heat conductivity assuming that the Prandtl number is constant. For the artificial viscosity coefficient we have chosen a formula that is consistent with the von Neumann and Richtmyer artificial viscosity, but has its specific features (extension to multidimensional simulations, introduction of a threshold compression intensity that restricts additional dissipation to the shock layer only). The coefficients and the expression for the characteristic mesh size in this formula are chosen from a large number of Quirk-type problem computations. The new cure for the carbuncle flaw has been tested on first-order schemes (Godunov, Roe, HLLC and AUSM+ schemes) as applied to one- and two-dimensional simulations on smooth structured grids. Its efficiency has been demonstrated on several well-known test problems.
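For reference, the classical one-dimensional von Neumann-Richtmyer form on which the artificial viscosity coefficient is modeled can be sketched as follows. The function name, coefficient value, and toy data are assumptions for illustration; the paper's multidimensional extension and threshold compression intensity are omitted.

```python
import numpy as np

def vnr_artificial_viscosity(rho, u, c_q=2.0):
    """Classical von Neumann-Richtmyer quadratic artificial viscosity on a
    1-D staggered grid: q = c_q * rho * (du)^2 in cells under compression
    (du < 0) and zero elsewhere; u holds node velocities, rho cell densities."""
    du = np.diff(u)                         # velocity jump across each cell
    return np.where(du < 0.0, c_q * rho * du**2, 0.0)

# Toy data: a decelerating left region compresses the first two cells only,
# so the added dissipation is confined to the shock layer.
u = np.array([1.0, 0.5, 0.0, 0.0, 0.0])    # node velocities
rho = np.ones(4)                            # cell densities
q = vnr_artificial_viscosity(rho, u)
```

The quadratic dependence on the velocity jump concentrates the dissipation in compressed cells and vanishes in smooth flow, which is the behavior the artificial viscosity coefficient in the paper is designed to reproduce.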
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaufmann, Ralph M., E-mail: rkaufman@math.purdue.edu; Khlebnikov, Sergei, E-mail: skhleb@physics.purdue.edu; Wehefritz-Kaufmann, Birgit, E-mail: ebkaufma@math.purdue.edu
2012-11-15
Motivated by the Double Gyroid nanowire network, we develop methods to detect Dirac points and classify level crossings, i.e., singularities in the spectrum of a family of Hamiltonians. The approach we use is singularity theory. Using this language, we obtain a characterization of Dirac points and also show that the branching behavior of the level crossings is given by an unfolding of A_n type singularities. Which type of singularity occurs can be read off a characteristic region inside the miniversal unfolding of an A_k singularity. We then apply these methods in the setting of families of graph Hamiltonians, such as those for wire networks. In the particular case of the Double Gyroid we analytically classify its singularities and show that it has Dirac points. This indicates that nanowire systems of this type should have very special physical properties. Highlights: a new method for analytically finding Dirac points; a novel relation of level crossings to singularity theory; a more precise version of the von Neumann-Wigner theorem for arbitrary smooth families of Hamiltonians of fixed size; an analytical proof of the existence of Dirac points for the Gyroid wire network.
Jacobi spectral Galerkin method for elliptic Neumann problems
NASA Astrophysics Data System (ADS)
Doha, E.; Bhrawy, A.; Abd-Elhameed, W.
2009-01-01
This paper is concerned with fast spectral-Galerkin Jacobi algorithms for solving one- and two-dimensional elliptic equations with homogeneous and nonhomogeneous Neumann boundary conditions. The paper extends the algorithms proposed by Shen (SIAM J Sci Comput 15:1489-1505, 1994) and Auteri et al. (J Comput Phys 185:427-444, 2003), based on Legendre polynomials, to Jacobi polynomials with arbitrary α and β. The key to the efficiency of our algorithms is to construct appropriate basis functions with zero slope at the endpoints, which lead to systems with sparse matrices for the discrete variational formulations. The direct solution algorithm developed for the homogeneous Neumann problem in two-dimensions relies upon a tensor product process. Nonhomogeneous Neumann data are accounted for by means of a lifting. Numerical results indicating the high accuracy and effectiveness of these algorithms are presented.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1995-01-01
This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
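A minimal example of the conventional von Neumann analysis mentioned above: inserting a Fourier mode u_j^n = G^n e^(i j θ) into first-order upwind advection, u_j^(n+1) = u_j^n − ν(u_j^n − u_(j−1)^n), gives the amplification factor G(θ) = 1 − ν(1 − e^(−iθ)), and the scheme is stable when max|G| ≤ 1, i.e. 0 ≤ ν ≤ 1. The sketch below evaluates this numerically; the scheme choice and function name are illustrative, not taken from the Proteus code.

```python
import numpy as np

def upwind_amplification(nu, n=721):
    """Max over theta of |G(theta)| with G = 1 - nu*(1 - exp(-i*theta)),
    the von Neumann amplification factor of first-order upwind advection
    at CFL number nu."""
    theta = np.linspace(-np.pi, np.pi, n)
    G = 1.0 - nu * (1.0 - np.exp(-1j * theta))
    return np.abs(G).max()

# Stable for 0 <= nu <= 1 (max|G| = 1), unstable beyond (max|G| > 1):
g_stable = upwind_amplification(0.5)
g_unstable = upwind_amplification(1.2)
```

The same procedure, applied to the full Euler or Navier-Stokes discretization, yields the stability and damping information that guides the multigrid design discussed in the report.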
Modification of Classical SPM for Slightly Rough Surface Scattering with Low Grazing Angle Incidence
NASA Astrophysics Data System (ADS)
Guo, Li-Xin; Wei, Guo-Hui; Kim, Cheyoung; Wu, Zhen-Sen
2005-11-01
Based on the impedance/admittance rough boundaries, the reflection coefficients and the scattering cross section with low grazing angle incidence are obtained for both VV and HH polarizations. The error of the classical perturbation method at grazing angle is overcome for the vertical polarization at a rough Neumann boundary of infinite extent. The derivation of the formulae and the numerical results show that the backscattering cross section depends on the grazing angle to the fourth power for both Neumann and Dirichlet boundary conditions with low grazing angle incidence. Our results reduce to those of the classical small perturbation method by neglecting the Neumann and Dirichlet boundary conditions. The project was supported by the National Natural Science Foundation of China under Grant No. 60101001 and the National Defense Foundation of China.
Merolla, Paul A; Arthur, John V; Alvarez-Icaza, Rodrigo; Cassidy, Andrew S; Sawada, Jun; Akopyan, Filipp; Jackson, Bryan L; Imam, Nabil; Guo, Chen; Nakamura, Yutaka; Brezzo, Bernard; Vo, Ivan; Esser, Steven K; Appuswamy, Rathinakumar; Taba, Brian; Amir, Arnon; Flickner, Myron D; Risk, William P; Manohar, Rajit; Modha, Dharmendra S
2014-08-08
Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts. Copyright © 2014, American Association for the Advancement of Science.
On the structure of the master equation for a two-level system coupled to a thermal bath
NASA Astrophysics Data System (ADS)
de Vega, Inés
2015-04-01
We derive a master equation from the exact stochastic Liouville-von-Neumann (SLN) equation (Stockburger and Grabert 2002 Phys. Rev. Lett. 88 170407). The latter depends on two correlated noises and describes exactly the dynamics of an oscillator (which can be either harmonic or present an anharmonicity) coupled to an environment at thermal equilibrium. The newly derived master equation is obtained by performing analytically the average over different noise trajectories. It is found to have a complex hierarchical structure that might be helpful to explain the convergence problems occurring when performing numerically the stochastic average of trajectories given by the SLN equation (Koch et al 2008 Phys. Rev. Lett. 100 230402, Koch 2010 PhD thesis Fakultät Mathematik und Naturwissenschaften der Technischen Universitat Dresden).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naegle, John H.; Suppona, Roger A.; Aimone, James Bradley
In 2016, Lewis Rhodes Labs (LRL) shipped the first commercially viable Neuromorphic Processing Unit (NPU), branded as a Neuromorphic Data Microscope (NDM). This product leverages architectural mechanisms derived from the sensory cortex of the human brain to efficiently implement pattern matching. LRL and Sandia National Labs have optimized this product for streaming analytics, and demonstrated a 1,000x power per operation reduction in an FPGA format. When reduced to an ASIC, the efficiency will improve to 1,000,000x. Additionally, the neuromorphic nature of the device gives it powerful computational attributes that are counterintuitive to those schooled in traditional von Neumann architectures. The Neuromorphic Data Microscope is the first of a broad class of brain-inspired, time domain processors that will profoundly alter the functionality and economics of data processing.
Scaling of Rényi entanglement entropies of the free fermi-gas ground state: a rigorous proof.
Leschke, Hajo; Sobolev, Alexander V; Spitzer, Wolfgang
2014-04-25
In a remarkable paper [Phys. Rev. Lett. 96, 100503 (2006)], Gioev and Klich conjectured an explicit formula for the leading asymptotic growth of the spatially bipartite von Neumann entanglement entropy of noninteracting fermions in multidimensional Euclidean space at zero temperature. Based on recent progress by one of us (A. V. S.) in semiclassical functional calculus for pseudodifferential operators with discontinuous symbols, we provide here a complete proof of that formula and of its generalization to Rényi entropies of all orders α>0. The special case α=1/2 is also known under the name logarithmic negativity and often considered to be a particularly useful quantification of entanglement. These formulas exhibiting a "logarithmically enhanced area law" have been used already in many publications.
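For free fermions, both the von Neumann and Rényi entropies in question are functions of the eigenvalues λ_k of the correlation matrix restricted to the subregion: S_α = (1−α)^(−1) Σ_k ln[λ_k^α + (1−λ_k)^α], with the α→1 limit giving −Σ_k [λ_k ln λ_k + (1−λ_k) ln(1−λ_k)]. A hedged numerical sketch, with the function name and toy eigenvalues as assumptions:

```python
import numpy as np

def renyi_entropy(evals, alpha):
    """Renyi entanglement entropy S_alpha of free fermions, computed from the
    eigenvalues lambda_k of the correlation matrix restricted to the subregion;
    alpha = 1 is treated as the von Neumann limit."""
    lam = np.clip(np.asarray(evals, dtype=float), 1e-15, 1.0 - 1e-15)
    if alpha == 1.0:
        return float(-np.sum(lam * np.log(lam) + (1 - lam) * np.log(1 - lam)))
    return float(np.sum(np.log(lam**alpha + (1 - lam)**alpha)) / (1 - alpha))

# A single mode at lambda = 1/2 is maximally entangled: S_alpha = ln 2 for all
# orders, including alpha = 1/2, the order singled out in the abstract.
s_half = renyi_entropy([0.5], 0.5)
s_vn = renyi_entropy([0.5], 1.0)
s_two = renyi_entropy([0.5], 2.0)
```

The asymptotic formula proved in the paper describes how sums of this form scale with the region size; the code above only illustrates the per-eigenvalue entropy function itself.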
Discrete cosine and sine transforms generalized to honeycomb lattice
NASA Astrophysics Data System (ADS)
Hrivnák, Jiří; Motlochová, Lenka
2018-06-01
The discrete cosine and sine transforms are generalized to a triangular fragment of the honeycomb lattice. The honeycomb point sets are constructed by subtracting the root lattice from the weight lattice points of the crystallographic root system A2. The two-variable orbit functions of the Weyl group of A2, discretized simultaneously on the weight and root lattices, induce a novel parametric family of extended Weyl orbit functions. The periodicity and von Neumann and Dirichlet boundary properties of the extended Weyl orbit functions are detailed. Three types of discrete complex Fourier-Weyl transforms and real-valued Hartley-Weyl transforms are described. Unitary transform matrices and interpolating behavior of the discrete transforms are exemplified. Consequences of the developed discrete transforms for transversal eigenvibrations of the mechanical graphene model are discussed.
Dynamical manifestations of quantum chaos
NASA Astrophysics Data System (ADS)
Torres Herrera, Eduardo Jonathan; Santos, Lea
2017-04-01
A main feature of a chaotic quantum system is a rigid spectrum, where the levels do not cross. Dynamical quantities, such as the von Neumann entanglement entropy, Shannon information entropy, and out-of-time correlators can differentiate the ergodic from the nonergodic phase in disordered interacting systems, but not level repulsion from level crossing in the delocalized phase of disordered and clean models. This is in contrast with the long-time evolution of the survival probability of the initial state. The onset of correlated energy levels is manifested by a drop, referred to as correlation hole, below the asymptotic value of the survival probability. The correlation hole is an unambiguous indicator of the presence of level repulsion. EJTH is grateful to VIEP, BUAP for financial support through the VIEP projects program.
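The correlation hole can be reproduced in a toy setting: average the survival probability SP(t) = |Σ_k |c_k|² e^(−i E_k t)|² over small GOE random matrices and compare it with its infinite-time average Σ_k |c_k|⁴. The matrix dimension, time window, and function names below are illustrative assumptions, not the disordered models studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def survival_probability(times, dim=100, realizations=200):
    """Ensemble-averaged survival probability SP(t) = |<psi(0)|psi(t)>|^2
    for a fixed basis state evolved under GOE random matrices, together
    with the ensemble-averaged infinite-time value sum_k |c_k|^4."""
    sp_t = np.zeros_like(times)
    ipr = 0.0
    for _ in range(realizations):
        M = rng.standard_normal((dim, dim))
        E, V = np.linalg.eigh((M + M.T) / 2.0)   # GOE-like spectrum
        c2 = V[0] ** 2                           # overlaps |c_k|^2
        amp = (c2[None, :] * np.exp(-1j * np.outer(times, E))).sum(axis=1)
        sp_t += np.abs(amp) ** 2
        ipr += np.sum(c2 ** 2)
    return sp_t / realizations, ipr / realizations

# Intermediate times, after the fast initial decay of SP(t):
t = np.linspace(5.0, 20.0, 40)
sp_t, sp_infty = survival_probability(t)
# Correlation hole: with a rigid (GOE) spectrum, SP(t) dips below its
# asymptotic value before relaxing to it.
hole = sp_t.min() < sp_infty
```

In a model with uncorrelated (Poissonian) levels the dip below the asymptote is absent, which is why the hole serves as a dynamical indicator of level repulsion.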
NASA Astrophysics Data System (ADS)
Tipler, F. J.
1982-10-01
An assessment is presented of the probability of the existence of intelligent extraterrestrial life in view of biological evolutionary constraints, in order to furnish some perspective on the hopes and claims of search for extraterrestrial intelligence (SETI) enthusiasts. Attention is given to a hypothetical extraterrestrial civilization's exploration/colonization of interstellar space by means of von Neumann machine-like, endlessly self-replicating space probes which would eventually reach the planetary systems of all stars in the Galaxy. These probes would be able to replicate the biology of their creator species, upon reaching a hospitable planet. It is suggested that the fundamental technological feasibility of such schemes, and their geometrically progressive comprehension of the Galaxy, would make actual colonization of the earth by extraterrestrials so probable as to destroy the hopes of SETI backers for occasional contact.
On τ-Compactness of Products of τ-Measurable Operators
NASA Astrophysics Data System (ADS)
Bikchentaev, Airat M.
2017-12-01
Let M be a von Neumann algebra of operators on a Hilbert space H, and τ be a faithful normal semifinite trace on M. We obtain some new inequalities for rearrangements of products of τ-measurable operators. We also establish some sufficient τ-compactness conditions for products of selfadjoint τ-measurable operators. Next we obtain a τ-compactness criterion for the product of a nonnegative τ-measurable operator with an arbitrary τ-measurable operator. We construct an example that shows the importance of nonnegativity for one of the factors. Similar results are also obtained for elementary operators from M. We apply our results to symmetric spaces on (M, τ). The results are new even for the *-algebra B(H) of all linear bounded operators on H endowed with the canonical trace τ = tr.
Philosophical Approaches towards Sciences of Life in Early Cybernetics
NASA Astrophysics Data System (ADS)
Montagnini, Leone
2008-07-01
The article focuses on the different conceptual and philosophical approaches towards the sciences of life operating in the backstage of Early Cybernetics. After a short reconstruction of the main steps characterizing the origins of Cybernetics, from 1940 until 1948, the paper examines the complementary conceptual views of Norbert Wiener and John von Neumann, as a "fuzzy thinking" versus a "logical thinking", and the marked difference between the "methodological individualism" shared by both of them and the "methodological collectivism" of most of the numerous scientists of life and society attending the Macy Conferences on Cybernetics. The main thesis sustained here is that these approaches, quite invisible to the participants themselves, were different, maybe even opposite, and could provoke clashes as well as cooperate in a synergic way.
NASA Astrophysics Data System (ADS)
Goodall, Clive
1993-08-01
A decisive and lethal response to a naive radical skepticism concerning the prospects for the existence of Extraterrestrial Intelligence is derivable from core areas of Modern Analytic Philosophy. The naive skeptical view is fundamentally flawed in the way it oversimplifies certain complex issues, failing as it does to recognize a special class of conceptual problems for what they really are, and mistakenly treating them instead as empirical issues. Specifically, this skepticism is based upon an untenable, oversimplifying model of the 'mind-brain' relation. Moreover, independent logical considerations concerning the mind-brain relation provide evidential grounds for why we should in fact expect a priori that an Alien Intelligence will face constraints upon, and immense difficulties in, making its existence known by non-electromagnetic means.
A finite element algorithm for high-lying eigenvalues with Neumann and Dirichlet boundary conditions
NASA Astrophysics Data System (ADS)
Báez, G.; Méndez-Sánchez, R. A.; Leyvraz, F.; Seligman, T. H.
2014-01-01
We present a finite element algorithm that computes eigenvalues and eigenfunctions of the Laplace operator for two-dimensional problems with homogeneous Neumann or Dirichlet boundary conditions, or combinations of either for different parts of the boundary. We use an inverse power plus Gauss-Seidel algorithm to solve the generalized eigenvalue problem. For Neumann boundary conditions the method is much more efficient than the equivalent finite difference algorithm. We checked the algorithm by comparing the cumulative level density of the spectrum obtained numerically with the theoretical prediction given by the Weyl formula. We found a systematic deviation due to the discretization, not to the algorithm itself.
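The inverse-power part of the algorithm can be sketched as follows for the generalized problem A x = λ B x. Here a direct linear solve stands in for the paper's Gauss-Seidel sweeps, and a 1-D Dirichlet Laplacian plays the role of a hypothetical stiffness matrix; all names and sizes are illustrative assumptions.

```python
import numpy as np

def inverse_power(A, B, iters=200, seed=0):
    """Inverse power iteration for the smallest eigenpair of the generalized
    problem A x = lambda B x.  Each step solves A y = B x; the paper uses
    Gauss-Seidel sweeps for that solve, a direct solve stands in here."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        y = np.linalg.solve(A, B @ x)
        x = y / np.linalg.norm(y)
    lam = (x @ A @ x) / (x @ B @ x)     # Rayleigh quotient
    return lam, x

# Toy stiffness matrix: 1-D Laplacian with Dirichlet ends, mass matrix B = I.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam, _ = inverse_power(A, np.eye(n))
exact = 2 * (1 - np.cos(np.pi / (n + 1)))   # smallest eigenvalue of A
```

Higher eigenpairs are then reached by deflating or shifting, and the finite element discretization supplies the actual sparse A and B in the paper's setting.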
Topological and Orthomodular Modeling of Context in Behavioral Science
NASA Astrophysics Data System (ADS)
Narens, Louis
2017-02-01
Two non-boolean methods are discussed for modeling context in behavioral data and theory. The first is based on intuitionistic logic, which is similar to classical logic except that not every event has a complement. Its probability theory is also similar to classical probability theory except that the definition of probability function needs to be generalized to unions of events instead of applying only to unions of disjoint events. The generalization is needed, because intuitionistic event spaces may not contain enough disjoint events for the classical definition to be effective. The second method develops a version of quantum logic for its underlying probability theory. It differs from the Hilbert space logic used in quantum mechanics as a foundation for quantum probability theory in a variety of ways. John von Neumann and others have commented about the lack of a relative frequency approach and a rational foundation for this probability theory. This article argues that its version of quantum probability theory does not have such issues. The method based on intuitionistic logic is useful for modeling cognitive interpretations that vary with context, for example, the mood of the decision maker, the context produced by the influence of other items in a choice experiment, etc. The method based on this article's quantum logic is useful for modeling probabilities across contexts, for example, how probabilities of events from different experiments are related.
NASA Astrophysics Data System (ADS)
Nakamura, Gen; Wang, Haibing
2017-05-01
Consider the problem of reconstructing unknown Robin inclusions inside a heat conductor from boundary measurements. This problem arises from active thermography and is formulated as an inverse boundary value problem for the heat equation. In our previous works, we proposed a sampling-type method for reconstructing the boundary of the Robin inclusion and gave its rigorous mathematical justification. This method is non-iterative and based on the characterization of the solution to the so-called Neumann-to-Dirichlet map gap equation. In this paper, we give a further investigation of the reconstruction method from both the theoretical and numerical points of view. First, we clarify the solvability of the Neumann-to-Dirichlet map gap equation and establish a relation of its solution to the Green function associated with an initial-boundary value problem for the heat equation inside the Robin inclusion. This naturally provides a way of computing this Green function from the Neumann-to-Dirichlet map and explains what the input for the linear sampling method is. Assuming that the Neumann-to-Dirichlet map gap equation has a unique solution, we also show the convergence of our method for noisy measurements. Second, we give the numerical implementation of the reconstruction method for two-dimensional spatial domains. The measurements for our inverse problem are simulated by solving the forward problem via the boundary integral equation method. Numerical results are presented to illustrate the efficiency and stability of the proposed method. By using a finite sequence of transient inputs over a time interval, we propose a new sampling method over the time interval based on a single measurement, which is most likely to be practical.
Sedghi, Aliasghar; Rezaei, Behrooz
2016-11-20
Using the Dirichlet-to-Neumann map method, we have calculated the photonic band structure of two-dimensional metallodielectric photonic crystals having the square and triangular lattices of circular metal rods in a dielectric background. We have selected the transverse electric mode of electromagnetic waves, and the resulting band structures showed the existence of photonic bandgap in these structures. We theoretically study the effect of background dielectric on the photonic bandgap.
NASA Astrophysics Data System (ADS)
Brown-Dymkoski, Eric; Kasimov, Nurlybek; Vasilyev, Oleg V.
2014-04-01
In order to introduce solid obstacles into flows, several different methods are used, including volume penalization methods which prescribe appropriate boundary conditions by applying local forcing to the constitutive equations. One well known method is Brinkman penalization, which models solid obstacles as porous media. While it has been adapted for compressible, incompressible, viscous and inviscid flows, it is limited in the types of boundary conditions that it imposes, as are most volume penalization methods. Typically, approaches are limited to Dirichlet boundary conditions. In this paper, Brinkman penalization is extended for generalized Neumann and Robin boundary conditions by introducing hyperbolic penalization terms with characteristics pointing inward on solid obstacles. This Characteristic-Based Volume Penalization (CBVP) method is a comprehensive approach to conditions on immersed boundaries, providing for homogeneous and inhomogeneous Dirichlet, Neumann, and Robin boundary conditions on hyperbolic and parabolic equations. This CBVP method can be used to impose boundary conditions for both integrated and non-integrated variables in a systematic manner that parallels the prescription of exact boundary conditions. Furthermore, the method does not depend upon a physical model, as with the porous media approach for Brinkman penalization, and is therefore flexible for various physical regimes and general evolutionary equations. Here, the method is applied to scalar diffusion and to direct numerical simulation of compressible, viscous flows. With the Navier-Stokes equations, both homogeneous and inhomogeneous Neumann boundary conditions are demonstrated through external flow around an adiabatic and heated cylinder. Theoretical and numerical examination shows that the error from penalized Neumann and Robin boundary conditions can be rigorously controlled through an a priori penalization parameter η. The error on a transient boundary is found to converge as O(η), which is more favorable than the error convergence of the already established Dirichlet boundary condition.
Electronic Properties of Cyclacenes from TAO-DFT
Wu, Chun-Shian; Lee, Pei-Yin; Chai, Jeng-Da
2016-01-01
Owing to the presence of strong static correlation effects, accurate prediction of the electronic properties (e.g., the singlet-triplet energy gaps, vertical ionization potentials, vertical electron affinities, fundamental gaps, symmetrized von Neumann entropy, active orbital occupation numbers, and real-space representation of active orbitals) of cyclacenes with n fused benzene rings (n = 4–100) has posed a great challenge to traditional electronic structure methods. To meet the challenge, we study these properties using our newly developed thermally-assisted-occupation density functional theory (TAO-DFT), a very efficient method for the study of large systems with strong static correlation effects. In addition, to examine the role of cyclic topology, the electronic properties of cyclacenes are also compared with those of acenes. Similar to acenes, the ground states of cyclacenes are singlets for all the cases studied. In contrast to acenes, the electronic properties of cyclacenes, however, exhibit oscillatory behavior (for n ≤ 30) in the approach to the corresponding properties of acenes with increasing number of benzene rings. On the basis of the calculated orbitals and their occupation numbers, the larger cyclacenes are shown to exhibit increasing polyradical character in their ground states, with the active orbitals being mainly localized at the peripheral carbon atoms. PMID:27853249
Reliability enhancement of Navier-Stokes codes through convergence acceleration
NASA Technical Reports Server (NTRS)
Merkle, Charles L.; Dulikravich, George S.
1995-01-01
Methods for enhancing the reliability of Navier-Stokes computer codes through improving convergence characteristics are presented. Improving these characteristics decreases the likelihood of code unreliability and user interventions in a design environment. The problem referred to as 'stiffness' in the governing equations for propulsion-related flowfields is investigated, particularly in regard to common sources of equation stiffness that lead to convergence degradation of CFD algorithms. Von Neumann stability theory is employed as a tool to study the convergence difficulties involved. Based on the stability results, improved algorithms are devised to ensure efficient convergence in different situations. A number of test cases are considered to confirm a correlation between stability theory and numerical convergence. Examples of turbulent and reacting flow are presented, and a generalized form of the preconditioning matrix is derived to handle these problems, i.e., problems involving additional differential equations describing the transport of turbulent kinetic energy, dissipation rate, and chemical species. Algorithms for unsteady computations are considered. The extension of the preconditioning techniques and algorithms derived for Navier-Stokes computations to three-dimensional flow problems is discussed. New methods to accelerate the convergence of iterative schemes for the numerical integration of systems of partial differential equations are developed, with special emphasis on the acceleration of convergence on highly clustered grids.
Burton, Donald E.; Morgan, Nathaniel Ray; Charest, Marc Robert Joseph; ...
2017-11-22
From the very origins of numerical hydrodynamics in the Lagrangian work of von Neumann and Richtmyer [83], the issue of total energy conservation as well as entropy production has been problematic. Because of well known problems with mesh deformation, Lagrangian schemes have evolved into Arbitrary Lagrangian-Eulerian (ALE) methods [39] that combine the best properties of Lagrangian and Eulerian methods. Energy issues have persisted for this class of methods. We believe that fundamental issues of energy conservation and entropy production in ALE require further examination. The context of the paper is an ALE scheme that is extended in the sense that it permits cyclic or periodic remap of data between grids of the same or differing connectivity. The principal design goals for a remap method then consist of total energy conservation, bounded internal energy, and compatibility of kinetic energy and momentum. We also have secondary objectives of limiting velocity and stress in a non-directional manner, keeping primitive variables monotone, and providing a higher than second order reconstruction of remapped variables. In particular, the new contributions fall into three categories associated with: energy conservation and entropy production, reconstruction and bounds preservation of scalar and tensor fields, and conservative remap of nonlinear fields. Our paper presents a derivation of the methods, details of implementation, and numerical results for a number of test problems. The method requires volume integration of polynomial functions in polytopal cells with planar facets, and the requisite expressions are derived for arbitrary order.
NASA Astrophysics Data System (ADS)
Burton, D. E.; Morgan, N. R.; Charest, M. R. J.; Kenamond, M. A.; Fung, J.
2018-02-01
From the very origins of numerical hydrodynamics in the Lagrangian work of von Neumann and Richtmyer [83], the issue of total energy conservation as well as entropy production has been problematic. Because of well known problems with mesh deformation, Lagrangian schemes have evolved into Arbitrary Lagrangian-Eulerian (ALE) methods [39] that combine the best properties of Lagrangian and Eulerian methods. Energy issues have persisted for this class of methods. We believe that fundamental issues of energy conservation and entropy production in ALE require further examination. The context of the paper is an ALE scheme that is extended in the sense that it permits cyclic or periodic remap of data between grids of the same or differing connectivity. The principal design goals for a remap method then consist of total energy conservation, bounded internal energy, and compatibility of kinetic energy and momentum. We also have secondary objectives of limiting velocity and stress in a non-directional manner, keeping primitive variables monotone, and providing a higher than second order reconstruction of remapped variables. In particular, the new contributions fall into three categories associated with: energy conservation and entropy production, reconstruction and bounds preservation of scalar and tensor fields, and conservative remap of nonlinear fields. The paper presents a derivation of the methods, details of implementation, and numerical results for a number of test problems. The method requires volume integration of polynomial functions in polytopal cells with planar facets, and the requisite expressions are derived for arbitrary order.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burton, Donald E.; Morgan, Nathaniel Ray; Charest, Marc Robert Joseph
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bisio, Alessandro; D'Ariano, Giacomo Mauro; Perinotti, Paolo
We analyze quantum algorithms for cloning of a quantum measurement. Our aim is to mimic two uses of a device performing an unknown von Neumann measurement with a single use of the device. When the unknown device has to be used before the bipartite state to be measured is available, we talk about 1 → 2 learning of the measurement; otherwise the task is called 1 → 2 cloning of a measurement. We perform the optimization for both learning and cloning for arbitrary dimension d of the Hilbert space. For 1 → 2 cloning we also propose a simple quantum network that achieves the optimal fidelity. The optimal fidelity for 1 → 2 learning only slightly outperforms the estimate-and-prepare strategy, in which one first estimates the unknown measurement and, depending on the result, suitably prepares the duplicate.
Assisted Distillation of Quantum Coherence.
Chitambar, E; Streltsov, A; Rana, S; Bera, M N; Adesso, G; Lewenstein, M
2016-02-19
We introduce and study the task of assisted coherence distillation. This task arises naturally in bipartite systems where both parties work together to generate the maximal possible coherence on one of the subsystems. Only incoherent operations are allowed on the target system, while general local quantum operations are permitted on the other; this is an operational paradigm that we call local quantum-incoherent operations and classical communication. We show that the asymptotic rate of assisted coherence distillation for pure states is equal to the coherence of assistance, an analog of the entanglement of assistance, whose properties we characterize. Our findings imply a novel interpretation of the von Neumann entropy: it quantifies the maximum amount of extra quantum coherence a system can gain when receiving assistance from a collaborative party. Our results are generalized to coherence localization in a multipartite setting and possible applications are discussed.
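The coherence of assistance discussed above builds on standard entropic coherence measures. As a minimal numerical sketch (not the authors' code), the relative entropy of coherence C(ρ) = S(Δ(ρ)) − S(ρ), where Δ dephases ρ in the incoherent basis, can be evaluated from eigenvalues:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr rho log2 rho, computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def relative_entropy_of_coherence(rho):
    """C(rho) = S(diag(rho)) - S(rho), a standard coherence measure."""
    dephased = np.diag(np.diag(rho))      # kill off-diagonal elements
    return von_neumann_entropy(dephased) - von_neumann_entropy(rho)

# The maximally coherent qubit state |+><+| carries one full bit of coherence.
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
print(relative_entropy_of_coherence(plus))  # → 1.0
```

This is only the single-system measure; the assisted-distillation protocol of the paper additionally involves local measurements and classical communication by the helper party.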
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kudaka, Shoju; Matsumoto, Shuichi
2007-07-15
In order to obtain an extended Salecker-Wigner formula from which to derive the optimal accuracy in reading a clock with a massive particle as the signal, von Neumann's classical measurement scheme is employed, by which both the position and the momentum of the signal particle can be measured approximately and simultaneously. By an appropriate choice of wave function for the initial state of the composite system (a clock and a signal particle), the formula is derived accurately. Valid ranges of the running time of a clock with a given optimal accuracy are also given. The extended formula implies that, contrary to the Salecker-Wigner formula, there exists the possibility of a higher accuracy of time measurement even if the mass of the clock is very small.
Free boundary problems in shock reflection/diffraction and related transonic flow problems
Chen, Gui-Qiang; Feldman, Mikhail
2015-01-01
Shock waves are steep wavefronts that are fundamental in nature, especially in high-speed fluid flows. When a shock hits an obstacle, or a flying body meets a shock, shock reflection/diffraction phenomena occur. In this paper, we show how several long-standing shock reflection/diffraction problems can be formulated as free boundary problems, discuss some recent progress in developing mathematical ideas, approaches and techniques for solving these problems, and present some further open problems in this direction. In particular, these shock problems include von Neumann's problem for shock reflection–diffraction by two-dimensional wedges with concave corner, Lighthill's problem for shock diffraction by two-dimensional wedges with convex corner, and Prandtl-Meyer's problem for supersonic flow impinging onto solid wedges, which are also fundamental in the mathematical theory of multidimensional conservation laws. PMID:26261363
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menikoff, Ralph
The Zel’dovich-von Neumann-Doering (ZND) profile of a detonation wave is derived. Two basic assumptions are required: (i) an equation of state (EOS) for a partly burned explosive, P(V, e, λ); and (ii) a burn rate for the reaction progress variable, dλ/dt = R(V, e, λ). For a steady planar detonation wave the reactive flow PDEs can be reduced to ODEs. The detonation wave profile can be determined from an ODE plus algebraic equations for points on the partly burned detonation loci with a specified wave speed. Furthermore, for the CJ detonation speed the end of the reaction zone is sonic. A solution to the reactive flow equations can be constructed with a rarefaction wave following the detonation wave profile. This corresponds to an underdriven detonation wave, and the rarefaction is known as a Taylor wave.
NASA Astrophysics Data System (ADS)
Vodenicarevic, D.; Locatelli, N.; Mizrahi, A.; Friedman, J. S.; Vincent, A. F.; Romera, M.; Fukushima, A.; Yakushiji, K.; Kubota, H.; Yuasa, S.; Tiwari, S.; Grollier, J.; Querlioz, D.
2017-11-01
Low-energy random number generation is critical for many emerging computing schemes proposed to complement or replace von Neumann architectures. However, current random number generators are always associated with an energy cost that is prohibitive for these computing schemes. We introduce random number bit generation based on specific nanodevices: superparamagnetic tunnel junctions. We experimentally demonstrate high-quality random bit generation that represents an orders-of-magnitude improvement in energy efficiency over current solutions. We show that the random generation speed improves with nanodevice scaling, and we investigate the impact of temperature, magnetic field, and cross talk. Finally, we show how alternative computing schemes can be implemented using superparamagnetic tunnel junctions as random number generators. These results open the way for fabricating efficient hardware computing devices leveraging stochasticity, and they highlight an alternative use for emerging nanodevices.
Quantum mechanics on phase space and the Coulomb potential
NASA Astrophysics Data System (ADS)
Campos, P.; Martins, M. G. R.; Vianna, J. D. M.
2017-04-01
Symplectic quantum mechanics (SQM) makes it possible to derive the Wigner function without the use of the Liouville-von Neumann equation. In this formulation of quantum theory, the Galilei Lie algebra is constructed using the Weyl (or star) product with Q̂ = q⋆ = q + (iħ/2)∂_p and P̂ = p⋆ = p − (iħ/2)∂_q, and the Schrödinger equation is rewritten in phase space; in consequence, physical applications involving the Coulomb potential present some specific difficulties. Within this context, in order to treat the Schrödinger equation in phase space, a procedure based on the Levi-Civita (or Bohlin) transformation is presented and applied to the two-dimensional (2D) hydrogen atom. Amplitudes of probability in phase space and the corresponding Wigner quasi-distribution functions are derived and discussed.
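The star-product operators quoted in the abstract, Q̂ = q + (iħ/2)∂_p and P̂ = p − (iħ/2)∂_q, satisfy the canonical commutation relation [Q̂, P̂] = iħ on phase-space functions. A symbolic check of this (an illustrative sketch, not code from the paper):

```python
import sympy as sp

q, p, hbar = sp.symbols('q p hbar')
f = sp.Function('f')(q, p)

# Star-product (Bopp-shift) operators from the abstract:
#   Qhat = q + (i hbar / 2) d/dp,   Phat = p - (i hbar / 2) d/dq
Qhat = lambda g: q * g + sp.I * hbar / 2 * sp.diff(g, p)
Phat = lambda g: p * g - sp.I * hbar / 2 * sp.diff(g, q)

# Canonical commutator acting on a test function: [Qhat, Phat] f = i hbar f
comm = sp.simplify(Qhat(Phat(f)) - Phat(Qhat(f)))
print(comm)
```

The second-derivative terms cancel and the output reduces to iħ·f(q, p), confirming that the Bopp shifts reproduce the Heisenberg algebra in phase space.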
Quantum-like Probabilistic Models Outside Physics
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei
We present a quantum-like (QL) model in which contexts (complexes of, e.g., mental, social, biological, economic, or even political conditions) are represented by complex probability amplitudes. This approach makes it possible to apply the mathematical quantum formalism to probabilities induced in any domain of science. In our model, quantum randomness appears not as irreducible randomness (as is commonly accepted in conventional quantum mechanics, e.g., by von Neumann and Dirac), but as a consequence of obtaining incomplete information about a system. We pay particular attention to the QL description of the processing of incomplete information. Our QL model can be useful in the cognitive, social, and political sciences, as well as in economics and artificial intelligence. In this paper we consider in more detail one special application: QL modeling of the brain's functioning. The brain is modeled as a QL computer.
On quantum Rényi entropies: A new generalization and some properties
NASA Astrophysics Data System (ADS)
Müller-Lennert, Martin; Dupuis, Frédéric; Szehr, Oleg; Fehr, Serge; Tomamichel, Marco
2013-12-01
The Rényi entropies constitute a family of information measures that generalizes the well-known Shannon entropy, inheriting many of its properties. They appear in the form of unconditional and conditional entropies, relative entropies, or mutual information, and have found many applications in information theory and beyond. Various generalizations of Rényi entropies to the quantum setting have been proposed, most prominently Petz's quasi-entropies and Renner's conditional min-, max-, and collision entropy. However, these quantum extensions are incompatible and thus unsatisfactory. We propose a new quantum generalization of the family of Rényi entropies that contains the von Neumann entropy, min-entropy, collision entropy, and the max-entropy as special cases, thus encompassing most quantum entropies in use today. We show several natural properties for this definition, including data-processing inequalities, a duality relation, and an entropic uncertainty relation.
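As a numerical sanity check (an illustrative sketch, not taken from the paper), the quantum Rényi family S_α(ρ) = (1 − α)⁻¹ log₂ Tr ρ^α recovers the von Neumann entropy in the limit α → 1:

```python
import numpy as np

def renyi_entropy(rho, alpha):
    """Quantum Renyi entropy S_alpha = log2(Tr rho^alpha) / (1 - alpha)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(np.log2(np.sum(evals ** alpha)) / (1.0 - alpha))

def von_neumann(rho):
    """S(rho) = -Tr rho log2 rho, the alpha -> 1 limit of the family."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

rho = np.diag([0.5, 0.3, 0.2])            # a diagonal test state
# Approaching alpha = 1 reproduces the von Neumann entropy (~1.4855 bits).
print(renyi_entropy(rho, 1.000001))
```

The α = 1/2, 2, and ∞ members correspond (up to conventions) to the max-, collision, and min-entropies mentioned above; the "sandwiched" generalization proposed in the paper reduces to exactly this spectrum formula when a single state is involved.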
Robustness of Many-Body Localization in the Presence of Dissipation
NASA Astrophysics Data System (ADS)
Levi, Emanuele; Heyl, Markus; Lesanovsky, Igor; Garrahan, Juan P.
2016-06-01
Many-body localization (MBL) has emerged as a novel paradigm for robust ergodicity breaking in closed quantum many-body systems. However, it is not yet clear to which extent MBL survives in the presence of dissipative processes induced by the coupling to an environment. Here we study heating and ergodicity for a paradigmatic MBL system—an interacting fermionic chain subject to quenched disorder—in the presence of dephasing. We find that, even though the system is eventually driven into an infinite-temperature state, heating as monitored by the von Neumann entropy can progress logarithmically slowly, implying exponentially large time scales for relaxation. This slow loss of memory of initial conditions makes signatures of nonergodicity visible over a long, but transient, time regime. We point out a potential controlled realization of the considered setup with cold atomic gases held in optical lattices.
Quantum mechanics on phase space: The hydrogen atom and its Wigner functions
NASA Astrophysics Data System (ADS)
Campos, P.; Martins, M. G. R.; Fernandes, M. C. B.; Vianna, J. D. M.
2018-03-01
Symplectic quantum mechanics (SQM) considers a non-commutative algebra of functions on a phase space Γ and an associated Hilbert space HΓ, to construct a unitary representation for the Galilei group. From this unitary representation the Schrödinger equation is rewritten in phase space variables and the Wigner function can be derived without the use of the Liouville-von Neumann equation. In this article the Coulomb potential in three dimensions (3D) is resolved completely by using the phase space Schrödinger equation. The Kustaanheimo-Stiefel(KS) transformation is applied and the Coulomb and harmonic oscillator potentials are connected. In this context we determine the energy levels, the amplitude of probability in phase space and correspondent Wigner quasi-distribution functions of the 3D-hydrogen atom described by Schrödinger equation in phase space.
Gambarota, Giulio
2017-07-15
Magnetic resonance spectroscopy (MRS) is a well established modality for investigating tissue metabolism in vivo. In recent years, many efforts by the scientific community have been directed towards the improvement of metabolite detection and quantitation. Quantum mechanics simulations allow for investigations of the MR signal behaviour of metabolites; thus, they provide an essential tool in the optimization of metabolite detection. In this review, we will examine quantum mechanics simulations based on the density matrix formalism. The density matrix was introduced by von Neumann in 1927 to take into account statistical effects within the theory of quantum mechanics. We will discuss the main steps of the density matrix simulation of an arbitrary spin system and show some examples for the strongly coupled two spin system.
Trainable hardware for dynamical computing using error backpropagation through physical media.
Hermans, Michiel; Burm, Michaël; Van Vaerenbergh, Thomas; Dambre, Joni; Bienstman, Peter
2015-03-24
Neural networks are currently implemented on digital von Neumann machines, which do not fully leverage their intrinsic parallelism. We demonstrate how to use a novel class of reconfigurable dynamical systems for analogue information processing, mitigating this problem. Our generic hardware platform for dynamic, analogue computing consists of a reciprocal linear dynamical system with nonlinear feedback. Thanks to reciprocity, a ubiquitous property of many physical phenomena like the propagation of light and sound, the error backpropagation (a crucial step for tuning such systems towards a specific task) can happen in hardware. This can potentially speed up the optimization process significantly, offering important benefits for the scalability of neuro-inspired hardware. In this paper, we show, using one experimentally validated and one conceptual example, that such systems may provide a straightforward mechanism for constructing highly scalable, fully dynamical analogue computers.
Quantum discord of two-qubit X states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Qing; Yu Sixia; Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, 230026 Anhui
Quantum discord provides a measure for quantifying quantum correlations beyond entanglement and is very hard to compute even for two-qubit states because of the minimization over all possible measurements. Recently a simple algorithm to evaluate the quantum discord for two-qubit X states was proposed by Ali, Rau, and Alber [Phys. Rev. A 81, 042105 (2010)], with minimization taken over only a few cases. Here we first identify a class of X states, whose quantum discord can be evaluated analytically without any minimization, for which their algorithm is valid, and also identify a family of X states for which their algorithm fails. We then demonstrate that this special family of X states furthermore provides an explicit example of the inequivalence between the minimization over positive operator-valued measures and that over von Neumann measurements.
Trainable hardware for dynamical computing using error backpropagation through physical media
NASA Astrophysics Data System (ADS)
Hermans, Michiel; Burm, Michaël; van Vaerenbergh, Thomas; Dambre, Joni; Bienstman, Peter
2015-03-01
NASA Astrophysics Data System (ADS)
Fu, Libi; Song, Weiguo; Lo, Siuming
2017-01-01
Emergencies in mass events involve a variety of factors and processes. An important factor is the transmission of information about danger, which influences nonlinear crowd dynamics during crowd dispersion. Because of the great uncertainty in this process, a method is needed to investigate this influence. In this paper, a novel fuzzy-theory-based method is presented to study crowd dynamics under the influence of information transmission. Fuzzy functions and rules are designed for the ambiguous description of human states. Reasonable inference is employed to decide the outputs of decision making, such as pedestrian movement speeds and directions. Through simulation of four-way pedestrian situations, good crowd dispersion is achieved. Simulation results under different conditions demonstrate that information transmission cannot always induce successful crowd dispersion; success depends on whether decision strategies in response to information about danger are unified and effective, especially in dense crowds. Results also suggest that crowd dispersion is helped by an increase in drift strength at low density, and by an increase in the percentage of pedestrians at high density who choose one of the farthest unoccupied von Neumann neighbors from the dangerous source as the drift direction. Compared with previous work, our comprehensive study deepens the understanding of nonlinear crowd dynamics under the effect of information about danger.
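The drift rule described above can be sketched in a few lines. This is an illustrative reconstruction (not the authors' code): on a grid, a pedestrian's von Neumann neighborhood is the four axis-adjacent cells, and the drift direction is the unoccupied neighbor farthest from the danger source.

```python
# Illustrative sketch: choosing the farthest unoccupied von Neumann
# neighbor from a danger source on a 2D grid (hypothetical helper names).
def von_neumann_neighbors(cell):
    """The 4-neighborhood: up, down, left, right."""
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def drift_direction(cell, danger, occupied):
    """Pick the free 4-neighbor maximizing squared distance to `danger`."""
    free = [n for n in von_neumann_neighbors(cell) if n not in occupied]
    if not free:
        return cell  # fully blocked: stay in place
    return max(free, key=lambda n: (n[0] - danger[0]) ** 2 + (n[1] - danger[1]) ** 2)

# Danger lies to the east and the western cell is occupied, so the
# pedestrian drifts sideways, away from the source.
print(drift_direction((0, 0), danger=(2, 0), occupied={(-1, 0)}))  # → (0, 1)
```

In the paper this geometric rule is combined with fuzzy inference over pedestrian states; the snippet only isolates the neighborhood-selection step.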
One-dimensional anyons under three-body interactions.
NASA Astrophysics Data System (ADS)
Silva-Valencia, Jereson; Arcila-Forero, Julian; Franco, Roberto
Anyons are a third class of particles with nontrivial exchange statistics: particles carrying fractional statistics that interpolate between bosons and fermions. In recent years, several proposals have been made to emulate an anyon gas by confining bosonic atoms in optical lattices [Nat. Commun. 2, 361 (2011)]. In this work, motivated by recent experimental and theoretical studies of multi-body interactions in cold-atom setups, we study the ground state of anyons interacting through local three-body terms in one dimension. We use the density-matrix renormalization group method to find the phase diagram and the von Neumann block entropy to determine the position of the critical point. The main quantum phases found are the superfluid and the Mott insulator. For the statistical angle θ = π/4, the phase diagram shows that the Mott lobes are surrounded by superfluid regions, that the Mott lobes grow with the density, and that the first Mott lobe has two anyons per site. We find that a Mott lobe with one anyon per site is possible for larger statistical angles, which is impossible with bosons. DIBE-Universidad Nacional de Colombia and Departamento Administrativo de Ciencia, Tecnología e Innovación (COLCIENCIAS) (Grant No. FP44842-057-2015).
Jones index, secret sharing and total quantum dimension
NASA Astrophysics Data System (ADS)
Fiedler, Leander; Naaijkens, Pieter; Osborne, Tobias J.
2017-02-01
We study the total quantum dimension in the thermodynamic limit of topologically ordered systems. In particular, using the anyons (or superselection sectors) of such models, we define a secret sharing scheme, storing information invisible to a malicious party, and argue that the total quantum dimension quantifies how well we can perform this task. We then argue that this can be made mathematically rigorous using the index theory of subfactors, originally due to Jones and later extended by Kosaki and Longo. This theory provides us with a ‘relative entropy’ of two von Neumann algebras and a quantum channel, and we argue how these can be used to quantify how much classical information two parties can hide from an adversary. We also review the total quantum dimension in finite systems, in particular how it relates to topological entanglement entropy. It is known that the latter also has an interpretation in terms of secret sharing schemes, although this is shown by completely different methods from ours. Our work provides a different and independent take on this, which at the same time is completely mathematically rigorous. This complementary point of view might be beneficial, for example, when studying the stability of the total quantum dimension when the system is perturbed.
Propagation of acoustic shock waves between parallel rigid boundaries and into shadow zones
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desjouy, C., E-mail: cyril.desjouy@gmail.com; Ollivier, S.; Dragna, D.
2015-10-28
The study of acoustic shock propagation in complex environments is of great interest for urban acoustics, but also for source localization, an underlying problem in military applications. To give a better understanding of the phenomena taking place during the propagation of acoustic shocks, laboratory-scale experiments and numerical simulations were performed to study the propagation of weak shock waves between parallel rigid boundaries and into shadow zones created by corners. In particular, this work focuses on the local interactions taking place between incident, reflected, and diffracted waves according to the geometry, in both regular and irregular (also called von Neumann) regimes of reflection. In the latter case, an irregular reflection can lead to the formation of a Mach stem that can modify the spatial distribution of the acoustic pressure. Short-duration acoustic shock waves were produced by a 20-kilovolt electric spark source, and a schlieren optical method was used to visualize the incident shock front and the reflection/diffraction patterns. Experimental results are compared to numerical simulations based on the high-order finite-difference solution of the two-dimensional Navier-Stokes equations.
NASA Astrophysics Data System (ADS)
Luo, Shunlong; Li, Nan; Cao, Xuelian
2009-05-01
The no-broadcasting theorem, first established by Barnum et al. [Phys. Rev. Lett. 76, 2818 (1996)], states that a set of quantum states can be broadcast if and only if it constitutes a commuting family. Quite recently, Piani et al. [Phys. Rev. Lett. 100, 090502 (2008)] showed, by an ingenious and sophisticated method, that the correlations in a single bipartite state can be locally broadcast if and only if the state is effectively a classical one (i.e., the correlations therein are classical). In this Brief Report, under the condition of nondegenerate spectrum, we provide an alternative and significantly simpler proof of the latter result based on the original no-broadcasting theorem and the monotonicity of the quantum relative entropy. This derivation motivates us to conjecture the equivalence between these two elegant yet formally different no-broadcasting theorems and indicates a subtle and fundamental issue concerning spectral degeneracy, which also lies at the heart of the conflict between the von Neumann projection postulate and the Lüders ansatz for quantum measurements. This relation not only offers operational interpretations for commutativity and classicality but also illustrates the basic significance of noncommutativity in characterizing quantumness from the informational perspective.
Stimulated resonant x-ray Raman scattering with incoherent radiation
NASA Astrophysics Data System (ADS)
Weninger, Clemens; Rohringer, Nina
2013-11-01
We present a theoretical study of stimulated electronic Raman scattering in neon by resonant excitation with an x-ray free-electron laser (XFEL). This study is in support of the recent experimental demonstration [C. Weninger, Phys. Rev. Lett. (to be published)] of stimulated x-ray Raman scattering. When the broadband XFEL pulses were focused into a cell of neon gas at atmospheric pressure, a strong inelastic x-ray scattering signal was observed in the forward direction as the x-ray energy was varied across the region of core-excited Rydberg states and the K edge. The broadband and intrinsically incoherent x-ray pulses from the XFEL lead to a rich, structured line shape of the scattered radiation. We present a generalized Maxwell-Liouville-von Neumann approach to self-consistently solve for the amplification of the scattered radiation along with the time evolution of the density matrix of the atomic and residual ionic system. An in-depth analysis of the evolution of the emission spectra as a function of the Raman gain is presented. Furthermore, we propose the use of statistical methods to obtain high-resolution scattering data beyond the lifetime broadening despite pumping with incoherent x-ray pulses.
Klinkusch, Stefan; Tremblay, Jean Christophe
2016-05-14
In this contribution, we introduce a method for simulating dissipative, ultrafast many-electron dynamics in intense laser fields. The method is based on the norm-conserving stochastic unraveling of the dissipative Liouville-von Neumann equation in its Lindblad form. The N-electron wave functions sampling the density matrix are represented in the basis of singly excited configuration state functions. The interaction with an external laser field is treated variationally, and the response of the electronic density is included to all orders in this basis. The coupling to an external environment is included via relaxation operators inducing transitions between the configuration state functions. Single-electron ionization is represented by irreversible transition operators from the ionizing states to an auxiliary continuum state. The method owes its efficiency to the representation of the operators in the interaction picture, where the resolution-of-identity is used to reduce the size of the Hamiltonian eigenstate basis. The zeroth-order eigenstates can be obtained either at the configuration interaction singles level or from a time-dependent density functional theory reference calculation. The latter offers an alternative to explicitly time-dependent density functional theory, which has the advantage of remaining strictly valid for strong-field excitations while improving the description of the correlation as compared to configuration interaction singles. The method is tested on a well-characterized toy system, the excitation of the low-lying charge transfer state in LiCN.
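The Lindblad form of the Liouville-von Neumann equation referred to above is dρ/dt = −i[H, ρ] + Σ_k (L_k ρ L_k† − ½{L_k†L_k, ρ}) (ħ = 1). As a minimal deterministic sketch (the paper instead uses stochastic unraveling, and the parameters here are hypothetical), a direct Euler integration for a dephasing qubit preserves the trace while the coherences decay:

```python
import numpy as np

def lindblad_rhs(rho, H, Ls):
    """Right-hand side of the Lindblad master equation (hbar = 1):
    drho/dt = -i[H, rho] + sum_k (L rho L+ - 1/2 {L+ L, rho})."""
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

# Two-level system with pure dephasing; illustrative parameters only.
H = np.diag([0.0, 1.0]).astype(complex)
Ls = [np.sqrt(0.1) * np.diag([1.0, -1.0]).astype(complex)]
rho = 0.5 * np.ones((2, 2), dtype=complex)      # the pure state |+><+|

dt = 0.001
for _ in range(5000):                            # Euler steps up to t = 5
    rho = rho + dt * lindblad_rhs(rho, H, Ls)

print(round(np.trace(rho).real, 6))              # trace preserved: → 1.0
```

The stochastic-unraveling approach of the paper samples pure-state quantum trajectories whose ensemble average reproduces this same density-matrix evolution, which is what makes it tractable for large N-electron bases.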
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klinkusch, Stefan; Tremblay, Jean Christophe
NASA Astrophysics Data System (ADS)
Diggs, Angela; Balachandar, Sivaramakrishnan
2015-06-01
The present work addresses the numerical methods required for particle-gas and particle-particle interactions in Eulerian-Lagrangian simulations of multiphase flow. Local volume fraction as seen by each particle is the quantity of foremost importance in modeling and evaluating such interactions. We consider a general multiphase flow with a distribution of particles inside a fluid flow discretized on an Eulerian grid. Particle volume fraction is needed both as a Lagrangian quantity associated with each particle and also as an Eulerian quantity associated with the flow. In Eulerian Projection (EP) methods, the volume fraction is first obtained within each cell as an Eulerian quantity and then interpolated to each particle. In Lagrangian Projection (LP) methods, the particle volume fraction is obtained at each particle and then projected onto the Eulerian grid. Traditionally, EP methods are used in multiphase flow, but sub-grid resolution can be obtained through use of LP methods. By evaluating the total error and its components we compare the performance of EP and LP methods. The standard von Neumann error analysis technique has been adapted for rigorous evaluation of rate of convergence. The methods presented can be extended to obtain accurate field representations of other Lagrangian quantities. Most importantly, we will show that such careful attention to numerical methodologies is needed in order to capture complex shock interaction with a bed of particles. Supported by U.S. Department of Defense SMART Program and the U.S. Department of Energy PSAAP-II program under Contract No. DE-NA0002378.
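The von Neumann error (stability) analysis mentioned above examines how Fourier modes are amplified by a discrete scheme. As a textbook illustration (not the authors' analysis), for the FTCS discretization of the 1D diffusion equation u_t = ν u_xx, substituting u_j^n = G^n e^{ikjΔx} gives the amplification factor G(k) = 1 − 4r sin²(kΔx/2) with r = νΔt/Δx², stable exactly when r ≤ 1/2:

```python
import numpy as np

# Von Neumann stability analysis of FTCS for the 1D diffusion equation:
# amplification factor G(theta) = 1 - 4 r sin^2(theta / 2), theta = k dx.
def max_amplification(r, n_modes=1000):
    """Maximum |G| over Fourier modes theta in [0, pi]."""
    theta = np.linspace(0.0, np.pi, n_modes)
    G = 1.0 - 4.0 * r * np.sin(theta / 2.0) ** 2
    return float(np.max(np.abs(G)))

print(max_amplification(0.4) <= 1.0)   # → True  (stable: r <= 1/2)
print(max_amplification(0.6) <= 1.0)   # → False (unstable)
```

The same machinery, applied to the interpolation/projection operators of the EP and LP methods, yields the rates of convergence reported in the work.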
Haag duality for Kitaev’s quantum double model for abelian groups
NASA Astrophysics Data System (ADS)
Fiedler, Leander; Naaijkens, Pieter
2015-11-01
We prove Haag duality for cone-like regions in the ground state representation corresponding to the translational invariant ground state of Kitaev’s quantum double model for finite abelian groups. This property says that if an observable commutes with all observables localized outside the cone region, it actually is an element of the von Neumann algebra generated by the local observables inside the cone. This strengthens locality, which says that observables localized in disjoint regions commute. As an application, we consider the superselection structure of the quantum double model for abelian groups on an infinite lattice in the spirit of the Doplicher-Haag-Roberts program in algebraic quantum field theory. We find that, as is the case for the toric code model on an infinite lattice, the superselection structure is given by the category of irreducible representations of the quantum double.
Entanglement entropy of 2D conformal quantum critical points: hearing the shape of a quantum drum.
Fradkin, Eduardo; Moore, Joel E
2006-08-04
The entanglement entropy of a pure quantum state of a bipartite system A∪B is defined as the von Neumann entropy of the reduced density matrix obtained by tracing over one of the two parts. In one dimension, the entanglement of critical ground states diverges logarithmically in the subsystem size, with a universal coefficient that for conformally invariant critical points is related to the central charge of the conformal field theory. We find that the entanglement entropy of a standard class of z=2 conformal quantum critical points in two spatial dimensions, in addition to a nonuniversal "area law" contribution linear in the size of the AB boundary, generically has a universal logarithmically divergent correction, which is completely determined by the geometry of the partition and by the central charge of the field theory that describes the critical wave function.
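The von Neumann entropy of a reduced density matrix, which the abstract above uses as the entanglement measure, can be made concrete with a small illustrative sketch (our example, not from the paper; the helper name and Bell-state choice are assumptions): computing S(ρ_A) = −Tr ρ_A ln ρ_A for one qubit of a two-qubit Bell state, which yields the maximal one-qubit value ln 2.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), computed from eigenvalues (in nats)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

# Bell state |phi+> = (|00> + |11>)/sqrt(2)
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Reduced state of qubit A: partial trace over qubit B
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(von_neumann_entropy(rho))    # pure state: 0
print(von_neumann_entropy(rho_A))  # maximally entangled qubit: ln 2 ≈ 0.693
```

The partial trace is taken by reshaping the 4×4 density matrix into a rank-4 tensor with indices (a, b, a′, b′) and tracing the two indices belonging to subsystem B.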
Mutual information and spontaneous symmetry breaking
NASA Astrophysics Data System (ADS)
Hamma, A.; Giampaolo, S. M.; Illuminati, F.
2016-01-01
We show that the metastable, symmetry-breaking ground states of quantum many-body Hamiltonians have vanishing quantum mutual information between macroscopically separated regions and are thus the most classical ones among all possible quantum ground states. This statement is obvious only when the symmetry-breaking ground states are simple product states, e.g., at the factorization point. On the other hand, symmetry-breaking states are in general entangled along the entire ordered phase, and to show that they actually feature the least macroscopic correlations compared to their symmetric superpositions is highly nontrivial. We prove this result in general, by considering the quantum mutual information based on the two-Rényi entanglement entropy and using a locality result stemming from quasiadiabatic continuation. Moreover, in the paradigmatic case of the exactly solvable one-dimensional quantum XY model, we further verify the general result by considering also the quantum mutual information based on the von Neumann entanglement entropy.
Entanglement and area law with a fractal boundary in a topologically ordered phase
NASA Astrophysics Data System (ADS)
Hamma, Alioscia; Lidar, Daniel A.; Severini, Simone
2010-01-01
Quantum systems with short-range interactions are known to respect an area law for the entanglement entropy: The von Neumann entropy S associated with a bipartition scales with the boundary p between the two parts. Here we study the case in which the boundary is a fractal. We consider the topologically ordered phase of the toric code with a magnetic field. When the field vanishes, it is possible to analytically compute the entanglement entropy for both regular and fractal bipartitions (A,B) of the system, and this yields an upper bound for the entire topological phase. When the A-B boundary is regular, we have S/p=1 for large p. When the boundary is a fractal of Hausdorff dimension D, we show that the entanglement between the two parts scales as S/p=γ⩽1/D, and γ depends on the fractal considered.
Entanglement complexity in quantum many-body dynamics, thermalization, and localization
NASA Astrophysics Data System (ADS)
Yang, Zhi-Cheng; Hamma, Alioscia; Giampaolo, Salvatore M.; Mucciolo, Eduardo R.; Chamon, Claudio
2017-07-01
Entanglement is usually quantified by the von Neumann entropy, but its properties are much more complex than what can be expressed with a single number. We show that the three distinct dynamical phases known as thermalization, Anderson localization, and many-body localization are marked by different patterns of the spectrum of the reduced density matrix for a state evolved after a quantum quench. While the entanglement spectrum displays Poisson statistics for the case of Anderson localization, it displays universal Wigner-Dyson statistics for both the cases of many-body localization and thermalization, although the universal distribution is asymptotically reached within very different time scales in these two cases. We further show that the complexity of entanglement, revealed by the possibility of disentangling the state through a Metropolis-like algorithm, is signaled by whether the entanglement spectrum level spacing is Poisson or Wigner-Dyson distributed.
Measures for the Dynamics in a Few-Body Quantum System with Harmonic Interactions
NASA Astrophysics Data System (ADS)
Nagy, I.; Pipek, J.; Glasser, M. L.
2018-01-01
We determine the exact time-dependent non-idempotent one-particle reduced density matrix and its spectral decomposition for a harmonically confined two-particle correlated one-dimensional system when the interaction terms in the Schrödinger Hamiltonian are changed abruptly. Based on this matrix in coordinate space we derive a precise condition for the equivalence of the purity and the overlap-square of the correlated and non-correlated wave functions as the model system with harmonic interactions evolves in time. This equivalence holds only if the interparticle interactions are affected, while the confinement terms are unaffected within the stability range of the system. Under this condition we analyze various time-dependent measures of entanglement and demonstrate that, depending on the magnitude of the changes made in the Hamiltonian, periodic, logarithmically increasing or constant value behavior of the von Neumann entropy can occur.
Spin-phase-space-entropy production
NASA Astrophysics Data System (ADS)
Santos, Jader P.; Céleri, Lucas C.; Brito, Frederico; Landi, Gabriel T.; Paternostro, Mauro
2018-05-01
Quantifying the degree of irreversibility of an open system dynamics represents a problem of both fundamental and applied relevance. Even though a well-known framework exists for thermal baths, the resulting expressions diverge in the limit of zero temperature and are not readily extended to nonequilibrium reservoirs, such as dephasing baths. Aimed at filling this gap, in this paper we introduce a phase-space-entropy production framework for quantifying the irreversibility of spin systems undergoing Lindblad dynamics. The theory is based on the spin Husimi-Q function and its corresponding phase-space entropy, known as the Wehrl entropy. We show that, unlike the von Neumann entropy production rate, the Wehrl entropy production rate in our framework remains valid at any temperature and is also readily extended to arbitrary nonequilibrium baths. As an application, we discuss the irreversibility associated with the interaction of a two-level system with a single-photon pulse, a problem which cannot be treated using the conventional approach.
Tsirelson's bound from a generalized data processing inequality
NASA Astrophysics Data System (ADS)
Dahlsten, Oscar C. O.; Lercher, Daniel; Renner, Renato
2012-06-01
The strength of quantum correlations is bounded from above by Tsirelson's bound. We establish a connection between this bound and the fact that correlations between two systems cannot increase under local operations, a property known as the data processing inequality (DPI). More specifically, we consider arbitrary convex probabilistic theories. These can be equipped with an entropy measure that naturally generalizes the von Neumann entropy, as shown recently in Short and Wehner (2010 New J. Phys. 12 033023) and Barnum et al (2010 New J. Phys. 12 033024). We prove that if the DPI holds with respect to this generalized entropy measure then the underlying theory necessarily respects Tsirelson's bound. We, moreover, generalize this statement to any entropy measure satisfying certain minimal requirements. A consequence of our result is that not all the entropic relations used for deriving Tsirelson's bound via information causality in Pawlowski et al (2009 Nature 461 1101-4) are necessary.
NASA Astrophysics Data System (ADS)
Tiwari, Durgesh Laxman; Sivasankaran, K.
This paper presents the improved performance of a Double Gate Graphene Nanomesh Field Effect Transistor (DG-GNMFET) with h-BN as the substrate and gate oxide material. The DC characteristics of 0.95 μm and 5 nm channel length devices are studied for SiO2 and h-BN as substrate and oxide materials. For analyzing the ballistic behavior of electrons at the 5 nm channel length, a von Neumann boundary condition is considered near the source and drain contact regions. The simulated results show an improved saturation current for the h-BN-encapsulated structure, with a two-times-higher on-current value (0.375 for SiO2 and 0.621 for h-BN) as compared to the SiO2-encapsulated structure. The obtained results show h-BN to be a better substrate and oxide material for graphene electronics with improved device characteristics.
Memristive Mixed-Signal Neuromorphic Systems: Energy-Efficient Learning at the Circuit-Level
Chakma, Gangotree; Adnan, Md Musabbir; Wyer, Austin R.; ...
2017-11-23
Neuromorphic computing is a non-von Neumann computer architecture for the post-Moore’s law era of computing. Since a main focus of the post-Moore’s law era is energy-efficient computing with fewer resources and less area, neuromorphic computing contributes effectively to this research. In this paper, we present a memristive neuromorphic system for improved power and area efficiency. Our particular mixed-signal approach implements neural networks with spiking events in a synchronous way. Moreover, the use of nano-scale memristive devices saves both area and power in the system. We also provide device-level considerations that make the system more energy-efficient. The proposed system additionally includes synchronous digital long-term plasticity, an online learning methodology that helps the system train the neural networks during the operation phase and improves the efficiency in learning considering the power consumption and area overhead.
Detonation initiation of heterogeneous melt-cast high explosives
NASA Astrophysics Data System (ADS)
Chuzeville, V.; Baudin, G.; Lefrançois, A.; Genetier, M.; Barbarin, Y.; Jacquet, L.; Lhopitault, J.-L.; Peix, J.; Boulanger, R.; Catoire, L.
2017-01-01
2,4,6-trinitrotoluene (TNT) is widely used in conventional and insensitive munitions as a fusible binder, commonly melt-cast with other explosives such as 1,3,5-trinitroperhydro-1,3,5-triazine (RDX) or 3-nitro-1,2,4-triazol-5-one (NTO). In this paper, we study the shock-to-detonation transition phenomenon in two melt-cast high explosives (HE). We have performed plate impact tests on wedge samples to measure run-distance and time-to-detonation in order to establish the Pop-plot relation for several melt-cast HE. Highlighting the existence of the single-curve buildup, we propose a two-phase model based on a Zeldovich-von Neumann-Döring (ZND) approach where the deflagration fronts grow from the explosive grain boundaries. Knowing the grain size distribution, we calculate the deflagration velocities of the explosive charges as a function of shock pressure and explore the possible grain fragmentation.
NASA Astrophysics Data System (ADS)
Niestegge, Gerd
2014-09-01
In quantum mechanics, the selfadjoint Hilbert space operators play a triple role as observables, generators of the dynamical groups and statistical operators defining the mixed states. One might expect that this is typical of Hilbert space quantum mechanics, but it is not. The same triple role occurs for the elements of a certain ordered Banach space in a much more general theory based upon quantum logics and a conditional probability calculus (which is a quantum logical model of the Lüders-von Neumann measurement process). It is shown how positive groups, automorphism groups, Lie algebras and statistical operators emerge from one major postulate: the non-existence of third-order interference (third-order interference and its impossibility in quantum mechanics were discovered by R. Sorkin in 1994). This again underlines the power of the combination of the conditional probability calculus with the postulate that there is no third-order interference. In two earlier papers, its impact on contextuality and nonlocality had already been revealed.
Information-theoretic equilibrium and observable thermalization
NASA Astrophysics Data System (ADS)
Anzà, F.; Vedral, V.
2017-03-01
A crucial point in statistical mechanics is the definition of the notion of thermal equilibrium, which can be given as the state that maximises the von Neumann entropy, under the validity of some constraints. Arguing that such a notion can never be experimentally probed, in this paper we propose a new notion of thermal equilibrium, focused on observables rather than on the full state of the quantum system. We characterise such a notion of thermal equilibrium for an arbitrary observable via the maximisation of its Shannon entropy and we bring to light the thermal properties that it heralds. The relation with Gibbs ensembles is studied and understood. We apply such a notion of equilibrium to a closed quantum system and show that there is always a class of observables which exhibits thermal equilibrium properties and we give a recipe to explicitly construct them. Finally, an intimate connection with the Eigenstate Thermalisation Hypothesis is brought to light.
Local modular Hamiltonians from the quantum null energy condition
NASA Astrophysics Data System (ADS)
Koeller, Jason; Leichenauer, Stefan; Levine, Adam; Shahbazi-Moghaddam, Arvin
2018-03-01
The vacuum modular Hamiltonian K of the Rindler wedge in any relativistic quantum field theory is given by the boost generator. Here we investigate the modular Hamiltonian for more general half-spaces which are bounded by an arbitrary smooth cut of a null plane. We derive a formula for the second derivative of the modular Hamiltonian with respect to the coordinates of the cut, which schematically reads K'' = T_vv. This formula can be integrated twice to obtain a simple expression for the modular Hamiltonian. The result naturally generalizes the standard expression for the Rindler modular Hamiltonian to this larger class of regions. Our primary assumptions are the quantum null energy condition—an inequality between the second derivative of the von Neumann entropy of a region and the stress tensor—and its saturation in the vacuum for these regions. We discuss the validity of these assumptions in free theories and holographic theories to all orders in 1/N.
Ultrafast Synaptic Events in a Chalcogenide Memristor
NASA Astrophysics Data System (ADS)
Li, Yi; Zhong, Yingpeng; Xu, Lei; Zhang, Jinjian; Xu, Xiaohua; Sun, Huajun; Miao, Xiangshui
2013-04-01
Compact and power-efficient plastic electronic synapses are of fundamental importance to overcoming the bottlenecks of developing a neuromorphic chip. The memristor is a strong contender among the various electronic synapses in existence today. However, the speeds of synaptic events are relatively slow in most attempts at emulating synapses due to the material-related mechanism. Here we revealed the intrinsic memristance of stoichiometric crystalline Ge2Sb2Te5 that originates from the charge trapping and releasing by the defects. The device resistance states, representing synaptic weights, were precisely modulated by 30 ns potentiating/depressing electrical pulses. We demonstrated four spike-timing-dependent plasticity (STDP) forms by applying programmed pre- and postsynaptic spiking pulse pairs in different time windows ranging from 50 ms down to 500 ns, the latter of which is 10^5 times faster than the speed of STDP in the human brain. This study provides new opportunities for building ultrafast neuromorphic computing systems and surpassing the von Neumann architecture.
The Gibbs paradox and the physical criteria for indistinguishability of identical particles
NASA Astrophysics Data System (ADS)
Unnikrishnan, C. S.
2016-08-01
The Gibbs paradox in the context of statistical mechanics addresses the issue of additivity of the entropy of mixing gases. The usual discussion attributes the paradoxical situation to the classical distinguishability of identical particles and credits quantum theory for enabling indistinguishability of identical particles to solve the problem. We argue that indistinguishability of identical particles is already a feature in classical mechanics, and this is clearly brought out when the problem is treated in the language of information and the associated entropy. We pinpoint the physical criteria for indistinguishability that are crucial for the treatment of the Gibbs problem and the consistency of its solution with conventional thermodynamics. Quantum mechanics provides a quantitative criterion, not possible in the classical picture, for the degree of indistinguishability in terms of the visibility of quantum interference, or the overlap of the states as pointed out by von Neumann, thereby endowing the entropy expression with mathematical continuity and physical reasonableness.
Deep learning with coherent nanophotonic circuits
NASA Astrophysics Data System (ADS)
Shen, Yichen; Harris, Nicholas C.; Skirlo, Scott; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Sun, Xin; Zhao, Shijie; Larochelle, Hugo; Englund, Dirk; Soljačić, Marin
2017-07-01
Artificial neural networks are computational network models inspired by signal processing in the brain. These models have dramatically improved performance for many machine-learning tasks, including speech and image recognition. However, today's computing hardware is inefficient at implementing neural networks, in large part because much of it was designed for von Neumann computing schemes. Significant effort has been made towards developing electronic architectures tuned to implement artificial neural networks that exhibit improved computational speed and accuracy. Here, we propose a new architecture for a fully optical neural network that, in principle, could offer an enhancement in computational speed and power efficiency over state-of-the-art electronics for conventional inference tasks. We experimentally demonstrate the essential part of the concept using a programmable nanophotonic processor featuring a cascaded array of 56 programmable Mach-Zehnder interferometers in a silicon photonic integrated circuit and show its utility for vowel recognition.
Effect of Shock Precompression on the Critical Diameter of Liquid Explosives
NASA Astrophysics Data System (ADS)
Petel, Oren E.; Higgins, Andrew J.; Yoshinaka, Akio C.; Zhang, Fan
2006-07-01
The critical diameter of both ambient and shock-precompressed liquid nitromethane confined in PVC tubing is measured experimentally. The experiment was conducted for both amine-sensitized and neat NM. In the precompression experiments, the explosive is compressed by a strong shock wave generated by a donor explosive and reflected from a high-impedance anvil prior to being detonated by a secondary event. The pressures reached in the test sections prior to detonation propagation were approximately 7 and 8 GPa for amine-sensitized and neat NM, respectively. The results demonstrated a 30%-65% decrease in the critical diameter for the shock-compressed explosives. This critical diameter decrease is observed despite a significant decrease in the predicted von Neumann temperature of the detonation in the precompressed explosive. The results are discussed in the context of theoretical predictions based on thermal ignition theory and previous critical diameter measurements.
Response to defects in multipartite and bipartite entanglement of isotropic quantum spin networks
NASA Astrophysics Data System (ADS)
Roy, Sudipto Singha; Dhar, Himadri Shekhar; Rakshit, Debraj; SenDe, Aditi; Sen, Ujjwal
2018-05-01
Quantum networks are an integral component in performing efficient computation and communication tasks that are not accessible using classical systems. A key aspect in designing an effective and scalable quantum network is generating entanglement between its nodes, which is robust against defects in the network. We consider an isotropic quantum network of spin-1/2 particles with a finite fraction of defects, where the corresponding wave function of the network is rotationally invariant under the action of local unitaries. By using quantum information-theoretic concepts like strong subadditivity of von Neumann entropy and approximate quantum telecloning, we prove analytically that in the presence of defects, caused by loss of a finite fraction of spins, the network, composed of a fixed numbers of lattice sites, sustains genuine multisite entanglement and at the same time may exhibit finite moderate-range bipartite entanglement, in contrast to the network with no defects.
NASA Astrophysics Data System (ADS)
Park, DaeKil
2018-06-01
The dynamics of entanglement and the uncertainty relation are explored by solving the time-dependent Schrödinger equation for a coupled harmonic oscillator system analytically when the angular frequencies and coupling constant are arbitrarily time dependent. We derive the spectral and Schmidt decompositions for the vacuum solution. Using the decompositions, we derive the analytical expressions for the von Neumann and Rényi entropies. Making use of the Wigner distribution function defined in phase space, we derive the time dependence of the position-momentum uncertainty relations. To show the dynamics of entanglement and the uncertainty relation graphically, we introduce two toy models and one realistic quenched model. While the dynamics can be conjectured by simple considerations in the toy models, the dynamics in the realistic quenched model is somewhat different from that in the toy models. In particular, the dynamics of entanglement exhibits a pattern similar to the dynamics of the uncertainty parameter in the realistic quenched model.
Gaussian States Minimize the Output Entropy of One-Mode Quantum Gaussian Channels
NASA Astrophysics Data System (ADS)
De Palma, Giacomo; Trevisan, Dario; Giovannetti, Vittorio
2017-04-01
We prove the long-standing conjecture stating that Gaussian thermal input states minimize the output von Neumann entropy of one-mode phase-covariant quantum Gaussian channels among all the input states with a given entropy. Phase-covariant quantum Gaussian channels model the attenuation and the noise that affect any electromagnetic signal in the quantum regime. Our result is crucial to prove the converse theorems for both the triple trade-off region and the capacity region for broadcast communication of the Gaussian quantum-limited amplifier. Our result extends to the quantum regime the entropy power inequality that plays a key role in classical information theory. Our proof exploits a completely new technique based on the recent determination of the p→q norms of the quantum-limited amplifier [De Palma et al., arXiv:1610.09967]. This technique can be applied to any quantum channel.
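For a one-mode Gaussian thermal state with mean photon number n̄, the von Neumann entropy featured in the abstract above has the standard closed form g(n̄) = (n̄+1) ln(n̄+1) − n̄ ln n̄. A brief illustrative sketch (our example, not taken from the paper):

```python
import numpy as np

def thermal_entropy(nbar):
    """von Neumann entropy g(n) = (n+1)ln(n+1) - n ln(n) of a one-mode
    Gaussian thermal state with mean photon number n (in nats)."""
    if nbar <= 0:
        return 0.0  # the vacuum is a pure state
    return float((nbar + 1) * np.log(nbar + 1) - nbar * np.log(nbar))

# The entropy grows monotonically with the mean photon number
for n in (0.0, 0.5, 1.0, 2.0):
    print(n, thermal_entropy(n))
```

For n̄ = 1 this gives g(1) = 2 ln 2 ≈ 1.386 nats, and g(n̄) → 0 as the state approaches the vacuum.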
Bounding entanglement spreading after a local quench
NASA Astrophysics Data System (ADS)
Drumond, Raphael C.; Móller, Natália S.
2017-06-01
We consider the variation of the von Neumann entropy of subsystem reduced states of general many-body lattice spin systems due to local quantum quenches. We obtain Lieb-Robinson-like bounds that are independent of the subsystem volume. The main assumptions are that the Hamiltonian satisfies a Lieb-Robinson bound and that the volume of spheres on the lattice grows at most exponentially with their radius. More specifically, the bound exponentially increases with time but exponentially decreases with the distance between the subsystem and the region where the quench takes place. The fact that the bound is independent of the subsystem volume leads to stronger constraints (than previously known) on the propagation of information throughout many-body systems. In particular, it shows that bipartite entanglement satisfies an effective "light cone," regardless of system size. Further implications for density-matrix renormalization-group simulations of quantum spin chains and limitations to the propagation of information are discussed.
The Entangled Histories of Physics and Computation
NASA Astrophysics Data System (ADS)
Rodriguez, Cesar
2007-03-01
The histories of physics and computation intertwine in a fascinating manner that is relevant to the field of quantum computation. This talk focuses on the interconnections between both by examining their rhyming philosophies, recurrent characters and common themes. Leibniz not only was one of the lead figures of calculus, but also left his footprint in physics and invented the concept of a universal computational language. This last idea was further developed by Boole, Russell, Hilbert and Gödel. Physicists such as Boltzmann and Maxwell also established the foundation of the field of information theory, later developed by Shannon. The war efforts of von Neumann and Turing can be juxtaposed to the Manhattan Project. Professional and personal connections of these characters to the development of physics will be emphasized. Recently, new cryptographic developments have led to a reexamination of the fundamentals of quantum mechanics, while quantum computation is discovering a new perspective on the nature of information itself.
Synaptic plasticity functions in an organic electrochemical transistor
NASA Astrophysics Data System (ADS)
Gkoupidenis, Paschalis; Schaefer, Nathan; Strakosas, Xenofon; Fairfield, Jessamyn A.; Malliaras, George G.
2015-12-01
Synaptic plasticity functions play a crucial role in the transmission of neural signals in the brain. Short-term plasticity is required for the transmission, encoding, and filtering of the neural signal, whereas long-term plasticity establishes more permanent changes in neural microcircuitry and thus underlies memory and learning. The realization of bioinspired circuits that can actually mimic signal processing in the brain demands the reproduction of both short- and long-term aspects of synaptic plasticity in a single device. Here, we demonstrate the implementation of neuromorphic functions similar to biological memory, such as the short- to long-term memory transition, in non-volatile organic electrochemical transistors (OECTs). Depending on the training of the OECT, the device displays either short- or long-term plasticity, thus exhibiting non-von Neumann characteristics with merged processing and storing functionalities. These results are a first step towards the implementation of organic-based neuromorphic circuits.
Gilgamesh: A Multithreaded Processor-In-Memory Architecture for Petaflops Computing
NASA Technical Reports Server (NTRS)
Sterling, T. L.; Zima, H. P.
2002-01-01
Processor-in-Memory (PIM) architectures avoid the von Neumann bottleneck in conventional machines by integrating high-density DRAM and CMOS logic on the same chip. Parallel systems based on this new technology are expected to provide higher scalability, adaptability, robustness, fault tolerance and lower power consumption than current MPPs or commodity clusters. In this paper we describe the design of Gilgamesh, a PIM-based massively parallel architecture, and elements of its execution model. Gilgamesh extends existing PIM capabilities by incorporating advanced mechanisms for virtualizing tasks and data and providing adaptive resource management for load balancing and latency tolerance. The Gilgamesh execution model is based on macroservers, a middleware layer which supports object-based runtime management of data and threads allowing explicit and dynamic control of locality and load balancing. The paper concludes with a discussion of related research activities and an outlook to future work.
NASA Astrophysics Data System (ADS)
Watanabe, Norihiro; Kolditz, Olaf
2015-07-01
This work reports numerical stability conditions in two-dimensional solute transport simulations including discrete fractures surrounded by an impermeable rock matrix. We use an advective-dispersive problem described in Tang et al. (1981) and examine the stability of the Crank-Nicolson Galerkin finite element method (CN-GFEM). The stability conditions are analyzed in terms of the spatial discretization length perpendicular to the fracture, the flow velocity, the diffusion coefficient, the matrix porosity, the fracture aperture, and the fracture longitudinal dispersivity. In addition, we verify the applicability of the recently developed finite element method-flux corrected transport (FEM-FCT) method by Kuzmin () to suppress oscillations in the hybrid system, with a comparison to the commonly utilized Streamline Upwinding/Petrov-Galerkin (SUPG) method. Major findings of this study are (1) that the mesh von Neumann number must satisfy Fo ≥ 0.373 to avoid undershooting in the matrix, (2) that, in addition to an upper bound, the Courant number also has a lower bound in the fracture in cases of low dispersivity, and (3) that the FEM-FCT method can effectively suppress the oscillations in both the fracture and the matrix. The results imply that, in cases of low dispersivity, prerefinement of a numerical mesh is not sufficient to avoid the instability in the hybrid system if a problem involves evolutionary flow fields and dynamic material parameters. Applying the FEM-FCT method to such problems is recommended if negative concentrations cannot be tolerated and computing time is not a strong issue.
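Criterion (1) is easy to check for a candidate discretization. A minimal sketch follows; the parameter values are illustrative, not taken from the study:

```python
# Mesh von Neumann (Fourier) number Fo = D*dt/dx**2 and Courant number
# Cr = v*dt/dx for a 1-D transport discretization. The study reports
# that Fo >= 0.373 must hold to avoid undershooting in the rock matrix.
# All parameter values below are illustrative.

def fourier_number(D, dt, dx):
    """Mesh von Neumann (Fourier) number."""
    return D * dt / dx ** 2

def courant_number(v, dt, dx):
    """Courant number."""
    return v * dt / dx

D = 1.0e-9   # pore diffusion coefficient in the matrix, m^2/s
v = 1.0e-6   # flow velocity, m/s
dx = 1.0e-3  # discretization length perpendicular to the fracture, m
dt = 500.0   # time-step size, s

Fo = fourier_number(D, dt, dx)
Cr = courant_number(v, dt, dx)
undershoot_safe = Fo >= 0.373  # criterion (1) of the study
```

Note that the criterion is a lower bound on Fo, the opposite of the usual explicit-scheme restriction, which is why refining dt alone can push a CN-GFEM computation into the unstable regime.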
The Spectral Shift Function and Spectral Flow
NASA Astrophysics Data System (ADS)
Azamov, N. A.; Carey, A. L.; Sukochev, F. A.
2007-11-01
At the 1974 International Congress, I. M. Singer proposed that eta invariants, and hence spectral flow, should be thought of as the integral of a one-form. In the intervening years this idea has led to many interesting developments in the study of both eta invariants and spectral flow. Using ideas of [24], Singer's proposal was brought to an advanced level in [16], where a very general formula for spectral flow as the integral of a one-form was produced in the framework of noncommutative geometry. This formula can be used for computing spectral flow in a general semifinite von Neumann algebra, as described and reviewed in [5]. In the present paper we take the analytic approach to spectral flow much further by giving a large family of formulae for the spectral flow between a pair of unbounded self-adjoint operators D and D + V, with D having compact resolvent belonging to a general semifinite von Neumann algebra \mathcal{N} and the perturbation V in \mathcal{N}. In noncommutative geometry terms, we remove summability hypotheses. This level of generality is made possible by introducing a new idea from [3]. There it was observed that M. G. Krein's spectral shift function (in certain restricted cases with V trace class) computes spectral flow. The present paper extends Krein's theory to the setting of semifinite spectral triples, where D has compact resolvent belonging to \mathcal{N} and V is any bounded self-adjoint operator in \mathcal{N}. We give a definition of the spectral shift function under these hypotheses and show that it computes spectral flow. This is made possible by the understanding, discovered in the present paper, of the interplay between spectral shift function theory and the analytic theory of spectral flow. It is this interplay that enables us to take Singer's idea much further, creating a large class of one-forms whose integrals calculate spectral flow.
These advances depend critically on a new approach to the calculus of functions of non-commuting operators discovered in [3] which generalizes the double operator integral formalism of [8-10]. One surprising conclusion that follows from our results is that the Krein spectral shift function is computed, in certain circumstances, by the Atiyah-Patodi-Singer index theorem [2].
Krizek, D R; Rick, M E
2000-03-15
A highly sensitive and rapid clinical method for the visualization of the multimeric structure of von Willebrand Factor in plasma and platelets is described. The method utilizes submerged horizontal agarose gel electrophoresis, followed by transfer of the von Willebrand Factor onto a polyvinylidene fluoride membrane, and immunolocalization and luminographic visualization of the von Willebrand Factor multimeric pattern. This method distinguishes type 1 from types 2A and 2B von Willebrand disease, allowing timely evaluation and classification of von Willebrand Factor in patient plasma. It also allows visualization of the unusually high molecular weight multimers present in platelets. There are several major advantages to this method, including rapid processing, simplicity of gel preparation, high sensitivity to low concentrations of von Willebrand Factor, and elimination of radioactivity.
Topos quantum theory on quantization-induced sheaves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakayama, Kunji, E-mail: nakayama@law.ryukoku.ac.jp
2014-10-15
In this paper, we construct a sheaf-based topos quantum theory. It is well known that a topos quantum theory can be constructed on the topos of presheaves on the category of commutative von Neumann algebras of bounded operators on a Hilbert space. Also, it is already known that quantization naturally induces a Lawvere-Tierney topology on the presheaf topos. We show that a topos quantum theory akin to the presheaf-based one can be constructed on sheaves defined by the quantization-induced Lawvere-Tierney topology. That is, starting from the spectral sheaf as a state space of a given quantum system, we construct sheaf-based expressions of physical propositions and truth objects, and thereby give a method of truth-value assignment to the propositions. Furthermore, we clarify the relationship to the presheaf-based quantum theory. We give translation rules between the sheaf-based ingredients and the corresponding presheaf-based ones. The translation rules have "coarse-graining" effects on the spaces of the presheaf-based ingredients; a lot of different proposition presheaves, truth presheaves, and presheaf-based truth-values are translated to a proposition sheaf, a truth sheaf, and a sheaf-based truth-value, respectively. We examine the extent of the coarse-graining made by translation.
NASA Astrophysics Data System (ADS)
Khai Tiu, Ervin Shan; Huang, Yuk Feng; Ling, Lloyd
2018-03-01
An accurate streamflow forecasting model is important for the development of a flood mitigation plan to ensure sustainable development of a river basin. This study adopted the Variational Mode Decomposition (VMD) data-preprocessing technique to process and denoise the rainfall data before feeding them into the Support Vector Machine (SVM) streamflow forecasting model, in order to improve the performance of the selected model. Rainfall data and river water level data for the period 1996-2016 were used for this purpose. Homogeneity tests (the Standard Normal Homogeneity Test, the Buishand Range Test, the Pettitt Test, and the Von Neumann Ratio Test) and normality tests (the Shapiro-Wilk Test, the Anderson-Darling Test, the Lilliefors Test, and the Jarque-Bera Test) were carried out on the rainfall series. The rainfall series at all stations were found to be homogeneous and non-normally distributed. From the recorded rainfall data, it was observed that the Dungun River Basin received higher monthly rainfall from November to February, during the Northeast Monsoon. Thus, the monthly and seasonal rainfall series of this monsoon are the main focus of this research, as floods usually happen during the Northeast Monsoon period. The water levels predicted by the SVM model were assessed against the observed water levels using non-parametric statistical tests (the Biased Method, Kendall's Tau B Test, and Spearman's Rho Test).
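Of the homogeneity tests listed, the Von Neumann Ratio Test is simple enough to sketch directly: the statistic is the sum of squared successive differences divided by the sum of squared deviations from the mean, with expected value 2 for a purely random (homogeneous) series. The sample series below are invented, not the Dungun rainfall data:

```python
# Von Neumann ratio N = sum((x[i+1]-x[i])**2) / sum((x[i]-mean)**2).
# For a homogeneous random series E[N] is approximately 2; values well
# below 2 indicate a trend or break, values near the maximum indicate
# strong alternation.

def von_neumann_ratio(x):
    mean = sum(x) / len(x)
    num = sum((x[i + 1] - x[i]) ** 2 for i in range(len(x) - 1))
    den = sum((xi - mean) ** 2 for xi in x)
    return num / den

# An alternating series maximizes the ratio ...
alternating = [1.0, -1.0] * 10
N_alt = von_neumann_ratio(alternating)

# ... while a monotone trend drives the ratio toward 0.
trend = [float(i) for i in range(20)]
N_trend = von_neumann_ratio(trend)
```

In practice the statistic is compared against tabulated critical values for the series length; the sketch only shows the direction of the departures from 2.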
NASA Astrophysics Data System (ADS)
King, Jacob; Kruger, Scott
2017-10-01
Flow can impact the stability and nonlinear evolution of a range of instabilities (e.g. RWMs, NTMs, sawteeth, locked modes, PBMs, and high-k turbulence), and thus robust numerical algorithms for simulations with flow are essential. Recent simulations of DIII-D QH-mode [King et al., Phys. Plasmas and Nucl. Fus. 2017] with flow have been restricted to smaller time-step sizes than corresponding computations without flow. These computations use a mixed semi-implicit, implicit leapfrog time discretization as implemented in the NIMROD code [Sovinec et al., JCP 2004]. While prior analysis has shown that this algorithm is unconditionally stable with respect to the effect of large flows on the MHD waves in slab geometry [Sovinec et al., JCP 2010], our present von Neumann stability analysis shows that a flow-induced numerical instability may arise when ad-hoc cylindrical curvature is included. Computations with the NIMROD code in cylindrical geometry with rigid rotation and without free-energy drive from current or pressure gradients qualitatively confirm this analysis. We explore potential methods to circumvent this flow-induced numerical instability, such as using a semi-Lagrangian formulation instead of time-centered implicit advection and/or modifying the semi-implicit operator. This work is supported by the DOE Office of Science (Office of Fusion Energy Sciences).
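The von Neumann procedure itself is easy to illustrate on textbook advection schemes (these are not NIMROD's semi-implicit leapfrog discretization): substitute a Fourier mode u_j^n = g^n e^{i k j dx} into the difference equations and examine the magnitude of the amplification factor g over all resolvable wavenumbers:

```python
# Von Neumann (Fourier) stability analysis of two textbook schemes for
# u_t + v*u_x = 0, with Courant number C = v*dt/dx and theta = k*dx.
# Forward-time centered-space (FTCS) has |g| > 1 for every C > 0 and is
# unconditionally unstable; backward-time centered-space (BTCS) has
# |g| <= 1 for all theta and is unconditionally stable.
import math

def g_ftcs(C, theta):
    """Amplification factor of the FTCS advection scheme."""
    return 1.0 - 1j * C * math.sin(theta)

def g_btcs(C, theta):
    """Amplification factor of the implicit BTCS advection scheme."""
    return 1.0 / (1.0 + 1j * C * math.sin(theta))

C = 0.5                                            # Courant number
thetas = [i * math.pi / 50 for i in range(1, 50)]  # resolvable k*dx
max_ftcs = max(abs(g_ftcs(C, t)) for t in thetas)
max_btcs = max(abs(g_btcs(C, t)) for t in thetas)
```

The flow-induced instability described above is found by exactly this kind of calculation, only with the full semi-implicit leapfrog update matrix and curvature terms in place of these scalar model schemes.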
Integral Method of Boundary Characteristics: Neumann Condition
NASA Astrophysics Data System (ADS)
Kot, V. A.
2018-05-01
A new algorithm, based on systems of identical equalities with integral and differential boundary characteristics, is proposed for solving boundary-value problems of heat conduction in bodies of canonical shape under a Neumann boundary condition. Results of a numerical analysis of the accuracy of solving heat-conduction problems with variable boundary conditions by this algorithm are presented. The solutions obtained with it can be considered exact, because their errors amount to hundredths or even ten-thousandths of a percent over a wide range of the problem parameters.
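As a side illustration of what a Neumann condition means in the simplest discrete setting (a standard explicit finite-difference scheme, not the boundary-characteristics algorithm of the paper), zero-flux boundaries can be enforced by ghost-node reflection, and conservation of the total heat content gives a built-in accuracy check:

```python
# Explicit finite differences for u_t = a*u_xx with zero-flux Neumann
# conditions u_x = 0 at both ends, enforced via ghost-node reflection
# (u[-1] = u[1] and u[n] = u[n-2]). With insulated ends the trapezoidal
# heat content is conserved exactly by this discretization.

def step(u, Fo):
    """One explicit Euler step; Fo = a*dt/dx**2 must be <= 0.5."""
    n = len(u)
    new = u[:]
    for i in range(n):
        left = u[i - 1] if i > 0 else u[1]           # left ghost node
        right = u[i + 1] if i < n - 1 else u[n - 2]  # right ghost node
        new[i] = u[i] + Fo * (left - 2.0 * u[i] + right)
    return new

def heat(u):
    """Trapezoidal heat content, conserved under zero-flux conditions."""
    return 0.5 * u[0] + sum(u[1:-1]) + 0.5 * u[-1]

u = [1.0 if 4 <= i <= 6 else 0.0 for i in range(11)]  # initial hot spot
h0 = heat(u)
for _ in range(200):
    u = step(u, 0.25)
h = heat(u)              # unchanged up to rounding
spread = max(u) - min(u)  # tends to 0: profile flattens out
```

The insulated rod relaxes to a uniform temperature while its heat content stays fixed, which is the qualitative behavior any Neumann-condition solver, including the one proposed in the paper, must reproduce.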
Löwe, Angelica
2015-06-01
In the light of recently-published correspondence between Jung and Neumann, this paper considers and connects two aspects of their relationship: Jung's theory of an ethno-specific differentiation of the unconscious as formulated in 1934, and the relationship between Jung and Neumann at the beginning of the Holocaust in 1938-with Jung as the wise old man and a father figure on one hand, and Neumann as the apprentice and dependent son on the other. In examining these two issues, a detailed interpretation of four letters, two by Neumann and two by Jung, written in 1938 and 1939, is given. Neumann's reflections on the collective Jewish determination in the face of the November pogroms in 1938 led Jung to modify his view, with relativization and secularization of his former position. This shift precipitated a deep crisis with feelings of disorientation and desertion in Neumann; the paper discusses how a negative father complex was then constellated and imaged in a dream. After years of silence, the two men were able to renew the deep bonds that characterized their lifelong friendship. © 2015, The Society of Analytical Psychology.
FPGA cluster for high-performance AO real-time control system
NASA Astrophysics Data System (ADS)
Geng, Deli; Goodsell, Stephen J.; Basden, Alastair G.; Dipper, Nigel A.; Myers, Richard M.; Saunter, Chris D.
2006-06-01
Whilst the high-throughput and low-latency requirements of the next generation of AO real-time control systems have posed a significant challenge to von Neumann architecture processor systems, the Field Programmable Gate Array (FPGA) has emerged as a long-term solution, with high throughput performance and excellent predictability in latency. Moreover, FPGA devices have highly capable programmable interfacing, which leads to more highly integrated systems. Nevertheless, a single FPGA is still not enough: multiple FPGA devices need to be clustered to perform the required subaperture processing and reconstruction computation. In an AO real-time control system, memory bandwidth is often the bottleneck, simply because a vast amount of supporting data, e.g. pixel calibration maps and the reconstruction matrix, must be accessed within a short period. The cluster, as a general computing architecture, has excellent scalability in processing throughput, memory bandwidth, memory capacity, and communication bandwidth. Problems such as task distribution, node communication, and system verification are discussed.
Generalized entropy production fluctuation theorems for quantum systems
NASA Astrophysics Data System (ADS)
Rana, Shubhashis; Lahiri, Sourabh; Jayannavar, A. M.
2013-02-01
Based on a trajectory-dependent path probability formalism in state space, we derive generalized entropy production fluctuation relations for a quantum system in the presence of measurement and feedback. We have obtained these results for three different cases: (i) the system evolving in isolation from its surroundings; (ii) the system weakly coupled to a heat bath; and (iii) the system in contact with a reservoir, using the quantum Crooks fluctuation theorem. In case (iii), we build on the treatment carried out in [H. T. Quan and H. Dong, arxiv/cond-mat: 0812.4955], where a quantum trajectory has been defined as a sequence of alternating work and heat steps. The obtained entropy production fluctuation theorems retain the same form as in the classical case. The inequality of the second law of thermodynamics is modified in the presence of information. These fluctuation theorems are robust against intermediate measurements of any observable, performed either as von Neumann projective measurements or as weak or positive operator-valued measurements.
Li, Yi; Zhong, Yingpeng; Zhang, Jinjian; Xu, Lei; Wang, Qing; Sun, Huajun; Tong, Hao; Cheng, Xiaoming; Miao, Xiangshui
2014-05-09
Nanoscale inorganic electronic synapses or synaptic devices, which are capable of emulating the functions of biological synapses of brain neuronal systems, are regarded as the basic building blocks for beyond-Von Neumann computing architecture, combining information storage and processing. Here, we demonstrate a Ag/AgInSbTe/Ag structure for chalcogenide memristor-based electronic synapses. The memristive characteristics with reproducible gradual resistance tuning are utilised to mimic the activity-dependent synaptic plasticity that serves as the basis of memory and learning. Bidirectional long-term Hebbian plasticity modulation is implemented by the coactivity of pre- and postsynaptic spikes, and the sign and degree are affected by assorted factors including the temporal difference, spike rate and voltage. Moreover, synaptic saturation is observed to be an adjustment of Hebbian rules to stabilise the growth of synaptic weights. Our results may contribute to the development of highly functional plastic electronic synapses and the further construction of next-generation parallel neuromorphic computing architecture.
Entanglement in the Anisotropic Kondo Necklace Model
NASA Astrophysics Data System (ADS)
Mendoza-Arenas, J. J.; Franco, R.; Silva-Valencia, J.
We study the entanglement in the one-dimensional Kondo necklace model with exact diagonalization, calculating the concurrence as a function of the Kondo coupling J and an anisotropy η in the interaction between conduction spins, and we review some results previously obtained in the limiting cases η = 0 and 1. We observe that as J increases, localized and conduction spins get more entangled, while neighboring conduction spins diminish their concurrence; localized spins require a minimum concurrence between conduction spins to be entangled. The anisotropy η diminishes the entanglement for neighboring spins when it increases, driving the system to the Ising limit η = 1 where conduction spins are not entangled. We observe that the concurrence does not give information about the quantum phase transition in the anisotropic Kondo necklace model (between a Kondo singlet and an antiferromagnetic state), but calculating the von Neumann block entropy with the density matrix renormalization group in a chain of 100 sites for the Ising limit indicates that this quantity is useful for locating the quantum critical point.
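The von Neumann block entropy used to locate the critical point is S(ρ_A) = -Tr(ρ_A log₂ ρ_A) for the reduced density matrix of a block. A minimal two-qubit sketch (our own illustration; the paper computes this quantity with DMRG on a 100-site chain) distinguishes a maximally entangled block from a product state:

```python
# Von Neumann entropy of a one-qubit block of a two-qubit pure state.
# A singlet gives S = 1 bit; a product state gives S = 0.
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho log2 rho), via the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # discard numerical zeros
    return float(-(evals * np.log2(evals)).sum())

def reduce_first_qubit(rho):
    """Partial trace over the second qubit, basis |00>,|01>,|10>,|11>."""
    return np.array([[rho[0, 0] + rho[1, 1], rho[0, 2] + rho[1, 3]],
                     [rho[2, 0] + rho[3, 1], rho[2, 2] + rho[3, 3]]])

# singlet |psi> = (|01> - |10>)/sqrt(2)
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
S_singlet = von_neumann_entropy(reduce_first_qubit(np.outer(psi, psi)))

# product state |00>
phi = np.array([1.0, 0.0, 0.0, 0.0])
S_product = von_neumann_entropy(reduce_first_qubit(np.outer(phi, phi)))
```

In the chain calculation the same formula is applied to blocks of increasing length; a logarithmic growth of S with block size signals the critical point that the concurrence misses.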
Why risk is not variance: an expository note.
Cox, Louis Anthony Tony
2008-08-01
Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
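The dominance violation proved in the note can be seen in a two-line computation. A mean-variance score U = mean - lam*variance (lam is an illustrative variance-aversion weight, not a value from the paper) can rank a higher probability of a fixed gain below a lower one:

```python
# Prospect: win 1 with probability p, win 0 otherwise; no losses are
# possible, so a higher p should always be preferred. A mean-variance
# score nevertheless prefers the lower p when variance aversion is
# strong enough, because Bernoulli variance p*(1-p) grows faster than
# the mean for small p.

def mv_score(p, gain=1.0, lam=2.0):
    mean = p * gain
    var = gain ** 2 * p * (1.0 - p)  # Bernoulli variance
    return mean - lam * var

U_low = mv_score(0.10)   # 10% chance of the gain
U_high = mv_score(0.25)  # 25% chance of the same gain
# U_high < U_low: mean-variance scores the stochastically dominant
# 25% prospect below the dominated 10% prospect.
```

This is exactly the failure mode the note formalizes: any continuous increasing mean-variance indifference curve at the origin must reject some prospects offering a positive probability of gain and zero probability of loss.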
Topologies on quantum topoi induced by quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakayama, Kunji
2013-07-15
In the present paper, we consider effects of quantization in a topos approach of quantum theory. A quantum system is assumed to be coded in a quantum topos, by which we mean the topos of presheaves on the context category of commutative subalgebras of a von Neumann algebra of bounded operators on a Hilbert space. A classical system is modeled by a Lie algebra of classical observables. It is shown that a quantization map from the classical observables to self-adjoint operators on the Hilbert space naturally induces geometric morphisms from presheaf topoi related to the classical system to the quantum topos. By means of the geometric morphisms, we give Lawvere-Tierney topologies on the quantum topos (and their equivalent Grothendieck topologies on the context category). We show that, among them, there exists a canonical one which we call a quantization topology. We furthermore give an explicit expression of a sheafification functor associated with the quantization topology.
Poynting-Flux-Driven Bubbles and Shocks Around Merging Neutron Star Binaries
NASA Astrophysics Data System (ADS)
Medvedev, M. V.; Loeb, A.
2013-04-01
Merging binaries of compact relativistic objects are thought to be progenitors of short gamma-ray bursts. Because of the strong magnetic field of one or both binary members and the high orbital frequencies, these binaries are strong sources of energy in the form of Poynting flux. The steady injection of energy by the binary forms a bubble filled with matter with a relativistic equation of state, which pushes on the surrounding plasma and can drive a shock wave in it. Unlike the Sedov-von Neumann-Taylor blast wave solution for a point-like explosion, the shock wave here is continuously driven by the ever-increasing pressure inside the bubble. We calculate from first principles the dynamics and evolution of the bubble and the shock surrounding it, demonstrate that it exhibits a finite-time singularity, and find the corresponding analytical solution. We predict that such binaries can be observed as radio sources a few hours before and after the merger.
Realistic Many-Body Quantum Systems vs. Full Random Matrices: Static and Dynamical Properties
NASA Astrophysics Data System (ADS)
Karp, Jonathan; Torres-Herrera, Jonathan; Távora, Marco; Santos, Lea
We study the static and dynamical properties of isolated spin 1/2 systems as prototypes of many-body quantum systems and compare the results to those of full random matrices from a Gaussian orthogonal ensemble. Full random matrices do not represent realistic systems, because they imply that all particles interact at the same time, as opposed to realistic Hamiltonians, which are sparse and have only few-body interactions. Nevertheless, with full random matrices we can derive analytical results that can be used as references and bounds for the corresponding properties of realistic systems. In particular, we show that the results for the Shannon information entropy are very similar to those for the von Neumann entanglement entropy, with the former being computationally less expensive. We also discuss the behavior of the survival probability of the initial state at different time scales and show that it contains more information about the system than the entropies. Support from the NSF Grant No. DMR-1147430.
Artificial intelligence and synthetic biology: A tri-temporal contribution.
Bianchini, Francesco
2016-10-01
Artificial intelligence can make numerous contributions to synthetic biology. I would like to suggest three that are related to the past, present and future of artificial intelligence. From the past, works in biology and artificial systems by Turing and von Neumann prove highly interesting to explore within the new framework of synthetic biology, especially with regard to the notions of self-modification and self-replication and their links to emergence and the bottom-up approach. The current epistemological inquiry into emergence and research on swarm intelligence, superorganisms and biologically inspired cognitive architecture may lead to new achievements on the possibilities of synthetic biology in explaining cognitive processes. Finally, the present-day discussion on the future of artificial intelligence and the rise of superintelligence may point to some research trends for the future of synthetic biology and help to better define the boundary of notions such as "life", "cognition", "artificial" and "natural", as well as their interconnections in theoretical synthetic biology. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Information entropy and dark energy evolution
NASA Astrophysics Data System (ADS)
Capozziello, Salvatore; Luongo, Orlando
Here, information entropy is investigated in the context of early and late cosmology, under the hypothesis that distinct phases of the universe's evolution are entangled with each other. The approach is based on the entangled state ansatz, representing a coarse-grained definition of a primordial dark temperature associated with an effective entangled energy density. The dark temperature definition comes from assuming either the von Neumann or the linear entropy as the source of cosmological thermodynamics. We interpret the involved information entropies by means of probabilities of forming structures during cosmic evolution. Following this recipe, we propose that the quantum entropy is simply associated with the thermodynamical entropy, and we investigate the consequences of our approach using the adiabatic sound speed. As by-products, we analyze two phases of the universe's evolution: the late and early stages. To do so, we first recover that dark energy reduces to a pure cosmological constant, as the zero-order entanglement contribution, and second that inflation is well described by means of an effective potential. In both cases, we infer numerical limits which are compatible with current observations.
Faithful Pointer for Qubit Measurement
NASA Astrophysics Data System (ADS)
Kumari, Asmita; Pan, A. K.
2018-02-01
In the context of the von Neumann projective measurement scenario for a qubit system, it is widely believed that mutual orthogonality between the post-interaction pointer states is the sufficient condition for achieving the ideal measurement situation. However, for experimentally verifying the observable probabilities, the real-space distinction between the pointer distributions corresponding to the post-interaction pointer states plays a crucial role. It is implicitly assumed that mutual orthogonality ensures that the supports of the post-interaction pointer distributions are disjoint. We point out that mutual orthogonality (formal idealness) does not necessarily imply real-space distinguishability (operational idealness), but the converse is true. In fact, for the commonly used Gaussian wavefunction, it is possible to obtain a measurement situation which is formally ideal but fully nonideal operationally. In this paper, we derive a class of pointer states, which we call faithful pointers, for which the degree of formal (non)idealness equals the operational (non)idealness. In other words, for the faithful pointers, if a measurement situation is formally ideal then it is operationally ideal, and vice versa.
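The gap between formal and operational idealness can be illustrated numerically: two pointer states that are exactly orthogonal in Hilbert space can still have heavily overlapping position distributions. The ground and first-excited harmonic-oscillator wavefunctions below are our own stand-in example, not the pointer states derived in the paper:

```python
# psi0 and psi1 are real Hermite functions: <psi0|psi1> = 0 exactly
# (formal idealness), yet the overlap of their position distributions
# integral of min(|psi0|^2, |psi1|^2) is of order one half, so a single
# pointer reading often cannot tell the outcomes apart (operational
# nonidealness).
import math

def psi0(x):
    return math.pi ** -0.25 * math.exp(-x * x / 2.0)

def psi1(x):
    return math.sqrt(2.0) * math.pi ** -0.25 * x * math.exp(-x * x / 2.0)

# crude rectangle quadrature on a symmetric grid over [-8, 8]
xs = [-8.0 + 16.0 * i / 4000 for i in range(4001)]
dx = 16.0 / 4000

inner = sum(psi0(x) * psi1(x) for x in xs) * dx             # ~ 0
dist_overlap = sum(min(psi0(x) ** 2, psi1(x) ** 2) for x in xs) * dx
```

The inner product vanishes by parity while the distribution overlap stays finite, which is precisely the formal-but-not-operational situation the faithful pointers are constructed to rule out.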
The future of computing--new architectures and new technologies.
Warren, P
2004-02-01
All modern computers are designed using the 'von Neumann' architecture and built using silicon transistor technology. Both architecture and technology have been remarkably successful. Yet there are a range of problems for which this conventional architecture is not particularly well adapted, and new architectures are being proposed to solve these problems, in particular based on insight from nature. Transistor technology has enjoyed 50 years of continuing progress. However, the laws of physics dictate that within a relatively short time period this progress will come to an end. New technologies, based on molecular and biological sciences as well as quantum physics, are vying to replace silicon, or at least coexist with it and extend its capability. The paper describes these novel architectures and technologies, places them in the context of the kinds of problems they might help to solve, and predicts their possible manner and time of adoption. Finally it describes some key questions and research problems associated with their use.
Order, criticality, and excitations in the extended Falicov-Kimball model.
Ejima, S; Kaneko, T; Ohta, Y; Fehske, H
2014-01-17
Using exact numerical techniques, we investigate the nature of excitonic (electron-hole) bound states and the development of exciton coherence in the one-dimensional half-filled extended Falicov-Kimball model. The ground-state phase diagram of the model exhibits, besides band-insulator and staggered orbital ordered phases, an excitonic insulator (EI) with power-law correlations. The criticality of the EI state shows up in the von Neumann entropy. The anomalous spectral function and condensation amplitude provide the binding energy and coherence length of the electron-hole pairs which, on their part, point towards a Coulomb interaction driven crossover from BCS-like electron-hole pairing fluctuations to tightly bound excitons. We show that while a mass imbalance between electrons and holes does not affect the location of the BCS-BEC crossover regime, it favors staggered orbital ordering to the disadvantage of the EI. Within the Bose-Einstein condensation (BEC) regime, the quasiparticle dispersion develops a flat valence-band top, in accord with the experimental finding for Ta2NiSe5.
The Performance of the NAS HSPs in 1st Half of 1994
NASA Technical Reports Server (NTRS)
Bergeron, Robert J.; Walter, Howard (Technical Monitor)
1995-01-01
During the first six months of 1994, the NAS (Numerical Aerodynamic Simulation) 16-CPU Y-MP C90 Von Neumann (VN) delivered an average throughput of 4.045 GFLOPS, while the ACSF (Aeronautics Consolidated Supercomputer Facility) 8-CPU Y-MP C90 Eagle averaged 1.658 GFLOPS. The VN rate represents a machine efficiency of 26.3%, whereas the Eagle rate corresponds to a machine efficiency of 21.6%. VN displayed a greater efficiency than Eagle primarily because the stronger workload demand for its CPU cycles allowed it to devote more time to user programs and less time to idle. An additional factor increasing VN efficiency was the ability of the UNICOS 8.0 operating system to deliver a larger fraction of CPU time to user programs. Although measurements indicate increasing vector length for both workloads, insufficient vector lengths continue to hinder HSP (High Speed Processor) performance. To improve HSP performance, NAS should continue to encourage HSP users to modify their codes to increase program vector length.
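The quoted efficiencies are consistent with the throughput figures once a per-CPU peak is assumed; we take roughly 0.96 GFLOPS per Y-MP C90 CPU (our assumption; the abstract does not state the peak):

```python
# Machine efficiency = delivered GFLOPS / (CPU count * per-CPU peak).
# PEAK_PER_CPU is an assumed approximate Y-MP C90 per-CPU peak chosen
# to reconcile the abstract's throughput and efficiency figures.

PEAK_PER_CPU = 0.96  # GFLOPS, assumed

vn_eff = 4.045 / (16 * PEAK_PER_CPU)    # 16-CPU VN  -> about 26.3%
eagle_eff = 1.658 / (8 * PEAK_PER_CPU)  # 8-CPU Eagle -> about 21.6%
```

Both ratios reproduce the reported 26.3% and 21.6% to within a tenth of a percentage point, which supports the assumed peak.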
Chen, Jing-Ling; Su, Hong-Yi; Xu, Zhen-Peng; Wu, Yu-Chun; Wu, Chunfeng; Ye, Xiang-Jun; Żukowski, Marek; Kwek, L. C.
2015-01-01
We demonstrate here that, for a given mixed multi-qubit state, if there are at least two observers for whom mutual Einstein-Podolsky-Rosen steering is possible, i.e. each observer is able to steer the other qubits into two different pure states by spontaneous collapses due to von Neumann-type measurements on his/her qubit, then nonexistence of local realistic models is fully equivalent to quantum entanglement (this is not so without this condition). This result leads to an enhanced version of Gisin's theorem (originally: all pure entangled states violate local realism). Local realism is violated by all mixed states with the above steering property. The new class of states allows one, e.g., to perform three-party secret sharing with just pairs of entangled qubits, instead of three-qubit entangled states (which are currently available only with low fidelity). This significantly increases the feasibility of high-performance versions of such protocols. Finally, we discuss some possible applications. PMID:26108704
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rossi, Matteo A. C., E-mail: matteo.rossi@unimi.it; Paris, Matteo G. A., E-mail: matteo.paris@fisica.unimi.it; CNISM, Unità Milano Statale, I-20133 Milano
2016-01-14
We address the interaction of single- and two-qubit systems with an external transverse fluctuating field and analyze in detail the dynamical decoherence induced by Gaussian noise and random telegraph noise (RTN). Upon exploiting the exact RTN solution of the time-dependent von Neumann equation, we analyze in detail the behavior of quantum correlations and prove the non-Markovianity of the dynamical map in the full parameter range, i.e., for either fast or slow noise. The dynamics induced by Gaussian noise is studied numerically and compared to the RTN solution, showing the existence of (state dependent) regions of the parameter space where the two noises lead to very similar dynamics. We show that the effects of RTN noise and of Gaussian noise are different, i.e., the spectrum alone is not enough to summarize the noise effects, but the dynamics under the effect of one kind of noise may be simulated with high fidelity by the other one.
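A toy version of the dephasing mechanism can be sketched from the von Neumann equation dρ/dt = -i[H, ρ] with H = (ω/2)σ_z, whose coherence evolves as ρ01(t) = ρ01(0) e^{-iωt}. Averaging over a static two-valued frequency, a crude stand-in for very slow RTN rather than the exact solution used in the paper, already destroys the coherence:

```python
# Pure dephasing of a qubit prepared in |+>, so rho01(0) = 0.5.
# Under H = (omega/2) sigma_z the coherence rotates without damping;
# averaging over omega = +delta and -delta with equal weights (the
# static limit of RTN) gives |rho01(t)| = 0.5*|cos(delta*t)|.
import cmath, math

def coherence(omega, t, c0=0.5):
    """Off-diagonal element rho01(t) for a fixed frequency omega."""
    return c0 * cmath.exp(-1j * omega * t)

delta = 1.0  # illustrative RTN frequency splitting

def averaged(t):
    return 0.5 * coherence(+delta, t) + 0.5 * coherence(-delta, t)

c_zero = abs(averaged(0.0))                    # initial coherence, 0.5
c_half = abs(averaged(math.pi / (2 * delta)))  # complete dephasing
```

Fast RTN instead averages the frequency within each run, producing the partial-revival and non-Markovian behavior that the exact solution captures.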
NASA Astrophysics Data System (ADS)
Peller, V. V.
1985-02-01
The main result is the following description of Hankel operators in the Schatten-von Neumann class \mathfrak{S}_p when 0 < p < 1: \Gamma_\varphi \in \mathfrak{S}_p \Leftrightarrow \varphi \in B_p^{1/p}, where \Gamma_\varphi is the Hankel operator with symbol \varphi, and B_p^{1/p} is the Besov class. This result extends to the case 0 < p < 1 results obtained earlier by the author for 1 \leqslant p < +\infty. Also described are the Hankel operators in the Schatten-Lorentz classes \mathfrak{S}_{pq}, 0 < p < +\infty, 0 < q \leqslant +\infty. Precise descriptions of classes of functions defined in terms of rational approximation in the bounded mean oscillation norm are given as an application, along with a complete investigation of the case where the decrease is of power order, and some precise results on rational approximation in the L^\infty-norm. Certain other applications are also considered. Bibliography: 57 titles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sollier, A., E-mail: arnaud.sollier@cea.fr; Bouyer, V.; Hébert, P.
We present detonation wave profiles measured in T2 (97 wt. % TATB) and TX1 (52 wt. % TATB and 45 wt. % HMX) high explosives. The experiments consisted of initiating a detonation wave in a 15 mm diameter cylinder of explosive using an explosive wire detonator and an explosive booster. Free surface velocity wave profiles were measured at the explosive/air interface using a Photon Doppler Velocimetry system. We demonstrate that comparing these free surface wave profiles with those measured at explosive/window interfaces under similar conditions makes it possible to bracket the von Neumann spike in a narrow range. For T2, our measurements show that the spike pressure lies between 35.9 and 40.1 GPa, whereas for TX1, it lies between 42.3 and 47.0 GPa. The numerical simulations performed in support of these measurements show that they can be used to calibrate reactive burn models and also to check the accuracy of the detonation products equation of state at low pressure.
NASA Astrophysics Data System (ADS)
Jiang, Yuning; Kang, Jinfeng; Wang, Xinan
2017-03-01
Resistive switching memory (RRAM) is considered as one of the most promising devices for parallel computing solutions that may overcome the von Neumann bottleneck of today’s electronic systems. However, the existing RRAM-based parallel computing architectures suffer from practical problems such as device variations and extra computing circuits. In this work, we propose a novel parallel computing architecture for pattern recognition by implementing k-nearest neighbor classification on metal-oxide RRAM crossbar arrays. Metal-oxide RRAM with gradual RESET behaviors is chosen as both the storage and computing components. The proposed architecture is tested by the MNIST database. High speed (~100 ns per example) and high recognition accuracy (97.05%) are obtained. The influence of several non-ideal device properties is also discussed, and it turns out that the proposed architecture shows great tolerance to device variations. This work paves a new way to achieve RRAM-based parallel computing hardware systems with high performance.
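The classification rule realized on the crossbar is ordinary k-nearest neighbors; a minimal software sketch of that rule (independent of the RRAM hardware model, with made-up toy data in the usage below) is:

```python
import numpy as np

def knn_classify(train_x, train_y, query, k=3):
    """Plain k-nearest-neighbor majority vote: the classification rule the
    RRAM crossbar evaluates in parallel in the analog domain (software
    sketch only, not a model of the device physics)."""
    d = np.linalg.norm(train_x - query, axis=1)      # distance to each example
    nearest = np.argsort(d)[:k]                      # indices of k closest
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]                 # majority vote
```

On a toy two-cluster dataset, a query near each cluster is assigned that cluster's label.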
Proceedings of the second SISAL users' conference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feo, J T; Frerking, C; Miller, P J
1992-12-01
This report contains papers on the following topics: a SISAL code for computing the Fourier transform on S_N; five ways to fill your knapsack; simulating material dislocation motion in SISAL; candis as an interface for SISAL; parallelisation and performance of the Burg algorithm on a shared-memory multiprocessor; use of a genetic algorithm in SISAL to solve the file design problem; implementing FFTs in SISAL; programming and evaluating the performance of signal processing applications in the SISAL programming environment; SISAL and von Neumann-based languages: translation and intercommunication; an IF2 code generator for the ADAM architecture; program partitioning for NUMA multiprocessor computer systems; mapping functional parallelism on distributed memory machines; implicit array copying: prevention is better than cure; mathematical syntax for SISAL; an approach for optimizing recursive functions; implementing arrays in SISAL 2.0; Fol: an object-oriented extension to the SISAL language; twine: a portable, extensible SISAL execution kernel; and investigating the memory performance of the optimizing SISAL compiler.
Symmetry aspects in emergent quantum mechanics
NASA Astrophysics Data System (ADS)
Elze, Hans-Thomas
2009-06-01
We discuss an explicit realization of the dissipative dynamics anticipated in the proof of 't Hooft's existence theorem, which states that 'For any quantum system there exists at least one deterministic model that reproduces all its dynamics after prequantization'. - There is an energy-parity symmetry hidden in the Liouville equation, which mimics the Kaplan-Sundrum protective symmetry for the cosmological constant. This symmetry may be broken by the coarse-graining inherent in physics at scales much larger than the Planck length. We correspondingly modify classical ensemble theory by incorporating dissipative fluctuations (information loss) - which are caused by discrete spacetime continually 'measuring' matter. In this way, aspects of quantum mechanics, such as the von Neumann equation, including a Lindblad term, arise dynamically and expectations of observables agree with the Born rule. However, the resulting quantum coherence is accompanied by an intrinsic decoherence and continuous localization mechanism. Our proposal leads towards a theory that is linear and local at the quantum mechanical level, but the relation to the underlying classical degrees of freedom is nonlocal.
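The von Neumann equation with a Lindblad term mentioned above has the standard GKSL form; a minimal Euler-integration sketch for a dephasing qubit (generic textbook choices for H, L, and gamma, not the dynamics derived in the paper) is:

```python
import numpy as np

def lindblad_step(rho, H, L, gamma, dt):
    """One Euler step of the von Neumann equation with one Lindblad term:
    drho/dt = -i[H, rho] + gamma (L rho L+ - (1/2){L+L, rho}).
    Generic textbook form; hbar = 1."""
    comm = H @ rho - rho @ H
    LdL = L.conj().T @ L
    diss = L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return rho + dt * (-1j * comm + gamma * diss)

# Qubit example: H = sigma_z / 2 with pure dephasing via L = sigma_z.
sz = np.diag([1.0, -1.0]).astype(complex)
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+| state
for _ in range(1000):
    rho = lindblad_step(rho, 0.5 * sz, sz, gamma=0.2, dt=0.001)
```

The evolution preserves the trace exactly while the off-diagonal coherence decays as 0.5 exp(-2 gamma t), which the Euler scheme reproduces to good accuracy at this step size.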
Prof. Hanna Neumann's Inaugural Presidential Address, 1966
ERIC Educational Resources Information Center
Neumann, Hanna
2017-01-01
Prof. Hanna Neumann gave the Presidential Address at Australian Association of Mathematics Teachers inaugural conference in 1966. The conference was held at Monash University and had the theme of "mathematical unity". In this address, Prof. Neumann described some features of the teaching of mathematics in schools. While she did not know…
Jin, Miaomiao; Cheng, Long; Li, Yi; Hu, Siyu; Lu, Ke; Chen, Jia; Duan, Nian; Wang, Zhuorui; Zhou, Yaxiong; Chang, Ting-Chang; Miao, Xiangshui
2018-06-27
Owing to the capability of integrating information storage and computing in the same physical location, in-memory computing with memristors has become a research hotspot as a promising route to non-von Neumann architectures. However, it is still a challenge to develop high performance devices as well as optimized logic methodologies to realize energy-efficient computing. Herein, a filamentary Cu/GeTe/TiN memristor is reported to show satisfactory properties with nanosecond switching speed (< 60 ns), low voltage operation (< 2 V), high endurance (> 10^4 cycles) and good retention (> 10^4 s at 85 °C). It is revealed that the charge carrier conduction mechanisms in the high resistance and low resistance states are Schottky emission and hopping transport between adjacent Cu clusters, respectively, based on the analysis of current-voltage behaviors and resistance-temperature characteristics. An intuitive picture is given to describe the dynamic processes of resistive switching. Moreover, based on the basic material implication (IMP) logic circuit, we propose a reconfigurable logic method and experimentally implement IMP, NOT, OR, and COPY logic functions. The design of a one-bit full adder with a reduction in computational sequences, and its validation in simulation, further demonstrate the potential for practical application. The results provide important progress towards understanding the resistive switching mechanism and realizing an energy-efficient in-memory computing architecture. © 2018 IOP Publishing Ltd.
Effects of strategy-migration direction and noise in the evolutionary spatial prisoner's dilemma
NASA Astrophysics Data System (ADS)
Wu, Zhi-Xi; Holme, Petter
2009-08-01
Spatial games are crucial for understanding patterns of cooperation in nature (and to some extent society). They are known to be more sensitive to local symmetries than, e.g., spin models. This paper concerns the evolution of the prisoner's dilemma game on regular lattices with three different types of neighborhoods: the von Neumann, Moore, and kagomé types. We investigate two kinds of dynamics for the players to update their strategies (which can be unconditional cooperation or defection). Depending on the payoff difference, an individual can adopt the strategy of a random neighbor [a voter-model-like dynamics (VMLD)] or impose its strategy on a random neighbor, i.e., invasion-process-like dynamics (IPLD). In particular, we focus on the effects of noise, in combination with the strategy dynamics, on the evolution of cooperation. We find that VMLD, compared to IPLD, better supports the spreading and sustaining of cooperation. We see that noise has nontrivial effects on the evolution of cooperation: the maximum cooperation density can be realized at a medium noise level, in the limit of zero noise, or in both of these regions. The temptation to defect and the local interaction structure determine the outcome. Especially in the low noise limit, the local interaction plays a crucial role in determining the fate of cooperators. We elucidate both of these effects by numerical simulations and mean-field cluster approximation methods.
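For concreteness, payoff accumulation over the von Neumann (4-site) neighborhood on a periodic lattice can be sketched as follows, using the common weak-dilemma payoff simplification (C meets C gives 1, D exploiting C gives b, all else 0), which may differ from the paper's exact payoff matrix:

```python
import numpy as np

def payoffs(strategies, b):
    """Accumulated prisoner's-dilemma payoff on a periodic square lattice
    with the von Neumann (4-neighbor) neighborhood.
    strategies: array of 1 (cooperate) / 0 (defect); b: temptation payoff."""
    s = strategies.astype(float)
    total = np.zeros_like(s)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # the 4 neighbors
        n = np.roll(s, shift, axis=(0, 1))             # neighbor strategies
        total += s * n                                 # C vs C: payoff 1
        total += (1 - s) * n * b                       # D exploits C: payoff b
    return total
```

On an all-cooperator lattice every site earns 4; a lone defector earns 4b while its cooperating neighbors each lose one unit of payoff.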
Entanglement transitions induced by large deviations
NASA Astrophysics Data System (ADS)
Bhosale, Udaysinh T.
2017-12-01
The probability of large deviations of the smallest Schmidt eigenvalue for random pure states of bipartite systems, denoted as A and B , is computed analytically using a Coulomb gas method. It is shown that this probability, for large N , goes as exp[-β N2Φ (ζ ) ] , where the parameter β is the Dyson index of the ensemble, ζ is the large deviation parameter, while the rate function Φ (ζ ) is calculated exactly. Corresponding equilibrium Coulomb charge density is derived for its large deviations. Effects of the large deviations of the extreme (largest and smallest) Schmidt eigenvalues on the bipartite entanglement are studied using the von Neumann entropy. Effect of these deviations is also studied on the entanglement between subsystems 1 and 2, obtained by further partitioning the subsystem A , using the properties of the density matrix's partial transpose ρ12Γ. The density of states of ρ12Γ is found to be close to the Wigner's semicircle law with these large deviations. The entanglement properties are captured very well by a simple random matrix model for the partial transpose. The model predicts the entanglement transition across a critical large deviation parameter ζ . Log negativity is used to quantify the entanglement between subsystems 1 and 2. Analytical formulas for it are derived using the simple model. Numerical simulations are in excellent agreement with the analytical results.
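The von Neumann entropy of the Schmidt eigenvalues used above is straightforward to compute numerically; a minimal sketch for small bipartite pure states (the Gaussian random state below is an illustrative stand-in, not the ensemble analyzed in the paper) is:

```python
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """Von Neumann entropy S = -sum(l * log l) over the Schmidt eigenvalues l
    of a bipartite pure state psi (vector of length dim_a * dim_b)."""
    m = psi.reshape(dim_a, dim_b)
    lam = np.linalg.svd(m, compute_uv=False) ** 2   # Schmidt eigenvalues
    lam = lam[lam > 1e-15]                          # drop numerical zeros
    return float(-np.sum(lam * np.log(lam)))

rng = np.random.default_rng(1)
# Illustrative random pure state of two qubits (Gaussian vector, normalized).
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
```

A product state gives S = 0 and a maximally entangled two-qubit state gives S = log 2, which brackets the entropy of any two-qubit pure state.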
Memristive effects in oxygenated amorphous carbon nanodevices
NASA Astrophysics Data System (ADS)
Bachmann, T. A.; Koelmans, W. W.; Jonnalagadda, V. P.; Le Gallo, M.; Santini, C. A.; Sebastian, A.; Eleftheriou, E.; Craciun, M. F.; Wright, C. D.
2018-01-01
Computing with resistive-switching (memristive) memory devices has shown much recent progress and offers an attractive route to circumvent the von Neumann bottleneck, i.e. the separation of processing and memory, which limits the performance of conventional computer architectures. Due to their good scalability and nanosecond switching speeds, carbon-based resistive-switching memory devices could play an important role in this respect. However, devices based on elemental carbon, such as tetrahedral amorphous carbon or ta-C, typically suffer from a low cycling endurance. A material that has proven to be capable of combining the advantages of elemental carbon-based memories with simple fabrication methods and good endurance performance for binary memory applications is oxygenated amorphous carbon, or a-CO_x. Here, we examine the memristive capabilities of nanoscale a-CO_x devices, in particular their ability to provide the multilevel and accumulation properties that underpin computing-type applications. We show the successful operation of nanoscale a-CO_x memory cells for both the storage of multilevel states (here 3-level) and the provision of an arithmetic accumulator. We implement a base-16, or hexadecimal, accumulator and show how such a device can carry out hexadecimal arithmetic and simultaneously store the computed result in the self-same a-CO_x cell, all using fast (sub-10 ns) and low-energy (sub-pJ) input pulses.
NASA Astrophysics Data System (ADS)
Haron, Adib; Mahdzair, Fazren; Luqman, Anas; Osman, Nazmie; Junid, Syed Abdul Mutalib Al
2018-03-01
One of the most significant constraints of the von Neumann architecture is the limited bandwidth between memory and processor. The cost of moving data back and forth between memory and processor is considerably higher than that of the computation in the processor itself. This architecture significantly impacts Big Data and data-intensive applications, such as DNA analysis comparison, which spend most of their processing time moving data. Recently, the in-memory processing concept was proposed, based on the capability to perform logic operations on the physical memory structure using a crossbar topology and non-volatile resistive-switching memristor technology. This paper proposes a scheme to map a digital equality comparator circuit onto a memristive memory crossbar array. The 2-bit, 4-bit, 8-bit, 16-bit, 32-bit, and 64-bit equality comparator circuits are mapped onto the memristive memory crossbar array using material implication logic in both sequential and parallel methods. The simulation results show that, for the 64-bit word size, the parallel mapping exhibits 2.8× better performance in total execution time than sequential mapping, but has a trade-off in terms of energy consumption and area utilization. Meanwhile, the total crossbar area can be reduced by 1.2× for sequential mapping and 1.5× for parallel mapping, both by using the overlapping technique.
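Material implication (IMP) together with FALSE is functionally complete, which is what makes an IMP-only equality comparator possible; a Boolean sketch of one such decomposition (logic level only, ignoring the crossbar mapping and timing, and not necessarily the sequence the paper uses) is:

```python
def imp(p, q):
    """Material implication p -> q, the native memristive logic primitive."""
    return (not p) or q

def bit_not(p):
    return imp(p, False)                  # p -> FALSE  ==  NOT p

def bit_and(p, q):
    return bit_not(imp(p, bit_not(q)))    # NOT(p -> NOT q)  ==  p AND q

def equal_bit(a, b):
    """1-bit equality (XNOR) built only from IMP and FALSE."""
    return bit_and(imp(a, b), imp(b, a))

def equal_word(xs, ys):
    """n-bit equality comparator: AND of the per-bit XNORs."""
    acc = True
    for a, b in zip(xs, ys):
        acc = bit_and(acc, equal_bit(a, b))
    return acc
```

Each `bit_and`/`bit_not` costs a fixed number of IMP steps, so word-level equality grows linearly in the word size for a sequential schedule.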
A three-dimensional Dirichlet-to-Neumann operator for water waves over topography
NASA Astrophysics Data System (ADS)
Andrade, D.; Nachbin, A.
2018-06-01
Surface water waves are considered propagating over highly variable non-smooth topographies. For this three dimensional problem a Dirichlet-to-Neumann (DtN) operator is constructed reducing the numerical modeling and evolution to the two dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care and a Galerkin method is provided accordingly. One dimensional numerical simulations, along the free surface, validate the DtN formulation in the presence of a large amplitude, rapidly varying topography. An alternative, conformal mapping based, method is used for benchmarking. A two dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three dimensional DtN operator.
Phase retrieval in annulus sector domain by non-iterative methods
NASA Astrophysics Data System (ADS)
Wang, Xiao; Mao, Heng; Zhao, Da-zun
2008-03-01
Phase retrieval can be achieved by solving the intensity transport equation (ITE) under the paraxial approximation. For the case of uniform illumination, a Neumann boundary condition is involved, which makes the solving process more complicated. In large-aperture telescopes, the primary mirror is usually segmented, and each segment is often shaped like an annulus sector. Accordingly, it is necessary to analyze phase retrieval in the annulus sector domain. Two non-iterative methods are considered for recovering the phase. The matrix method is based on the decomposition of the solution into a series of orthogonalized polynomials, while the frequency filtering method depends on the inverse computation process of the ITE. Simulations show that both methods can eliminate the effect of the Neumann boundary condition, save a great deal of computation time, and recover the distorted phase well. The wavefront error (WFE) RMS can be less than 0.05 wavelength, even when some noise is added.
On the rate of convergence of the alternating projection method in finite dimensional spaces
NASA Astrophysics Data System (ADS)
Galántai, A.
2005-10-01
Using the results of Smith, Solmon, and Wagner [K. Smith, D. Solmon, S. Wagner, Practical and mathematical aspects of the problem of reconstructing objects from radiographs, Bull. Amer. Math. Soc. 83 (1977) 1227-1270] and Nelson and Neumann [S. Nelson, M. Neumann, Generalizations of the projection method with application to SOR theory for Hermitian positive semidefinite linear systems, Numer. Math. 51 (1987) 123-141], we derive new estimates for the speed of the alternating projection method and its relaxed version in finite dimensional spaces. These estimates can be computed in at most O(m^3) arithmetic operations, unlike the estimates in the papers mentioned above, which require spectral information. The new and old estimates are equivalent in many practical cases. In cases when the new estimates are weaker, numerical testing indicates that they approximate the original bounds quite well.
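The alternating projection method itself is simple to state; a minimal sketch for two subspaces of R^3 (the two coordinate planes below are toy subspaces chosen only so the limit is easy to check) is:

```python
import numpy as np

def project(A):
    """Orthogonal projector onto the column space of A (columns independent)."""
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

def alternating_projections(P1, P2, x, iters=200):
    """Von Neumann's alternating projection x_{k+1} = P1 P2 x_k, which
    converges to the projection of x onto the intersection of the subspaces."""
    for _ in range(iters):
        x = P1 @ (P2 @ x)
    return x

# Two planes in R^3 whose intersection is the z-axis.
P1 = project(np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]]))  # xz-plane
P2 = project(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))  # yz-plane
y = alternating_projections(P1, P2, np.array([1.0, 2.0, 3.0]))
```

The iterate converges to the orthogonal projection of the starting point onto the z-axis; the convergence rate bounds discussed in the abstract control how fast this happens in general.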
A three dimensional Dirichlet-to-Neumann map for surface waves over topography
NASA Astrophysics Data System (ADS)
Nachbin, Andre; Andrade, David
2016-11-01
We consider three dimensional surface water waves in the potential theory regime. The bottom topography can have a quite general profile. In the case of linear waves the Dirichlet-to-Neumann operator is formulated in a matrix decomposition form. Computational simulations illustrate the performance of the method. Two dimensional periodic bottom variations are considered in both the Bragg resonance regime as well as the rapidly varying (homogenized) regime. In the three-dimensional case we use the Luneburg lens-shaped submerged mound, which promotes the focusing of the underlying rays. FAPERJ Cientistas do Nosso Estado Grant 102917/2011 and ANP/PRH-32.
NASA Astrophysics Data System (ADS)
Ben Amara, Jamel; Bouzidi, Hedi
2018-01-01
In this paper, we consider a linear hybrid system composed of two non-homogeneous rods connected by a point mass, with a Dirichlet boundary condition on the left end while a boundary control acts on the right end. We prove that this system is null controllable with Dirichlet or Neumann boundary controls. Our approach is mainly based on a detailed spectral analysis together with the moment method. In particular, we show that the associated spectral gap in both cases (Dirichlet or Neumann boundary controls) is positive, without further conditions on the coefficients other than the regularities.
Free-Lagrange methods for compressible hydrodynamics in two space dimensions
NASA Astrophysics Data System (ADS)
Crowley, W. E.
1985-03-01
Since 1970 a research and development program in Free-Lagrange methods has been active at Livermore. The initial steps were taken with incompressible flows for simplicity. Since then the effort has been concentrated on compressible flows with shocks in two space dimensions and time. In general, the line integral method has been used to evaluate derivatives and the artificial viscosity method has been used to deal with shocks. Basically, two Free-Lagrange formulations for compressible flows in two space dimensions and time have been tested and both will be described. In method one, all prognostic quantities were node centered and staggered in time. The artificial viscosity was zone centered. One mesh reconnection philosophy was that the mesh should be optimized so that nearest neighbors were connected together. Another was that vertex angles should tend toward equality. In method one, all mesh elements were triangles. In method two, both quadrilateral and triangular mesh elements are permitted. The mesh variables are staggered in space and time as suggested originally by Richtmyer and von Neumann. The mesh reconnection strategy is entirely different in method two. In contrast to the global strategy of nearest neighbors, we now have a more local strategy that reconnects in order to keep the integration time step above a user-chosen threshold. An additional strategy reconnects in the vicinity of large relative fluid motions. Mesh reconnection consists of two parts: (1) the tools that permit nodes to be merged, quads to be split into triangles, etc.; and (2) the strategy that dictates how and when to use the tools. Both tools and strategies change with time in a continuing effort to expand the capabilities of the method. New ideas are continually being tried and evaluated.
On corrected formula for irradiated graphene quantum conductivity
NASA Astrophysics Data System (ADS)
Firsova, N. E.
2017-09-01
A graphene membrane irradiated by a weak activating periodic electric field in the terahertz range is considered. A corrected formula for the graphene quantum conductivity is found. The formula gives complex conjugate results when the radiation polarization direction is clockwise or counterclockwise. It shows that the graphene membrane behaves as an oscillating contour whose eigenfrequency coincides with a singularity point of the conductivity and depends on the electron concentration. The graphene membrane could therefore be used as an antenna or a transistor, and its eigenfrequency could be tuned by doping over a large terahertz-infrared frequency range. The obtained formula also allows us to calculate the quantum inductance and capacitance of the graphene membrane. The found dependence on electron concentration is consistent with experiments. The method of proof is based on the study of the time-dependent density matrix: the exact solution of the von Neumann equation for the density matrix is found for our case in linear approximation in the external field. On this basis the induced current is studied, and the formula for the quantum conductivity as a function of external field frequency and temperature is obtained. The method of proof suggested in this paper could be used to study other problems. The found formula for the quantum conductivity can be used to correct the SPP dispersion relation and to describe radiation processes. It would be useful to take the obtained results into account when constructing devices containing graphene membrane nanoantennas; such devices could make wireless communication among nanosystems possible, a promising research area for energy harvesting applications.
Integral approximations to classical diffusion and smoothed particle hydrodynamics
Du, Qiang; Lehoucq, R. B.; Tartakovsky, A. M.
2014-12-31
The contribution of the paper is the approximation of a classical diffusion operator by an integral equation with a volume constraint. A particular focus is on classical diffusion problems associated with Neumann boundary conditions. By exploiting this approximation, we can also approximate other quantities such as the flux out of a domain. Our analysis of the model equation on the continuum level is closely related to the recent work on nonlocal diffusion and peridynamic mechanics. In particular, we elucidate the role of a volumetric constraint as an approximation to a classical Neumann boundary condition in the presence of a physical boundary. The volume-constrained integral equation then provides the basis for accurate and robust discretization methods. As a result, an immediate application is to the understanding and improvement of the Smoothed Particle Hydrodynamics (SPH) method.
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2018-05-01
In recent years, proper orthogonal decomposition (POD) has become a popular model reduction method in the field of groundwater modeling. It is used to mitigate the problem of long run times that are often associated with physically-based modeling of natural systems, especially for parameter estimation and uncertainty analysis. POD-based techniques reproduce groundwater head fields sufficiently accurately for a variety of applications. However, no study has investigated how POD techniques affect the accuracy of different boundary conditions found in groundwater models. We show that the current treatment of boundary conditions in POD causes inaccuracies for these boundaries in the reduced models. We provide an improved method that splits the POD projection space into a subspace orthogonal to the boundary conditions and a separate subspace that enforces the boundary conditions. To test the method for Dirichlet, Neumann and Cauchy boundary conditions, four simple transient 1D-groundwater models, as well as a more complex 3D model, are set up and reduced both by standard POD and POD with the new extension. We show that, in contrast to standard POD, the new method satisfies both Dirichlet and Neumann boundary conditions. It can also be applied to Cauchy boundaries, where the flux error of standard POD is reduced by its head-independent contribution. The extension essentially shifts the focus of the projection towards the boundary conditions. Therefore, we see a slight trade-off between errors at model boundaries and overall accuracy of the reduced model. The proposed POD extension is recommended where exact treatment of boundary conditions is required.
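Standard snapshot POD, the baseline the extension above modifies, reduces to an SVD of a snapshot matrix; a minimal sketch with synthetic data (not the paper's groundwater models; the snapshot dimensions and noise level are arbitrary illustrative choices) is:

```python
import numpy as np

def pod_basis(snapshots, r):
    """Proper orthogonal decomposition basis from a snapshot matrix whose
    columns are states (e.g. head fields) at different times: the r leading
    left singular vectors. Generic snapshot POD, not the boundary-aware
    variant proposed in the paper."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

rng = np.random.default_rng(0)
# Synthetic snapshots lying (up to tiny noise) in a 2-dimensional subspace.
modes = rng.normal(size=(50, 2))
coeffs = rng.normal(size=(2, 30))
X = modes @ coeffs + 1e-6 * rng.normal(size=(50, 30))
Phi = pod_basis(X, 2)
X_red = Phi @ (Phi.T @ X)      # snapshots projected onto the reduced basis
```

Because the data are essentially rank-2, two POD modes reconstruct the snapshots almost exactly, which is the mechanism that makes the reduced model cheap to evolve.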
Optimal implicit 2-D finite differences to model wave propagation in poroelastic media
NASA Astrophysics Data System (ADS)
Itzá, Reymundo; Iturrarán-Viveros, Ursula; Parra, Jorge O.
2016-08-01
Numerical modeling of seismic waves in heterogeneous porous reservoir rocks is an important tool for the interpretation of seismic surveys in reservoir engineering. We apply globally optimal implicit staggered-grid finite differences (FD) to model 2-D wave propagation in heterogeneous poroelastic media in a low-frequency range (< 10 kHz). We validate the numerical solution by comparing it to an analytical-transient solution, obtaining clear seismic wavefields including fast P, slow P, and S waves (for a porous medium saturated with fluid). The numerical dispersion and stability conditions are derived using von Neumann analysis, showing that over a wide range of porous materials the Courant condition governs the stability and this optimal implicit scheme improves the stability of explicit schemes. High-order explicit FD can be replaced by lower order optimal implicit FD, so the computational cost will not be as expensive while maintaining the accuracy. Here, we compute weights for the optimal implicit FD scheme to attain an accuracy of γ = 10^-8. The implicit spatial differentiation involves solving tridiagonal linear systems of equations through Thomas' algorithm.
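Von Neumann stability analysis follows the same recipe on any linear scheme; a textbook sketch for explicit FTCS applied to the 1-D heat equation (a standard pedagogical example, not the paper's poroelastic staggered-grid scheme) is:

```python
import numpy as np

def amplification_factor(r, theta):
    """Von Neumann amplification factor g(theta) of the explicit FTCS scheme
    for u_t = u_xx with r = dt/dx^2: substituting u_j^n = g^n e^{i j theta}
    into the scheme gives g = 1 - 4 r sin^2(theta/2)."""
    return 1.0 - 4.0 * r * np.sin(theta / 2.0) ** 2

def is_stable(r, n=1000):
    """Stable iff |g(theta)| <= 1 for all Fourier modes, i.e. r <= 1/2."""
    theta = np.linspace(0.0, np.pi, n)
    return bool(np.all(np.abs(amplification_factor(r, theta)) <= 1.0 + 1e-12))
```

The same substitution applied to an implicit scheme typically yields |g| <= 1 for a wider range of r, which is the sense in which the implicit scheme in the paper improves on explicit stability limits.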
Effect of the magnetic dipole interaction on a spin-1 system
NASA Astrophysics Data System (ADS)
Hu, Fangqi; Jia, Wei; Zhao, Qing
2018-05-01
We consider a hybrid system composed of a spin-1 triplet coupled to a nuclear spin. We study the effect of the axisymmetric and the quadrupole term of the magnetic dipole interaction between the two electrons forming the triplet on the energy spectrum in a static magnetic field. The energy spectrum obtained by directly diagonalizing the Hamiltonian of the system shows that these two terms not only remove the special crossings that appear in the absence of the magnetic dipole interaction, but also produce new (avoided) crossings by lifting the relevant levels. In particular, the gaps between the avoided crossing levels increase with the strength of the quadrupole term. In order to accurately illustrate these effects, we present results for the discriminant and the von Neumann entropy of one electron interacting with the rest of the whole system. Finally, by numerically solving the time-dependent Schrödinger equations of the system, we find that the polarization oscillations of the electron and nuclear spins are in phase and the total average longitudinal spin is not conserved at the locations of the avoided crossings, whereas the opposite holds away from them.
Fusion of Positive Energy Representations of LSpin(2n)
NASA Astrophysics Data System (ADS)
Toledano-Laredo, V.
2004-09-01
Building upon the Jones-Wassermann program of studying Conformal Field Theory using operator algebraic tools, and the work of A. Wassermann on the loop group LSU(n) (Invent. Math. 133 (1998), 467-538), we give a solution to the problem of fusion for the loop group of Spin(2n). Our approach relies on the use of A. Connes' tensor product of bimodules over a von Neumann algebra to define a multiplicative operation (Connes fusion) on the (integrable) positive energy representations of a given level. The notion of bimodules arises by restricting these representations to loops with support contained in an interval I of the circle or its complement. We study the corresponding Grothendieck ring and show that fusion with the vector representation is given by the Verlinde rules. The computation rests on 1) the solution of a 6-parameter family of Knizhnik-Zamolodchikov equations and the determination of its monodromy, 2) the explicit construction of the primary fields of the theory, which allows one to prove that they define operator-valued distributions, and 3) the algebraic theory of superselection sectors developed by Doplicher-Haag-Roberts.
The equivalence of a human observer and an ideal observer in binary diagnostic tasks
NASA Astrophysics Data System (ADS)
He, Xin; Samuelson, Frank; Gallas, Brandon D.; Sahiner, Berkman; Myers, Kyle
2013-03-01
The Ideal Observer (IO) is "ideal" for given data populations. In the image perception process, as the raw images are degraded by factors such as display and eye optics, there is an equivalent IO (EIO). The EIO uses as its data the statistical information that exits the perceptual/cognitive degradations. We assume a human observer who has received sufficient training, e.g., a radiologist, and hypothesize that such a human observer can be modeled as an EIO. To measure the likelihood ratio (LR) distributions of an EIO, we formalize experimental design principles that encourage rationality based on von Neumann and Morgenstern's (vNM) axioms. We present examples to show that many observer study design refinements, although explicitly motivated by empirical principles, implicitly encourage rationality. Our hypothesis is supported by a recent review paper on ROC curve convexity by Pesce, Metz, and Berbaum. We also provide additional evidence based on a collection of observer studies in medical imaging. EIO theory shows that the "sub-optimal" performance of a human observer can be mathematically formalized in the form of an IO and measured through rationality encouragement.
NASA Astrophysics Data System (ADS)
Cosme, Jayson G.
2015-09-01
We numerically investigate the relaxation dynamics in an isolated quantum system of interacting bosons trapped in a double-well potential after an integrability breaking quench. Using the statistics of the spectrum, we identify the postquench Hamiltonian as nonchaotic and close to integrability over a wide range of interaction parameters. We demonstrate that the system exhibits thermalization in the context of the eigenstate thermalization hypothesis (ETH). We also explore the possibility of an initial state to delocalize with respect to the eigenstates of the postquench Hamiltonian even for energies away from the middle of the spectrum. We observe distinct regimes of the equilibration process depending on the initial energy. For low energies, the system rapidly relaxes in a single step to a thermal state. As the energy increases towards the middle of the spectrum, the relaxation dynamics exhibits prethermalization and the lifetime of the metastable states grows. Time evolution of the occupation numbers and the von Neumann entropy in the mode-partitioned system underpins the analyses of the relaxation dynamics.
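The von Neumann entropy referred to in this and several of the following abstracts is S(ρ) = −Tr(ρ ln ρ). A minimal numerical sketch (the function name and the two-level example are illustrative, not taken from the paper):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), computed from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                      # drop numerical zeros
    return -np.sum(w * np.log(w))

# Reduced density matrix of one part of a maximally entangled pair:
rho = np.eye(2) / 2
S = von_neumann_entropy(rho)
print(S)  # ln 2 ≈ 0.6931
```

A pure state gives S = 0, the maximally mixed d-dimensional state gives ln d; mode-partitioned entropies of the kind studied above are obtained by applying this to the reduced density matrix of the chosen modes.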
NASA Astrophysics Data System (ADS)
Dai, Yan-Wei; Hu, Bing-Quan; Zhao, Jian-Hui; Zhou, Huan-Qiang
2010-09-01
The ground-state fidelity per lattice site is computed for the quantum three-state Potts model in a transverse magnetic field on an infinite-size lattice in one spatial dimension in terms of the infinite matrix product state algorithm. It is found that, on the one hand, a pinch point is identified on the fidelity surface around the critical point, and on the other hand, the ground-state fidelity per lattice site exhibits bifurcations at pseudo critical points for different values of the truncation dimension, which in turn approach the critical point as the truncation dimension becomes large. This implies that the ground-state fidelity per lattice site enables us to capture spontaneous symmetry breaking when the control parameter crosses the critical value. In addition, a finite-entanglement scaling of the von Neumann entropy is performed with respect to the truncation dimension, resulting in a precise determination of the central charge at the critical point. Finally, we compute the transverse magnetization, from which the critical exponent β is extracted.
Differentiability of correlations in realistic quantum mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cabrera, Alejandro; Faria, Edson de; Pujals, Enrique
2015-09-15
We prove a version of Bell’s theorem in which the locality assumption is weakened. We start by assuming theoretical quantum mechanics and weak forms of relativistic causality and of realism (essentially the fact that observable values are well defined independently of whether or not they are measured). Under these hypotheses, we show that only one of the correlation functions that can be formulated in the framework of the usual Bell theorem is unknown. We prove that this unknown function must be differentiable at certain angular configuration points that include the origin. We also prove that, if this correlation is assumed to be twice differentiable at the origin, then we arrive at a version of Bell’s theorem. On the one hand, we are showing that any realistic theory of quantum mechanics which incorporates the kinematic aspects of relativity must lead to this type of rough correlation function that is once but not twice differentiable. On the other hand, this study brings us a single degree of differentiability away from a relativistic von Neumann no hidden variables theorem.
Spike-Timing Dependent Plasticity in Unipolar Silicon Oxide RRAM Devices
Zarudnyi, Konstantin; Mehonic, Adnan; Montesi, Luca; Buckwell, Mark; Hudziak, Stephen; Kenyon, Anthony J.
2018-01-01
Resistance switching, or Resistive RAM (RRAM) devices show considerable potential for application in hardware spiking neural networks (neuro-inspired computing) by mimicking some of the behavior of biological synapses, and hence enabling non-von Neumann computer architectures. Spike-timing dependent plasticity (STDP) is one such behavior, and one example of several classes of plasticity that are being examined with the aim of finding suitable algorithms for application in many computing tasks such as coincidence detection, classification and image recognition. In previous work we have demonstrated that the neuromorphic capabilities of silicon-rich silicon oxide (SiOx) resistance switching devices extend beyond plasticity to include thresholding, spiking, and integration. We previously demonstrated such behaviors in devices operated in the unipolar mode, opening up the question of whether we could add plasticity to the list of features exhibited by our devices. Here we demonstrate clear STDP in unipolar devices. Significantly, we show that the response of our devices is broadly similar to that of biological synapses. This work further reinforces the potential of simple two-terminal RRAM devices to mimic neuronal functionality in hardware spiking neural networks. PMID:29472837
Clausius inequality beyond the weak-coupling limit: the quantum Brownian oscillator.
Kim, Ilki; Mahler, Günter
2010-01-01
We consider a quantum linear oscillator coupled at an arbitrary strength to a bath at an arbitrary temperature. We find an exact closed expression for the oscillator density operator. This state is noncanonical but can be shown to be equivalent to that of an uncoupled linear oscillator at an effective temperature T*_eff with an effective mass and an effective spring constant. We derive an effective Clausius inequality δQ*_eff ≤ T*_eff dS, where δQ*_eff is the heat exchanged between the effective (weakly coupled) oscillator and the bath, and S represents a thermal entropy of the effective oscillator, being identical to the von Neumann entropy of the coupled oscillator. Using this inequality (for a cyclic process in terms of a variation of the coupling strength) we confirm the validity of the second law. For a fixed coupling strength this inequality can also be tested for a process in terms of a variation of either the oscillator mass or its spring constant. Then it is never violated. The properly defined Clausius inequality is thus more robust than assumed previously.
A living mesoscopic cellular automaton made of skin scales.
Manukyan, Liana; Montandon, Sophie A; Fofonjka, Anamarija; Smirnov, Stanislav; Milinkovitch, Michel C
2017-04-12
In vertebrates, skin colour patterns emerge from nonlinear dynamical microscopic systems of cell interactions. Here we show that in ocellated lizards a quasi-hexagonal lattice of skin scales, rather than individual chromatophore cells, establishes a green and black labyrinthine pattern of skin colour. We analysed time series of lizard scale colour dynamics over four years of their development and demonstrate that this pattern is produced by a cellular automaton (a grid of elements whose states are iterated according to a set of rules based on the states of neighbouring elements) that dynamically computes the colour states of individual mesoscopic skin scales to produce the corresponding macroscopic colour pattern. Using numerical simulations and mathematical derivation, we identify how a discrete von Neumann cellular automaton emerges from a continuous Turing reaction-diffusion system. Skin thickness variation generated by three-dimensional morphogenesis of skin scales causes the underlying reaction-diffusion dynamics to separate into microscopic and mesoscopic spatial scales, the latter generating a cellular automaton. Our study indicates that cellular automata are not merely abstract computational systems, but can directly correspond to processes generated by biological evolution.
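A von Neumann cellular automaton of the kind identified above updates each cell from its four nearest (edge-adjacent) neighbours. The toy two-colour majority rule below illustrates the mechanism only; it is not the rule inferred for the lizard scales:

```python
import numpy as np

def step(grid):
    """One synchronous update of a two-state CA on a periodic grid,
    using the von Neumann neighbourhood (up, down, left, right).
    Toy majority rule: flip to the dominant neighbour colour."""
    n = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
         np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    new = grid.copy()
    new[n >= 3] = 1       # mostly green neighbours -> green
    new[n <= 1] = 0       # mostly black neighbours -> black
    return new            # ties (n == 2) keep their state

rng = np.random.default_rng(0)
g = rng.integers(0, 2, size=(32, 32))
for _ in range(10):
    g = step(g)           # coarsens into labyrinthine-like domains
```

Uniform configurations are fixed points of this rule, and random initial conditions quickly organize into contiguous colour domains, which is the qualitative behaviour the paper observes at the mesoscopic scale of whole scales.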
From quantum coherence to quantum correlations
NASA Astrophysics Data System (ADS)
Sun, Yuan; Mao, Yuanyuan; Luo, Shunlong
2017-06-01
In quantum mechanics, quantum coherence of a state relative to a quantum measurement can be identified with the quantumness that has to be destroyed by the measurement. In particular, quantum coherence of a bipartite state relative to a local quantum measurement encodes quantum correlations in the state. If one takes minimization with respect to the local measurements, then one is led to quantifiers which capture quantum correlations from the perspective of coherence. In this vein, quantum discord, which quantifies the minimal correlations that have to be destroyed by quantum measurements, can be identified as the minimal coherence, with the coherence measured by the relative entropy of coherence. To advocate and formulate this idea in a general context, we first review coherence relative to Lüders measurements which extends the notion of coherence relative to von Neumann measurements (or equivalently, orthonormal bases), and highlight the observation that quantum discord arises as minimal coherence through two prototypical examples. Then, we introduce some novel measures of quantum correlations in terms of coherence, illustrate them through examples, investigate their fundamental properties and implications, and indicate their applications to quantum metrology.
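The relative entropy of coherence used above, for a fixed von Neumann (orthonormal-basis) measurement, is C(ρ) = S(Δ(ρ)) − S(ρ), where Δ dephases ρ in the measurement basis. A small sketch (function names are illustrative):

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in bits, S = -Tr(rho log2 rho)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log2(w))

def rel_entropy_coherence(rho):
    """C(rho) = S(diag(rho)) - S(rho), coherence relative to the
    computational basis (a von Neumann measurement)."""
    dephased = np.diag(np.diag(rho))      # kill off-diagonal terms
    return vn_entropy(dephased) - vn_entropy(rho)

# The pure state |+> carries one bit of coherence in the z basis:
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)
print(rel_entropy_coherence(rho))  # 1.0
```

Any state already diagonal in the measurement basis has zero coherence, matching the idea that coherence is exactly what the measurement destroys.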
Universality of quantum information in chaotic CFTs
NASA Astrophysics Data System (ADS)
Lashkari, Nima; Dymarsky, Anatoly; Liu, Hong
2018-03-01
We study the Eigenstate Thermalization Hypothesis (ETH) in chaotic conformal field theories (CFTs) of arbitrary dimensions. Assuming local ETH, we compute the reduced density matrix of a ball-shaped subsystem of finite size in the infinite volume limit when the full system is an energy eigenstate. This reduced density matrix is close in trace distance to a density matrix, to which we refer as the ETH density matrix, that is independent of all the details of an eigenstate except its energy and charges under global symmetries. In two dimensions, the ETH density matrix is universal for all theories with the same value of central charge. We argue that the ETH density matrix is close in trace distance to the reduced density matrix of the (micro)canonical ensemble. We support the argument in higher dimensions by comparing the von Neumann entropy of the ETH density matrix with the entropy of a black hole in holographic systems in the low temperature limit. Finally, we generalize our analysis to the coherent states with energy density that varies slowly in space, and show that locally such states are well described by the ETH density matrix.
Entropy, a Unifying Concept: from Physics to Cognitive Psychology
NASA Astrophysics Data System (ADS)
Tsallis, Constantino; Tsallis, Alexandra C.
Together with classical, relativistic and quantum mechanics, as well as Maxwell electromagnetism, Boltzmann-Gibbs (BG) statistical mechanics constitutes one of the main theories of contemporary physics. This theory primarily concerns inanimate matter, and at its generic foundation we find nonlinear dynamical systems satisfying the ergodic hypothesis. This hypothesis is typically guaranteed for systems whose maximal Lyapunov exponent is positive. What happens when this crucial quantity is zero instead? We suggest here that, in what concerns thermostatistical properties, we typically enter what in some sense may be considered as a new world — the world of living systems. The need emerges, at least for many systems, for generalizing the basis of BG statistical mechanics, namely the Boltzmann-Gibbs-von Neumann-Shannon entropic functional form, which connects the macroscopic thermodynamic quantity with the occurrence probabilities of microscopic configurations. This unifying approach is briefly reviewed here, and its widespread applications — from physics to cognitive psychology — are overviewed. Special attention is dedicated to the learning/memorizing process in humans and computers. The present observations might be related to the gestalt theory of visual perceptions and the actor-network theory.
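The Boltzmann-Gibbs-von Neumann-Shannon functional mentioned above is S = −k Σᵢ pᵢ ln pᵢ over the microscopic configuration probabilities. A minimal sketch (k set to 1 for simplicity):

```python
import math

def bgs_entropy(p, k=1.0):
    """Boltzmann-Gibbs-(von Neumann-)Shannon functional
    S = -k * sum_i p_i ln p_i for a probability distribution p."""
    assert abs(sum(p) - 1.0) < 1e-9       # p must be normalized
    return -k * sum(pi * math.log(pi) for pi in p if pi > 0)

# W equiprobable microstates recover Boltzmann's S = k ln W:
W = 8
print(bgs_entropy([1.0 / W] * W))  # ln 8 ≈ 2.0794
```

The uniform distribution maximizes this functional, which is the microcanonical limit that connects the macroscopic entropy to the count of microscopic configurations.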
Unresolved Problems by Shock Capturing: Taming the Overheating Problem
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing
2012-01-01
The overheating problem, first observed by von Neumann [1] and later studied extensively by Noh [2] using both Eulerian and Lagrangian formulations, remains one of the unsolved problems of shock capturing. It is historically well known to occur when a flow is under compression, such as when a shock wave hits and reflects from a wall or when two streams collide with each other. The overheating phenomenon is also found numerically in a smooth flow undergoing rarefaction created by two streams receding from each other. This is contrary to one's intuition, which expects a decrease in internal energy. The excessive temperature increase is not reduced by refining the mesh size or increasing the order of accuracy. This study finds that the overheating in the receding flow correlates with the entropy generation. By requiring entropy preservation, the overheating is eliminated and the solution is grid convergent. The shock-capturing scheme, as practiced today, gives rise to the entropy generation, which in turn causes the overheating. This assertion stands up to the convergence test.
Clark, Edward B; Hickinbotham, Simon J; Stepney, Susan
2017-05-01
We present a novel stringmol-based artificial chemistry system modelled on the universal constructor architecture (UCA) first explored by von Neumann. In a UCA, machines interact with an abstract description of themselves to replicate by copying the abstract description and constructing the machines that the abstract description encodes. DNA-based replication follows this architecture, with DNA being the abstract description, the polymerase being the copier, and the ribosome being the principal machine in expressing what is encoded on the DNA. This architecture is semantically closed as the machine that defines what the abstract description means is itself encoded on that abstract description. We present a series of experiments with the stringmol UCA that show the evolution of the meaning of genomic material, allowing the concept of semantic closure and transitions between semantically closed states to be elucidated in the light of concrete examples. We present results where, for the first time in an in silico system, simultaneous evolution of the genomic material, copier and constructor of a UCA gives rise to viable offspring. © 2017 The Author(s).
Two-cylinder entanglement entropy under a twist
NASA Astrophysics Data System (ADS)
Chen, Xiao; Witczak-Krempa, William; Faulkner, Thomas; Fradkin, Eduardo
2017-04-01
We study the von Neumann and Rényi entanglement entropy (EE) of the scale-invariant theories defined on the tori in 2 + 1 and 3 + 1 spacetime dimensions. We focus on the spatial bi-partitions of the torus into two cylinders, and allow for twisted boundary conditions along the non-contractible cycles. Various analytical and numerical results are obtained for the universal EE of the relativistic boson and Dirac fermion conformal field theories (CFTs), the fermionic quadratic band touching and the boson with z = 2 Lifshitz scaling. The shape dependence of the EE clearly distinguishes these theories, although intriguing similarities are found in certain limits. We also study the evolution of the EE when a mass is introduced to detune the system from its scale-invariant point, by employing a renormalized EE that goes beyond a naive subtraction of the area law. In certain cases we find the non-monotonic behavior of the torus EE under RG flow, which distinguishes it from the EE of a disk.
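The Rényi entropies studied above generalize the von Neumann entropy: S_α = ln Tr(ρ^α) / (1 − α), with S_α → S(ρ) as α → 1. A small numerical sketch of this family (the example state is illustrative):

```python
import numpy as np

def renyi_entropy(rho, alpha):
    """Rényi entropy S_alpha = ln Tr(rho^alpha) / (1 - alpha).
    The limit alpha -> 1 recovers the von Neumann entropy."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    if abs(alpha - 1.0) < 1e-9:
        return -np.sum(w * np.log(w))     # von Neumann limit
    return np.log(np.sum(w ** alpha)) / (1.0 - alpha)

rho = np.diag([0.7, 0.2, 0.1])            # a sample entanglement spectrum
print(renyi_entropy(rho, 2), renyi_entropy(rho, 1))
```

S_α is non-increasing in α, so the α = 2 entropy lower-bounds the von Neumann value; in entanglement studies ρ is the reduced density matrix of the subregion (here, a cylinder half of the torus).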
Thermodynamics and the structure of quantum theory
NASA Astrophysics Data System (ADS)
Krumm, Marius; Barnum, Howard; Barrett, Jonathan; Müller, Markus P.
2017-04-01
Despite its enormous empirical success, the formalism of quantum theory still raises fundamental questions: why is nature described in terms of complex Hilbert spaces, and what modifications of it could we reasonably expect to find in some regimes of physics? Here we address these questions by studying how compatibility with thermodynamics constrains the structure of quantum theory. We employ two postulates that any probabilistic theory with reasonable thermodynamic behaviour should arguably satisfy. In the framework of generalised probabilistic theories, we show that these postulates already imply important aspects of quantum theory, like self-duality and analogues of projective measurements, subspaces and eigenvalues. However, they may still admit a class of theories beyond quantum mechanics. Using a thought experiment by von Neumann, we show that these theories admit a consistent thermodynamic notion of entropy, and prove that the second law holds for projective measurements and mixing procedures. Furthermore, we study additional entropy-like quantities based on measurement probabilities and convex decomposition probabilities, and uncover a relation between one of these quantities and Sorkin’s notion of higher-order interference.
Linear maps preserving maximal deviation and the Jordan structure of quantum systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamhalter, Jan
2012-12-15
In the algebraic approach to quantum theory, a quantum observable is given by an element of a Jordan algebra and a state of the system is modelled by a normalized positive functional on the underlying algebra. Maximal deviation of a quantum observable is the largest statistical deviation one can obtain in a particular state of the system. The main result of the paper shows that each linear bijective transformation between JBW algebras preserving maximal deviations is formed by a Jordan isomorphism or a minus Jordan isomorphism perturbed by a linear functional multiple of an identity. It shows that only one numerical statistical characteristic has the power to determine the Jordan algebraic structure completely. As a consequence, we obtain that only very special maps can preserve the diameter of the spectra of elements. Nonlinear maps preserving the pseudometric given by maximal deviation are also described. The results generalize hitherto known theorems on preservers of maximal deviation in the case of self-adjoint parts of von Neumann algebras proved by Molnar.
Verification Test of the SURF and SURFplus Models in xRage: Part II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menikoff, Ralph
2016-06-20
The previous study used an underdriven detonation wave (steady ZND reaction zone profile followed by a scale invariant rarefaction wave) for PBX 9502 as a validation test of the implementation of the SURF and SURFplus models in the xRage code. Even with a fairly fine uniform mesh (12,800 cells for 100 mm) the detonation wave profile had limited resolution due to the thin reaction zone width (0.18 mm) for the fast SURF burn rate. Here we study the effect of finer resolution by comparing results of simulations with cell sizes of 8, 2 and 1 μm, which correspond to 25, 100 and 200 points within the reaction zone. With finer resolution the lead shock pressure is closer to the von Neumann spike pressure, and there is less noise in the rarefaction wave due to fluctuations within the reaction zone. As a result the average error decreases. The pointwise error is still dominated by the smearing of the pressure kink in the vicinity of the sonic point, which occurs at the end of the reaction zone.
Dynamical emergence of Markovianity in local time scheme.
Jeknić-Dugić, J; Arsenijević, M; Dugić, M
2016-06-01
Recently we pointed out the so-called local time scheme as a novel approach to quantum foundations that solves the preferred pointer-basis problem. In this paper, we introduce and analyse in depth a rather non-standard dynamical map that is imposed by the scheme. On the one hand, the map does not allow for introducing a properly defined generator of the evolution nor does it represent a quantum channel. On the other hand, the map is linear, positive, trace preserving and unital as well as completely positive, but is not divisible and therefore non-Markovian. Nevertheless, we provide quantitative criteria for dynamical emergence of time-coarse-grained Markovianity, for exact dynamics of an open system, as well as for operationally defined approximation of a closed or open many-particle system. A closed system never reaches a steady state, whereas an open system may reach a unique steady state given by the Lüders-von Neumann formula; the smaller the open system, the faster a steady state is attained. These generic findings extend the standard open quantum systems theory and substantially tackle certain cosmological issues.
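The Lüders-von Neumann formula referred to above gives the post-measurement (fully dephased) state ρ′ = Σₖ Pₖ ρ Pₖ for a projective measurement {Pₖ}. A minimal sketch for a qubit:

```python
import numpy as np

def luders_map(rho, projectors):
    """Lüders-von Neumann state change for a non-selective projective
    measurement: rho' = sum_k P_k rho P_k."""
    return sum(P @ rho @ P for P in projectors)

# Measuring a qubit in the z basis removes the off-diagonal coherences:
P0 = np.diag([1.0, 0.0])
P1 = np.diag([0.0, 1.0])
rho = np.array([[0.5, 0.5],
                [0.5, 0.5]])              # the pure state |+><+|
rho_out = luders_map(rho, [P0, P1])
print(rho_out)  # diag(0.5, 0.5)
```

The map is trace preserving and completely positive; its fixed points (block-diagonal states with respect to {Pₖ}) are the steady states in the sense used above.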
NASA Astrophysics Data System (ADS)
Faghihi, M. J.; Tavassoly, M. K.; Hatami, M.
In this paper, a model by which we study the interaction between a motional three-level atom and a two-mode field injected simultaneously into a bichromatic cavity is considered; the three-level atom is assumed to be in a Λ-type configuration. As a result, the atom-field and the field-field interactions (parametric down-conversion) appear. It is shown that, by applying a canonical transformation, the introduced model can be reduced to a well-known form of the generalized Jaynes-Cummings model. Under particular initial conditions, which may be prepared for the atom and the field, the time evolution of the state vector of the entire system is analytically evaluated. Then, the dynamics of the atom, in terms of the ‘atomic population inversion’ and two different measures of entanglement, i.e., the ‘von Neumann entropy’ and the ‘idempotency defect’, is discussed in detail. It is deduced from the numerical results that the duration and the maximum amount of the considered physical quantities can be suitably tuned by selecting the proper field-mode structure parameter p and the detuning parameters.
Entanglement Criteria of Two Two-Level Atoms Interacting with Two Coupled Modes
NASA Astrophysics Data System (ADS)
Baghshahi, Hamid Reza; Tavassoly, Mohammad Kazem; Faghihi, Mohammad Javad
2015-08-01
In this paper, we study the interaction between two two-level atoms and two coupled modes of a quantized radiation field in the form of a parametric frequency converter injected within an optical cavity enclosed by a medium with Kerr nonlinearity. It is demonstrated that, by applying the Bogoliubov-Valatin canonical transformation, the introduced model is reduced to a well-known form of the generalized Jaynes-Cummings model. Then, under particular initial conditions for the atoms (in a coherent superposition of their ground and upper states) and the fields (in a standard coherent state) which may be prepared, the time evolution of the state vector of the entire system is analytically evaluated. In order to understand the degree of entanglement between subsystems (atom-field and atom-atom), the dynamics of entanglement through different measures, namely von Neumann reduced entropy, concurrence and negativity, is evaluated. In each case, the effects of Kerr nonlinearity and the detuning parameter on the above measures are numerically analyzed in detail. It is illustrated that the amount of entanglement can be tuned by choosing the involved parameters appropriately.
Quasi-exact solvability and entropies of the one-dimensional regularised Calogero model
NASA Astrophysics Data System (ADS)
Pont, Federico M.; Osenda, Omar; Serra, Pablo
2018-05-01
The Calogero model can be regularised through the introduction of a cutoff parameter which removes the divergence in the interaction term. In this work we show that the one-dimensional two-particle regularised Calogero model is quasi-exactly solvable and that for certain values of the Hamiltonian parameters the eigenfunctions can be written in terms of Heun’s confluent polynomials. These eigenfunctions are such that the reduced density matrix of the two-particle density operator can be obtained exactly as well as its entanglement spectrum. We found that the number of non-zero eigenvalues of the reduced density matrix is finite in these cases. The limits for the cutoff distance going to zero (Calogero) and infinity are analysed and all the previously obtained results for the Calogero model are reproduced. Once the exact eigenfunctions are obtained, the exact von Neumann and Rényi entanglement entropies are studied to characterise the physical traits of the model. The quasi-exactly solvable character of the model is assessed studying the numerically calculated Rényi entropy and entanglement spectrum for the whole parameter space.
Exobiology, SETI, von Neumann and geometric phase control.
Hansson, P A
1995-11-01
The central difficulties confronting us at present in exobiology are the problems of the physical forces which sustain three-dimensional organisms, i.e., how one-dimensional systems with only nearest-neighbour interactions and two-dimensional ones with their regular vibrations result in an integrated three-dimensional functionality. For example, a human lung has a dimensionality of 2.9 and thus should be measured in m^2.9. According to thermodynamics, the first life-like system should have a small number of degrees of freedom, so how can evolution, via cycles of matter, lead to intelligence and theoretical knowledge? Or, more generally, what mechanisms constrain and drive this evolution? We are now on the brink of reaching an understanding below the photon level, into the domain where quantum events implode to the geometric phase which maintains the history of a quantum object. Even if this would exclude point to point communication, it could make it possible to manipulate the molecular level from below, in the physical scale, and result in a new era of geometricised engineering. As such, it would have a significant impact on space exploration and exobiology.
Strong majorization entropic uncertainty relations
NASA Astrophysics Data System (ADS)
Rudnicki, Łukasz; Puchała, Zbigniew; Życzkowski, Karol
2014-05-01
We analyze entropic uncertainty relations in a finite-dimensional Hilbert space and derive several strong bounds for the sum of two entropies obtained in projective measurements with respect to any two orthogonal bases. We improve the recent bounds by Coles and Piani [P. Coles and M. Piani, Phys. Rev. A 89, 022112 (2014), 10.1103/PhysRevA.89.022112], which are known to be stronger than the well-known result of Maassen and Uffink [H. Maassen and J. B. M. Uffink, Phys. Rev. Lett. 60, 1103 (1988), 10.1103/PhysRevLett.60.1103]. Furthermore, we find a bound based on majorization techniques, which also happens to be stronger than the recent results involving the largest singular values of submatrices of the unitary matrix connecting both bases. The first set of bounds gives better results for unitary matrices close to the Fourier matrix, while the second one provides a significant improvement in the opposite sectors. Some results derived admit generalization to arbitrary mixed states, so that corresponding bounds are increased by the von Neumann entropy of the measured state. The majorization approach is finally extended to the case of several measurements.
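The Maassen-Uffink bound cited above states H(p) + H(q) ≥ −2 log₂ c, where c is the largest magnitude of an overlap between the two orthonormal bases (an entry of the connecting unitary). A small numerical check for a qubit measured in the z and x bases (the function names are illustrative):

```python
import numpy as np

def shannon(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

def mu_bound(U):
    """Maassen-Uffink lower bound -2 log2 c, where c = max_ij |U_ij|
    for the unitary connecting the two measurement bases."""
    return -2.0 * np.log2(np.max(np.abs(U)))

# z and x measurements on a qubit are connected by the Hadamard matrix:
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
psi = np.array([1.0, 0.0])                # a z-basis eigenstate
pz = np.abs(psi) ** 2                     # outcome probabilities, z basis
px = np.abs(H @ psi) ** 2                 # outcome probabilities, x basis
print(shannon(pz) + shannon(px), mu_bound(H))
```

For these mutually unbiased bases c = 1/√2, so the bound equals 1 bit and is saturated by basis eigenstates; the paper's stronger majorization bounds improve on this quantity for less extreme unitaries.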
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mejri, Youssef
In this article, we study the boundary inverse problem of determining the aligned magnetic field appearing in the magnetic Schrödinger equation in a periodic quantum cylindrical waveguide, by knowledge of the Dirichlet-to-Neumann map. We prove a Hölder stability estimate with respect to the Dirichlet-to-Neumann map, by means of the geometrical optics solutions of the magnetic Schrödinger equation.
Tweaking one-loop determinants in AdS3
NASA Astrophysics Data System (ADS)
Castro, Alejandra; Keeler, Cynthia; Szepietowski, Phillip
2017-10-01
We revisit the subject of one-loop determinants in AdS3 gravity via the quasi-normal mode method. Our goal is to evaluate a one-loop determinant with chiral boundary conditions for the metric field; chirality is achieved by imposing Dirichlet boundary conditions on certain components while others satisfy Neumann. Along the way, we give a generalization of the quasinormal mode method for stationary (non-static) thermal backgrounds, and propose a treatment for Neumann boundary conditions in this framework. We evaluate the graviton one-loop determinant on the Euclidean BTZ background with parity-violating boundary conditions (CSS), and find excellent agreement with the dual warped CFT. We also discuss a more general falloff in AdS3 that is related to two dimensional quantum gravity in lightcone gauge. The behavior of the ghost fields under both sets of boundary conditions is novel and we discuss potential interpretations.
NASA Technical Reports Server (NTRS)
Bristow, D. R.; Grose, G. G.
1978-01-01
The Douglas Neumann method for low-speed potential flow on arbitrary three-dimensional lifting bodies was modified by substituting the combined source and doublet surface paneling based on Green's identity for the original source panels. Numerical studies show improved accuracy and stability for thin lifting surfaces, permitting reduced panel number for high-lift devices and supercritical airfoil sections. The accuracy of flow in concave corners is improved. A method of airfoil section design for a given pressure distribution, based on Green's identity, was demonstrated. The program uses panels on the body surface with constant source strength and parabolic distribution of doublet strength, and a doublet sheet on the wake. The program is written for the CDC CYBER 175 computer. Results of calculations are presented for isolated bodies, wings, wing-body combinations, and internal flow.
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Agueros, M. A.; Fournier, A.; Street, R.; Ofek, E.; Levitan, D. B.; PTF Collaboration
2013-01-01
Many current photometric, time-domain surveys are driven by specific goals such as searches for supernovae or transiting exoplanets, or studies of stellar variability. These goals in turn set the cadence with which individual fields are re-imaged. In the case of the Palomar Transient Factory (PTF), several such sub-surveys are being conducted in parallel, leading to extremely non-uniform sampling over the survey's nearly 20,000 sq. deg. footprint. While the typical 7.26 sq. deg. PTF field has been imaged 20 times in R-band, ~2300 sq. deg. have been observed more than 100 times. We use the existing PTF data (6.4×10^7 light curves) to study the trade-off that occurs when searching for microlensing events when one has access to a large survey footprint with irregular sampling. To examine the probability that microlensing events can be recovered in these data, we also test previous statistics used on uniformly sampled data to identify variables and transients. We find that one such statistic, the von Neumann ratio, performs best for identifying simulated microlensing events. We develop a selection method using this statistic and apply it to data from all PTF fields with >100 observations to uncover a number of interesting candidate events. This work can help constrain all-sky event rate predictions and tests microlensing signal recovery in large datasets, both of which will be useful to future wide-field, time-domain surveys such as the LSST.
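The von Neumann ratio used above is η = Σᵢ(xᵢ₊₁ − xᵢ)² / Σᵢ(xᵢ − x̄)²: near 2 for uncorrelated noise, and small when a smooth trend (such as a microlensing bump) inflates the variance without inflating successive differences. A minimal sketch on synthetic light curves (the simulated bump is illustrative, not the PTF pipeline):

```python
import numpy as np

def von_neumann_ratio(x):
    """Von Neumann's eta: mean squared successive difference
    divided by the variance of the series."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.diff(x) ** 2) / np.sum((x - x.mean()) ** 2)

rng = np.random.default_rng(1)
noise = rng.normal(size=2000)                       # pure noise: eta ~ 2
t = np.arange(2000)
bump = noise + 8.0 * np.exp(-0.5 * ((t - 1000) / 150.0) ** 2)
print(von_neumann_ratio(noise), von_neumann_ratio(bump))
```

A simple selection method of the kind described above flags light curves whose η falls well below the noise expectation of ~2.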
Calvani, Dario; Cuccoli, Alessandro; Gidopoulos, Nikitas I; Verrucchi, Paola
2013-04-23
The behavior of most physical systems is affected by their natural surroundings. A quantum system with an environment is referred to as open, and its study varies according to the classical or quantum description adopted for the environment. We propose an approach to open quantum systems that allows us to follow the cross-over from quantum to classical environments; to achieve this, we devise an exact parametric representation of the principal system, based on generalized coherent states for the environment. The method is applied to the s = 1/2 Heisenberg star with frustration, where the quantum character of the environment varies with the couplings entering the Hamiltonian H. We find that when the star is in an eigenstate of H, the central spin behaves as if it were in an effective magnetic field, pointing in the direction set by the environmental coherent-state angle variables (θ, ϕ), and broadened according to their quantum probability distribution. This distribution is independent of ϕ, whereas, as a function of θ, it narrows as the quantum character of the environment is reduced, collapsing into a Dirac-δ function in the classical limit. In that limit, because ϕ is left undetermined, the von Neumann entropy of the central spin remains finite; in fact, it is equal to the entanglement of the original fully quantum model, a result that establishes a relation between this latter quantity and the Berry phase characterizing the dynamics of the central spin in the effective magnetic field.
Computation of nonlinear ultrasound fields using a linearized contrast source method.
Verweij, Martin D; Demi, Libertario; van Dongen, Koen W A
2013-08-01
Nonlinear ultrasound is important in medical diagnostics because imaging of the higher harmonics improves resolution and reduces scattering artifacts. Second harmonic imaging is currently standard, and higher harmonic imaging is under investigation. The efficient development of novel imaging modalities and equipment requires accurate simulations of nonlinear wave fields in large volumes of realistic (lossy, inhomogeneous) media. The Iterative Nonlinear Contrast Source (INCS) method has been developed to deal with spatiotemporal domains measuring hundreds of wavelengths and periods. This full wave method considers the nonlinear term of the Westervelt equation as a nonlinear contrast source, and solves the equivalent integral equation via the Neumann iterative solution. Recently, the method has been extended with a contrast source that accounts for spatially varying attenuation. The current paper addresses the problem that the Neumann iterative solution converges badly for strong contrast sources. The remedy is linearization of the nonlinear contrast source, combined with application of more advanced methods for solving the resulting integral equation. Numerical results show that linearization in combination with a Bi-Conjugate Gradient Stabilized method allows the INCS method to deal with fairly strong, inhomogeneous attenuation, while the error due to the linearization can be eliminated by restarting the iterative scheme.
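The Neumann iterative solution referred to above solves an integral equation of the form u = f + Ku by repeated substitution, u_{n+1} = f + K u_n, which converges only when the contrast operator K is a contraction — exactly why strong contrast sources break the scheme and a Krylov method such as Bi-CGSTAB is needed. A toy matrix analogue (not the INCS implementation):

```python
import numpy as np

# Neumann iteration for u = f + K u, i.e. (I - K) u = f. Convergence requires
# the spectral radius of K to be < 1 (the "weak contrast" regime); for strong
# contrasts one switches to a Krylov solver such as Bi-CGSTAB.
rng = np.random.default_rng(1)
K = 0.3 * rng.normal(size=(50, 50)) / np.sqrt(50)   # weak "contrast" operator
f = rng.normal(size=50)

u = np.zeros(50)
for _ in range(200):
    u = f + K @ u                                   # Born/Neumann-type iteration

u_direct = np.linalg.solve(np.eye(50) - K, f)       # reference direct solve
err = np.linalg.norm(u - u_direct)                  # essentially zero
```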
Standing in the gap: reflections on translating the Jung-Neumann correspondence.
McCartney, Heather
2016-04-01
This paper considers the experience of translating the correspondence between C.G. Jung and Erich Neumann as part of the Philemon series. The translator explores the similarities between analytical work and the task of translation by means of the concepts of the dialectical third and the interactional field. The history and politics of the translation of analytic writing and their consequences for the lingua franca of analysis are discussed. Key themes within the correspondence are outlined, including Jung and Neumann's pre-war exploration of Judaism and the unconscious, the post-war difficulties around the publication of Neumann's Depth Psychology and a New Ethic set against the early years of the C.G. Jung Institute in Zurich, and the development of the correspondents' relationship over time. © 2016, The Society of Analytical Psychology.
The challenges of editorship: a reflection on editing the Jung-Neumann correspondence.
Liebscher, Martin
2016-04-01
The complete correspondence between C.G. Jung and Erich Neumann was published in 2015. This article attempts to provide insight into the practical task, as well as the theoretical background, of the editing process. The advantages and possibilities of an unabridged edition with an extensive historical contextualization are demonstrated, and compared to the approach of the editors of the Jung Letters and their selection therein of Jung's letters to Neumann. The practical points under consideration include the establishment of the letter corpus, the ascertainment of dates and the chronological arrangement of the letter exchange, as well as the deciphering of handwritten letters. Theoretical aspects under discussion involve the question of the merits of a critical contextualisation and the position of the editor vis-à-vis the research object. The example of the selecting and editing of Jung's letters to Neumann by Aniela Jaffé and Gerhard Adler reveals how drastically the close ties of those editors with Jung, Neumann, and members of the Zurich analytical circles compromised their editorial work at times. The advantage for an editor being able to work from an historical distance is appreciated. © 2016, The Society of Analytical Psychology.
Carl Neumann versus Rudolf Clausius on the propagation of electrodynamic potentials
NASA Astrophysics Data System (ADS)
Archibald, Thomas
1986-09-01
In the late 1860's, German electromagnetic theorists employing W. Weber's velocity-dependent force law were forced to confront the issue of energy conservation. One attempt to formulate a conservation law for such forces was due to Carl Neumann, who introduced a model employing retarded potentials in 1868. Rudolf Clausius quickly pointed out certain problems with the physical interpretation of Neumann's mathematical formalism. The debate between the two men continued until the 1880's and illustrates the strictures facing mathematical approaches to physical problems during this prerelativistic, pre-Maxwellian period.
Non-parametric characterization of long-term rainfall time series
NASA Astrophysics Data System (ADS)
Tiwari, Harinarayan; Pandey, Brij Kishor
2018-03-01
The statistical study of rainfall time series is one of the approaches for efficient hydrological system design. Identifying and characterizing long-term rainfall time series could aid in improving hydrological system forecasting. In the present study, eventual statistics were applied to the long-term (1851-2006) rainfall time series of seven meteorological regions of India. Linear trend analysis was carried out using the Mann-Kendall test for the observed rainfall series. The trend observed using the above-mentioned approach was then ascertained using the innovative trend analysis method. Innovative trend analysis has been found to be a strong tool to detect the general trend of rainfall time series. The sequential Mann-Kendall test has also been carried out to examine nonlinear trends in the series. The partial sum of cumulative deviation test is also found to be suitable for detecting nonlinear trends. Innovative trend analysis, the sequential Mann-Kendall test and the partial cumulative deviation test have the potential to detect general as well as nonlinear trends in rainfall time series. Annual rainfall analysis suggests that the maximum rise in mean rainfall is 11.53% for West Peninsular India, whereas the maximum fall in mean rainfall is 7.8% for the North Mountainous Indian region. The innovative trend analysis method is also capable of finding the number of change points present in the time series. Additionally, we have performed the von Neumann ratio test and the cumulative deviation test to estimate the departure from homogeneity. Singular spectrum analysis has been applied in this study to evaluate the order of departure from homogeneity in the rainfall time series. The monsoon season (JS) of the North Mountainous India and West Peninsular India zones has a higher departure from homogeneity, and singular spectrum analysis shows results in coherence with the same.
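The Mann-Kendall test used in this study is non-parametric: it counts the sign of every pairwise difference x_j - x_i (i < j) and standardizes the sum S into a Z statistic. A minimal sketch, omitting the tie correction of the full test:

```python
import numpy as np

def mann_kendall_z(x):
    """Standardized Mann-Kendall trend statistic Z (no tie correction).

    |Z| > 1.96 rejects "no monotonic trend" at the 5% level. A minimal
    illustrative version, not a full implementation of the test.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # Var(S) without ties
    if s > 0:
        return (s - 1) / np.sqrt(var_s)        # continuity correction
    if s < 0:
        return (s + 1) / np.sqrt(var_s)
    return 0.0

rng = np.random.default_rng(2)
z_trend = mann_kendall_z(np.arange(100) + 5 * rng.normal(size=100))  # rising series
z_flat = mann_kendall_z(rng.normal(size=100))                        # no trend
```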
NASA Astrophysics Data System (ADS)
Tokman, M. D.
2009-05-01
We discuss specific features of the electrodynamic characteristics of quantum systems within the framework of models that include a phenomenological description of the relaxation processes. As is shown by W. E. Lamb, Jr., R. R. Schlicher, and M. O. Scully [Phys. Rev. A 36, 2763 (1987)], the use of phenomenological relaxation operators, which adequately describe the attenuation of eigenvibrations of a quantum system, may lead to incorrect solutions in the presence of external electromagnetic fields determined by the vector potential for different resonance processes. This incorrectness can be eliminated by giving a gauge-invariant form to the relaxation operator. Lamb, Jr., proposed the corresponding gauge-invariant modification for the Weisskopf-Wigner relaxation operator, which is introduced directly into the Schrödinger equation within the framework of the two-level approximation. In the present paper, this problem is studied for the von Neumann equation supplemented by a relaxation operator. First, we show that the solution of the equation for the density matrix with the relaxation operator correctly obtained “from the first principles” has properties that ensure gauge invariance for the observables. Second, we propose a common recipe for transformation of the phenomenological relaxation operator into the correct (gauge-invariant) form in the density-matrix equations for a multilevel system. Also, we discuss the methods of elimination of other inaccuracies (not related to the gauge-invariance problem) which arise if the electrodynamic response of a dissipative quantum system is calculated within the framework of simplified relaxation models (first of all, the model corresponding to constant relaxation rates of coherences in quantum transitions). Examples illustrating the correctness of the results obtained within the framework of the proposed methods in contrast to inaccuracy of the results of the standard calculation techniques are given.
NASA Astrophysics Data System (ADS)
Reimer, Ashton S.; Cheviakov, Alexei F.
2013-03-01
A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers. Catalogue identifier: AENQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3.0 No. of lines in distributed program, including test data, etc.: 102793 No. of bytes in distributed program, including test data, etc.: 369378 Distribution format: tar.gz Programming language: Matlab 2010a. Computer: PC, Macintosh. Operating system: Windows, OSX, Linux. RAM: 8 GB (8,589,934,592 bytes) Classification: 4.3. Nature of problem: To solve the Poisson problem in a standard domain with “patchy surface”-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions. Solution method: Finite difference with mesh refinement. Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D. Unusual features: Choice between mldivide/iterative solver for the solution of the large systems of linear algebraic equations that arise. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement. Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.
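The mixed Dirichlet/Neumann setup described in this summary can be illustrated in one dimension: discretize u'' = f with central differences, pin the value at one end, and impose the derivative at the other with a one-sided stencil. A sketch in Python rather than the published Matlab code, using a problem with known exact solution u = x²:

```python
import numpy as np

# 1D model problem: u''(x) = 2 on [0,1], Dirichlet u(0) = 0, Neumann u'(1) = 2.
# Exact solution: u(x) = x^2.
n = 200
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

A = np.zeros((n + 1, n + 1))
b = np.full(n + 1, 2.0)          # right-hand side f(x) = 2

A[0, 0] = 1.0
b[0] = 0.0                       # Dirichlet: u(0) = 0
for i in range(1, n):            # interior: (u[i-1] - 2u[i] + u[i+1]) / h^2 = f
    A[i, i - 1] = 1.0 / h**2
    A[i, i] = -2.0 / h**2
    A[i, i + 1] = 1.0 / h**2
A[n, n - 1] = -1.0 / h           # Neumann via one-sided difference:
A[n, n] = 1.0 / h                # (u[n] - u[n-1]) / h = u'(1)
b[n] = 2.0

u = np.linalg.solve(A, b)
max_err = np.max(np.abs(u - x**2))   # O(h) due to the one-sided Neumann stencil
```

The first-order one-sided Neumann stencil limits the global accuracy to O(h); the published solver's mesh refinement and ghost-point treatments address exactly this kind of trade-off.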
A numerical technique for linear elliptic partial differential equations in polygonal domains.
Hashemzadeh, P; Fokas, A S; Smitheman, S A
2015-03-08
Integral representations for the solution of linear elliptic partial differential equations (PDEs) can be obtained using Green's theorem. However, these representations involve both the Dirichlet and the Neumann values on the boundary, and for a well-posed boundary-value problem (BVP) one of these functions is unknown. A new transform method for solving BVPs for linear and integrable nonlinear PDEs, usually referred to as the unified transform (or the Fokas transform), was introduced by the second author in the late nineties. For linear elliptic PDEs, this method can be considered as the analogue of the Green's function approach, but now it is formulated in the complex Fourier plane instead of the physical plane. It employs two global relations, also formulated in the Fourier plane, which couple the Dirichlet and the Neumann boundary values. These relations can be used to characterize the unknown boundary values in terms of the given boundary data, yielding an elegant approach for determining the Dirichlet-to-Neumann map. The numerical implementation of the unified transform can be considered as the counterpart in the Fourier plane of the well-known boundary integral method, which is formulated in the physical plane. For this implementation, one must choose (i) a suitable basis for expanding the unknown functions and (ii) an appropriate set of complex values, which we refer to as collocation points, at which to evaluate the global relations. Here, by employing a variety of examples, we present simple guidelines for how the above choices can be made. Furthermore, we provide concrete rules for choosing the collocation points so that the condition number of the matrix of the associated linear system remains low.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Jie, E-mail: yjie2@uh.edu; Lesage, Anne-Cécile; Hussain, Fazle
2014-12-15
The reversion of the Born-Neumann series of the Lippmann-Schwinger equation is one of the standard ways to solve the inverse acoustic scattering problem. One limitation of the current inversion methods based on the reversion of the Born-Neumann series is that the velocity potential should have compact support. However, this assumption cannot be satisfied in certain cases, especially in seismic inversion. Based on the idea of distorted-wave scattering, we explore an inverse scattering method for velocity potentials without compact support. The strategy is to decompose the actual medium into a known single-interface reference medium, which has the same asymptotic form as the actual medium, and a perturbative scattering potential with compact support. After introducing the method to calculate the Green's function for the known reference potential, the inverse scattering series and the Volterra inverse scattering series are derived for the perturbative potential. Analytical and numerical examples demonstrate the feasibility and effectiveness of this method. In addition, to ensure stability of the numerical computation, the Lanczos averaging method is employed as a filter to reduce the Gibbs oscillations for the truncated discrete inverse Fourier transform of each order. Our method provides a rigorous mathematical framework for inverse acoustic scattering with a velocity potential that lacks compact support.
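The Lanczos averaging mentioned at the end multiplies the k-th Fourier coefficient of an order-N truncation by the sigma factor sin(πk/N)/(πk/N), which strongly damps the Gibbs overshoot. A toy demonstration on a square wave (not the paper's seismic setting):

```python
import numpy as np

# Truncated Fourier series of a square wave, raw vs. Lanczos sigma-averaged.
x = np.linspace(-np.pi, np.pi, 2001)
N = 30                                           # truncation order

raw = np.zeros_like(x)
smoothed = np.zeros_like(x)
for k in range(1, N + 1, 2):                     # odd harmonics of the square wave
    term = (4 / np.pi) * np.sin(k * x) / k
    raw += term
    smoothed += np.sinc(k / N) * term            # sigma factor sin(pi k/N)/(pi k/N)

overshoot_raw = np.max(raw) - 1.0                # Gibbs overshoot, roughly 0.18
overshoot_smooth = np.max(smoothed) - 1.0        # strongly reduced by the filter
```

Note that `np.sinc(z)` is the normalized sinc sin(πz)/(πz), so `np.sinc(k / N)` is exactly the Lanczos sigma factor.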
Milde, Moritz B.; Blum, Hermann; Dietmüller, Alexander; Sumislawska, Dora; Conradt, Jörg; Indiveri, Giacomo; Sandamirskaya, Yulia
2017-01-01
Neuromorphic hardware emulates the dynamics of biological neural networks in electronic circuits, offering a low-power, inherently parallel, and event-driven alternative to the von Neumann computing architecture. This hardware makes it possible to implement neural-network-based robotic controllers in an energy-efficient way with low latency, but it requires solving the problem of device variability characteristic of analog electronic circuits. In this work, we interfaced a mixed-signal analog-digital neuromorphic processor, ROLLS, to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent that is able to perform neurally inspired obstacle avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, a moving target, clutter, and poor lighting conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using simple biologically inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates a working implementation of obstacle avoidance and target acquisition using mixed-signal analog/digital neuromorphic hardware. PMID:28747883
Random density matrices versus random evolution of open system
NASA Astrophysics Data System (ADS)
Pineda, Carlos; Seligman, Thomas H.
2015-10-01
We present and compare two families of ensembles of random density matrices. The first, the static ensemble, is obtained by foliating an unbiased ensemble of density matrices. As the criterion we use fixed purity, the simplest example of a useful convex function. The second, the dynamic ensemble, is inspired by random matrix models for decoherence, where one evolves a separable pure state with a random Hamiltonian until a given value of purity in the central system is achieved. Several families of Hamiltonians, adequate for different physical situations, are studied. We focus on a two-qubit central system and obtain exact expressions for the static case. The ensemble displays a peak around Werner-like states, modulated by nodes at the degeneracies of the density matrices. For moderate and strong interactions, good agreement between the static and the dynamic ensembles is found. Even in a model where one qubit does not interact with the environment, excellent agreement is found, but only if there is maximal entanglement with the interacting one. The discussion starts by recalling similar considerations in scattering theory. At the end, we comment on the reach of the results for other convex functions of the density matrix, and exemplify the situation with the von Neumann entropy.
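The quantities compared across the two ensembles, the purity Tr(ρ²) and the von Neumann entropy -Tr(ρ ln ρ), can be computed for a reduced density matrix obtained by tracing out the environment of a random pure system-environment state. This is a hypothetical one-sample sketch, not the paper's ensemble construction:

```python
import numpy as np

# Random pure state on (4-dim central system) x (16-dim environment),
# reduced density matrix by partial trace, then purity and von Neumann entropy.
rng = np.random.default_rng(3)
d_sys, d_env = 4, 16
psi = rng.normal(size=d_sys * d_env) + 1j * rng.normal(size=d_sys * d_env)
psi /= np.linalg.norm(psi)

m = psi.reshape(d_sys, d_env)
rho = np.einsum('ie,je->ij', m, m.conj())        # partial trace over environment
purity = np.real(np.trace(rho @ rho))            # Tr(rho^2), in [1/4, 1]
evals = np.clip(np.linalg.eigvalsh(rho), 1e-15, None)
S_vn = float(-np.sum(evals * np.log(evals)))     # von Neumann entropy, <= ln 4
```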
Open-quantum-systems approach to complementarity in neutral-kaon interferometry
NASA Astrophysics Data System (ADS)
de Souza, Gustavo; de Oliveira, J. G. G.; Varizi, Adalberto D.; Nogueira, Edson C.; Sampaio, Marcos D.
2016-12-01
In bipartite quantum systems, entanglement correlations between the parties exerts direct influence in the phenomenon of wave-particle duality. This effect has been quantitatively analyzed in the context of two qubits by Jakob and Bergou [Opt. Commun. 283, 827 (2010), 10.1016/j.optcom.2009.10.044]. Employing a description of the K -meson propagation in free space where its weak decay states are included as a second party, we study here this effect in the kaon-antikaon oscillations. We show that a new quantitative "triality" relation holds, similar to the one considered by Jakob and Bergou. In our case, it relates the distinguishability between the decay-product states corresponding to the distinct kaon propagation modes KS, KL, the amount of wave-like path interference between these states, and the amount of entanglement given by the reduced von Neumann entropy. The inequality can account for the complementarity between strangeness oscillations and lifetime information previously considered in the literature, therefore allowing one to see how it is affected by entanglement correlations. As we will discuss, it allows one to visualize clearly through the K0-K ¯0 oscillations the fundamental role of entanglement in quantum complementarity.
NASA Astrophysics Data System (ADS)
Yu, Li-Wei; Ge, Mo-Lin
2017-03-01
The relationships between quantum entangled states and braid matrices have been well studied in recent years. However, most of the results are based on qubits. In this paper, we investigate the applications of 2-qutrit entanglement in the braiding associated with the Z3 parafermion. The 2-qutrit entangled state |Ψ(θ)⟩, generated by the action of the localized unitary solution R̆(θ) of the Yang-Baxter equation (YBE) on the 2-qutrit natural basis, achieves its maximal ℓ1-norm and maximal von Neumann entropy simultaneously at θ = π/3. Meanwhile, at θ = π/3, the solutions of the YBE reduce to braid matrices, which points to the role that the ℓ1-norm and the entropy play in determining real physical quantities. On the other hand, we give a new realization of the 4-anyon topological basis by qutrit entangled states; the 9 × 9 localized braid representation in the 4-qutrit tensor product space (C3)⊗4 is then reduced to the Jones representation of braiding in the 4-anyon topological basis. Hence, we conclude that entangled states are powerful tools for analysing the characteristics of braiding and the R̆-matrix.
On Entropy Production in the Madelung Fluid and the Role of Bohm's Potential in Classical Diffusion
NASA Astrophysics Data System (ADS)
Heifetz, Eyal; Tsekov, Roumen; Cohen, Eliahu; Nussinov, Zohar
2016-07-01
The Madelung equations map the non-relativistic time-dependent Schrödinger equation into hydrodynamic equations of a virtual fluid. While the von Neumann entropy remains constant, we demonstrate that an increase of the Shannon entropy, associated with this Madelung fluid, is proportional to the expectation value of its velocity divergence. Hence, the Shannon entropy may grow (or decrease) due to an expansion (or compression) of the Madelung fluid. These effects result from the interference between solutions of the Schrödinger equation. Growth of the Shannon entropy due to expansion is common in diffusive processes. However, in the latter the process is irreversible while the processes in the Madelung fluid are always reversible. The relations between interference, compressibility and variation of the Shannon entropy are then examined in several simple examples. Furthermore, we demonstrate that for classical diffusive processes, the "force" accelerating diffusion has the form of the positive gradient of the quantum Bohm potential. Expressing then the diffusion coefficient in terms of the Planck constant reveals the lower bound given by the Heisenberg uncertainty principle in terms of the product between the gas mean free path and the Brownian momentum.
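The contrast drawn in this abstract between the constant von Neumann entropy and a changing Shannon entropy is easy to reproduce in a toy model: a pure state has zero von Neumann entropy under any unitary evolution, while the Shannon entropy of the measured distribution |ψ|² grows as an initially localized state spreads. A sketch with a random Hamiltonian (illustrative, not the Madelung-fluid setting):

```python
import numpy as np

def shannon(p):
    """Shannon entropy of a discrete probability distribution (natural log)."""
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log(p)))

n = 64
rng = np.random.default_rng(4)
H = rng.normal(size=(n, n))
H = (H + H.T) / 2                                 # random real symmetric Hamiltonian
evals, V = np.linalg.eigh(H)

psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0                                     # fully localized initial state
U = V @ np.diag(np.exp(-1j * evals * 0.5)) @ V.conj().T   # U = exp(-iHt), t = 0.5
psi_t = U @ psi0

S_shannon_0 = shannon(np.abs(psi0)**2)            # 0: one-point distribution
S_shannon_t = shannon(np.abs(psi_t)**2)           # grows as the state spreads
# The von Neumann entropy of rho = |psi><psi| is 0 at both times: unitary
# evolution of a pure state never changes it.
```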
The Conditional Entropy Power Inequality for Bosonic Quantum Systems
NASA Astrophysics Data System (ADS)
De Palma, Giacomo; Trevisan, Dario
2018-06-01
We prove the conditional Entropy Power Inequality for Gaussian quantum systems. This fundamental inequality determines the minimum quantum conditional von Neumann entropy of the output of the beam-splitter or of the squeezing among all the input states where the two inputs are conditionally independent given the memory and have given quantum conditional entropies. We also prove that, for any couple of values of the quantum conditional entropies of the two inputs, the minimum of the quantum conditional entropy of the output given by the conditional Entropy Power Inequality is asymptotically achieved by a suitable sequence of quantum Gaussian input states. Our proof of the conditional Entropy Power Inequality is based on a new Stam inequality for the quantum conditional Fisher information and on the determination of the universal asymptotic behaviour of the quantum conditional entropy under the heat semigroup evolution. The beam-splitter and the squeezing are the central elements of quantum optics, and can model the attenuation, the amplification and the noise of electromagnetic signals. This conditional Entropy Power Inequality will have a strong impact in quantum information and quantum cryptography. Among its many possible applications there is the proof of a new uncertainty relation for the conditional Wehrl entropy.
Afzal, Muhammad Imran; Lee, Yong Tak
2016-01-01
Von Neumann and Wigner theorized the bounding and anti-crossing of eigenstates. Experiments have demonstrated that owing to anti-crossing and similar radiation rates, the graphene-like resonance of inhomogeneously strained photonic eigenstates can generate a pseudomagnetic field, bandgaps and Landau levels, whereas exponential or dissimilar rates induce non-Hermiticity. Here, we experimentally demonstrate higher-order supersymmetry and quantum phase transitions by resonance between similar one-dimensional lattices. The lattices consisted of inhomogeneous strain-like phases of triangular solitons. The resonance created two-dimensional, inhomogeneously deformed photonic graphene. All parent eigenstates were annihilated. Eigenstates of mildly strained solitons were annihilated at similar rates through one tail and generated Hermitian bounded eigenstates. The strongly strained solitons with positive phase defects were annihilated at exponential rates through one tail, which bounded eigenstates through non-Hermitianly generated exceptional points. Supersymmetry was evident, with preservation of the shapes and relative phase differences of the parent solitons. Localizations of energies generated from annihilations of mildly and strongly strained soliton eigenstates were responsible for geometrical (Berry) and topological phase transitions, respectively. Both contributed to generating a quantum Zeno phase, whereas only strong twists generated topological (Anderson) localization. Anti-bunching-like condensation was also observed. PMID:27966596
NASA Astrophysics Data System (ADS)
Wu, Shao-xiong; Zhang, Yang; Yu, Chang-shui
2018-03-01
Quantum Fisher information (QFI) is an important feature for the precision of quantum parameter estimation based on the quantum Cramér-Rao inequality. When the quantum state satisfies the von Neumann-Landau equation, the local quantum uncertainty (LQU), as a kind of quantum correlation, present in a bipartite mixed state guarantees a lower bound on QFI in the optimal phase estimation protocol (Girolami et al., 2013). However, in the open quantum systems, there is not an explicit relation between LQU and QFI generally. In this paper, we study the relation between LQU and QFI in open systems which is composed of two interacting two-level systems coupled to independent non-Markovian environments with the entangled initial state embedded by a phase parameter θ. The analytical calculations show that the QFI does not depend on the phase parameter θ, and its decay can be restrained through enhancing the coupling strength or non-Markovianity. Meanwhile, the LQU is related to the phase parameter θ and shows plentiful phenomena. In particular, we find that the LQU can well bound the QFI when the coupling between the two systems is switched off or the initial state is Bell state.
Dynamical manifestations of quantum chaos: correlation hole and bulge
NASA Astrophysics Data System (ADS)
Torres-Herrera, E. J.; Santos, Lea F.
2017-10-01
A main feature of a chaotic quantum system is a rigid spectrum where the levels do not cross. We discuss how the presence of level repulsion in lattice many-body quantum systems can be detected from the analysis of their time evolution instead of their energy spectra. This approach is advantageous to experiments that deal with dynamics, but have limited or no direct access to spectroscopy. Dynamical manifestations of avoided crossings occur at long times. They correspond to a drop, referred to as correlation hole, below the asymptotic value of the survival probability and to a bulge above the saturation point of the von Neumann entanglement entropy and the Shannon information entropy. By contrast, the evolution of these quantities at shorter times reflects the level of delocalization of the initial state, but not necessarily a rigid spectrum. The correlation hole is a general indicator of the integrable-chaos transition in disordered and clean models and as such can be used to detect the transition to the many-body localized phase in disordered interacting systems. This article is part of the themed issue 'Breakdown of ergodicity in quantum systems: from solids to synthetic matter'.
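The survival probability discussed here is P(t) = |⟨ψ(0)|ψ(t)⟩|², which decays from 1 and fluctuates around the saturation value Σ_k |c_k|⁴ set by the components c_k of the initial state in the energy eigenbasis; the correlation hole is a dip below that value at long times. A small random-matrix sketch with illustrative parameters:

```python
import numpy as np

# Survival probability of a basis state under a GOE-like random Hamiltonian.
rng = np.random.default_rng(5)
n = 200
H = rng.normal(size=(n, n))
H = (H + H.T) / np.sqrt(2)                       # real symmetric (GOE-like)
evals, V = np.linalg.eigh(H)

psi0 = np.zeros(n)
psi0[n // 2] = 1.0                               # initial basis state
c = V.T @ psi0                                   # components in the eigenbasis

t_grid = np.linspace(0.0, 50.0, 500)
P = np.array([np.abs(np.sum(c**2 * np.exp(-1j * evals * t)))**2 for t in t_grid])

P_infty = np.sum(c**4)                           # time-averaged saturation (IPR)
```

P starts at exactly 1, and its long-time fluctuations dip below P_infty — the statistics of those dips relative to the mean are what distinguish rigid (chaotic) from uncorrelated (integrable) spectra.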
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giampaolo, Salvatore M.; CNR-INFM Coherentia, Naples; CNISM Unita di Salerno and INFN Sezione di Napoli, Gruppo collegato di Salerno, Baronissi
2007-10-15
We investigate the geometric characterization of pure state bipartite entanglement of (2xD)- and (3xD)-dimensional composite quantum systems. To this aim, we analyze the relationship between states and their images under the action of particular classes of local unitary operations. We find that invariance of states under the action of single-qubit and single-qutrit transformations is a necessary and sufficient condition for separability. We demonstrate that in the (2xD)-dimensional case the von Neumann entropy of entanglement is a monotonic function of the minimum squared Euclidean distance between states and their images over the set of single qubit unitary transformations. Moreover, both in the (2xD)- and in the (3xD)-dimensional cases the minimum squared Euclidean distance exactly coincides with the linear entropy [and thus as well with the tangle measure of entanglement in the (2xD)-dimensional case]. These results provide a geometric characterization of entanglement measures originally established in informational frameworks. Consequences and applications of the formalism to quantum critical phenomena in spin systems are discussed.
NASA Astrophysics Data System (ADS)
Hsu, Fei-Man; Chen, Pao-Yang
2017-03-01
Von Neumann and Morgenstern published the Theory of Games and Economic Behavior in 1944, describing game theory as a model in which intelligent rational decision-makers manage to find their best strategies in conflict, cooperative or other mutualistic relationships to acquire the greatest benefit [1]. This model was subsequently incorporated in ecology to simulate the “fitness” of a species during natural selection, designated evolutionary game theory (EGT) [2]. Wang et al. proposed “epiGame”, taking paternal and maternal genomes as “intelligent” players that compete, cooperate or both during embryogenesis to maximize the fitness of the embryo [3]. They further extended game theory to an individual or single-cell environment. During early zygote development, DNA methylation is reprogrammed such that the paternal genome is demethylated before the maternal genome. After the reset, the blastocyst is re-methylated during embryogenesis. At that time, the paternal and maternal genomes have a conflict of interest related to the expression of their own genes. The proposed epiGame models such interactive regulation between the parental genomes to reach a balance for embryo development (equation (2)).
Absolute continuity for operator valued completely positive maps on C∗-algebras
NASA Astrophysics Data System (ADS)
Gheondea, Aurelian; Kavruk, Ali Şamil
2009-02-01
Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.
NASA Astrophysics Data System (ADS)
Zhang, Yicheng; Vidmar, Lev; Rigol, Marcos
2018-02-01
We use quantum information measures to study the local quantum phase transition that occurs for trapped spinless fermions in one-dimensional lattices. We focus on the case of a harmonic confinement. The transition occurs upon increasing the characteristic density and results in the formation of a band-insulating domain in the center of the trap. We show that the ground-state bipartite entanglement entropy can be used as an order parameter to characterize this local quantum phase transition. We also study excited eigenstates by calculating the average von Neumann and second Rényi eigenstate entanglement entropies, and compare the results with the thermodynamic entropy and the mutual information of thermal states at the same energy density. While at low temperatures we observe a linear increase of the thermodynamic entropy with temperature at all characteristic densities, the average eigenstate entanglement entropies exhibit a strikingly different behavior as functions of temperature below and above the transition. They are linear in temperature below the transition but exhibit activated behavior above it. Hence, at nonvanishing energy densities above the ground state, the average eigenstate entanglement entropies carry fingerprints of the local quantum phase transition.
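The bipartite von Neumann entanglement entropy used as an order parameter above can be computed for any pure state from the singular values of the reshaped state vector. A minimal sketch, illustrative only and not the paper's lattice setup:

```python
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """Von Neumann entropy of subsystem A for a bipartite pure state psi.

    Reshaping the state vector into a dim_a x dim_b coefficient matrix,
    the squared singular values are the eigenvalues of the reduced
    density matrix rho_A, so S = -sum p * log(p).
    """
    coeffs = np.asarray(psi).reshape(dim_a, dim_b)
    s = np.linalg.svd(coeffs, compute_uv=False)
    p = s**2
    p = p[p > 1e-12]          # drop numerical zeros before taking the log
    return float(-np.sum(p * np.log(p)))

# Product state |00>: no entanglement.
product = np.kron([1.0, 0.0], [1.0, 0.0])
# Bell state (|00> + |11>)/sqrt(2): maximally entangled two-qubit state.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
```

For the product state the entropy vanishes, while for the Bell state it equals ln 2, the maximum for a single qubit.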
Effects of different feedback types on information integration in repeated monetary gambles
Haffke, Peter; Hübner, Ronald
2015-01-01
Most models of risky decision making assume that all relevant information is taken into account (e.g., von Neumann and Morgenstern, 1944; Kahneman and Tversky, 1979). However, there are also some models supposing that only part of the information is considered (e.g., Brandstätter et al., 2006; Gigerenzer and Gaissmaier, 2011). To further investigate the amount of information that is usually used for decision making, and how this use depends on feedback, we conducted a series of three experiments in which participants chose between two lotteries and in which no feedback, outcome feedback, or error feedback was provided, respectively. The results show that without feedback participants mostly chose the lottery with the higher winning probability, and largely ignored the potential gains. The same results occurred when the outcome of each decision was fed back. Only after presenting error feedback (i.e., signaling whether a choice was optimal or not) did participants consider probabilities as well as gains, resulting in more optimal choices. We propose that outcome feedback was ineffective because of its probabilistic and ambiguous nature. Participants improve information integration only if provided with a consistent and deterministic signal such as error feedback. PMID:25667576
NASA Astrophysics Data System (ADS)
Chen, Feng; Xu, Ai-Guo; Zhang, Guang-Cai; Gan, Yan-Biao; Cheng, Tao; Li, Ying-Jun
2009-10-01
We present a highly efficient lattice Boltzmann model for simulating compressible flows. This model is based on the combination of an appropriate finite difference scheme, a 16-discrete-velocity model [Kataoka and Tsutahara, Phys. Rev. E 69 (2004) 035701(R)] and reasonable dispersion and dissipation terms. The dispersion term effectively reduces the oscillation at the discontinuity and enhances numerical precision. The dissipation term makes it easier for the new model to satisfy the von Neumann stability condition. This model works for both high-speed and low-speed flows with arbitrary specific-heat ratio. With the new model, simulation results for well-known benchmark problems agree closely with analytic or experimental ones. The benchmark tests used include (i) shock tubes such as the Sod, Lax, Sjogreen, Colella explosion wave, and collision of two strong shocks, (ii) regular and Mach shock reflections, and (iii) shock wave reaction on cylindrical bubble problems. With a more realistic equation of state or free-energy functional, the new model has the potential to study the complex process of shock wave reaction on porous materials.
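The von Neumann stability condition invoked above can be illustrated on a much simpler scheme than the paper's 16-velocity model: first-order upwind differencing of the advection equation u_t + a u_x = 0 has Fourier amplification factor g(θ) = 1 − c(1 − e^{−iθ}), with CFL number c = a Δt/Δx, and is stable precisely when max |g| ≤ 1. A sketch under these illustrative assumptions (not the paper's scheme):

```python
import numpy as np

def max_amplification(c, n_modes=1000):
    """Largest |g(theta)| over Fourier modes for the first-order upwind
    scheme applied to u_t + a*u_x = 0, where c = a*dt/dx is the CFL
    number. The scheme is von Neumann stable iff max |g| <= 1."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_modes)
    g = 1.0 - c * (1.0 - np.exp(-1j * theta))
    return float(np.max(np.abs(g)))
```

Scanning c shows the expected stability window 0 ≤ c ≤ 1: the amplification stays bounded by one inside it and exceeds one outside.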
Milde, Moritz B; Blum, Hermann; Dietmüller, Alexander; Sumislawska, Dora; Conradt, Jörg; Indiveri, Giacomo; Sandamirskaya, Yulia
2017-01-01
Neuromorphic hardware emulates the dynamics of biological neural networks in electronic circuits, offering an alternative to the von Neumann computing architecture that is low-power, inherently parallel, and event-driven. This hardware makes it possible to implement neural-network-based robotic controllers in an energy-efficient way with low latency, but requires solving the problem of device variability characteristic of analog electronic circuits. In this work, we interfaced a mixed-signal analog-digital neuromorphic processor, ROLLS, to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent that is able to perform neurally inspired obstacle avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, a moving target, clutter, and poor light conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using simple biologically inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates an implementation of working obstacle avoidance and target acquisition using mixed-signal analog/digital neuromorphic hardware.
On the existence of a scaling relation in the evolution of cellular systems
NASA Astrophysics Data System (ADS)
Fortes, M. A.
1994-05-01
A mean field approximation is used to analyze the evolution of the distribution of sizes in systems formed by individual 'cells,' each of which grows or shrinks, in such a way that the total number of cells decreases (e.g. polycrystals, soap froths, precipitate particles in a matrix). The rate of change of the size of a cell is defined by a growth function that depends on the size (x) of the cell and on moments of the size distribution, such as the average size (bar-x). Evolutionary equations for the distribution of sizes and of reduced sizes (i.e. x/bar-x) are established. The stationary (or steady state) solutions of the equations are obtained for various particular forms of the growth function. A steady state of the reduced size distribution is equivalent to a scaling behavior. It is found that there is an infinity of steady state solutions which form a (continuous) one-parameter family of functions, but these are not, in general, reached from an arbitrary initial state. These properties are at variance with those that can be derived from models based on the von Neumann-Mullins equation.
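For comparison, the von Neumann-Mullins relation mentioned at the end states that in an ideal two-dimensional froth a cell's area A changes at a rate set only by its number of sides n, dA/dt = k(n − 6), so six-sided cells are stationary. A one-line sketch, where k is an illustrative lumped constant (mobility times surface tension):

```python
def von_neumann_mullins_rate(n_sides, k=1.0):
    """Rate of area change dA/dt = k * (n - 6) for an ideal 2D cell
    with n_sides neighbours; k lumps mobility and surface tension."""
    return k * (n_sides - 6)
```

Cells with fewer than six sides shrink and those with more than six grow, which drives the coarsening that the size-distribution equations above describe statistically.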
Exact and approximate many-body dynamics with stochastic one-body density matrix evolution
NASA Astrophysics Data System (ADS)
Lacroix, Denis
2005-06-01
We show that the dynamics of interacting fermions can be exactly replaced by a quantum jump theory in the many-body density matrix space. In this theory, jumps occur between densities formed of pairs of Slater determinants, Dab=|Φa><Φb|, where each state evolves according to the stochastic Schrödinger equation given by O. Juillet and Ph. Chomaz [Phys. Rev. Lett. 88, 142503 (2002)]. A stochastic Liouville-von Neumann equation is derived, as well as the associated Bogolyubov-Born-Green-Kirkwood-Yvon hierarchy. Due to the specific form of the many-body density along the path, the presented theory is equivalent to a stochastic theory in one-body density matrix space, in which each density matrix evolves according to its own mean-field augmented by a one-body noise. Guided by the exact reformulation, a stochastic mean-field dynamics valid in the weak coupling approximation is proposed. This theory leads to an approximate treatment of two-body effects similar to the extended time-dependent Hartree-Fock scheme. In this stochastic mean-field dynamics, statistical mixing can be directly considered and jumps occur on a coarse-grained time scale. Accordingly, numerical effort is expected to be significantly reduced for applications.
Optimal control of the population dynamics of the ground vibrational state of a polyatomic molecule
NASA Astrophysics Data System (ADS)
de Clercq, Ludwig E.; Botha, Lourens R.; Rohwer, Erich G.; Uys, Hermann; Du Plessis, Anton
2011-03-01
Coherent control with femtosecond pulses on a polyatomic molecule with anharmonic splitting was simulated. The simulation mimicked pulse shaping by a Spatial Light Modulator (SLM), and the interaction was described with the von Neumann equation. A transform-limited pulse with a fluence of 600 J/m2 produced 18% of the population in an arbitrarily chosen upper vibrational state, n=2. Phase-only and amplitude-only shaped pulses produced optimum values of 60% and 40%, respectively, of the population in the vibrational state n=2 after interaction with the ultrashort pulse. The combination of phase and amplitude shaping produced the best results: 80% of the population was in the targeted vibrational state n=2 after interaction. These simulations were carried out with all the population initially in the ground vibrational level. It was found that even at room temperature (300 K) the population in the selected level is comparable with the case where all population is initially in the ground vibrational state. With 10% noise added to the amplitude and phase masks, selective excitation of the targeted vibrational state is still possible.
Approximate symmetries of Hamiltonians
NASA Astrophysics Data System (ADS)
Chubb, Christopher T.; Flammia, Steven T.
2017-08-01
We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have sufficiently small norms. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.
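The exact finite-dimensional (Heisenberg-Weyl-type) commutation relations that this Stone-von Neumann generalization perturbs can be checked directly with the clock and shift matrices, which satisfy ZX = ωXZ with ω = e^{2πi/d}. A small sketch; the dimension and phase convention are illustrative:

```python
import numpy as np

def clock_shift(d):
    """Return the d-dimensional clock matrix Z and shift matrix X,
    which satisfy the Weyl commutation relation Z @ X = omega * X @ Z
    with omega = exp(2*pi*i/d)."""
    omega = np.exp(2j * np.pi / d)
    Z = np.diag(omega ** np.arange(d))    # clock: phases on the diagonal
    X = np.roll(np.eye(d), 1, axis=0)     # shift: cyclic permutation of basis states
    return Z, X, omega
```

Both matrices are unitary and of order d (X^d = Z^d = I), so they generate a finite Heisenberg-Weyl group, the exact counterpart of the approximately commuting operators studied in the abstract.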
The Quantum Logical Challenge: Peter Mittelstaedt's Contributions to Logic and Philosophy of Science
NASA Astrophysics Data System (ADS)
Beltrametti, E.; Dalla Chiara, M. L.; Giuntini, R.
2017-12-01
Peter Mittelstaedt's contributions to quantum logic and to the foundational problems of quantum theory have significantly realized the most authentic spirit of the International Quantum Structures Association: an original research about hard technical problems, which are often "entangled" with the emergence of important changes in our general world-conceptions. During a time where both the logical and the physical community often showed a skeptical attitude towards Birkhoff and von Neumann's quantum logic, Mittelstaedt brought into light the deeply innovating features of a quantum logical thinking that allows us to overcome some strong and unrealistic assumptions of classical logical arguments. Later on his intense research on the unsharp approach to quantum theory and to the measurement problem stimulated the increasing interest for unsharp forms of quantum logic, creating a fruitful interaction between the work of quantum logicians and of many-valued logicians. Mittelstaedt's general views about quantum logic and quantum theory seem to be inspired by a conjecture that is today more and more confirmed: there is something universal in the quantum theoretic formalism that goes beyond the limits of microphysics, giving rise to interesting applications to a number of different fields.
DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.
Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei
2017-07-18
Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm can save up to 99.9% of memory utility and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to take full advantage of the algorithmic optimization. Different from the traditional von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation into close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency compared with state-of-the-art approaches.
Entanglement entropy between real and virtual particles in ϕ4 quantum field theory
NASA Astrophysics Data System (ADS)
Ardenghi, Juan Sebastián
2015-04-01
The aim of this work is to compute the entanglement entropy of real and virtual particles by rewriting the generating functional of ϕ4 theory as a mean value between states and observables defined through the correlation functions. Then the von Neumann definition of entropy can be applied to these quantum states and in particular, for the partial traces taken over the internal or external degrees of freedom. This procedure can be done for each order in the perturbation expansion showing that the entanglement entropy for real and virtual particles behaves as ln (m0). In particular, entanglement entropy is computed at first order for the correlation function of two external points showing that mutual information is identical to the external entropy and that conditional entropies are negative for all the domain of m0. In turn, from the definition of the quantum states, it is possible to obtain general relations between total traces between different quantum states of a ϕr theory. Finally, discussion about the possibility of taking partial traces over external degrees of freedom is considered, which implies the introduction of some observables that measure space-time points where an interaction occurs.
The Conditional Entropy Power Inequality for Bosonic Quantum Systems
NASA Astrophysics Data System (ADS)
De Palma, Giacomo; Trevisan, Dario
2018-01-01
We prove the conditional Entropy Power Inequality for Gaussian quantum systems. This fundamental inequality determines the minimum quantum conditional von Neumann entropy of the output of the beam-splitter or of the squeezing among all the input states where the two inputs are conditionally independent given the memory and have given quantum conditional entropies. We also prove that, for any couple of values of the quantum conditional entropies of the two inputs, the minimum of the quantum conditional entropy of the output given by the conditional Entropy Power Inequality is asymptotically achieved by a suitable sequence of quantum Gaussian input states. Our proof of the conditional Entropy Power Inequality is based on a new Stam inequality for the quantum conditional Fisher information and on the determination of the universal asymptotic behaviour of the quantum conditional entropy under the heat semigroup evolution. The beam-splitter and the squeezing are the central elements of quantum optics, and can model the attenuation, the amplification and the noise of electromagnetic signals. This conditional Entropy Power Inequality will have a strong impact in quantum information and quantum cryptography. Among its many possible applications there is the proof of a new uncertainty relation for the conditional Wehrl entropy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guha, Saikat; Shapiro, Jeffrey H.; Erkmen, Baris I.
Previous work on the classical information capacities of bosonic channels has established the capacity of the single-user pure-loss channel, bounded the capacity of the single-user thermal-noise channel, and bounded the capacity region of the multiple-access channel. The latter is a multiple-user scenario in which several transmitters seek to simultaneously and independently communicate to a single receiver. We study the capacity region of the bosonic broadcast channel, in which a single transmitter seeks to simultaneously and independently communicate to two different receivers. It is known that the tightest available lower bound on the capacity of the single-user thermal-noise channel is that channel's capacity if, as conjectured, the minimum von Neumann entropy at the output of a bosonic channel with additive thermal noise occurs for coherent-state inputs. Evidence in support of this minimum output entropy conjecture has been accumulated, but a rigorous proof has not been obtained. We propose a minimum output entropy conjecture that, if proved to be correct, will establish that the capacity region of the bosonic broadcast channel equals the inner bound achieved using a coherent-state encoding and optimum detection. We provide some evidence that supports this conjecture, but again a full proof is not available.
Lower bounds on the violation of the monogamy inequality for quantum correlation measures
NASA Astrophysics Data System (ADS)
Kumar, Asutosh; Dhar, Himadri Shekhar
2016-06-01
In multiparty quantum systems, the monogamy inequality proposes an upper bound on the distribution of bipartite quantum correlation between a single party and each of the remaining parties in the system, in terms of the amount of quantum correlation shared by that party with the rest of the system taken as a whole. However, it is well known that not all quantum correlation measures universally satisfy the monogamy inequality. In this work, we aim at determining the nontrivial value by which the monogamy inequality can be violated by a quantum correlation measure. Using an information-theoretic complementarity relation between the normalized purity and quantum correlation in any given multiparty state, we obtain a nontrivial lower bound on the negative monogamy score for the quantum correlation measure. In particular, for the three-qubit states the lower bound is equal to the negative von Neumann entropy of the single qubit reduced density matrix. We analytically examine the tightness of the derived lower bound for certain n -qubit quantum states. Further, we report numerical results of the same for monogamy violating correlation measures using Haar uniformly generated three-qubit states.
The Master Equation for Two-Level Accelerated Systems at Finite Temperature
NASA Astrophysics Data System (ADS)
Tomazelli, J. L.; Cunha, R. O.
2016-10-01
In this work, we study the behaviour of two weakly coupled quantum systems, described by a separable density operator; one of them is a single oscillator, representing a microscopic system, while the other is a set of oscillators which perform the role of a reservoir in thermal equilibrium. From the Liouville-von Neumann equation for the reduced density operator, we devise the master equation that governs the evolution of the microscopic system, incorporating the effects of temperature via the Thermofield Dynamics formalism by suitably redefining the vacuum of the macroscopic system. As applications, we initially investigate the behaviour of a Fermi oscillator in the presence of a heat bath consisting of a set of Fermi oscillators and that of an atomic two-level system interacting with a scalar radiation field, considered as a reservoir, by constructing the corresponding master equation which governs the time evolution of both sub-systems at finite temperature. Finally, we calculate the energy variation rates for the atom and the field, as well as the atomic population levels, both in the inertial case and at constant proper acceleration, considering the two-level system as a prototype of an Unruh detector, for admissible couplings of the radiation field.
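For a closed system, the Liouville-von Neumann equation dρ/dt = −i[H, ρ] (with ħ = 1) from which such master equations start has the exact solution ρ(t) = e^{−iHt} ρ(0) e^{iHt}. A minimal sketch for a driven two-level system; the Hamiltonian is illustrative and not the paper's system-reservoir model:

```python
import numpy as np

def evolve_density(rho0, H, t):
    """Closed-system solution of the Liouville-von Neumann equation
    d(rho)/dt = -i [H, rho] (hbar = 1): rho(t) = U rho0 U^dagger with
    U = exp(-i H t), computed by eigendecomposition of the Hermitian H."""
    evals, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T
    return U @ rho0 @ U.conj().T

# Two-level atom driven by H = (Omega/2) * sigma_x, starting in the ground state.
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
Omega = 2.0
H = 0.5 * Omega * sigma_x
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)
rho_t = evolve_density(rho0, H, np.pi / Omega)  # evolve for half a Rabi period
```

The unitary evolution preserves the trace and purity of ρ; dissipation and temperature enter only once the reservoir is traced out, which is what the master equation in the abstract captures.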
Engineering a lunar photolithoautotroph to thrive on the moon - life or simulacrum?
NASA Astrophysics Data System (ADS)
Ellery, A. A.
2018-07-01
Recent work in developing self-replicating machines has approached the problem as an engineering problem, using engineering materials and methods to implement an engineering analogue of a hitherto uniquely biological function. The question is: can anything be learned that might be relevant to an astrobiological context in which the problem is to determine the general form of biology independent of the Earth? Compared with other non-terrestrial biology disciplines, engineered life is more demanding. Engineering a self-replicating machine tackles real environments, unlike artificial life, which avoids the problem of physical instantiation altogether by examining software models. Engineering a self-replicating machine is also more demanding than synthetic biology, as no library of functional components exists. Everything must be constructed de novo. Biological systems already have the capacity to self-replicate, but no engineered machine has yet been constructed with the same ability - this is our primary goal. On the basis of the von Neumann analysis of self-replication, self-replication is a by-product of universal construction capability - a universal constructor is a machine that can construct anything (in a functional sense) given the appropriate instructions (DNA/RNA), energy (ATP) and materials (food). In the biological cell, the universal construction mechanism is the ribosome. The ribosome is a biological assembly line for constructing proteins while DNA constitutes a design specification. For a photoautotroph, the energy source is ambient and the food is inorganic. We submit that engineering a self-replicating machine opens up new areas of astrobiology to be explored in the limits of life.
Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks.
Walter, Florian; Röhrbein, Florian; Knoll, Alois
2015-12-01
The application of biologically inspired methods in design and control has a long tradition in robotics. Unlike previous approaches in this direction, the emerging field of neurorobotics not only mimics biological mechanisms at a relatively high level of abstraction but employs highly realistic simulations of actual biological nervous systems. Even today, carrying out these simulations efficiently at appropriate timescales is challenging. Neuromorphic chip designs specially tailored to this task therefore offer an interesting perspective for neurorobotics. Unlike von Neumann CPUs, these chips cannot be simply programmed with a standard programming language. Like real brains, their functionality is determined by the structure of neural connectivity and synaptic efficacies. Enabling higher cognitive functions for neurorobotics consequently requires the application of neurobiological learning algorithms to adjust synaptic weights in a biologically plausible way. In this paper, we therefore investigate how to program neuromorphic chips by means of learning. First, we provide an overview of selected neuromorphic chip designs and analyze them in terms of neural computation, communication systems and software infrastructure. On the theoretical side, we review neurobiological learning techniques. Based on this overview, we then examine on-die implementations of these learning algorithms on the considered neuromorphic chips. A final discussion puts the findings of this work into context and highlights how neuromorphic hardware can potentially advance the field of autonomous robot systems. The paper thus gives an in-depth overview of neuromorphic implementations of basic mechanisms of synaptic plasticity which are required to realize advanced cognitive capabilities with spiking neural networks.
Effective theory and breakdown of conformal symmetry in a long-range quantum chain
NASA Astrophysics Data System (ADS)
Lepori, L.; Vodola, D.; Pupillo, G.; Gori, G.; Trombettoni, A.
2016-11-01
We deal with the problem of studying the symmetries and the effective theories of long-range models around their critical points. A prominent issue is to determine whether or not they possess conformal symmetry (CS) at criticality and how the presence of CS depends on the range of the interactions. As a model that is both simple to treat and interesting in which to investigate these questions, we focus on the Kitaev chain with long-range pairings decaying with distance as a power law with exponent α. This is a quadratic, solvable model that nevertheless displays non-trivial quantum phase transitions. Two critical lines are found, occurring respectively at a positive and a negative chemical potential. Focusing first on the critical line at positive chemical potential, by means of a renormalization group approach we derive its effective theory close to criticality. Our main result is that the effective action is the sum of two terms: a Dirac action SD, found in the short-range Ising universality class, and an "anomalous" CS breaking term SAN. While SD originates from low-energy excitations in the spectrum, SAN originates from the higher energy modes where singularities develop, due to the long-range nature of the model. At criticality SAN flows to zero for α > 2, while for α < 2 it dominates and determines the breakdown of the CS. Out of criticality SAN breaks, in the considered approximation, the effective Lorentz invariance (ELI) for every finite α. As α increases such ELI breakdown becomes less and less pronounced and in the short-range limit α → ∞ the ELI is restored. In order to test the validity of the determined effective theory, we compared the two-fermion static correlation functions and the von Neumann entropy obtained from them with the ones calculated on the lattice, finding agreement.
These results explain two observed features characteristic of long-range models, the hybrid decay of static correlation functions within gapped phases and the area-law violation for the von Neumann entropy. The proposed scenario is expected to hold in other long-range models displaying quasiparticle excitations in the ballistic regime. From the effective theory one can also see that new phases emerge for α < 1. Finally we show that at every finite α the critical exponents, defined as for the short-range (α → ∞) model, are not altered. This also shows that the long-range paired Kitaev chain provides an example of a long-range model in which the value of α where the CS is broken does not coincide with the value at which the critical exponents start to differ from the ones of the corresponding short-range model. In contrast, for the second critical line, having negative chemical potential, only SAN (SD) is present for 1 < α < 2 (for α > 2). Close to this line, where the minimum of the spectrum coincides with the momentum where singularities develop, the critical exponents change where CS is broken.
NASA Astrophysics Data System (ADS)
Krishnan, Chethan; Maheshwari, Shubham; Bala Subramanian, P. N.
2017-08-01
We write down a Robin boundary term for general relativity. The construction relies on the Neumann result of arXiv:1605.01603 in an essential way. This is unlike in mechanics and (polynomial) field theory, where two formulations of the Robin problem exist: one with Dirichlet as the natural limiting case, and another with Neumann.
A Neumann boundary term for gravity
NASA Astrophysics Data System (ADS)
Krishnan, Chethan; Raju, Avinash
2017-05-01
The Gibbons-Hawking-York (GHY) boundary term makes the Dirichlet problem for gravity well-defined, but no such general term seems to be known for Neumann boundary conditions. In this paper, we view Neumann not as fixing the normal derivative of the metric (“velocity”) at the boundary, but as fixing the functional derivative of the action with respect to the boundary metric (“momentum”). This leads directly to a new boundary term for gravity: the trace of the extrinsic curvature with a specific dimension-dependent coefficient. In three dimensions, this boundary term reduces to a “one-half” GHY term noted in the literature previously, and we observe that our action translates precisely to the Chern-Simons action with no extra boundary terms. In four dimensions, the boundary term vanishes, giving a natural Neumann interpretation to the standard Einstein-Hilbert action without boundary terms. We argue that in light of AdS/CFT, ours is a natural approach for defining a “microcanonical” path integral for gravity in the spirit of the (pre-AdS/CFT) work of Brown and York.
Analytical prediction of the unsteady lift on a rotor caused by downstream struts
NASA Technical Reports Server (NTRS)
Taylor, A. C., III; Ng, W. F.
1987-01-01
A two-dimensional, inviscid, incompressible procedure is presented for predicting the unsteady lift on turbomachinery blades caused by the upstream potential disturbance of downstream flow obstructions. The Douglas-Neumann singularity-superposition potential-flow computer program is used to model the downstream flow obstructions, and classical equations of thin airfoil theory are then employed to compute the unsteady lift on the upstream rotor blades. The method is applied to a particular geometry which consists of a rotor, a downstream stator, and downstream struts which support the engine casing. Very good agreement between the Douglas-Neumann program and experimental measurements was obtained for the downstream stator-strut flow field. The calculations for the unsteady lift due to the struts were in good agreement with the experiments in showing that the unsteady lift due to the struts decays exponentially with increased axial separation of the rotor and the struts. An application of the method showed that for a given axial spacing between the rotor and the strut, strut-induced unsteady lift is a very weak function of the axial or circumferential position of the stator.
ERIC Educational Resources Information Center
Fleury, Stephen
2011-01-01
In his proposal for a social studies both more critical and more participatory, Neumann's (2008) politically adaptive strategy of invoking standards for critical thinking poses troubling concerns for democratic-minded educators familiar with the trajectory of reforms. Neumann's outright dismissal of critical pedagogy ironically underscores how the…
Uniform gradient estimates on manifolds with a boundary and applications
NASA Astrophysics Data System (ADS)
Cheng, Li-Juan; Thalmaier, Anton; Thompson, James
2018-04-01
We revisit the problem of obtaining uniform gradient estimates for Dirichlet and Neumann heat semigroups on Riemannian manifolds with boundary. As applications, we obtain isoperimetric inequalities, using Ledoux's argument, and uniform quantitative gradient estimates, firstly for C^2_b functions with boundary conditions and then for the unit spectral projection operators of Dirichlet and Neumann Laplacians.
Paulo Freire and the Politics of Education: A Response to Neumann
ERIC Educational Resources Information Center
Roberts, Peter
2016-01-01
Jacob Neumann provides a thoughtful reading of "Paulo Freire in the 21st century: Education, dialogue, and transformation" [v48 n6 p634-644 2016]. His comments on the importance of contextualising Freire's work and the value of openness in engaging Freirean ideas are insightful and helpful. His use of the term "apolitical" is,…
On the Development of a Deterministic Three-Dimensional Radiation Transport Code
NASA Technical Reports Server (NTRS)
Rockell, Candice; Tweed, John
2011-01-01
Since astronauts on future deep space missions will be exposed to dangerous radiation, there is a need to accurately model the transport of radiation through shielding materials and to estimate the received radiation dose. In response to this need, a three-dimensional deterministic code for space radiation transport is now under development. The new code GRNTRN is based on a Green's function solution of the Boltzmann transport equation that is constructed in the form of a Neumann series. Analytical approximations will be obtained for the first three terms of the Neumann series, and the remainder will be estimated by a non-perturbative technique. This work discusses progress made to date and exhibits some computations based on the first two Neumann series terms.
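The Neumann-series construction at the heart of the approach above can be sketched on a generic linear system. This toy example (an assumption for illustration, not the GRNTRN code, whose operator acts on the Boltzmann equation) shows why truncating after the first three terms leaves only a small remainder when the operator is a contraction:

```python
import numpy as np

# Hedged sketch: a Neumann series solves x = b + A x as x = sum_k A^k b,
# convergent whenever the norm of A is below 1.
rng = np.random.default_rng(0)
A = 0.1 * rng.random((4, 4))       # small entries, so ||A|| << 1
b = rng.random(4)

x_exact = np.linalg.solve(np.eye(4) - A, b)

# Keep the first three terms analytically, as in the abstract, and
# treat everything beyond them as a remainder to estimate separately.
x3 = b + A @ b + A @ (A @ b)
print(np.linalg.norm(x_exact - x3))  # remainder is O(||A||^3), small
```

Each extra term shrinks the remainder geometrically, which is what makes a three-term analytical approximation plus a remainder estimate a workable strategy.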
NASA Astrophysics Data System (ADS)
Srivastava, D. P.; Sahni, V.; Satsangi, P. S.
2014-08-01
Graph-theoretic quantum system modelling (GTQSM) is facilitated by considering the fundamental unit of quantum computation and information, viz. a quantum bit or qubit as a basic building block. Unit directional vectors "ket 0" and "ket 1" constitute two distinct fundamental quantum across variable orthonormal basis vectors, for the Hilbert space, specifying the direction of propagation of information, or computation data, while complementary fundamental quantum through, or flow rate, variables specify probability parameters, or amplitudes, as surrogates for scalar quantum information measure (von Neumann entropy). This paper applies GTQSM in continuum of protein heterodimer tubulin molecules of self-assembling polymers, viz. microtubules in the brain as a holistic system of interacting components representing hierarchical clustered quantum Hopfield network, hQHN, of networks. The quantum input/output ports of the constituent elemental interaction components, or processes, of tunnelling interactions and Coulombic bidirectional interactions are in cascade and parallel interconnections with each other, while the classical output ports of all elemental components are interconnected in parallel to accumulate micro-energy functions generated in the system as Hamiltonian, or Lyapunov, energy function. The paper presents an insight, otherwise difficult to gain, for the complex system of systems represented by clustered quantum Hopfield network, hQHN, through the application of GTQSM construct.
On the basis set convergence of electron–electron entanglement measures: helium-like systems
Hofer, Thomas S.
2013-01-01
A systematic investigation of three different electron–electron entanglement measures, namely the von Neumann, the linear and the occupation number entropy at full configuration interaction level has been performed for the four helium-like systems hydride, helium, Li+ and Be2+ using a large number of different basis sets. The convergence behavior of the resulting energies and entropies revealed that the latter do in general not show the expected strictly monotonic increase upon increase of the one-electron basis. Overall, the three different entanglement measures show good agreement among each other, the largest deviations being observed for small basis sets. The data clearly demonstrate that it is important to consider the nature of the chemical system when investigating entanglement phenomena in the framework of Gaussian type basis sets: while in case of hydride the use of augmentation functions is crucial, the application of core functions greatly improves the accuracy in case of cationic systems such as Li+ and Be2+. In addition, numerical derivatives of the entanglement measures with respect to the nuclear charge have been determined, which proved to be a very sensitive probe of the convergence, leading to qualitatively wrong results (i.e., the wrong sign) if too small basis sets are used. PMID:24790952
The a(3) Scheme--A Fourth-Order Space-Time Flux-Conserving and Neutrally Stable CESE Solver
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2008-01-01
The CESE development is driven by a belief that a solver should (i) enforce conservation laws in both space and time, and (ii) be built from a non-dissipative (i.e., neutrally stable) core scheme so that the numerical dissipation can be controlled effectively. To initiate a systematic CESE development of high order schemes, in this paper we provide a thorough discussion on the structure, consistency, stability, phase error, and accuracy of a new 4th-order space-time flux-conserving and neutrally stable CESE solver of a 1D scalar advection equation. The space-time stencil of this two-level explicit scheme is formed by one point at the upper time level and three points at the lower time level. Because it is associated with three independent mesh variables (the numerical analogues of the dependent variable and its 1st-order and 2nd-order spatial derivatives, respectively) and three equations per mesh point, the new scheme is referred to as the a(3) scheme. Through the von Neumann analysis, it is shown that the a(3) scheme is stable if and only if the Courant number is less than 0.5. Moreover, it is established numerically that the a(3) scheme is 4th-order accurate.
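Von Neumann analysis of the kind cited above checks the amplification factor of a single Fourier mode. A hedged illustration, using the simple first-order upwind scheme for u_t + a u_x = 0 rather than the a(3) scheme itself (whose three-variable stencil requires a 3x3 amplification matrix):

```python
import numpy as np

# Von Neumann stability sketch for first-order upwind advection.
# A Fourier mode u_j^n = G^n exp(i*j*theta) has amplification factor
#   G(theta) = 1 - c*(1 - exp(-i*theta)),   c = a*dt/dx (Courant number),
# and the scheme is stable when |G(theta)| <= 1 for all theta.
def max_amplification(c, n=1000):
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    G = 1.0 - c * (1.0 - np.exp(-1j * theta))
    return np.abs(G).max()

print(max_amplification(0.5))  # 1.0: neutrally stable at theta = 0
print(max_amplification(1.2))  # > 1: unstable above the CFL limit
```

For upwind the stability bound is c <= 1; the a(3) scheme's tighter bound of 0.5 comes out of the analogous analysis applied to its amplification matrix.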
The solution of the sixth Hilbert problem: the ultimate Galilean revolution
NASA Astrophysics Data System (ADS)
D'Ariano, Giacomo Mauro
2018-04-01
I argue for a full mathematization of the physical theory, including its axioms, which must contain no physical primitives. In provocative words: `physics from no physics'. Although this may seem an oxymoron, it is the royal road to keep complete logical coherence, hence falsifiability of the theory. For such a purely mathematical theory the physical connotation must pertain only the interpretation of the mathematics, ranging from the axioms to the final theorems. On the contrary, the postulates of the two current major physical theories either do not have physical interpretation (as for von Neumann's axioms for quantum theory), or contain physical primitives as `clock', `rigid rod', `force', `inertial mass' (as for special relativity and mechanics). A purely mathematical theory as proposed here, though with limited (but relentlessly growing) domain of applicability, will have the eternal validity of mathematical truth. It will be a theory on which natural sciences can firmly rely. Such kind of theory is what I consider to be the solution of the sixth Hilbert problem. I argue that a prototype example of such a mathematical theory is provided by the novel algorithmic paradigm for physics, as in the recent information-theoretical derivation of quantum theory and free quantum field theory. This article is part of the theme issue `Hilbert's sixth problem'.
NASA Astrophysics Data System (ADS)
Berkovich, Simon
2015-04-01
The fundamental advantage of a cellular automaton construction is that it can be viewed as an undetectable absolute frame of reference, in accordance with the Lorentz-Poincaré interpretation. The cellular automaton model for physical problems faces two basic hurdles: (1) how to find the elemental rule, and (2) how to get non-locality from local transformations. Both problems are resolved by considering the transformation rule of mutual distributed synchronization. Any information processing device starts with a clocking system, and it turns out that "all physical phenomena are different aspects of the high-level description of distributed mutual synchronization in a network of digital clocks". Non-locality comes from two hugely different time scales of signaling. The universe combines information and matter processes; fast-spreading diffusion wave solutions create the mechanism of the Holographic Universe. Finally, disengaged from synchronization, circular counters can perform memory functions by retaining the phases of their oscillations, an idea of von Neumann's. Thus, the suggested model generates the necessary constructs for the physical world as an Internet of Things. Life emerges due to the specifics of macromolecules that serve as communication means, with the holographic memory.
On variational definition of quantum entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belavkin, Roman V.
Entropy of distribution P can be defined in at least three different ways: 1) as the expectation of the Kullback-Leibler (KL) divergence of P from elementary δ-measures (in this case, it is interpreted as expected surprise); 2) as a negative KL-divergence of some reference measure ν from the probability measure P; 3) as the supremum of Shannon’s mutual information taken over all channels such that P is the output probability, in which case it is dual of some transportation problem. In classical (i.e. commutative) probability, all three definitions lead to the same quantity, providing only different interpretations of entropy. In non-commutative (i.e. quantum) probability, however, these definitions are not equivalent. In particular, the third definition, where the supremum is taken over all entanglements of two quantum systems with P being the output state, leads to the quantity that can be twice the von Neumann entropy. It was proposed originally by V. Belavkin and Ohya [1] and called the proper quantum entropy, because it allows one to define quantum conditional entropy that is always non-negative. Here we extend these ideas to define also the quantum counterparts of proper cross-entropy and cross-information. We also show an inequality for the values of classical and quantum information.
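The von Neumann entropy against which the abstract compares its "proper" quantity is defined spectrally as S(ρ) = -Tr(ρ ln ρ). A minimal sketch (not from the paper) of the spectral computation, which reduces to Shannon entropy for diagonal, i.e. classical, states:

```python
import numpy as np

# Minimal sketch: von Neumann entropy S(rho) = -Tr(rho ln rho), computed
# from the eigenvalues of a density matrix. For a diagonal (classical)
# state this is exactly the Shannon entropy of the diagonal.
def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # use the convention 0 ln 0 = 0
    return float(-np.sum(evals * np.log(evals)))

print(von_neumann_entropy(np.eye(2) / 2))                   # ln 2 ~ 0.6931
print(von_neumann_entropy(np.array([[1., 0.], [0., 0.]])))  # 0.0 (pure state)
```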
Pragmatic turn in biology: From biological molecules to genetic content operators.
Witzany, Guenther
2014-08-26
Erwin Schrödinger's question "What is life?" received for decades the answer "physics + chemistry". The concepts of Alan Turing and John von Neumann introduced a third term: "information". This led to the understanding of nucleic acid sequences as a natural code. Manfred Eigen adapted Hamming's concept of "sequence space". Similar to Hilbert space, in which every ontological entity can be defined by an unequivocal point in a mathematical axiomatic system, in the abstract "sequence space" concept each point represents a unique syntactic structure and the value of their separation represents their dissimilarity. In this concept, molecular features of the genetic code evolve by means of self-organisation of matter. Biological selection determines the fittest types among varieties of replication errors of quasi-species. The quasi-species concept dominated evolution theory for many decades. In contrast, recent empirical data on the evolution of DNA and its forerunners, the RNA world and viruses, indicate cooperative agent-based interactions. Group behaviour of quasi-species consortia constitutes de novo and arranges available genetic content for adaptational purposes within real-life contexts that determine epigenetic markings. This review focuses on some fundamental changes in biology, discarding its traditional status as a subdiscipline of physics and chemistry.
Excess Entropy Production in Quantum System: Quantum Master Equation Approach
NASA Astrophysics Data System (ADS)
Nakajima, Satoshi; Tokura, Yasuhiro
2017-12-01
For open systems described by the quantum master equation (QME), we investigate the excess entropy production under quasistatic operations between nonequilibrium steady states. The average entropy production is composed of the time integral of the instantaneous steady entropy production rate and the excess entropy production. We propose to define the average entropy production rate using the average energy and particle currents, which are calculated by using the full counting statistics with the QME. The excess entropy production is given by a line integral in the control parameter space, and its integrand is called the Berry-Sinitsyn-Nemenman (BSN) vector. In the weakly nonequilibrium regime, we show that the BSN vector is described by ln ρ̆_0 and ρ_0, where ρ_0 is the instantaneous steady state of the QME and ρ̆_0 is that of the QME obtained by reversing the sign of the Lamb shift term. If the system Hamiltonian is non-degenerate or the Lamb shift term is negligible, the excess entropy production approximately reduces to the difference between the von Neumann entropies of the system. Additionally, we point out that the expression of the entropy production obtained in the classical Markov jump process is different from our result, and show that these are approximately equivalent only in the weakly nonequilibrium regime.
The birth of numerical weather prediction
NASA Astrophysics Data System (ADS)
Wiin-Nielsen, A.
1991-08-01
The paper describes the major events leading gradually to operational, numerical, short-range predictions for the large-scale atmospheric flow. The theoretical foundation starting with Rossby's studies of the linearized, barotropic equation and ending a decade and a half later with the general formulation of the quasi-geostrophic, baroclinic model by Charney and Phillips is described. The problems connected with the very long waves and the inconsistencies of the geostrophic approximation which were major obstacles in the first experimental forecasts are discussed. The resulting changes to divergent barotropic and baroclinic models and to the use of the balance equation are described. After the discussion of the theoretical foundation, the paper describes the major developments leading to the Meteorology Project at the Institute for Advanced Study under the leadership of John von Neumann and Jule Charney, followed by the establishment of the Joint Numerical Weather Prediction Unit in Suitland, Maryland. The interconnected developments in Europe, taking place more-or-less at the same time, are described by concentrating on the activities in Stockholm, where the barotropic model was used in many experiments leading also to operational forecasts. The further developments resulting in the use of the primitive equations and the formulation of medium-range forecasting models are not included in the paper.
Uncertainty relations with quantum memory for the Wehrl entropy
NASA Astrophysics Data System (ADS)
De Palma, Giacomo
2018-03-01
We prove two new fundamental uncertainty relations with quantum memory for the Wehrl entropy. The first relation applies to the bipartite memory scenario. It determines the minimum conditional Wehrl entropy among all the quantum states with a given conditional von Neumann entropy and proves that this minimum is asymptotically achieved by a suitable sequence of quantum Gaussian states. The second relation applies to the tripartite memory scenario. It determines the minimum of the sum of the Wehrl entropy of a quantum state conditioned on the first memory quantum system with the Wehrl entropy of the same state conditioned on the second memory quantum system and proves that also this minimum is asymptotically achieved by a suitable sequence of quantum Gaussian states. The Wehrl entropy of a quantum state is the Shannon differential entropy of the outcome of a heterodyne measurement performed on the state. The heterodyne measurement is one of the main measurements in quantum optics and lies at the basis of one of the most promising protocols for quantum key distribution. These fundamental entropic uncertainty relations will be a valuable tool in quantum information and will, for example, find application in security proofs of quantum key distribution protocols in the asymptotic regime and in entanglement witnessing in quantum optics.
Evolving spiking neural networks: a novel growth algorithm exhibits unintelligent design
NASA Astrophysics Data System (ADS)
Schaffer, J. David
2015-06-01
Spiking neural networks (SNNs) have drawn considerable excitement because of their computational properties, believed to be superior to conventional von Neumann machines, and sharing properties with living brains. Yet progress building these systems has been limited because we lack a design methodology. We present a gene-driven network growth algorithm that enables a genetic algorithm (evolutionary computation) to generate and test SNNs. The genome for this algorithm grows O(n) where n is the number of neurons; n is also evolved. The genome not only specifies the network topology, but all its parameters as well. Experiments show the algorithm producing SNNs that effectively produce a robust spike bursting behavior given tonic inputs, an application suitable for central pattern generators. Even though evolution did not include perturbations of the input spike trains, the evolved networks showed remarkable robustness to such perturbations. In addition, the output spike patterns retain evidence of the specific perturbation of the inputs, a feature that could be exploited by network additions that could use this information for refined decision making if required. On a second task, a sequence detector, a discriminating design was found that might be considered an example of "unintelligent design"; extra non-functional neurons were included that, while inefficient, did not hamper its proper functioning.
NASA Astrophysics Data System (ADS)
Miatto, F. M.; Brougham, T.; Yao, A. M.
2012-07-01
We derive an analytical form of the Schmidt modes of spontaneous parametric down-conversion (SPDC) biphotons in both Cartesian and polar coordinates. We show that these correspond to Hermite-Gauss (HG) or Laguerre-Gauss (LG) modes only for a specific value of their width, and we show how such value depends on the experimental parameters. The Schmidt modes that we explicitly derive allow one to set up an optimised projection basis that maximises the mutual information gained from a joint measurement. The possibility of doing so with LG modes makes it possible to take advantage of the properties of orbital angular momentum eigenmodes. We derive a general entropic entanglement measure using the Rényi entropy as a function of the Schmidt number, K, and then retrieve the von Neumann entropy, S. Using the relation between S and K we show that, for highly entangled states, a non-ideal measurement basis does not degrade the number of shared bits by a large extent. More specifically, given a non-ideal measurement which corresponds to the loss of a fraction of the total number of modes, we can quantify the experimental parameters needed to generate an entangled SPDC state with a sufficiently high dimensionality to retain any given fraction of shared bits.
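The relation between the Schmidt number K and the von Neumann entropy S discussed above can be checked numerically on a finite-dimensional stand-in for the biphoton state (the actual Schmidt modes in the paper are continuous; this discrete SVD-based sketch is an assumption for illustration only):

```python
import numpy as np

# Hedged sketch: for a pure bipartite state with coefficient matrix C,
# the squared singular values are the Schmidt probabilities lambda_i,
# K = 1 / sum(lambda_i^2) is the Schmidt number, and
# S = -sum(lambda_i ln lambda_i) is the von Neumann entropy.
def schmidt_K_and_S(C):
    lam = np.linalg.svd(C, compute_uv=False) ** 2
    lam = lam / lam.sum()                # normalize to probabilities
    K = 1.0 / np.sum(lam ** 2)
    S = float(-np.sum(lam * np.log(lam + 1e-300)))
    return K, S

# Maximally entangled state on d = 4 modes: K = d and S = ln d.
d = 4
K, S = schmidt_K_and_S(np.eye(d) / np.sqrt(d))
print(K, S)  # K = 4, S = ln 4 ~ 1.386
```

For flat Schmidt spectra S = ln K exactly; for non-flat spectra S < ln K, which is why K alone is only a coarse proxy for the shared bits discussed in the abstract.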
Hilbert space structure in quantum gravity: an algebraic perspective
Giddings, Steven B.
2015-12-16
If quantum gravity respects the principles of quantum mechanics, suitably generalized, it may be that a more viable approach to the theory is through identifying the relevant quantum structures rather than by quantizing classical spacetime. Here, this viewpoint is supported by difficulties of such quantization, and by the apparent lack of a fundamental role for locality. In finite or discrete quantum systems, important structure is provided by tensor factorizations of the Hilbert space. However, even in local quantum field theory properties of the generic type III von Neumann algebras and of long range gauge fields indicate that factorization of the Hilbert space is problematic. Instead it is better to focus on the structure of the algebra of observables, and in particular on its subalgebras corresponding to regions. This paper suggests that study of analogous algebraic structure in gravity gives an important perspective on the nature of the quantum theory. Significant departures from the subalgebra structure of local quantum field theory are found, working in the correspondence limit of long-distances/low-energies. Particularly, there are obstacles to identifying commuting algebras of localized operators. In addition to suggesting important properties of the algebraic structure, this and related observations pose challenges to proposals of a fundamental role for entanglement.
Time Asymmetric Quantum Mechanics
NASA Astrophysics Data System (ADS)
Bohm, Arno R.; Gadella, Manuel; Kielanowski, Piotr
2011-09-01
The meaning of time asymmetry in quantum physics is discussed. On the basis of a mathematical theorem, the Stone-von Neumann theorem, the solutions of the dynamical equations, the Schrödinger equation (1) for states or the Heisenberg equation (6a) for observables, are given by a unitary group. Dirac kets require the concept of an RHS (rigged Hilbert space) of Schwartz functions; for this kind of RHS a mathematical theorem also leads to time-symmetric group evolution. Scattering theory suggests distinguishing mathematically between states (defined by a preparation apparatus) and observables (defined by a registration apparatus (detector)). If one requires that scattering resonances of width Γ and exponentially decaying states of lifetime τ=h/Γ should be the same physical entities (for which there is sufficient evidence), one is led to a pair of RHS's of Hardy functions and, connected with it, to a semigroup time evolution t_0 ≤ t < ∞, with the puzzling result that there is a quantum mechanical beginning of time, just like the big bang time for the universe, when it was a quantum system. The decay of quasi-stable particles is used to illustrate this quantum mechanical time asymmetry. From the analysis of these processes, we show that the properties of rigged Hilbert spaces of Hardy functions are suitable for a formulation of time asymmetry in quantum mechanics.
Marginally trapped surfaces and AdS/CFT
NASA Astrophysics Data System (ADS)
Grado-White, Brianna; Marolf, Donald
2018-02-01
It has been proposed that the areas of marginally trapped or anti-trapped surfaces (also known as leaves of holographic screens) may encode some notion of entropy. To connect this to AdS/CFT, we study the case of marginally trapped surfaces anchored to an AdS boundary. We establish that such boundary-anchored leaves lie between the causal and extremal surfaces defined by the anchor and that they have area bounded below by that of the minimal extremal surface. This suggests that the area of any leaf represents a coarse-grained von Neumann entropy for the associated region of the dual CFT. We further demonstrate that the leading area-divergence of a boundary-anchored marginally trapped surface agrees with that for the associated extremal surface, though subleading divergences generally differ. Finally, we generalize an argument of Bousso and Engelhardt to show that holographic screens with all leaves anchored to the same boundary set have leaf-areas that increase monotonically along the screen, and we describe a construction through which this monotonicity can take the more standard form of requiring entropy to increase with boundary time. This construction is related to what one might call future causal holographic information, which in such cases also provides an upper bound on the area of the associated leaves.
Gao, Shuang; Liu, Gang; Chen, Qilai; Xue, Wuhong; Yang, Huali; Shang, Jie; Chen, Bin; Zeng, Fei; Song, Cheng; Pan, Feng; Li, Run-Wei
2018-02-21
Resistive random access memory (RRAM) with inherent logic-in-memory capability exhibits great potential to construct beyond-von-Neumann computers. Particularly, unipolar RRAM is more promising because its single-polarity operation enables large-scale crossbar logic-in-memory circuits with the highest integration density and simpler peripheral control circuits. However, unipolar RRAM usually exhibits poor switching uniformity because of random activation of conducting filaments and consequently cannot meet the strict uniformity requirement for logic-in-memory application. In this contribution, a new methodology that constructs cone-shaped conducting filaments by using a chemically active metal cathode is proposed to improve unipolar switching uniformity. Such a peculiar metal cathode will react spontaneously with the oxide switching layer to form an interfacial layer, which together with the metal cathode itself can act as a load resistor to prevent the overgrowth of conducting filaments and thus make them more cone-like. In this way, the rupture of conducting filaments can be strictly limited to the tip region, making their residual parts favorable locations for subsequent filament growth and thus suppressing their random regeneration. As such, a novel "one switch + one unipolar RRAM cell" hybrid structure is capable of realizing all 16 Boolean logic functions for large-scale logic-in-memory circuits.
Wealth distribution across communities of adaptive financial agents
NASA Astrophysics Data System (ADS)
DeLellis, Pietro; Garofalo, Franco; Lo Iudice, Francesco; Napoletano, Elena
2015-08-01
This paper studies the trading volumes and wealth distribution in a novel agent-based model of an artificial financial market. In this model, heterogeneous agents, behaving according to von Neumann–Morgenstern utility theory, may mutually interact. A Tobin-like tax (TT) on successful investments and a flat tax are compared to assess the effects on the agents' wealth distribution. We carry out extensive numerical simulations in two alternative scenarios: (i) a reference scenario, where the agents keep their utility function fixed, and (ii) a focal scenario, where the agents are adaptive and self-organize in communities, emulating their neighbours by updating their own utility function. Specifically, the interactions among the agents are modelled through a directed scale-free network to account for the presence of community leaders, and the herding-like effect is tested against the reference scenario. We observe that our model is capable of replicating the benefits and drawbacks of the two taxation systems and that the interactions among the agents strongly affect the wealth distribution across the communities. Remarkably, the communities benefit from the presence of leaders with successful trading strategies and are more likely to increase their average wealth. Moreover, this emulation mechanism mitigates the decrease in trading volumes, which is a typical drawback of TTs.
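The von Neumann–Morgenstern framework the agents follow ranks lotteries by expected utility. As a minimal sketch of that principle (not the paper's actual agent model; the CRRA utility form, risk-aversion parameter, and payoffs below are purely illustrative), a risk-averse agent prefers a sure payoff over a mean-preserving gamble:

```python
import numpy as np

def expected_utility(outcomes, probs, u):
    """Expected utility of a lottery: E[u(x)] under the vNM framework."""
    return float(np.dot(probs, u(np.asarray(outcomes, dtype=float))))

def crra(gamma):
    """CRRA utility u(w) = (w^(1-gamma) - 1) / (1 - gamma); log for gamma = 1."""
    if gamma == 1.0:
        return np.log
    return lambda w: (w ** (1.0 - gamma) - 1.0) / (1.0 - gamma)

# A risk-averse agent (gamma = 2) prefers a sure 100 over a fair 50/150
# gamble, even though both lotteries have the same expected value.
u = crra(2.0)
eu_safe = expected_utility([100.0], [1.0], u)
eu_risky = expected_utility([50.0, 150.0], [0.5, 0.5], u)
```

With gamma = 2 the utility is u(w) = 1 - 1/w, so eu_safe = 0.99 while the gamble averages to a strictly smaller value; a higher gamma widens the gap.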
Bethe, Oppenheimer, Teller and the Fermi Award: Norris Bradbury Speaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meade, Roger Allen
In 1956 the Enrico Fermi Presidential Award was established to recognize scientists, engineers, and science policymakers who gave unstintingly over their careers to advance energy science and technology. The first recipient was John von Neumann. Among those scientists thought eligible for the award were Hans Bethe, J. Robert Oppenheimer, and Edward Teller. In 1959 Norris Bradbury was asked to comment on the relative merits of each of these three men, whom he knew well from their affiliation with Los Alamos. Below is a reproduction of the letter Bradbury sent to Dr. Warren C. Johnson of the AEC's General Advisory Committee (GAC) containing his evaluation of each man. The letter might surprise those not accustomed to Bradbury's modus operandi of providing very detailed and forthright answers to the AEC. The letter itself was found in a cache of old microfilm. Whether because of the age of the microfilm or the quality of the filming process, portions of the letter are not legible. Where empty brackets appear, the word or words could not be read or deduced. Words appearing in brackets are guesses that appear, from the image, to be what was written. These guesses, of course, are just that: guesses.
Monte Carlo simulation of quantum Zeno effect in the brain
NASA Astrophysics Data System (ADS)
Georgiev, Danko
2015-12-01
Environmental decoherence appears to be the biggest obstacle for successful construction of quantum mind theories. Nevertheless, the quantum physicist Henry Stapp promoted the view that the mind could utilize quantum Zeno effect to influence brain dynamics and that the efficacy of such mental efforts would not be undermined by environmental decoherence of the brain. To address the physical plausibility of Stapp's claim, we modeled the brain using quantum tunneling of an electron in a multiple-well structure such as the voltage sensor in neuronal ion channels and performed Monte Carlo simulations of quantum Zeno effect exerted by the mind upon the brain in the presence or absence of environmental decoherence. The simulations unambiguously showed that the quantum Zeno effect breaks down for timescales greater than the brain decoherence time. To generalize the Monte Carlo simulation results for any n-level quantum system, we further analyzed the change of brain entropy due to the mind probing actions and proved a theorem according to which local projections cannot decrease the von Neumann entropy of the unconditional brain density matrix. The latter theorem establishes that Stapp's model is physically implausible but leaves a door open for future development of quantum mind theories provided the brain has a decoherence-free subspace.
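The entropy theorem cited above can be checked numerically in a toy setting. The sketch below (a generic single-qubit example, not the paper's multiple-well model) shows that the unconditional state after a projective measurement has von Neumann entropy at least as large as before the measurement:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log rho), computed from eigenvalues (natural log)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

# Pure qubit state |+> = (|0> + |1>) / sqrt(2): entropy 0.
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(plus, plus)

# Unconditional post-measurement state after projecting in the
# computational basis: rho' = P0 rho P0 + P1 rho P1 (the off-diagonal
# coherences are erased, leaving the maximally mixed state).
P0 = np.diag([1.0, 0.0])
P1 = np.diag([0.0, 1.0])
rho_after = P0 @ rho @ P0 + P1 @ rho @ P1

S_before = von_neumann_entropy(rho)       # 0 for a pure state
S_after = von_neumann_entropy(rho_after)  # log 2 for the mixed state
```

For this pure state the entropy rises from 0 to log 2; in general, projecting (without recording the outcome) can only keep the entropy constant or increase it, which is the content of the theorem.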
Memristor-based cellular nonlinear/neural network: design, analysis, and applications.
Duan, Shukai; Hu, Xiaofang; Dong, Zhekang; Wang, Lidan; Mazumder, Pinaki
2015-06-01
Cellular nonlinear/neural network (CNN) has been recognized as a powerful massively parallel architecture capable of solving complex engineering problems by performing trillions of analog operations per second. The memristor was theoretically predicted in the late seventies, but it garnered nascent research interest due to the recent much-acclaimed discovery of nanocrossbar memories by engineers at the Hewlett-Packard Laboratory. The memristor is expected to be co-integrated with nanoscale CMOS technology to revolutionize conventional von Neumann as well as neuromorphic computing. In this paper, a compact CNN model based on memristors is presented along with its performance analysis and applications. In the new CNN design, the memristor bridge circuit acts as the synaptic circuit element and substitutes the complex multiplication circuit used in traditional CNN architectures. In addition, the negative differential resistance and nonlinear current-voltage characteristics of the memristor have been leveraged to replace the linear resistor in conventional CNNs. The proposed CNN design has several merits, for example, high density, nonvolatility, and programmability of synaptic weights. The proposed memristor-based CNN design operations for implementing several image processing functions are illustrated through simulation and contrasted with conventional CNNs. Monte-Carlo simulation has been used to demonstrate the behavior of the proposed CNN due to the variations in memristor synaptic weights.
NASA Astrophysics Data System (ADS)
Faghihi, M. J.; Tavassoly, M. K.; Bagheri Harouni, M.
2014-04-01
In this paper, we study the interaction between a Λ-type three-level atom and two quantized electromagnetic fields which are simultaneously injected into a bichromatic cavity surrounded by a Kerr medium in the presence of field-field interaction (parametric down-conversion) and detuning parameters. By applying a canonical transformation, the introduced model is reduced to a well-known form of the generalized Jaynes-Cummings model. Under particular initial conditions which may be prepared for the atom and the field, the time evolution of the state vector of the entire system is analytically evaluated. Then, the dynamics of the atom is studied through the evolution of the atomic population inversion. In addition, two different measures of entanglement in the tripartite system (three entities make up the system: two field modes and one atom), i.e., the von Neumann and linear entropies, are investigated. Also, two kinds of entropic uncertainty relations, from which entropy squeezing can be obtained, are discussed. In each case, the influences of the detuning parameters and the Kerr medium on the above nonclassicality features are analyzed in detail via numerical results. It is illustrated that the degree of the above-mentioned physical phenomena can be tuned by choosing the relevant parameters appropriately.
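The two entanglement measures mentioned, von Neumann entropy and linear entropy, are both computed from a reduced density matrix. A minimal sketch for a maximally entangled two-qubit state (an illustrative stand-in, not the paper's atom-field computation):

```python
import numpy as np

def reduced_density_first(psi, dA, dB):
    """Partial trace over the second subsystem of a pure state on C^dA (x) C^dB."""
    M = psi.reshape(dA, dB)
    return M @ M.conj().T

def von_neumann_entropy(rho):
    """S = -Tr(rho log rho), from eigenvalues (natural log)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

def linear_entropy(rho):
    """S_L = 1 - Tr(rho^2): 0 for pure states, positive for mixed ones."""
    return float(1.0 - np.trace(rho @ rho).real)

# Maximally entangled two-qubit Bell state (|00> + |11>) / sqrt(2).
bell = np.zeros(4)
bell[0] = bell[3] = 1.0 / np.sqrt(2.0)

rho_A = reduced_density_first(bell, 2, 2)  # I/2, maximally mixed
S_vn = von_neumann_entropy(rho_A)   # log 2, maximal for a qubit
S_lin = linear_entropy(rho_A)       # 1/2, maximal for a qubit
```

Both measures vanish for a product state and reach their maxima here; linear entropy is cheaper to evaluate since it needs no diagonalization, which is why the two are often reported side by side.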
Subnanosecond measurements of detonation fronts in solid high explosives
NASA Astrophysics Data System (ADS)
Sheffield, S. A.; Bloomquist, D. D.; Tarver, C. M.
1984-04-01
Detonation fronts in solid high explosives have been examined through measurements of particle velocity histories resulting from the interaction of a detonation wave with a thin metal foil backed by a water window. Using a high time resolution velocity-interferometer system, experiments were conducted on three explosives—a TATB (1,3,5-triamino-trinitrobenzene)-based explosive called PBX-9502, TNT (2,4,6-Trinitrotoluene), and CP (2-{5-cyanotetrazolato} pentaamminecobalt {III} perchlorate). In all cases, detonation-front rise times were found to be less than the 300 ps resolution of the interferometer system. The thermodynamic state in the front of the detonation wave was estimated to be near the unreacted state determined from an extrapolation of low-pressure unreacted Hugoniot data for both TNT and PBX-9502 explosives. Computer calculations based on an ignition and growth model of a Zeldovich-von Neumann-Doering (ZND) detonation wave show good agreement with the measurements. By using the unreacted Hugoniot and a JWL equation of state for the reaction products, we estimated the initial reaction rate in the high explosive after the detonation wave front interacted with the foil to be 40 μs-1 for CP, 60 μs-1 for TNT, and 80 μs-1 for PBX-9502. The shape of the profiles indicates the reaction rate decreases as reaction proceeds.
Fully parallel write/read in resistive synaptic array for accelerating on-chip learning
NASA Astrophysics Data System (ADS)
Gao, Ligang; Wang, I.-Ting; Chen, Pai-Yu; Vrudhula, Sarma; Seo, Jae-sun; Cao, Yu; Hou, Tuo-Hung; Yu, Shimeng
2015-11-01
A neuro-inspired computing paradigm beyond the von Neumann architecture is emerging; it generally takes advantage of massive parallelism and is aimed at complex tasks that involve intelligence and learning. The cross-point array architecture with synaptic devices has been proposed for on-chip implementation of the weighted sum and weight update in learning algorithms. In this work, forming-free, silicon-process-compatible Ta/TaOx/TiO2/Ti synaptic devices are fabricated, in which >200 levels of conductance states can be continuously tuned by identical programming pulses. To demonstrate the advantages of the parallelism of the cross-point array architecture, a novel fully parallel write scheme is designed and experimentally demonstrated in a small-scale crossbar array to accelerate the weight update in the training process, at a speed that is independent of the array size. Compared to the conventional row-by-row write scheme, it achieves >30× speed-up and >30× improvement in energy efficiency as projected in a large-scale array. When realistic synaptic device characteristics such as device variations are taken into account in an array-level simulation, the proposed array architecture is able to achieve ∼95% recognition accuracy on MNIST handwritten digits, which is close to the accuracy achieved by software using the ideal sparse coding algorithm.
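The weighted sum and the weight update both map naturally onto matrix operations. A minimal numerical sketch (array sizes, inputs, and learning rate are illustrative, not the paper's device parameters) of why a single rank-1 parallel write reproduces the row-by-row result:

```python
import numpy as np

rng = np.random.default_rng(0)

# Conductance matrix of a hypothetical N x M synaptic crossbar
# (sizes and values below are purely illustrative).
N, M = 8, 4
W = rng.uniform(0.1, 1.0, size=(N, M))

# Weighted sum: driving the rows with input voltages x sums currents
# along each column, i.e. the array computes x @ W in one analog step.
x = rng.uniform(0.0, 1.0, size=N)
weighted_sum = x @ W

# Delta-rule weight update dW = eta * outer(x, err). A fully parallel
# write applies this rank-1 update in a single operation, whereas the
# conventional scheme loops row by row; both give identical weights.
eta = 0.1
err = rng.uniform(-1.0, 1.0, size=M)
W_parallel = W + eta * np.outer(x, err)
W_rowwise = W.copy()
for i in range(N):
    W_rowwise[i, :] += eta * x[i] * err
```

The row loop takes N sequential steps while the outer-product write takes one, which is the source of the array-size-independent speed-up claimed above.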
Geometrical tile design for complex neighborhoods.
Czeizler, Eugen; Kari, Lila
2009-01-01
Recent research has shown that tile systems are one of the most suitable theoretical frameworks for the spatial study and modeling of self-assembly processes, such as the formation of DNA and protein oligomeric structures. A Wang tile is a unit square, with glues on its edges, attaching to other tiles and forming larger and larger structures. Although quite intuitive, the idea of glues placed on the edges of a tile is not always natural for simulating the interactions occurring in some real systems. For example, when considering protein self-assembly, the shape of a protein is the main determinant of its functions and its interactions with other proteins. Our goal is to use geometric tiles, i.e., square tiles with geometrical protrusions on their edges, for simulating tiled paths (zippers) with complex neighborhoods, by ribbons of geometric tiles with simple, local neighborhoods. This paper is a step toward solving the general case of an arbitrary neighborhood, by proposing geometric tile designs that solve the case of a "tall" von Neumann neighborhood, the case of the f-shaped neighborhood, and the case of a 3 × 5 "filled" rectangular neighborhood. The techniques can be combined and generalized to solve the problem in the case of any neighborhood, centered at the tile of reference, and included in a 3 × (2k + 1) rectangle.
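For reference, the standard von Neumann neighborhood of range r is the set of cells within Manhattan distance r of the center; the "tall" and rectangular variants studied in the paper generalize this shape. A small sketch of the standard definition (my own helper, not code from the paper):

```python
def von_neumann_neighborhood(r):
    """Cells within Manhattan distance r of the origin (2r^2 + 2r + 1 cells)."""
    return {(dx, dy)
            for dx in range(-r, r + 1)
            for dy in range(-r, r + 1)
            if abs(dx) + abs(dy) <= r}
```

Range 1 gives the familiar 5-cell cross; each increment of r adds a diamond ring, so the count grows quadratically as 2r^2 + 2r + 1.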
An electrically reconfigurable logic gate intrinsically enabled by spin-orbit materials.
Kazemi, Mohammad
2017-11-10
The spin degree of freedom in magnetic devices has been discussed widely for computing, since it could significantly reduce energy dissipation, might enable beyond-von Neumann computing, and could have applications in quantum computing. For spin-based computing to become widespread, however, energy-efficient logic gates comprising as few devices as possible are required. Considerable recent progress has been reported in this area. However, proposals for spin-based logic either require ancillary charge-based devices and circuits in each individual gate or adopt principles underlying charge-based computing by employing ancillary spin-based devices, which largely negates possible advantages. Here, we show that spin-orbit materials possess an intrinsic basis for the execution of logic operations. We present a spin-orbit logic gate that performs a universal logic operation utilizing the minimum possible number of devices, that is, the essential devices required for representing the logic operands. Also, whereas previous proposals for spin-based logic require extra devices in each individual gate to provide reconfigurability, the proposed gate is 'electrically' reconfigurable at run-time simply by setting the amplitude of the clock pulse applied to the gate. We demonstrate, analytically and numerically with experimentally benchmarked models, that the gate performs logic operations and simultaneously stores the result, realizing 'stateful' spin-based logic scalable to ultralow energy dissipation.
Elisabeth Noelle-Neumann's "Spiral of Silence" and the Historical Context of Communication Theory.
ERIC Educational Resources Information Center
Simpson, Christopher
1996-01-01
Examines the work and the life of German public opinion expert Elisabeth Noelle-Neumann. Shows that the attitudes and analytic tools she forged during her youth and brought to bear in her work as a Nazi Collaborator and apologist shaped her later thinking, including her articulation of the "spiral of silence" model of mass communication…
Evaluating U.S. Military Engineering Efforts In East Africa
2013-03-01
Michael J. Neumann, Cathryn Quantic Thurston, Developing an Army Strategy for Building Partner Capacity for Stability Operations (Santa Monica, CA...); Building Partnership Capacity, Master of Military Art and Science Thesis (Fort Leavenworth, KS: U.S. Army Command and General Staff College, June...)
Dirichlet to Neumann operator for Abelian Yang-Mills gauge fields
NASA Astrophysics Data System (ADS)
Díaz-Marín, Homero G.
We consider the Dirichlet to Neumann operator for Abelian Yang-Mills boundary conditions. The aim is constructing a complex structure for the symplectic space of boundary conditions of Euler-Lagrange solutions modulo gauge for space-time manifolds with smooth boundary. Thus we prepare a suitable scenario for geometric quantization within the reduced symplectic space of boundary conditions of Abelian gauge fields.
The Role of Antisociality in the Psychopathy Construct: Comment on Skeem and Cooke (2010)
ERIC Educational Resources Information Center
Hare, Robert D.; Neumann, Craig S.
2010-01-01
J. Skeem and D. J. Cooke (2010) asserted that Hare and Neumann consider criminality to be an essential component of the psychopathy construct. The assertion, presented in the guise of a debate on the nature of psychopathy, is neither accurate nor consistent with the clinical and empirical literature on psychopathy to which Hare and Neumann have…
Reliability of digital circuits under the influence of intrinsic noise
NASA Astrophysics Data System (ADS)
Kleeberger, V. B.; Schlichtmann, U.
2011-08-01
Continuing miniaturization of integrated circuits leads to an increase in intrinsic noise. To analyze the influence of intrinsic noise on the reliability of future digital circuits, methods are needed that are based on CAD techniques such as analog simulation rather than on rough estimates. This contribution presents a new method that can analyze the influence of intrinsic noise in digital circuits for a given process technology. The amplitudes of thermal, 1/f, and shot noise are determined with the aid of a SPICE simulator. The influence of the noise on circuit reliability is then analyzed by simulation. Beyond the analysis itself, ways are shown in which noise-induced effects can be taken into account during circuit design. In contrast to the state of the art, the presented method can be applied to arbitrary logic implementations and process technologies. It is also shown that previous approaches overestimate the influence of noise by up to a factor of four.
NASA Astrophysics Data System (ADS)
Jun, Li; Huicheng, Yin
2018-05-01
The paper is devoted to investigating long time behavior of smooth small data solutions to 3-D quasilinear wave equations outside of compact convex obstacles with Neumann boundary conditions. Concretely speaking, when the surface of a 3-D compact convex obstacle is smooth and the quasilinear wave equation fulfills the null condition, we prove that the smooth small data solution exists globally provided that the Neumann boundary condition on the exterior domain is given. One of the main ingredients in the current paper is the establishment of local energy decay estimates of the solution itself. As an application of the main result, the global stability to 3-D static compressible Chaplygin gases in exterior domain is shown under the initial irrotational perturbation with small amplitude.