Finite-block-length analysis in classical and quantum information theory.
Hayashi, Masahito
2017-01-01
Coding technology is used in several information processing tasks. In particular, when noise during transmission disturbs communications, coding technology is employed to protect the information. However, there are two types of coding technology: coding in classical information theory and coding in quantum information theory. Although the physical media used to transmit information ultimately obey quantum mechanics, we need to choose the type of coding depending on the kind of information device, classical or quantum, that is being used. In both branches of information theory, there are many elegant theoretical results under the ideal assumption that an infinitely large system is available. In a realistic situation, we need to account for finite size effects. The present paper reviews finite size effects in classical and quantum information theory with respect to various topics, including applied aspects.
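The finite-size effects this review surveys are often quantified by the second-order (normal) approximation to the best achievable finite-blocklength rate, log M* ≈ nC − sqrt(nV)·Q⁻¹(ε). As a hedged illustration only (the binary symmetric channel, crossover probability, and target error probability below are assumed values, not taken from the review):

```python
# Sketch of the normal approximation for a binary symmetric channel (BSC).
# Channel and parameter choices are illustrative assumptions.
from math import log2, sqrt
from statistics import NormalDist

def bsc_capacity(p):
    """Capacity of a BSC with crossover probability p, in bits/use."""
    return 1 + p*log2(p) + (1-p)*log2(1-p)

def bsc_dispersion(p):
    """Channel dispersion V of the BSC."""
    return p*(1-p)*log2((1-p)/p)**2

def normal_approx_rate(n, p, eps):
    """Second-order approximation to the best rate at blocklength n, error eps."""
    qinv = NormalDist().inv_cdf(1 - eps)   # Q^{-1}(eps)
    return bsc_capacity(p) - sqrt(bsc_dispersion(p)/n)*qinv

# The back-off from capacity shrinks like 1/sqrt(n):
for n in (100, 1000, 10000):
    print(n, round(normal_approx_rate(n, 0.11, 1e-3), 4))
```

The print loop makes the finite-size penalty visible: at short blocklengths the achievable rate sits well below the asymptotic capacity and approaches it only as n grows.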
NP-hardness of decoding quantum error-correction codes
NASA Astrophysics Data System (ADS)
Hsieh, Min-Hsiu; Le Gall, François
2011-05-01
Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no efficient decoding algorithm exists for the general quantum decoding problem and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
Correcting quantum errors with entanglement.
Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu
2006-10-20
We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeoka, Masahiro; Fujiwara, Mikio; Mizuno, Jun
2004-05-01
Quantum-information theory predicts that when the transmission resource is doubled in quantum channels, the amount of information transmitted can be increased more than twice by quantum-channel coding techniques, whereas the increase is at most twice in classical information theory. This remarkable feature, the superadditive quantum-coding gain, can be implemented by appropriate choices of code words and corresponding quantum decoding, which requires a collective quantum measurement. Recently, an experimental demonstration was reported [M. Fujiwara et al., Phys. Rev. Lett. 90, 167906 (2003)]. The purpose of this paper is to describe our experiment in detail. In particular, a design strategy of quantum-collective decoding in physical quantum circuits is emphasized. We also address the practical implication of the gain on communication performance by introducing a quantum-classical hybrid coding scheme. We show how the superadditive quantum-coding gain, even at a small code length, can boost the communication performance of conventional coding techniques.
Entanglement-assisted quantum convolutional coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilde, Mark M.; Brun, Todd A.
2010-04-15
We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.
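As an illustrative sketch only (not the authors' convolutional construction itself): for CSS-type entanglement-assisted block codes built from arbitrary classical binary codes with check matrices Hx and Hz, the number of preshared ebits consumed equals the GF(2) rank of Hx·Hzᵀ, which vanishes exactly when the usual dual-containing condition holds. The matrices below are hypothetical toy examples:

```python
# Counting the ebits an entanglement-assisted CSS code would consume,
# assuming hypothetical toy check matrices Hx and Hz.
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # swap pivot row up
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]               # eliminate this column elsewhere
        rank += 1
    return rank

# Hypothetical check matrices of two small binary codes:
Hx = np.array([[1, 1, 0, 1],
               [0, 1, 1, 1]], dtype=int)
Hz = np.array([[1, 0, 0, 1]], dtype=int)

ebits = gf2_rank(Hx @ Hz.T)
print("preshared ebits needed:", ebits)  # 0 would mean the CSS condition already holds
```

For these toy matrices the product has GF(2) rank 1, so one preshared ebit suffices to "quantize" this pair of classical codes.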
Coherent-state constellations and polar codes for thermal Gaussian channels
NASA Astrophysics Data System (ADS)
Lacerda, Felipe; Renes, Joseph M.; Scholz, Volkher B.
2017-06-01
Optical communication channels are ultimately quantum mechanical in nature, and we must therefore look beyond classical information theory to determine their communication capacity as well as to find efficient encoding and decoding schemes of the highest rates. Thermal channels, which arise from linear coupling of the field to a thermal environment, are of particular practical relevance; their classical capacity has been recently established, but their quantum capacity remains unknown. While the capacity sets the ultimate limit on reliable communication rates, it does not promise that such rates are achievable by practical means. Here we construct efficiently encodable codes for thermal channels which achieve the classical capacity and the so-called Gaussian coherent information for transmission of classical and quantum information, respectively. Our codes are based on combining polar codes with a discretization of the channel input into a finite "constellation" of coherent states. Encoding of classical information can be done using linear optics.
Advanced Small Perturbation Potential Flow Theory for Unsteady Aerodynamic and Aeroelastic Analyses
NASA Technical Reports Server (NTRS)
Batina, John T.
2005-01-01
An advanced small perturbation (ASP) potential flow theory has been developed to improve upon the classical transonic small perturbation (TSP) theories that have been used in various computer codes. These computer codes are typically used for unsteady aerodynamic and aeroelastic analyses in the nonlinear transonic flight regime. The codes exploit the simplicity of stationary Cartesian meshes with the movement or deformation of the configuration under consideration incorporated into the solution algorithm through a planar surface boundary condition. The new ASP theory was developed methodically by first determining the essential elements required to produce full-potential-like solutions with a small perturbation approach on the requisite Cartesian grid. This level of accuracy required a higher-order streamwise mass flux and a mass conserving surface boundary condition. The ASP theory was further developed by determining the essential elements required to produce results that agreed well with Euler solutions. This level of accuracy required mass conserving entropy and vorticity effects, and second-order terms in the trailing wake boundary condition. Finally, an integral boundary layer procedure, applicable to both attached and shock-induced separated flows, was incorporated for viscous effects. The resulting ASP potential flow theory, including entropy, vorticity, and viscous effects, is shown to be mathematically more appropriate and computationally more accurate than the classical TSP theories. The formulaic details of the ASP theory are described fully and the improvements are demonstrated through careful comparisons with accepted alternative results and experimental data. The new theory has been used as the basis for a new computer code called ASP3D (Advanced Small Perturbation - 3D), which also is briefly described with representative results.
Topics in quantum cryptography, quantum error correction, and channel simulation
NASA Astrophysics Data System (ADS)
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But an explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret-key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels Devetak's work on the entanglement-assisted quantum communication capacity. The formula yields a new family protocol, the private father protocol, under the resource inequality framework, which includes private classical communication without assisted secret keys as a child protocol.
For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information and common randomness distillation. Simple proofs of achievability of classical multi-terminal source coding problems can be made via a unified approach using the channel simulation theorem as building blocks. The fully quantum generalization of the problem is also conjectured with outer and inner bounds on the achievable rate pairs.
An Introduction to Quantum Theory
NASA Astrophysics Data System (ADS)
Greensite, Jeff
2017-02-01
Written in a lucid and engaging style, this book takes readers from an overview of classical mechanics and the historical development of quantum theory through to advanced topics. The mathematical aspects of quantum theory necessary for a firm grasp of the subject are developed in the early chapters, but an effort is made to motivate that formalism on physical grounds. Including animated figures and their respective Mathematica® codes, this book provides a complete and comprehensive text for students in physics, maths, chemistry and engineering needing an accessible introduction to quantum mechanics.
The analysis of convolutional codes via the extended Smith algorithm
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Onyszchuk, I.
1993-01-01
Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic properties of a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
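As a small illustration of two of the quantities named above: for a generator matrix G(D) over GF(2) that is already in minimal basic form, the Forney indices are its row degrees and the code degree is their sum. The matrix below is a hypothetical example, and a real tool (like the article's Smith-algorithm approach) would first reduce an arbitrary G(D) to minimal basic form:

```python
# Forney indices and degree of a hypothetical minimal basic generator matrix,
# with GF(2)[D] polynomials stored as bitmasks (bit i = coefficient of D^i).
def poly_deg(p):
    """Degree of a GF(2) polynomial bitmask; deg(0) = -1 by convention."""
    return p.bit_length() - 1 if p else -1

def row_degrees(G):
    """Row degrees of a polynomial generator matrix."""
    return [max(poly_deg(p) for p in row) for row in G]

# G(D) = [[1+D, D, 1], [D^2, 1, 1+D+D^2]] for a (3,2) code:
G = [[0b11, 0b10, 0b01],
     [0b100, 0b001, 0b111]]

nu = row_degrees(G)
print("Forney indices:", nu, " degree:", sum(nu))  # [1, 2], degree 3
```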
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradler, Kamil; Hayden, Patrick; Touchette, Dave
Coding theorems in quantum Shannon theory express the ultimate rates at which a sender can transmit information over a noisy quantum channel. More often than not, the known formulas expressing these transmission rates are intractable, requiring an optimization over an infinite number of uses of the channel. Researchers have rarely found quantum channels with a tractable classical or quantum capacity, but when such a finding occurs, it demonstrates a complete understanding of that channel's capabilities for transmitting classical or quantum information. Here we show that the three-dimensional capacity region for entanglement-assisted transmission of classical and quantum information is tractable for the Hadamard class of channels. Examples of Hadamard channels include generalized dephasing channels, cloning channels, and the Unruh channel. The generalized dephasing channels and the cloning channels are natural processes that occur in quantum systems through the loss of quantum coherence or stimulated emission, respectively. The Unruh channel is a noisy process that occurs in relativistic quantum information theory as a result of the Unruh effect and bears a strong relationship to the cloning channels. We give exact formulas for the entanglement-assisted classical and quantum communication capacity regions of these channels. The coding strategy for each of these examples is superior to a naive time-sharing strategy, and we introduce a measure to determine this improvement.
JDFTx: Software for joint density-functional theory
Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Schwarz, Kathleen A.; ...
2017-11-14
Density-functional theory (DFT) has revolutionized computational prediction of atomic-scale properties from first principles in physics, chemistry and materials science. Continuing development of new methods is necessary for accurate predictions of new classes of materials and properties, and for connecting to nano- and mesoscale properties using coarse-grained theories. JDFTx is a fully-featured open-source electronic DFT software designed specifically to facilitate rapid development of new theories, models and algorithms. Using an algebraic formulation as an abstraction layer, compact C++11 code automatically performs well on diverse hardware including GPUs (Graphics Processing Units). This code hosts the development of joint density-functional theory (JDFT) that combines electronic DFT with classical DFT and continuum models of liquids for first-principles calculations of solvated and electrochemical systems. In addition, the modular nature of the code makes it easy to extend and interface with, facilitating the development of multi-scale toolkits that connect to ab initio calculations, e.g. photo-excited carrier dynamics combining electron and phonon calculations with electromagnetic simulations.
NASA Astrophysics Data System (ADS)
Semenov, Alexander; Babikov, Dmitri
2013-11-01
We formulated a mixed quantum/classical theory for rotationally and vibrationally inelastic scattering in a diatomic molecule + atom system. Two versions of the theory are presented, the first in the space-fixed and the second in the body-fixed reference frame. The first version is easy to derive and the resultant equations of motion are transparent, but the state-to-state transition matrix is complex-valued and dense. Such calculations may be computationally demanding for heavier molecules and/or higher temperatures, when the number of accessible channels becomes large. In contrast, the second version of the theory requires some tedious derivations and the final equations of motion are rather complicated (not particularly intuitive). However, the state-to-state transitions are driven by real-valued sparse matrices of much smaller size. Thus, this formulation is the method of choice from the computational point of view, while the space-fixed formulation can serve as a test of the body-fixed equations of motion and of the code. Rigorous numerical tests were carried out for a model system to ensure that all equations, matrices, and computer codes in both formulations are correct.
Topological order, entanglement, and quantum memory at finite temperature
NASA Astrophysics Data System (ADS)
Mazáč, Dalimil; Hamma, Alioscia
2012-09-01
We compute the topological entropy of the toric code models in arbitrary dimension at finite temperature. We find that the critical temperatures for the existence of full quantum (classical) topological entropy correspond to the confinement-deconfinement transitions in the corresponding Z2 gauge theories. This implies that the thermal stability of topological entropy corresponds to the stability of quantum (classical) memory. The implications for the understanding of ergodicity breaking in topological phases are discussed.
Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes
NASA Astrophysics Data System (ADS)
Harrington, James William
Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models.
I also present a local classical processing scheme for correcting errors on toric codes, which demonstrates that quantum information can be maintained in two dimensions by purely local (quantum and classical) resources.
NASA Technical Reports Server (NTRS)
Moncada, Albert M.; Chattopadhyay, Aditi; Bednarcyk, Brett A.; Arnold, Steven M.
2008-01-01
Predicting failure in a composite can be done with ply-level and/or micro-level mechanisms. This paper uses the Generalized Method of Cells and High-Fidelity Generalized Method of Cells micromechanics theories, coupled with classical lamination theory, as implemented within NASA's Micromechanics Analysis Code with Generalized Method of Cells. The code is able to implement different failure theories on the level of both the fiber and the matrix constituents within a laminate. A comparison is made among the maximum stress, maximum strain, Tsai-Hill, and Tsai-Wu failure theories. To verify the failure theories, the Worldwide Failure Exercise (WWFE) experiments have been used. The WWFE is a comprehensive study that covers a wide range of polymer matrix composite laminates. The numerical results indicate good correlation with the experimental results for most of the composite layups, but also point to the need for more accurate resin damage progression models.
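As a hedged sketch of how two of the ply-level criteria compared above are evaluated (the lamina strengths and applied stress state below are invented illustration numbers, not WWFE data, and a failure index of 1 or more predicts failure):

```python
# Comparing a maximum-stress and a Tsai-Hill failure index for one ply.
# All strength and stress values are hypothetical illustrations.
def max_stress_index(s1, s2, t12, X, Y, S):
    """Largest ratio of applied stress to allowable; >= 1 predicts failure."""
    return max(abs(s1)/X, abs(s2)/Y, abs(t12)/S)

def tsai_hill_index(s1, s2, t12, X, Y, S):
    """Quadratic interaction criterion; >= 1 predicts failure."""
    return (s1/X)**2 - s1*s2/X**2 + (s2/Y)**2 + (t12/S)**2

X, Y, S = 1500.0, 40.0, 70.0       # longitudinal, transverse, shear strengths (MPa)
s1, s2, t12 = 600.0, 20.0, 30.0    # applied ply stresses (MPa)

print("max stress index:", round(max_stress_index(s1, s2, t12, X, Y, S), 3))
print("Tsai-Hill index: ", round(tsai_hill_index(s1, s2, t12, X, Y, S), 3))
```

The two criteria generally disagree on the margin because Tsai-Hill couples the stress components while maximum stress checks each allowable independently, which is exactly the kind of difference the WWFE comparison exposes.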
Adaptive neural coding: from biological to behavioral decision-making
Louie, Kenway; Glimcher, Paul W.; Webb, Ryan
2015-01-01
Empirical decision-making in diverse species deviates from the predictions of normative choice theory, but why such suboptimal behavior occurs is unknown. Here, we propose that deviations from optimality arise from biological decision mechanisms that have evolved to maximize choice performance within intrinsic biophysical constraints. Sensory processing utilizes specific computations such as divisive normalization to maximize information coding in constrained neural circuits, and recent evidence suggests that analogous computations operate in decision-related brain areas. These adaptive computations implement a relative value code that may explain the characteristic context-dependent nature of behavioral violations of classical normative theory. Examining decision-making at the computational level thus provides a crucial link between the architecture of biological decision circuits and the form of empirical choice behavior. PMID:26722666
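The divisive normalization computation described above can be sketched in a few lines; the semisaturation constant `sigma` and the option values are assumed illustration numbers, not fitted parameters:

```python
# Minimal sketch of divisive normalization: each option's coded value
# is divided by a constant plus the summed value of the whole choice set.
def divisive_normalize(values, sigma=1.0):
    """Relative value code: v_i -> v_i / (sigma + sum of all values)."""
    denom = sigma + sum(values)
    return [v / denom for v in values]

small_set = divisive_normalize([4.0, 2.0])
large_set = divisive_normalize([4.0, 2.0, 6.0])
# The coded value of the same option drops when a third option is added,
# illustrating the context dependence described above:
print(small_set[0], ">", large_set[0])
```

This context dependence is the mechanism the review links to behavioral violations of classical normative theory: the representation of an option changes with the set it is presented in.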
A note on powers in finite fields
NASA Astrophysics Data System (ADS)
Aabrandt, Andreas; Lundsgaard Hansen, Vagn
2016-08-01
The study of solutions to polynomial equations over finite fields has a long history in mathematics and is an interesting area of contemporary research. In recent years, the subject has found important applications in the modelling of problems from applied mathematical fields such as signal analysis, system theory, coding theory and cryptology. In this connection, it is of interest to know criteria for the existence of squares and other powers in arbitrary finite fields. Making good use of polynomial division in polynomial rings over finite fields, we have examined a classical criterion of Euler for squares in odd prime fields, giving it a formulation that is apt for generalization to arbitrary finite fields and powers. Our proof uses algebra rather than classical number theory, which makes it convenient when presenting basic methods of applied algebra in the classroom.
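Euler's criterion for the odd prime field case discussed above can be checked directly with modular exponentiation (the prime below is an arbitrary small example):

```python
# Euler's criterion: for an odd prime p and a with gcd(a, p) = 1,
# a is a square in F_p iff a^((p-1)/2) == 1 (mod p).
def is_square_mod_p(a, p):
    return pow(a, (p - 1) // 2, p) == 1

p = 11
squares = {x * x % p for x in range(1, p)}
# The criterion agrees with brute-force enumeration of squares:
assert all(is_square_mod_p(a, p) == (a in squares) for a in range(1, p))
print("quadratic residues mod 11:", sorted(squares))  # [1, 3, 4, 5, 9]
```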
Developing Learning Objectives for Accounting Ethics Using Bloom's Taxonomy
ERIC Educational Resources Information Center
Kidwell, Linda A.; Fisher, Dann G.; Braun, Robert L.; Swanson, Diane L.
2013-01-01
The purpose of our article is to offer a set of core knowledge learning objectives for accounting ethics education. Using Bloom's taxonomy of educational objectives, we develop learning objectives in six content areas: codes of ethical conduct, corporate governance, the accounting profession, moral development, classical ethics theories, and…
New quantum codes derived from a family of antiprimitive BCH codes
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin
The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication systems and quantum information theory. In this paper, we study the construction of quantum codes from a family of q^2-ary BCH codes with length n = q^(2m) + 1 (also called antiprimitive BCH codes in the literature), where q ≥ 4 is a power of 2 and m ≥ 2. By a detailed analysis of some useful properties of q^2-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q^2-ary primitive BCH codes. Consequently, via the Hermitian construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.
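The q^2-ary cyclotomic cosets modulo n that the analysis above rests on are just the orbits of s → q^2·s (mod n), and are easy to enumerate. A small sketch for the assumed illustrative case q = 4, m = 2 (so n = q^(2m) + 1 = 257):

```python
# Enumerating q^2-ary cyclotomic cosets modulo n = q^(2m) + 1.
# The parameter choice q = 4, m = 2 is an illustrative assumption.
def cyclotomic_cosets(q2, n):
    """All orbits of s -> q2 * s (mod n); assumes gcd(q2, n) = 1."""
    seen, cosets = set(), []
    for s in range(n):
        if s in seen:
            continue
        coset, x = [], s
        while x not in coset:
            coset.append(x)
            x = (x * q2) % n
        seen.update(coset)
        cosets.append(coset)
    return cosets

q, m = 4, 2
n = q**(2 * m) + 1                 # 257
cosets = cyclotomic_cosets(q * q, n)
sizes = sorted({len(c) for c in cosets})
print(len(cosets), "cosets; sizes:", sizes)  # 65 cosets; sizes [1, 4]
```

Here every nonzero coset has size 4 because 16^2 ≡ −1 (mod 257), the kind of structural fact the paper's dual-containing conditions exploit.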
SurfKin: an ab initio kinetic code for modeling surface reactions.
Le, Thong Nguyen-Minh; Liu, Bin; Huynh, Lam K
2014-10-05
In this article, we describe a C/C++ program called SurfKin (Surface Kinetics) to construct microkinetic mechanisms for modeling gas-surface reactions. Thermodynamic properties of reaction species are estimated based on density functional theory calculations and statistical mechanics. Rate constants for elementary steps (including adsorption, desorption, and chemical reactions on surfaces) are calculated using classical collision theory and transition state theory. Methane decomposition and the water-gas shift reaction on the Ni(111) surface were chosen as test cases to validate the code implementation. The good agreement with literature data suggests that SurfKin is a powerful tool for facilitating the analysis of complex reactions on surfaces, and thus it helps to effectively construct detailed microkinetic mechanisms for such surface reactions. SurfKin also opens a possibility for designing nanoscale model catalysts.
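A hedged sketch of the Eyring transition-state-theory rate expression of the kind such a code evaluates (the 90 kJ/mol barrier is a made-up value, and SurfKin's actual rates additionally include partition-function factors from the DFT and statistical-mechanics inputs):

```python
# Eyring-form TST rate, k(T) = (kB*T/h) * exp(-Ea/(R*T)),
# with an assumed, purely illustrative barrier height.
from math import exp

KB = 1.380649e-23     # Boltzmann constant, J/K
H  = 6.62607015e-34   # Planck constant, J*s
R  = 8.314462618      # gas constant, J/(mol*K)

def tst_rate(T, Ea_kJmol):
    """Unimolecular TST rate in 1/s for a barrier Ea given in kJ/mol."""
    return (KB * T / H) * exp(-Ea_kJmol * 1e3 / (R * T))

# Rates rise steeply with temperature for a (hypothetical) 90 kJ/mol barrier:
for T in (300.0, 500.0, 800.0):
    print(f"T = {T:5.0f} K  k = {tst_rate(T, 90.0):.3e} 1/s")
```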
Autosophy information theory provides lossless data and video compression based on the data content
NASA Astrophysics Data System (ADS)
Holtz, Klaus E.; Holtz, Eric S.; Holtz, Diana
1996-09-01
A new autosophy information theory provides an alternative to the classical Shannon information theory. Using the new theory in communication networks provides both a high degree of lossless compression and virtually unbreakable encryption codes for network security. The bandwidth in a conventional Shannon communication is determined only by the data volume and the hardware parameters, such as image size, resolution, or frame rates in television. The data content, or what is shown on the screen, is irrelevant. In contrast, the bandwidth in autosophy communication is determined only by data content, such as novelty and movement in television images. It is the data volume and hardware parameters that become irrelevant. Basically, the new communication methods use prior 'knowledge' of the data, stored in a library, to encode subsequent transmissions. The more 'knowledge' stored in the libraries, the higher the potential compression ratio. 'Information' is redefined as that which is not already known by the receiver. Everything already known is redundant and need not be re-transmitted. In a perfect communication each transmission code, called a 'tip,' creates a new 'engram' of knowledge in the library, in which each tip transmission can represent any amount of data. Autosophy theories provide six separate learning modes, or omni-dimensional networks, all of which can be used for data compression. The new information theory reveals the theoretical flaws of other data compression methods, including Huffman, Lempel-Ziv, and LZW codes, as well as commercial compression standards such as V.42bis and MPEG-2.
Fundamental finite key limits for one-way information reconciliation in quantum key distribution
NASA Astrophysics Data System (ADS)
Tomamichel, Marco; Martinez-Mateo, Jesus; Pacher, Christoph; Elkouss, David
2017-11-01
The security of quantum key distribution protocols is guaranteed by the laws of quantum mechanics. However, a precise analysis of the security properties requires tools from both classical cryptography and information theory. Here, we employ recent results in non-asymptotic classical information theory to show that one-way information reconciliation imposes fundamental limitations on the amount of secret key that can be extracted in the finite key regime. In particular, we find that an often-used approximation for the information leakage during information reconciliation is not generally valid. We propose an improved approximation that takes into account finite key effects, and numerically test it against codes for two probability distributions, which we call binary-binary and binary-Gaussian, that typically appear in quantum key distribution protocols.
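For context, the often-used asymptotic approximation the abstract refers to estimates the reconciliation leakage as f·n·h(Q) for an efficiency factor f, block size n, and quantum bit error rate Q; the abstract's point is precisely that this needs finite-key corrections. A sketch with assumed, illustrative parameter values:

```python
# The common asymptotic leakage estimate leak_EC ~ f * n * h(Q).
# Parameter values (n, Q, f) are illustrative assumptions; the paper
# shows this estimate needs finite-key corrections.
from math import log2

def binary_entropy(q):
    return -q * log2(q) - (1 - q) * log2(1 - q)

def leakage_estimate(n, Q, f=1.1):
    """Bits disclosed during one-way reconciliation, asymptotic estimate."""
    return f * n * binary_entropy(Q)

print(round(leakage_estimate(10_000, 0.02)), "bits disclosed (estimate)")
```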
NASA Astrophysics Data System (ADS)
Marcolongo, Juan P.; Zeida, Ari; Semelak, Jonathan A.; Foglia, Nicolás O.; Morzan, Uriel N.; Estrin, Dario A.; González Lebrero, Mariano C.; Scherlis, Damián A.
2018-03-01
In this work we present the current advances in the development and the applications of LIO, a lab-made code designed for density functional theory calculations in graphical processing units (GPU), that can be coupled with different classical molecular dynamics engines. This code has been thoroughly optimized to perform efficient molecular dynamics simulations at the QM/MM DFT level, allowing for an exhaustive sampling of the configurational space. Selected examples are presented for the description of chemical reactivity in terms of free energy profiles, and also for the computation of optical properties, such as vibrational and electronic spectra in solvent and protein environments.
Classical theory of atomic collisions - The first hundred years
NASA Astrophysics Data System (ADS)
Grujić, Petar V.
2012-05-01
Classical calculations of atomic processes started in 1911 with Rutherford's famous evaluation of the differential cross section for α particles scattered on foil atoms [1]. The success of these calculations was soon overshadowed by the rise of Quantum Mechanics in 1925 and its triumphant success in describing processes at the atomic and subatomic levels. It was generally recognized that the classical approach should be inadequate, and it was neglected until 1953, when the famous paper by Gregory Wannier appeared, in which the threshold law for the behaviour of the single-ionization cross section under electron impact was derived. All later calculations and experimental studies confirmed the law derived by purely classical theory. The next step was taken by Ian Percival and collaborators in the 1960s, who developed a general classical three-body computer code, which was used by many researchers in evaluating various atomic processes such as ionization, excitation, detachment, and dissociation. Another approach was pursued by Michal Gryzinski from Warsaw, who started a far-reaching programme for treating atomic particles and processes as purely classical objects [2]. Though often criticized for overestimating the domain of the classical theory, the results of his group were able to match many experimental data. The Belgrade group pursued the classical approach using both analytical and numerical calculations, studying a number of atomic collisions, in particular near-threshold processes. The Riga group, led by Modris Gailitis [3], contributed considerably to the field, as did Valentin Ostrovsky and coworkers from Saint Petersburg, who developed powerful analytical methods within purely classical mechanics [4]. We give an overview of these approaches and show some of the remarkable results, which were subsequently confirmed by semiclassical and quantum mechanical calculations, as well as by experimental evidence.
Finally we discuss the theoretical and epistemological background of the classical calculations and explain why these turned out so successful, despite the essentially quantum nature of the atomic and subatomic systems.
n-Nucleotide circular codes in graph theory.
Fimmel, Elena; Michel, Christian J; Strüngmann, Lutz
2016-03-13
The circular code theory proposes that genes are constituted of two trinucleotide codes: the classical genetic code with 61 trinucleotides for coding the 20 amino acids (except the three stop codons {TAA,TAG,TGA}) and a circular code based on 20 trinucleotides for retrieving, maintaining and synchronizing the reading frame. It relies on two main results: the identification of a maximal C(3) self-complementary trinucleotide circular code X in genes of bacteria, eukaryotes, plasmids and viruses (Michel 2015 J. Theor. Biol. 380, 156-177. (doi:10.1016/j.jtbi.2015.04.009); Arquès & Michel 1996 J. Theor. Biol. 182, 45-58. (doi:10.1006/jtbi.1996.0142)) and the finding of X circular code motifs in tRNAs and rRNAs, in particular in the ribosome decoding centre (Michel 2012 Comput. Biol. Chem. 37, 24-37. (doi:10.1016/j.compbiolchem.2011.10.002); El Soufi & Michel 2014 Comput. Biol. Chem. 52, 9-17. (doi:10.1016/j.compbiolchem.2014.08.001)). The universally conserved nucleotides A1492 and A1493 and the conserved nucleotide G530 are included in X circular code motifs. Recently, dinucleotide circular codes were also investigated (Michel & Pirillo 2013 ISRN Biomath. 2013, 538631. (doi:10.1155/2013/538631); Fimmel et al. 2015 J. Theor. Biol. 386, 159-165. (doi:10.1016/j.jtbi.2015.08.034)). As genetic motifs of different lengths are ubiquitous in genes and genomes, we introduce a new approach based on graph theory to study in full generality n-nucleotide circular codes X, i.e. of length 2 (dinucleotide), 3 (trinucleotide), 4 (tetranucleotide), etc. Indeed, we prove that an n-nucleotide code X is circular if and only if the corresponding graph [Formula: see text] is acyclic. Moreover, the maximal length of a path in [Formula: see text] corresponds to the window of nucleotides in a sequence for detecting the correct reading frame. Finally, the graph theory of tournaments is applied to the study of dinucleotide circular codes.
This approach establishes full equivalence between the combinatorics theory (Michel & Pirillo 2013 ISRN Biomath. 2013, 538631. (doi:10.1155/2013/538631)) and the group theory (Fimmel et al. 2015 J. Theor. Biol. 386, 159-165. (doi:10.1016/j.jtbi.2015.08.034)) of dinucleotide circular codes, while its mathematical approach is simpler. © 2016 The Author(s).
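The acyclicity criterion stated in the abstract lends itself to a direct check. The sketch below (names and data representation are illustrative) builds the graph G(X) by splitting each word into a nonempty prefix and suffix and tests for cycles with a depth-first search:

```python
def is_circular(code: set[str]) -> bool:
    # Build the directed graph G(X): for every word in the code, every split
    # into a nonempty prefix and nonempty suffix contributes one edge
    # prefix -> suffix (e.g. b1b2b3 gives b1 -> b2b3 and b1b2 -> b3).
    edges: dict[str, set[str]] = {}
    for word in code:
        for i in range(1, len(word)):
            edges.setdefault(word[:i], set()).add(word[i:])

    # The code is circular iff G(X) is acyclic: detect a cycle via DFS.
    WHITE, GREY, BLACK = 0, 1, 2
    state: dict[str, int] = {}

    def dfs(v: str) -> bool:
        state[v] = GREY
        for u in edges.get(v, ()):
            if state.get(u, WHITE) == GREY:
                return False                 # back edge: cycle found
            if state.get(u, WHITE) == WHITE and not dfs(u):
                return False
        state[v] = BLACK
        return True

    return all(dfs(v) for v in list(edges) if state.get(v, WHITE) == WHITE)
```

For example, the single-word code {AAT} is circular, while {ACG, CGA, GAC} (three circular shifts of one word) and the periodic word {AAA} are not, since their graphs contain cycles.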
Statistical correlation analysis for comparing vibration data from test and analysis
NASA Technical Reports Server (NTRS)
Butler, T. G.; Strang, R. F.; Purves, L. R.; Hershfeld, D. J.
1986-01-01
A theory was developed to compare vibration modes obtained by NASTRAN analysis with those obtained experimentally. Because many more analytical modes can be obtained than experimental modes, the analytical set was treated as expansion functions for putting both sources in comparative form. The dimensional symmetry was developed for three general cases: a nonsymmetric whole model compared with a nonsymmetric whole structural test, a symmetric analytical portion compared with a symmetric experimental portion, and an analytical symmetric portion compared with a whole experimental test. The theory was coded and a statistical correlation program was installed as a utility. The theory is established with small classical structures.
Implementing controlled-unitary operations over the butterfly network
NASA Astrophysics Data System (ADS)
Soeda, Akihito; Kinjo, Yoshiyuki; Turner, Peter S.; Murao, Mio
2014-12-01
We introduce a multiparty quantum computation task over a network in a situation where the capacities of both the quantum and classical communication channels of the network are limited and a bottleneck occurs. Using a resource setting introduced by Hayashi [1], we present an efficient protocol for performing controlled-unitary operations between two input nodes and two output nodes over the butterfly network, one of the most fundamental networks exhibiting the bottleneck problem. This result opens the possibility of developing a theory of quantum network coding for multiparty quantum computation, whereas the conventional network coding only treats multiparty quantum communication.
Functional dissociation of stimulus intensity encoding and predictive coding of pain in the insula
Geuter, Stephan; Boll, Sabrina; Eippert, Falk; Büchel, Christian
2017-01-01
The computational principles by which the brain creates a painful experience from nociception are still unknown. Classic theories suggest that cortical regions reflect either stimulus intensity or additive effects of intensity and expectations. By contrast, predictive coding theories provide a unified framework explaining how perception is shaped by the integration of beliefs about the world with mismatches resulting from the comparison of these beliefs against sensory input. Using functional magnetic resonance imaging during a probabilistic heat pain paradigm, we investigated which computations underlie pain perception. Skin conductance, pupil dilation, and anterior insula responses to cued pain stimuli strictly followed the response patterns hypothesized by the predictive coding model, whereas the posterior insula encoded stimulus intensity. This functional dissociation of pain processing within the insula, together with previously observed alterations in chronic pain, offers a novel interpretation of aberrant pain processing as disturbed weighting of predictions and prediction errors. DOI: http://dx.doi.org/10.7554/eLife.24770.001 PMID:28524817
2D Quantum Simulation of MOSFET Using the Non Equilibrium Green's Function Method
NASA Technical Reports Server (NTRS)
Svizhenko, Alexel; Anantram, M. P.; Govindan, T. R.; Yan, Jerry (Technical Monitor)
2000-01-01
The objectives summarized in this viewgraph presentation include: (1) the development of a quantum mechanical simulator for ultra-short-channel MOSFET simulation, including theory, physical approximations, and computer code; (2) exploration of physics that is not accessible by semiclassical methods; (3) benchmarking of semiclassical and classical methods; and (4) study of other two-dimensional devices and molecular structures, from discretized Hamiltonians to tight-binding Hamiltonians.
NASA Technical Reports Server (NTRS)
Valley, Lois
1989-01-01
The SPS product, Classic-Ada, is a software tool that supports object-oriented Ada programming with powerful inheritance and dynamic binding. Object-Oriented Design (OOD) is an easy, natural development paradigm, but it is not supported by Ada. Following the DOD Ada mandate, SPS developed Classic-Ada to provide a tool which supports OOD and implements code in Ada. It consists of a design language, a code generator, and a toolset. As a design language, Classic-Ada supports the object-oriented principles of information hiding, data abstraction, dynamic binding, and inheritance. It also supports natural reuse and incremental development through inheritance and code factoring, and allows Ada, Classic-Ada, dynamic binding, and static binding in the same program. Only nine new constructs were added to Ada to provide object-oriented design capabilities. The Classic-Ada code generator translates user application code into fully compliant, ready-to-run, standard Ada. The Classic-Ada toolset is fully supported by SPS and consists of an object generator, a builder, a dictionary manager, and a reporter. Demonstrations of Classic-Ada and the Classic-Ada Browser were given at the workshop.
Applications of Derandomization Theory in Coding
NASA Astrophysics Data System (ADS)
Cheraghchi, Mahdi
2011-07-01
Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.]
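The group testing setting described in the abstract can be made concrete with the simplest classical decoder, often called COMP; this is a generic illustration of the problem, not the randomness-condenser constructions of the thesis:

```python
def comp_decode(tests: list[set[int]], outcomes: list[bool], n: int) -> set[int]:
    # COMP decoder: every item that appears in at least one negative test
    # (a pool containing no defectives) is certified non-defective; all
    # remaining items are declared defective.  This recovers the defective
    # set exactly whenever each non-defective item occurs in some negative test.
    certified_clean: set[int] = set()
    for pool, positive in zip(tests, outcomes):
        if not positive:
            certified_clean |= pool
    return set(range(n)) - certified_clean
```

For example, with 4 items, defective set {2}, and pools {0,1}, {1,2}, {2,3}, {0,3} producing outcomes negative, positive, positive, negative, the decoder returns {2}.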
Online Performance-Improvement Algorithms
1994-08-01
…fault rate as the request sequence length approaches infinity. Their algorithms are based on an innovative use of the classical Ziv-Lempel [85] data compression scheme. [85] J. Ziv and A. Lempel. Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Theory, 24:530-536, 1978.
Code subspaces for LLM geometries
NASA Astrophysics Data System (ADS)
Berenstein, David; Miller, Alexandra
2018-03-01
We consider effective field theory around classical background geometries with a gauge theory dual, specifically those in the class of LLM geometries. These are dual to half-BPS states of N= 4 SYM. We find that the language of code subspaces is natural for discussing the set of nearby states, which are built by acting with effective fields on these backgrounds. This work extends our previous work by going beyond the strict infinite N limit. We further discuss how one can extract the topology of the state beyond N→∞ and find that, as before, uncertainty and entanglement entropy calculations provide a useful tool to do so. Finally, we discuss obstructions to writing down a globally defined metric operator. We find that the answer depends on the choice of reference state that one starts with. Therefore, within this setup, there is ambiguity in trying to write an operator that describes the metric globally.
On entanglement-assisted quantum codes achieving the entanglement-assisted Griesmer bound
NASA Astrophysics Data System (ADS)
Li, Ruihu; Li, Xueliang; Guo, Luobin
2015-12-01
The theory of entanglement-assisted quantum error-correcting codes (EAQECCs) is a generalization of the standard stabilizer formalism. Any quaternary (or binary) linear code can be used to construct EAQECCs under the entanglement-assisted (EA) formalism. We derive an EA-Griesmer bound for linear EAQECCs, which is a quantum analog of the Griesmer bound for classical codes. This EA-Griesmer bound is tighter than known bounds for EAQECCs in the literature. For a given quaternary linear code {C}, we show that the parameters of the EAQECC that EA-stabilized by the dual of {C} can be determined by a zero radical quaternary code induced from {C}, and a necessary condition under which a linear EAQECC may achieve the EA-Griesmer bound is also presented. We construct four families of optimal EAQECCs and then show the necessary condition for existence of EAQECCs is also sufficient for some low-dimensional linear EAQECCs. The four families of optimal EAQECCs are degenerate codes and go beyond earlier constructions. What is more, except four codes, our [[n,k,d_{ea};c
Spatial versus sequential correlations for random access coding
NASA Astrophysics Data System (ADS)
Tavakoli, Armin; Marques, Breno; Pawłowski, Marcin; Bourennane, Mohamed
2016-03-01
Random access codes are important for a wide range of applications in quantum information. However, their implementation with quantum theory can be made in two very different ways: (i) by distributing data with strong spatial correlations violating a Bell inequality or (ii) using quantum communication channels to create stronger-than-classical sequential correlations between state preparation and measurement outcome. Here we study this duality of the quantum realization. We present a family of Bell inequalities tailored to the task at hand and study their quantum violations. Remarkably, we show that the use of spatial and sequential quantum correlations imposes different limitations on the performance of quantum random access codes: Sequential correlations can outperform spatial correlations. We discuss the physics behind the observed discrepancy between spatial and sequential quantum correlations.
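The gap between sequential (prepare-and-measure) quantum correlations and classical strategies is visible already in the standard 2→1 random access code: the best classical strategy succeeds with probability 3/4, while a qubit strategy reaches (1+1/√2)/2 ≈ 0.854. Below is a small numeric check of the textbook qubit protocol (states and measurement axes in the X-Z plane of the Bloch sphere); it illustrates the task, not the Bell-inequality constructions of the paper:

```python
import math

def qrac_success() -> float:
    # 2->1 quantum random access code: encode bits (x0, x1) as a Bloch-circle
    # direction; the receiver decodes bit b by measuring along Z (b=0) or
    # X (b=1).  For Bloch angles, P(outcome +) = cos^2((theta - phi)/2).
    total = 0.0
    for x0 in (0, 1):
        for x1 in (0, 1):
            theta = math.atan2((-1) ** x1, (-1) ** x0)   # encoded direction
            for b, phi in ((0, 0.0), (1, math.pi / 2)):  # measurement axes
                target = (x0, x1)[b]
                p_plus = math.cos((theta - phi) / 2) ** 2
                total += p_plus if target == 0 else 1 - p_plus
    return total / 8  # average over 4 inputs x 2 queries
```

Every input/query pair succeeds with the same probability cos²(π/8) = (1+1/√2)/2, strictly above the classical bound of 3/4.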
Topologies on quantum topoi induced by quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakayama, Kunji
2013-07-15
In the present paper, we consider effects of quantization in a topos approach to quantum theory. A quantum system is assumed to be coded in a quantum topos, by which we mean the topos of presheaves on the context category of commutative subalgebras of a von Neumann algebra of bounded operators on a Hilbert space. A classical system is modeled by a Lie algebra of classical observables. It is shown that a quantization map from the classical observables to self-adjoint operators on the Hilbert space naturally induces geometric morphisms from presheaf topoi related to the classical system to the quantum topos. By means of the geometric morphisms, we give Lawvere-Tierney topologies on the quantum topos (and their equivalent Grothendieck topologies on the context category). We show that, among them, there exists a canonical one, which we call a quantization topology. We furthermore give an explicit expression of a sheafification functor associated with the quantization topology.
Simulation of transient flow in a shock tunnel and a high Mach number nozzle
NASA Technical Reports Server (NTRS)
Jacobs, P. A.
1991-01-01
A finite volume Navier-Stokes code was used to simulate the shock reflection and nozzle starting processes in an axisymmetric shock tube and a high Mach number nozzle. The simulated nozzle starting processes were found to match the classical quasi-1-D theory and some features of the experimental measurements. The shock reflection simulation illustrated a new mechanism for the driver gas contamination of the stagnated test gas.
Flenady, Tracy; Dwyer, Trudy; Applegarth, Judith
2017-09-01
Abnormal respiratory rates are one of the first indicators of clinical deterioration in emergency department (ED) patients. Despite the importance of respiratory rate observations, this vital sign is often inaccurately recorded on ED observation charts, compromising patient safety. Concurrently, there is a paucity of research reporting why this phenomenon occurs. To develop a substantive theory explaining ED registered nurses' reasoning when they miss or misreport respiratory rate observations, this research project employed a classic grounded theory analysis of qualitative data. Seventy-nine registered nurses currently working in EDs within Australia participated. Data collected included detailed responses from individual interviews and open-ended responses from an online questionnaire. Classic grounded theory (CGT) research methods were utilised; therefore, coding was central to the abstraction of data and its reintegration as theory. Constant comparison, synonymous with CGT methods, was employed to code data. This approach facilitated the identification of the main concern of the participants and aided in the generation of theory explaining how the participants processed this issue. The main concern identified is that ED registered nurses do not believe that collecting an accurate respiratory rate for ALL patients at EVERY round of observations is a requirement, and yet organisational requirements often dictate that a value for the respiratory rate be included each time vital signs are collected. The theory 'Rationalising Transgression' explains how participants continually resolve this problem. The study found that despite feeling professionally conflicted, nurses often erroneously record respiratory rate observations, and then rationalise this behaviour by employing strategies that adjust the significance of the organisational requirement.
These strategies include: Compensating, when nurses believe they are compensating for errant behaviour by enhancing the patient's outcome; Minimalizing, when nurses believe that the patient's outcome would be no different whether they recorded an accurate respiratory rate or not; and Trivialising, a strategy that sanctions negligent behaviour and occurs when nurses 'cut corners' to get the job done. Nurses use these strategies to titrate the level of emotional discomfort associated with erroneous behaviour, thereby rationalising transgression. CONCLUSION: This research reveals that despite continuing education regarding gold-standard guidelines for respiratory rate collection, suboptimal practice continues. Ideally, to combat this transgression, a culture shift must occur regarding nurses' understanding of acceptable practice methods. Nurses must receive education in a way that permeates their understanding of the relationship between the regular collection of accurate respiratory rate observations and optimal patient outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
A novel construction method of QC-LDPC codes based on CRT for optical communications
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-05-01
A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10^-7, the net coding gain (NCG) of the regular QC-LDPC(4851, 4546) code is respectively 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32640, 30592) code in ITU-T G.975.1, the QC-LDPC(3664, 3436) code constructed by the improved combining construction method based on the CRT, and the irregular QC-LDPC(3843, 3603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all five of these codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4851, 4546) code constructed by the proposed construction method has excellent error-correction performance and is well suited to optical transmission systems.
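The CRT machinery underlying such combining constructions is the classical solver for simultaneous congruences; a minimal sketch (illustrative only, not the authors' full code-design procedure):

```python
from math import prod

def crt(residues: list[int], moduli: list[int]) -> int:
    # Solve x = r_i (mod m_i) for pairwise-coprime moduli m_i.  In combining
    # constructions of QC-LDPC codes, the m_i play the role of circulant sizes
    # of short component codes and x indexes positions in the long code.
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(., -1, m): modular inverse (Py3.8+)
    return x % M
```

For instance, `crt([2, 3], [3, 5])` returns 8, the unique residue mod 15 that is 2 mod 3 and 3 mod 5.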
Research Prototype: Automated Analysis of Scientific and Engineering Semantics
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.; Follen, Greg (Technical Monitor)
2001-01-01
Physical and mathematical formulae and concepts are fundamental elements of scientific and engineering software. These classical equations and methods are time tested, universally accepted, and relatively unambiguous. The existence of this classical ontology suggests an ideal problem for automated comprehension. This problem is further motivated by the pervasive use of scientific code and high code development costs. To investigate code comprehension in this classical knowledge domain, a research prototype has been developed. The prototype incorporates scientific domain knowledge to recognize code properties (including units, physical, and mathematical quantity). Also, the procedure implements programming language semantics to propagate these properties through the code. This prototype's ability to elucidate code and detect errors will be demonstrated with state of the art scientific codes.
ERIC Educational Resources Information Center
Yelboga, Atilla; Tavsancil, Ezel
2010-01-01
In this research, the classical test theory and generalizability theory analyses were carried out with the data obtained by a job performance scale for the years 2005 and 2006. The reliability coefficients obtained (estimated) from the classical test theory and generalizability theory analyses were compared. In classical test theory, test retest…
Quantum and classical behavior in interacting bosonic systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertzberg, Mark P.
It is understood that in free bosonic theories, the classical field theory accurately describes the full quantum theory when the occupancy numbers of systems are very large. However, the situation is less understood in interacting theories, especially on time scales longer than the dynamical relaxation time. Recently there have been claims that the quantum theory deviates spectacularly from the classical theory on this time scale, even if the occupancy numbers are extremely large. Furthermore, it is claimed that the quantum theory quickly thermalizes while the classical theory does not. The evidence for these claims comes from noticing a spectacular difference in the time evolution of expectation values of quantum operators compared to the classical micro-state evolution. If true, this would have dramatic consequences for many important phenomena, including laboratory studies of interacting BECs, dark matter axions, preheating after inflation, etc. In this work we critically examine these claims. We show that in fact the classical theory can describe the quantum behavior in the high-occupancy regime, even when interactions are large. The connection is that the expectation values of quantum operators in a single quantum micro-state are approximated by a corresponding classical ensemble average over many classical micro-states. Furthermore, by the ergodic theorem, a classical ensemble average of local fields with statistical translation invariance is the spatial average of a single micro-state. So the correlation functions of the quantum and classical field theories of a single micro-state approximately agree at high occupancy, even in interacting systems. Furthermore, both quantum and classical field theories can thermalize, when appropriate coarse graining is introduced, with the classical case requiring a cutoff on low-occupancy UV modes. We discuss applications of our results.
Quantum Stabilizer Codes Can Realize Access Structures Impossible by Classical Secret Sharing
NASA Astrophysics Data System (ADS)
Matsumoto, Ryutaroh
We show a simple example of a secret sharing scheme encoding classical secret to quantum shares that can realize an access structure impossible by classical information processing with limitation on the size of each share. The example is based on quantum stabilizer codes.
Diagrammar in classical scalar field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cattaruzza, E., E-mail: Enrico.Cattaruzza@gmail.com; Gozzi, E., E-mail: gozzi@ts.infn.it; INFN, Sezione di Trieste
2011-09-15
In this paper we analyze perturbatively a gφ^4 classical field theory with and without temperature. In order to do that, we make use of a path-integral approach developed some time ago for classical theories. It turns out that the diagrams appearing at the classical level are many more than at the quantum level, due to the presence of extra auxiliary fields in the classical formalism. We shall show that a universal supersymmetry present in the classical path-integral mentioned above is responsible for the cancellation of various diagrams. The same supersymmetry allows the introduction of super-fields and super-diagrams, which considerably simplify the calculations and make the classical perturbative calculations almost 'identical' formally to the quantum ones. Using the super-diagram technique, we develop the classical perturbation theory up to third order. We conclude the paper with a perturbative check of the fluctuation-dissipation theorem. Highlights: We provide the Feynman diagrams of perturbation theory for a classical field theory. We give a super-formalism which links the quantum diagrams to the classical ones. We check perturbatively the fluctuation-dissipation theorem.
Moderate Deviation Analysis for Classical Communication over Quantum Channels
NASA Astrophysics Data System (ADS)
Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco
2017-11-01
We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes, we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.
Superdense coding interleaved with forward error correction
Humble, Travis S.; Sadlier, Ronald J.
2016-05-12
Superdense coding promises increased classical capacity and communication security, but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
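The interleaving step can be sketched as a simple row-write/column-read block permutation, so that a burst of adjacent channel errors lands in distinct FEC codewords (function names and the rectangular-block layout are illustrative, not the paper's exact scheme):

```python
def interleave(symbols: list, rows: int) -> list:
    # Write symbols row-wise into a rows x cols table, read column-wise.
    # Assumes len(symbols) is a multiple of rows.
    cols = len(symbols) // rows
    table = [symbols[r * cols:(r + 1) * cols] for r in range(rows)]
    return [table[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols: list, rows: int) -> list:
    # Inverse permutation: adjacent channel symbols come from different rows
    # (codewords), so a burst of b channel errors corrupts at most
    # ceil(b / rows) symbols in any single codeword.
    cols = len(symbols) // rows
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]
```

With three codewords of length four interleaved to depth 3, any burst of three consecutive channel errors touches each codeword at most once, keeping it within the reach of a single-error-correcting FEC code.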
Experimental realization of the analogy of quantum dense coding in classical optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhenwei; Sun, Yifan; Li, Pengyun
2016-06-15
We report on the experimental realization of the analogy of quantum dense coding in classical optical communication using classical optical correlations. Compared to quantum dense coding that uses pairs of photons entangled in polarization, we find that the proposed design exhibits many advantages. Considering that it is convenient to realize in optical communication, the attainable channel capacity in the experiment for dense coding can reach 2 bits, which is higher than the usual quantum coding capacity (1.585 bits). This increased channel capacity has been proven experimentally by transmitting ASCII characters in 12 quaternary digits instead of the usual 24 bits.
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv; Nakano, Aiichiro; Vashishta, Priya; Iyetomi, Hiroshi; Ogata, Shuji; Kouno, Takahisa; Shimojo, Fuyuki; Tsuruta, Kanji; Saini, Subhash;
2002-01-01
A multidisciplinary, collaborative simulation has been performed on a Grid of geographically distributed PC clusters. The multiscale simulation approach seamlessly combines i) atomistic simulation based on the molecular dynamics (MD) method and ii) quantum mechanical (QM) calculation based on the density functional theory (DFT), so that accurate but less scalable computations are performed only where they are needed. The multiscale MD/QM simulation code has been Grid-enabled using i) a modular, additive hybridization scheme, ii) multiple QM clustering, and iii) computation/communication overlapping. The Gridified MD/QM simulation code has been used to study environmental effects of water molecules on fracture in silicon. A preliminary run of the code has achieved a parallel efficiency of 94% on 25 PCs distributed over 3 PC clusters in the US and Japan, and a larger test involving 154 processors on 5 distributed PC clusters is in progress.
Relativistic quantum cryptography
NASA Astrophysics Data System (ADS)
Molotkov, S. N.; Nazin, S. S.
2003-07-01
The problem of unconditional security of quantum cryptography (i.e. security that is guaranteed by the fundamental laws of nature rather than by technical limitations) is one of the central points in quantum information theory. We propose a relativistic quantum cryptosystem and prove its unconditional security against any eavesdropping attempts. Relativistic causality arguments allow us to demonstrate the security of the system in a simple way. Since the proposed protocol does not employ collective measurements and quantum codes, the cryptosystem can be experimentally realized with the present state of the art in fiber optics technologies. The proposed cryptosystem employs only individual measurements and classical codes and, in addition, the key distribution problem allows the choice of the state encoding scheme to be postponed until after the states are already received, instead of choosing it before sending the states into the communication channel (i.e. to employ a sort of "antedate" coding).
Raykov, Tenko; Marcoulides, George A
2016-04-01
The frequently neglected and often misunderstood relationship between classical test theory and item response theory is discussed for the unidimensional case with binary measures and no guessing. It is pointed out that popular item response models can be directly obtained from classical test theory-based models by accounting for the discrete nature of the observed items. Two distinct observational equivalence approaches are outlined that render the item response models from corresponding classical test theory-based models, and can each be used to obtain the former from the latter models. Similarly, classical test theory models can be furnished using the reverse application of either of those approaches from corresponding item response models.
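The observational-equivalence idea described above can be sketched with the standard normal-ogive argument (notation hypothetical, not transcribed from the paper): a classical true-score model with a normal error, dichotomized at a threshold, yields an item response function directly.

```latex
% Classical model for the latent response underlying a binary item:
X^{*} = \lambda \eta + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^{2}),
% with the observed item dichotomized at a threshold \tau:
X = 1 \iff X^{*} \ge \tau .
% Accounting for this discreteness gives the normal-ogive item response model
P(X = 1 \mid \eta) \;=\; \Phi\!\left( \frac{\lambda \eta - \tau}{\sigma} \right),
% i.e. a two-parameter model with discrimination a = \lambda/\sigma
% and difficulty b = \tau/\lambda.
```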
Transient Ejector Analysis (TEA) code user's guide
NASA Technical Reports Server (NTRS)
Drummond, Colin K.
1993-01-01
A FORTRAN computer program for the semi-analytic prediction of unsteady thrust-augmenting ejector performance has been developed, based on a theoretical analysis for ejectors. That analysis blends classic self-similar turbulent jet descriptions with control-volume mixing region elements. Division of the ejector into an inlet, diffuser, and mixing region allowed flexibility in the modeling of the physics for each region. In particular, the inlet and diffuser analyses are simplified by a quasi-steady analysis, justified by the assumption that pressure is the forcing function in those regions. Only the mixing region is assumed to be dominated by viscous effects. The present work provides an overview of the code structure, a description of the required input and output data file formats, and the results for a test case. Since there are limitations to the code for applications outside the bounds of the test case, the user should consider TEA as a research code (not as a production code), designed specifically as an implementation of the proposed ejector theory. Program error flags are discussed, and some diagnostic routines are presented.
Quantum Graphical Models and Belief Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leifer, M.S.; Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo Ont., N2L 2Y5; Poulin, D.
Belief Propagation algorithms acting on Graphical Models of classical probability distributions, such as Markov Networks, Factor Graphs and Bayesian Networks, are amongst the most powerful known methods for deriving probabilistic inferences amongst large numbers of random variables. This paper presents a generalization of these concepts and methods to the quantum case, based on the idea that quantum theory can be thought of as a noncommutative, operator-valued, generalization of classical probability theory. Some novel characterizations of quantum conditional independence are derived, and definitions of Quantum n-Bifactor Networks, Markov Networks, Factor Graphs and Bayesian Networks are proposed. The structure of Quantum Markov Networks is investigated and some partial characterization results are obtained, along the lines of the Hammersley-Clifford theorem. A Quantum Belief Propagation algorithm is presented and is shown to converge on 1-Bifactor Networks and Markov Networks when the underlying graph is a tree. The use of Quantum Belief Propagation as a heuristic algorithm in cases where it is not known to converge is discussed. Applications to decoding quantum error correcting codes and to the simulation of many-body quantum systems are described.
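The classical sum-product algorithm that the paper generalizes can be illustrated in a few lines. This is a minimal sketch on a 3-variable chain with made-up pairwise factors (not the quantum construction, and not from the paper); on a tree, the BP marginal agrees exactly with brute-force marginalization.

```python
# Sum-product belief propagation on the tree x1 - x2 - x3.
psi12 = [[1.0, 0.5], [0.5, 2.0]]   # illustrative factor on (x1, x2)
psi23 = [[1.5, 0.2], [0.3, 1.0]]   # illustrative factor on (x2, x3)

# Messages into x2 from both leaves (uniform leaf potentials).
m1_to_2 = [sum(psi12[i][j] for i in range(2)) for j in range(2)]
m3_to_2 = [sum(psi23[j][k] for k in range(2)) for j in range(2)]

belief = [a * b for a, b in zip(m1_to_2, m3_to_2)]
Z = sum(belief)
belief = [v / Z for v in belief]

# Brute-force marginal of x2 for comparison.
brute = [sum(psi12[i][j] * psi23[j][k] for i in range(2) for k in range(2))
         for j in range(2)]
Zb = sum(brute)
brute = [v / Zb for v in brute]

print(belief)  # identical to the brute-force marginal on a tree
```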
Polar codes for achieving the classical capacity of a quantum channel
NASA Astrophysics Data System (ADS)
Guha, Saikat; Wilde, Mark
2012-02-01
We construct the first near-explicit, linear, polar codes that achieve the capacity for classical communication over quantum channels. The codes exploit the channel polarization phenomenon observed by Arikan for classical channels. Channel polarization is an effect in which one can synthesize a set of channels, by ``channel combining'' and ``channel splitting,'' in which a fraction of the synthesized channels is perfect for data transmission while the other fraction is completely useless for data transmission, with the good fraction equal to the capacity of the channel. Our main technical contributions are threefold. First, we demonstrate that the channel polarization effect occurs for channels with classical inputs and quantum outputs. We then construct linear polar codes based on this effect, and the encoding complexity is O(N log N), where N is the blocklength of the code. We also demonstrate that a quantum successive cancellation decoder works well, i.e., the word error rate decays exponentially with the blocklength of the code. For a quantum channel with binary pure-state outputs, such as a binary-phase-shift-keyed coherent-state optical communication alphabet, the symmetric Holevo information rate is in fact the ultimate channel capacity, which is achieved by our polar code.
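The polarization effect is easiest to see in Arıkan's classical toy case, the binary erasure channel, where one polarization step has a closed form. The sketch below is this classical recursion only (not the classical-quantum construction of the paper): two uses of BEC(z) are combined into a worse channel BEC(2z - z²) and a better channel BEC(z²).

```python
# Channel polarization recursion for the binary erasure channel BEC(eps).
def polarize(eps, levels):
    zs = [eps]
    for _ in range(levels):
        zs = [w for z in zs for w in (2 * z - z * z, z * z)]
    return zs

eps = 0.5
zs = polarize(eps, 12)                       # 2**12 synthesized channels
good = sum(z < 1e-3 for z in zs) / len(zs)   # fraction of near-perfect channels
# The average erasure probability is preserved at every step, while the
# fraction of near-perfect channels approaches the capacity 1 - eps = 0.5
# as the blocklength grows.
print(good)
```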
Quantum-capacity-approaching codes for the detected-jump channel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grassl, Markus; Wei Zhaohui; Ji Zhengfeng
2010-12-15
The quantum-channel capacity gives the ultimate limit for the rate at which quantum data can be reliably transmitted through a noisy quantum channel. Degradable quantum channels are among the few channels whose quantum capacities are known. Given the quantum capacity of a degradable channel, it remains challenging to find a practical coding scheme which approaches capacity. Here we discuss code designs for the detected-jump channel, a degradable channel with practical relevance describing the physics of spontaneous decay of atoms with detected photon emission. We show that this channel can be used to simulate a binary classical channel with both erasures and bit flips. The capacity of the simulated classical channel gives a lower bound on the quantum capacity of the detected-jump channel. When the jump probability is small, it almost equals the quantum capacity. Hence using a classical capacity-approaching code for the simulated classical channel yields a quantum code which approaches the quantum capacity of the detected-jump channel.
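The lower-bound logic rests on the textbook capacities of the two classical ingredients of the simulated channel. A hedged sketch of those formulas (parameter values illustrative, not from the paper):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Binary symmetric channel with crossover probability p: C = 1 - h2(p)."""
    return 1.0 - h2(p)

def bec_capacity(e):
    """Binary erasure channel with erasure probability e: C = 1 - e."""
    return 1.0 - e

print(bsc_capacity(0.0), bec_capacity(0.0))  # 1.0 1.0 (noiseless limits)
print(bsc_capacity(0.5))                     # 0.0 (useless channel)
```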
Quantum steganography and quantum error-correction
NASA Astrophysics Data System (ADS)
Shaw, Bilal A.
Quantum error-correcting codes have been the cornerstone of research in quantum information science (QIS) for more than a decade. Without their conception, quantum computers would be a footnote in the history of science. When researchers embraced the idea that we live in a world where the effects of a noisy environment cannot completely be stripped away from the operations of a quantum computer, the natural way forward was to think about importing classical coding theory into the quantum arena to give birth to quantum error-correcting codes which could help in mitigating the debilitating effects of decoherence on quantum data. We first talk about the six-qubit quantum error-correcting code and show its connections to entanglement-assisted error-correcting coding theory and then to subsystem codes. This code bridges the gap between the five-qubit (perfect) and Steane codes. We discuss two methods to encode one qubit into six physical qubits. Each of the two examples corrects an arbitrary single-qubit error. The first example is a degenerate six-qubit quantum error-correcting code. We explicitly provide the stabilizer generators, encoding circuits, codewords, logical Pauli operators, and logical CNOT operator for this code. We also show how to convert this code into a non-trivial subsystem code that saturates the subsystem Singleton bound. We then prove that a six-qubit code without entanglement assistance cannot simultaneously possess a Calderbank-Shor-Steane (CSS) stabilizer and correct an arbitrary single-qubit error. A corollary of this result is that the Steane seven-qubit code is the smallest single-error correcting CSS code. Our second example is the construction of a non-degenerate six-qubit CSS entanglement-assisted code. This code uses one bit of entanglement (an ebit) shared between the sender (Alice) and the receiver (Bob) and corrects an arbitrary single-qubit error. 
The code we obtain is globally equivalent to the Steane seven-qubit code and thus corrects an arbitrary error on the receiver's half of the ebit as well. We prove that this code is the smallest code with a CSS structure that uses only one ebit and corrects an arbitrary single-qubit error on the sender's side. We discuss the advantages and disadvantages for each of the two codes. In the second half of this thesis we explore the yet uncharted and relatively undiscovered area of quantum steganography. Steganography is the process of hiding secret information by embedding it in an "innocent" message. We present protocols for hiding quantum information in a codeword of a quantum error-correcting code passing through a channel. Using either a shared classical secret key or shared entanglement Alice disguises her information as errors in the channel. Bob can retrieve the hidden information, but an eavesdropper (Eve) with the power to monitor the channel, but without the secret key, cannot distinguish the message from channel noise. We analyze how difficult it is for Eve to detect the presence of secret messages, and estimate rates of steganographic communication and secret key consumption for certain protocols. We also provide an example of how Alice hides quantum information in the perfect code when the underlying channel between Bob and her is the depolarizing channel. Using this scheme Alice can hide up to four stego-qubits.
Fundamental theories of waves and particles formulated without classical mass
NASA Astrophysics Data System (ADS)
Fry, J. L.; Musielak, Z. E.
2010-12-01
Quantum and classical mechanics are two conceptually and mathematically different theories of physics, and yet they do use the same concept of classical mass that was originally introduced by Newton in his formulation of the laws of dynamics. In this paper, physical consequences of using the classical mass by both theories are explored, and a novel approach that allows formulating fundamental (Galilean invariant) theories of waves and particles without formally introducing the classical mass is presented. In this new formulation, the theories depend only on one common parameter called 'wave mass', which is deduced from experiments for selected elementary particles and for the classical mass of one kilogram. It is shown that quantum theory with the wave mass is independent of the Planck constant and that higher accuracy of performing calculations can be attained by such theory. Natural units in connection with the presented approach are also discussed and justification beyond dimensional analysis is given for the particular choice of such units.
The contrasting roles of Planck's constant in classical and quantum theories
NASA Astrophysics Data System (ADS)
Boyer, Timothy H.
2018-04-01
We trace the historical appearance of Planck's constant in physics, and we note that initially the constant did not appear in connection with quanta. Furthermore, we emphasize that Planck's constant can appear in both classical and quantum theories. In both theories, Planck's constant sets the scale of atomic phenomena. However, the roles it plays in the foundations of the two theories are sharply different. In quantum theory, Planck's constant is crucial to the structure of the theory. On the other hand, in classical electrodynamics, Planck's constant is optional, since it appears only as the scale factor for the (homogeneous) source-free contribution to the general solution of Maxwell's equations. Since classical electrodynamics can be solved while taking the homogeneous source-free contribution in the solution as zero or non-zero, there are naturally two different theories of classical electrodynamics, one in which Planck's constant is taken as zero and one where it is taken as non-zero. The textbooks of classical electromagnetism present only the version in which Planck's constant is taken to vanish.
Evaluation of a Progressive Failure Analysis Methodology for Laminated Composite Structures
NASA Technical Reports Server (NTRS)
Sleight, David W.; Knight, Norman F., Jr.; Wang, John T.
1997-01-01
A progressive failure analysis methodology has been developed for predicting the nonlinear response and failure of laminated composite structures. The progressive failure analysis uses C plate and shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms. The progressive failure analysis model is implemented into a general purpose finite element code and can predict the damage and response of laminated composite structures from initial loading to final failure.
Taking-On: A Grounded Theory of Addressing Barriers in Task Completion
ERIC Educational Resources Information Center
Austinson, Julie Ann
2011-01-01
This study of taking-on was conducted using classical grounded theory methodology (Glaser, 1978, 1992, 1998, 2001, 2005; Glaser & Strauss, 1967). Classical grounded theory is inductive, empirical, and naturalistic; it does not utilize manipulation or constrained time frames. Classical grounded theory is a systemic research method used to generate…
On the classic and modern theories of matching.
McDowell, J J
2005-07-01
Classic matching theory, which is based on Herrnstein's (1961) original matching equation and includes the well-known quantitative law of effect, is almost certainly false. The theory is logically inconsistent with known experimental findings, and experiments have shown that its central constant-k assumption is not tenable. Modern matching theory, which is based on the power function version of the original matching equation, remains tenable, although it has not been discussed or studied extensively. The modern theory is logically consistent with known experimental findings, it predicts the fact and details of the violation of the classic theory's constant-k assumption, and it accurately describes at least some data that are inconsistent with the classic theory.
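The equations the abstract contrasts are commonly written as follows (standard forms from the matching literature, stated here as background rather than transcribed from the paper):

```latex
% Herrnstein's (1961) original matching relation between behaviors B_i
% and reinforcement rates r_i:
\frac{B_1}{B_1 + B_2} \;=\; \frac{r_1}{r_1 + r_2}
% Quantitative law of effect (single alternative), with total behavior
% constant k and extraneous reinforcement r_e -- the "constant-k" assumption
% the abstract says is untenable:
B \;=\; \frac{k\, r}{r + r_e}
% Modern (power-function, or generalized) matching, with bias b and
% sensitivity exponent a:
\frac{B_1}{B_2} \;=\; b \left( \frac{r_1}{r_2} \right)^{a}
```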
Classical Field Theory and the Stress-Energy Tensor
NASA Astrophysics Data System (ADS)
Swanson, Mark S.
2015-09-01
This book is a concise introduction to the key concepts of classical field theory for beginning graduate students and advanced undergraduate students who wish to study the unifying structures and physical insights provided by classical field theory without dealing with the additional complication of quantization. In that regard, there are many important aspects of field theory that can be understood without quantizing the fields. These include the action formulation, Galilean and relativistic invariance, traveling and standing waves, spin angular momentum, gauge invariance, subsidiary conditions, fluctuations, spinor and vector fields, conservation laws and symmetries, and the Higgs mechanism, all of which are often treated briefly in a course on quantum field theory. The variational form of classical mechanics and continuum field theory are both developed in the time-honored graduate level text by Goldstein et al (2001). An introduction to classical field theory from a somewhat different perspective is available in Soper (2008). Basic classical field theory is often treated in books on quantum field theory. Two excellent texts where this is done are Greiner and Reinhardt (1996) and Peskin and Schroeder (1995). Green's function techniques are presented in Arfken et al (2013).
A quantum-classical theory with nonlinear and stochastic dynamics
NASA Astrophysics Data System (ADS)
Burić, N.; Popović, D. B.; Radonjić, M.; Prvanović, S.
2014-12-01
The method of constrained dynamical systems on the quantum-classical phase space is utilized to develop a theory of quantum-classical hybrid systems. Effects of the classical degrees of freedom on the quantum part are modeled using an appropriate constraint, and the interaction also includes the effects of neglected degrees of freedom. Dynamical law of the theory is given in terms of nonlinear stochastic differential equations with Hamiltonian and gradient terms. The theory provides a successful dynamical description of the collapse during quantum measurement.
Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert
2015-05-28
System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This "function factorization" Gaussian Process (FFGP) model makes it possible to overcome limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP based process.
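The emulator-based calibration loop can be sketched in miniature: build a cheap GP surrogate from a handful of "code" runs, then run MCMC against the surrogate instead of the code. This is a plain GP (not the paper's FFGP model), and every function and value below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the expensive system code: a made-up friction-factor curve.
def code(theta):
    return np.exp(-theta) + 0.1 * theta

theta_true = 1.3
data = code(theta_true) + 0.01 * rng.standard_normal(5)  # noisy "test data"

# Gaussian-process emulator built from 8 code runs.
train_x = np.linspace(0.0, 3.0, 8)
train_y = code(train_x)

def kernel(a, b, ell=0.7):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

alpha = np.linalg.solve(kernel(train_x, train_x) + 1e-8 * np.eye(8), train_y)

def emulate(theta):
    return (kernel(np.atleast_1d(theta), train_x) @ alpha)[0]  # GP mean

# Metropolis MCMC calibration runs on the cheap emulator, not the code.
def log_post(theta):
    if not 0.0 <= theta <= 3.0:                 # uniform prior on [0, 3]
        return -np.inf
    resid = data - emulate(theta)
    return -0.5 * np.sum(resid ** 2) / 0.01 ** 2

theta, lp = 1.0, log_post(1.0)
samples = []
for _ in range(4000):
    prop = theta + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

print(np.mean(samples[1000:]))  # posterior mean, near theta_true
```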
Continuous-variable quantum network coding for coherent states
NASA Astrophysics Data System (ADS)
Shang, Tao; Li, Ke; Liu, Jian-wei
2017-04-01
As far as the spectral characteristics of quantum information are concerned, the existing quantum network coding schemes can be viewed as discrete-variable quantum network coding schemes. Considering the practical advantages of continuous variables, in this paper we explore two feasible continuous-variable quantum network coding (CVQNC) schemes. Basic operations and CVQNC schemes are both provided. The first scheme is based on Gaussian cloning and ADD/SUB operators and can transmit two coherent states with a fidelity of 1/2, while the second scheme utilizes continuous-variable quantum teleportation and can transmit two coherent states perfectly. By encoding classical information on quantum states, quantum network coding schemes can be utilized to transmit classical information. Scheme analysis shows that, compared with the discrete-variable paradigms, the proposed CVQNC schemes provide better network throughput from the viewpoint of classical information transmission. By modulating the amplitude and phase quadratures of coherent states with classical characters, the first and second schemes can transmit 4 log2 N and 2 log2 N bits of information in a single network use, respectively.
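The stated throughputs follow directly from the formulas in the abstract; for a quaternary alphabet (N = 4, an illustrative choice) they evaluate to 8 and 4 bits per network use:

```python
import math

def cvqnc_throughput(N):
    """Bits per network use for the two schemes, per the abstract's formulas."""
    scheme1 = 4 * math.log2(N)   # Gaussian-cloning-based scheme
    scheme2 = 2 * math.log2(N)   # teleportation-based scheme
    return scheme1, scheme2

print(cvqnc_throughput(4))  # (8.0, 4.0)
```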
Staying theoretically sensitive when conducting grounded theory research.
Reay, Gudrun; Bouchal, Shelley Raffin; A Rankin, James
2016-09-01
Background: Grounded theory (GT) is founded on the premise that underlying social patterns can be discovered and conceptualised into theories. The method and the need for theoretical sensitivity are best understood in the historical context in which GT was developed. Theoretical sensitivity entails entering the field with no preconceptions, so as to remain open to the data and the emerging theory. Investigators also read literature from other fields to understand various ways to construct theories.
Aim: To explore the concept of theoretical sensitivity from a classical GT perspective, and to discuss the ontological and epistemological foundations of GT.
Discussion: Difficulties in remaining theoretically sensitive throughout research are discussed and illustrated with examples. Emergence (the idea that theory and substance will emerge from the process of comparing data) and staying open to the data are emphasised.
Conclusion: Understanding theoretical sensitivity as an underlying guiding principle of GT helps the researcher make sense of important concepts, such as delaying the literature review, emergence and the constant comparative method (simultaneous collection, coding and analysis of data).
Implications for practice: Theoretical sensitivity and adherence to the GT research method allow researchers to discover theories that can bridge the gap between theory and practice.
NASA Astrophysics Data System (ADS)
Harvey, James E.
2012-10-01
Professor Bill Wolfe was an exceptional mentor for his graduate students, and he made a major contribution to the field of optical engineering by teaching the (largely ignored) principles of radiometry for over forty years. This paper describes an extension of Bill's work on surface scatter behavior and the application of the BRDF to practical optical engineering problems. Most currently-available image analysis codes require the BRDF data as input in order to calculate the image degradation from residual optical fabrication errors. This BRDF data is difficult to measure and rarely available for short EUV wavelengths of interest. Due to a smooth-surface approximation, the classical Rayleigh-Rice surface scatter theory cannot be used to calculate BRDFs from surface metrology data for even slightly rough surfaces. The classical Beckmann-Kirchhoff theory has a paraxial limitation and only provides a closed-form solution for Gaussian surfaces. Recognizing that surface scatter is a diffraction process, and by utilizing sound radiometric principles, we first developed a linear systems theory of non-paraxial scalar diffraction in which diffracted radiance is shift-invariant in direction cosine space. Since random rough surfaces are merely a superposition of sinusoidal phase gratings, it was a straightforward extension of this non-paraxial scalar diffraction theory to develop a unified surface scatter theory that is valid for moderately rough surfaces at arbitrary incident and scattered angles. Finally, the above two steps are combined to yield a linear systems approach to modeling image quality for systems suffering from a variety of image degradation mechanisms. A comparison of image quality predictions with experimental results taken from on-orbit Solar X-ray Imager (SXI) data is presented.
Simulation of surface processes
Jónsson, Hannes
2011-01-01
Computer simulations of surface processes can reveal unexpected insight regarding atomic-scale structure and transitions. Here, the strengths and weaknesses of some commonly used approaches are reviewed as well as promising avenues for improvements. The electronic degrees of freedom are usually described by gradient-dependent functionals within Kohn–Sham density functional theory. Although this level of theory has been remarkably successful in numerous studies, several important problems require a more accurate theoretical description. It is important to develop new tools to make it possible to study, for example, localized defect states and band gaps in large and complex systems. Preliminary results presented here show that orbital density-dependent functionals provide a promising avenue, but they require the development of new numerical methods and substantial changes to codes designed for Kohn–Sham density functional theory. The nuclear degrees of freedom can, in most cases, be described by the classical equations of motion; however, they still pose a significant challenge, because the time scale of interesting transitions, which typically involve substantial free energy barriers, is much longer than the time scale of vibrations—often 10 orders of magnitude. Therefore, simulation of diffusion, structural annealing, and chemical reactions cannot be achieved with direct simulation of the classical dynamics. Alternative approaches are needed. One such approach is transition state theory as implemented in the adaptive kinetic Monte Carlo algorithm, which, thus far, has relied on the harmonic approximation but could be extended and made applicable to systems with rougher energy landscape and transitions through quantum mechanical tunneling. PMID:21199939
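The kinetic Monte Carlo step described above can be sketched with the standard rejection-free algorithm using harmonic transition-state-theory rates. All barriers, prefactors, and event names below are illustrative assumptions, and the state update after each event is omitted for brevity.

```python
import math
import random

random.seed(1)

def htst_rate(nu, dE_eV, T=300.0):
    """Harmonic TST rate k = nu * exp(-dE / kB T), with dE in eV."""
    kB = 8.617333e-5  # Boltzmann constant, eV/K
    return nu * math.exp(-dE_eV / (kB * T))

# Illustrative event table for a single adatom (barriers in eV).
events = {
    "hop_left":  htst_rate(1e13, 0.45),
    "hop_right": htst_rate(1e13, 0.45),
    "exchange":  htst_rate(1e13, 0.60),
}

t = 0.0
for _ in range(1000):
    ktot = sum(events.values())
    # Pick an event with probability proportional to its rate
    # (state update omitted in this sketch) ...
    r = random.random() * ktot
    for name, k in events.items():
        r -= k
        if r <= 0.0:
            break
    # ... and advance the clock by an exponential waiting time.
    t += -math.log(random.random()) / ktot

print(t)  # simulated elapsed time after 1000 transitions, in seconds
```

This bridges the vibrational and diffusive time scales: each kMC step advances the clock by the mean residence time, not by a femtosecond MD step.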
Noise-enhanced coding in phasic neuron spike trains.
Ly, Cheng; Doiron, Brent
2017-01-01
The stochastic nature of neuronal response has led to conjectures about the impact of input fluctuations on neural coding. For the most part, low-pass membrane integration and spike threshold dynamics have been the primary features assumed in the transfer from synaptic input to output spiking. Phasic neurons are a common, but understudied, neuron class characterized by a subthreshold negative feedback that suppresses spike train responses to low-frequency signals. Past work has shown that when a low-frequency signal is accompanied by moderate-intensity broadband noise, phasic neuron spike trains are well locked to the signal. We extend these results with a simple, reduced model of phasic activity that demonstrates that a non-Markovian spike train structure caused by the negative feedback produces noise-enhanced coding. Further, this enhancement is sensitive to the timescales, as opposed to the intensity, of a driving signal. Reduced hazard function models show that noise-enhanced phasic codes are both novel and separate from the classical stochastic resonance reported in non-phasic neurons. The general features of our theory suggest that noise-enhanced codes in excitable systems with subthreshold negative feedback are a particularly rich framework to study.
Generalized classical and quantum signal theories
NASA Astrophysics Data System (ADS)
Rundblad, E.; Labunets, V.; Novak, P.
2005-05-01
In this paper we develop two topics and show their inter- and cross-relation. The first centers on general notions of the generalized classical signal theory on finite Abelian hypergroups. The second concerns the generalized quantum hyperharmonic analysis of quantum signals (Hermitean operators associated with classical signals). We study classical and quantum generalized convolution hypergroup algebras of classical and quantum signals.
Efficient Polar Coding of Quantum Information
NASA Astrophysics Data System (ADS)
Renes, Joseph M.; Dupuis, Frédéric; Renner, Renato
2012-08-01
Polar coding, introduced in 2008 by Arıkan, is the first (very) efficiently encodable and decodable coding scheme whose information transmission rate provably achieves the Shannon bound for classical discrete memoryless channels in the asymptotic limit of large block sizes. Here, we study the use of polar codes for the transmission of quantum information. Focusing on the case of qubit Pauli channels and qubit erasure channels, we use classical polar codes to construct a coding scheme that asymptotically achieves a net transmission rate equal to the coherent information using efficient encoding and decoding operations and code construction. Our codes generally require preshared entanglement between sender and receiver, but for channels with a sufficiently low noise level we demonstrate that the rate of preshared entanglement required is zero.
Coupling LAMMPS with Lattice Boltzmann fluid solver: theory, implementation, and applications
NASA Astrophysics Data System (ADS)
Tan, Jifu; Sinno, Talid; Diamond, Scott
2016-11-01
The study of fluid flow coupled with solids has many applications in biological and engineering problems, e.g., blood cell transport, particulate flow, and drug delivery. We present a partitioned approach to solve the coupled multiphysics problem. The fluid motion is solved by the Lattice Boltzmann method, while the solid displacement and deformation are simulated by the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). The coupling is achieved through the immersed boundary method, so that the expensive remeshing step is eliminated. The code can model both rigid and deformable solids and shows very good scaling results. It was validated with classic problems such as the migration of rigid particles and an ellipsoidal particle's orbit in shear flow. Examples of applications to blood flow, drug delivery, and platelet adhesion and rupture are also given in the paper. NIH.
ERRATUM: Papers published in incorrect sections
NASA Astrophysics Data System (ADS)
2004-04-01
A number of J. Phys. A: Math. Gen. articles have mistakenly been placed in the wrong subject section in recent issues of the journal. We would like to apologize to the authors of these articles for publishing their papers in the Fluid and Plasma Theory section. The correct section for each article is given below.
Statistical Physics. Issue 4: Microcanonical entropy for small magnetizations, Behringer H 2004 J. Phys. A: Math. Gen. 37 1443.
Mathematical Physics. Issue 9: On the solution of fractional evolution equations, Kilbas A A, Pierantozzi T, Trujillo J J and Vázquez L 2004 J. Phys. A: Math. Gen. 37 3271.
Quantum Mechanics and Quantum Information Theory. Issue 6: New exactly solvable isospectral partners for PT-symmetric potentials, Sinha A and Roy P 2004 J. Phys. A: Math. Gen. 37 2509. Issue 9: Symplectically entangled states and their applications to coding, Vourdas A 2004 J. Phys. A: Math. Gen. 37 3305.
Classical and Quantum Field Theory. Issue 6: Pairing of parafermions of order 2: seniority model, Nelson C A 2004 J. Phys. A: Math. Gen. 37 2497. Issue 7: Jordan-Schwinger map, 3D harmonic oscillator constants of motion, and classical and quantum parameters characterizing electromagnetic wave polarization, Mota R D, Xicoténcatl M A and Granados V D 2004 J. Phys. A: Math. Gen. 37 2835. Issue 9: Could only fermions be elementary?, Lev F M 2004 J. Phys. A: Math. Gen. 37 3285.
Multi-species ion transport in ICF relevant conditions
NASA Astrophysics Data System (ADS)
Vold, Erik; Kagan, Grigory; Simakov, Andrei; Molvig, Kim; Yin, Lin; Albright, Brian
2017-10-01
Classical transport theory based on Chapman-Enskog methods provides self consistent approximations for kinetic fluxes of mass, heat and momentum for each ion species in a multi-ion plasma characterized with a small Knudsen number. A numerical method for solving the classic forms of multi-ion transport, self-consistently including heat and species mass fluxes relative to the center of mass, is given in [Kagan-Baalrud, arXiv '16] and similar transport coefficients result from recent derivations [Simakov-Molvig, PoP, '16]. We have implemented a combination of these methods in a standalone test code and in xRage, an adaptive-mesh radiation hydrodynamics code, at LANL. Transport mixing is examined between a DT fuel and a CH capsule shell in ICF conditions. The four ion species develop individual self-similar density profiles under the assumption of P-T equilibrium in 1D and show interesting early time transient pressure and center of mass velocity behavior when P-T equilibrium is not enforced. Some 2D results are explored to better understand the transport mix in combination with convective flow driven by macroscopic fluid instabilities at the fuel-capsule interface. Early transient and some 2D behaviors from the fluid transport are compared to kinetic code results. Work performed under the auspices of the U.S. DOE by the LANS, LLC, Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. Funding provided by the Advanced Simulation and Computing (ASC) Program.
On the Monte Carlo simulation of electron transport in the sub-1 keV energy range.
Thomson, Rowan M; Kawrakow, Iwan
2011-08-01
The validity of "classic" Monte Carlo (MC) simulations of electron and positron transport at sub-1 keV energies is investigated in the context of quantum theory. Quantum theory dictates that uncertainties on the position and energy-momentum four-vectors of radiation quanta obey Heisenberg's uncertainty relation; however, these uncertainties are neglected in "classical" MC simulations of radiation transport in which position and momentum are known precisely. Using the quantum uncertainty relation and electron mean free path, the magnitudes of uncertainties on electron position and momentum are calculated for different kinetic energies; a validity bound on the classical simulation of electron transport is derived. In order to satisfy the Heisenberg uncertainty principle, uncertainties of 5% must be assigned to position and momentum for 1 keV electrons in water; at 100 eV, these uncertainties are 17 to 20% and are even larger at lower energies. In gaseous media such as air, these uncertainties are much smaller (less than 1% for electrons with energy 20 eV or greater). The classical Monte Carlo transport treatment is questionable for sub-1 keV electrons in condensed water as uncertainties on position and momentum must be large (relative to electron momentum and mean free path) to satisfy the quantum uncertainty principle. Simulations which do not account for these uncertainties are not faithful representations of the physical processes, calling into question the results of MC track structure codes simulating sub-1 keV electron transport. Further, the large difference in the scale at which quantum effects are important in gaseous and condensed media suggests that track structure measurements in gases are not necessarily representative of track structure in condensed materials on a micrometer or a nanometer scale.
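The bound described above follows from a short calculation: for an electron of kinetic energy T and mean free path λ, the smallest common fractional uncertainty f on position and momentum satisfying (f·p)(f·λ) ≥ ħ/2 is f = sqrt(ħ/(2pλ)). A minimal sketch, in which the mean free path values are assumed order-of-magnitude figures and not data from the paper:

```python
import math

HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837015e-31   # kg
EV = 1.602176634e-19     # J

def electron_momentum(kinetic_ev):
    """Non-relativistic electron momentum p = sqrt(2 m T)."""
    return math.sqrt(2.0 * M_E * kinetic_ev * EV)

def min_fractional_uncertainty(kinetic_ev, mfp_m):
    """Smallest common fraction f with (f*p) * (f*mfp) >= hbar / 2."""
    p = electron_momentum(kinetic_ev)
    return math.sqrt(HBAR / (2.0 * p * mfp_m))

# Mean free paths below are assumed, order-of-magnitude values for
# liquid water; they are NOT taken from the paper.
results = {}
for T_ev, mfp_nm in [(1000.0, 1.0), (100.0, 0.5)]:
    f = min_fractional_uncertainty(T_ev, mfp_nm * 1e-9)
    results[T_ev] = f
    print(f"T = {T_ev:6.0f} eV, mfp = {mfp_nm:.1f} nm -> f >= {100 * f:.0f}%")
```

With these assumed inputs the 1 keV case lands near the few-percent level quoted in the abstract, and the required uncertainty grows as the energy (and hence momentum) drops.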
Introduction to Classical Density Functional Theory by a Computational Experiment
ERIC Educational Resources Information Center
Jeanmairet, Guillaume; Levy, Nicolas; Levesque, Maximilien; Borgis, Daniel
2014-01-01
We propose an in silico experiment to introduce the classical density functional theory (cDFT). Density functional theories, whether quantum or classical, rely on abstract concepts that are nonintuitive; however, they are at the heart of powerful tools and active fields of research in both physics and chemistry. They led to the 1998 Nobel Prize in…
Design Equations and Criteria of Orthotropic Composite Panels
2013-05-01
Excerpt: Appendix A, Classical Laminate Theory (CLT). In Section 6 of this report, preliminary design stiffness characteristics are determined using Classical Laminate Theory (CLT) to predict equivalent stiffness characteristics and first-ply strength. (Report NSWCCD-65-TR-2004/16A)
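As a hedged illustration of what CLT computes, the sketch below assembles the in-plane stiffness (A) matrix of a symmetric laminate and extracts an equivalent modulus. The ply properties and layup are hypothetical carbon/epoxy-like values, not numbers from the report:

```python
import numpy as np

def q_matrix(E1, E2, G12, nu12):
    """Reduced (plane-stress) stiffness matrix of a unidirectional ply."""
    nu21 = nu12 * E2 / E1
    d = 1.0 - nu12 * nu21
    return np.array([[E1 / d,        nu12 * E2 / d, 0.0],
                     [nu12 * E2 / d, E2 / d,        0.0],
                     [0.0,           0.0,           G12]])

def q_bar(Q, theta_deg):
    """Rotate ply stiffness into laminate axes: Qbar = T^-1 Q R T R^-1."""
    t = np.deg2rad(theta_deg)
    c, s = np.cos(t), np.sin(t)
    T = np.array([[c * c, s * s, 2 * c * s],
                  [s * s, c * c, -2 * c * s],
                  [-c * s, c * s, c * c - s * s]])
    R = np.diag([1.0, 1.0, 2.0])
    return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

# Hypothetical ply moduli (GPa) and a quasi-isotropic symmetric layup.
Q = q_matrix(E1=140.0, E2=10.0, G12=5.0, nu12=0.3)
layup = [0, 45, -45, 90, 90, -45, 45, 0]
t_ply = 0.125e-3                                # ply thickness, m
A = sum(q_bar(Q, a) for a in layup) * t_ply     # in-plane (A) matrix
h = len(layup) * t_ply

Ex = 1.0 / (h * np.linalg.inv(A)[0, 0])         # equivalent in-plane modulus
print(f"equivalent laminate modulus Ex ~ {Ex:.0f} GPa")
```

For a quasi-isotropic layup like this one, Ex is the same in every in-plane direction, which is why CLT is used for quick "equivalent stiffness" estimates before detailed analysis.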
k-Cosymplectic Classical Field Theories: Tulczyjew and Skinner-Rusk Formulations
NASA Astrophysics Data System (ADS)
Rey, Angel M.; Román-Roy, Narciso; Salgado, Modesto; Vilariño, Silvia
2012-06-01
The k-cosymplectic Lagrangian and Hamiltonian formalisms of first-order classical field theories are reviewed and completed. In particular, they are stated for singular and almost-regular systems. Subsequently, several alternative formulations for k-cosymplectic first-order field theories are developed: First, generalizing the construction of Tulczyjew for mechanics, we give a new interpretation of the classical field equations. Second, the Lagrangian and Hamiltonian formalisms are unified by giving an extension of the Skinner-Rusk formulation on classical mechanics.
Opportunistic quantum network coding based on quantum teleportation
NASA Astrophysics Data System (ADS)
Shang, Tao; Du, Gang; Liu, Jian-wei
2016-04-01
It seems impossible to endow opportunistic characteristic to quantum network on the basis that quantum channel cannot be overheard without disturbance. In this paper, we propose an opportunistic quantum network coding scheme by taking full advantage of channel characteristic of quantum teleportation. Concretely, it utilizes quantum channel for secure transmission of quantum states and can detect eavesdroppers by means of quantum channel verification. What is more, it utilizes classical channel for both opportunistic listening to neighbor states and opportunistic coding by broadcasting measurement outcome. Analysis results show that our scheme can reduce the times of transmissions over classical channels for relay nodes and can effectively defend against classical passive attack and quantum active attack.
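The saving from opportunistic coding on the classical channel can be illustrated with the standard XOR network-coding idea. This is a classical analogy only, not the paper's quantum protocol: a relay broadcasts one coded packet, and each neighbor recovers the other's measurement outcome using the packet it already holds.

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Nodes A and B exchange packets through a relay. Each node keeps the
# packet it sent, so the relay can broadcast ONE coded packet instead of
# forwarding two separate ones; that is the opportunistic saving.
pkt_a = b"outcome-bits-A"    # e.g. A's classical measurement-outcome bits
pkt_b = b"outcome-bits-B"
coded = xor_bytes(pkt_a, pkt_b)

recovered_at_a = xor_bytes(coded, pkt_a)   # A cancels its own packet
recovered_at_b = xor_bytes(coded, pkt_b)   # B does likewise
print(recovered_at_a, recovered_at_b)
```

In the scheme above, the quantum channel carries the teleported states while broadcasts like this one carry the measurement outcomes; the XOR step is what lets a single classical broadcast serve two relay transmissions.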
Identification of Linear and Nonlinear Aerodynamic Impulse Responses Using Digital Filter Techniques
NASA Technical Reports Server (NTRS)
Silva, Walter A.
1997-01-01
This paper discusses the mathematical existence and the numerically-correct identification of linear and nonlinear aerodynamic impulse response functions. Differences between continuous-time and discrete-time system theories, which permit the identification and efficient use of these functions, will be detailed. Important input/output definitions and the concept of linear and nonlinear systems with memory will also be discussed. It will be shown that indicial (step or steady) responses (such as Wagner's function), forced harmonic responses (such as Theodorsen's function or those from doublet lattice theory), and responses to random inputs (such as gusts) can all be obtained from an aerodynamic impulse response function. This paper establishes the aerodynamic impulse response function as the most fundamental, and, therefore, the most computationally efficient, aerodynamic function that can be extracted from any given discrete-time, aerodynamic system. The results presented in this paper help to unify the understanding of classical two-dimensional continuous-time theories with modern three-dimensional, discrete-time theories. First, the method is applied to the nonlinear viscous Burgers equation as an example. Next the method is applied to a three-dimensional aeroelastic model using the CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code and then to a two-dimensional model using the CFL3D Navier-Stokes code. Comparisons of accuracy and computational cost savings are presented. Because of its mathematical generality, an important attribute of this methodology is that it is applicable to a wide range of nonlinear, discrete-time problems.
NASA Technical Reports Server (NTRS)
Koch, L. Danielle
1998-01-01
Reported here is a design study of a propeller for a vehicle capable of subsonic flight in Earth's stratosphere. All propellers presented were required to absorb 63.4 kW (85 hp) at 25.9 km (85,000 ft) while aircraft cruise velocity was maintained at Mach 0.40. To produce the final design, classic momentum and blade-element theories were combined with two- and three-dimensional results from the Advanced Ducted Propfan Analysis Code (ADPAC), a numerical Navier-Stokes analysis code. The Eppler 387 airfoil was used for each of the constant-section propeller designs compared. Experimental data from the Langley Low-Turbulence Pressure Tunnel was used in the strip-theory design and analysis programs that were written. The experimental data was also used to validate ADPAC at a Reynolds number of 60,000 and a Mach number of 0.20. Experimental and calculated surface pressure coefficients are compared for a range of angles of attack. Since low-Reynolds-number transonic experimental data was unavailable, ADPAC was used to generate two-dimensional section performance predictions for Reynolds numbers of 60,000 and 100,000 and Mach numbers ranging from 0.45 to 0.75. Surface pressure coefficients are presented for selected angles of attack, in addition to the variation of lift and drag coefficients at each flow condition. A three-dimensional model of the final design was made, which ADPAC used to calculate propeller performance. ADPAC performance predictions were compared with strip-theory calculations at the design point. Propeller efficiency predicted by ADPAC was within 1.5% of that calculated by strip-theory methods, although ADPAC predictions of thrust, power, and torque coefficients were approximately 5% lower than the strip-theory results. Simplifying assumptions made in the strip theory account for the differences seen.
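The classic momentum (actuator-disk) theory mentioned above gives a closed-form ideal efficiency from the thrust coefficient. A small sketch with purely illustrative stratospheric numbers, not the report's design case:

```python
import math

def ideal_efficiency(thrust, rho, v, disk_area):
    """Froude (actuator-disk) ideal propulsive efficiency:
    eta = 2 / (1 + sqrt(1 + CT)), where CT = T / (0.5 rho V^2 A)."""
    ct = thrust / (0.5 * rho * v ** 2 * disk_area)
    return 2.0 / (1.0 + math.sqrt(1.0 + ct))

# Illustrative numbers only (assumed, NOT the report's design case):
eta = ideal_efficiency(thrust=300.0,     # N
                       rho=0.04,         # kg/m^3, thin air near 26 km
                       v=118.0,          # m/s, roughly Mach 0.40 at altitude
                       disk_area=15.0)   # m^2
print(f"ideal efficiency = {eta:.3f}")
```

The low air density at altitude pushes the thrust coefficient up for a given thrust, which is why stratospheric propellers need large disk areas to keep the ideal efficiency high; blade-element theory then apportions that thrust along the blade.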
Generalized probability theories: what determines the structure of quantum theory?
NASA Astrophysics Data System (ADS)
Janotta, Peter; Hinrichsen, Haye
2014-08-01
The framework of generalized probabilistic theories is a powerful tool for studying the foundations of quantum physics. It provides the basis for a variety of recent findings that significantly improve our understanding of the rich physical structure of quantum theory. This review paper tries to present the framework and recent results to a broader readership in an accessible manner. To achieve this, we follow a constructive approach. Starting from a few basic physically motivated assumptions we show how a given set of observations can be manifested in an operational theory. Furthermore, we characterize consistency conditions limiting the range of possible extensions. In this framework classical and quantum theory appear as special cases, and the aim is to understand what distinguishes quantum mechanics as the fundamental theory realized in nature. It turns out that non-classical features of single systems can equivalently result from higher-dimensional classical theories that have been restricted. Entanglement and non-locality, however, are shown to be genuine non-classical features.
Single stock dynamics on high-frequency data: from a compressed coding perspective.
Fushing, Hsieh; Chen, Shu-Chun; Hwang, Chii-Ruey
2014-01-01
High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and then are coupled together to reveal a single stock dynamics without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force for stimulating both the aggregations of large trading volumes and transaction numbers. The state of system-wise synchrony is manifested with very frequent recurrence in the stock dynamics. And this data-driven dynamic mechanism is seen to correspondingly vary as the global market transiting in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest to a fuller extent, but also contradict some classical theories in finance. Overall this version of stock dynamics is potentially more coherent and realistic, especially when the current financial market is increasingly powered by high-frequency trading via computer algorithms, rather than by individual investors.
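The base-8 coupling can be sketched directly: three binary extreme-event indicators (large absolute return, large volume, many transactions) are packed into one octal symbol per observation. The thresholds and data below are hypothetical, and the HFS segmentation algorithm itself is not shown:

```python
def base8_code(returns, volumes, trades, r_thr, v_thr, t_thr):
    """Couple three binary extreme-event indicators into one base-8 symbol
    per observation: bit 2 = large |return|, bit 1 = large trading volume,
    bit 0 = many transactions."""
    return [4 * (abs(r) > r_thr) + 2 * (v > v_thr) + (t > t_thr)
            for r, v, t in zip(returns, volumes, trades)]

# Hypothetical ticks and thresholds, chosen only to exercise the coding.
codes = base8_code(returns=[0.001, -0.03, 0.02],
                   volumes=[100, 900, 950],
                   trades=[10, 80, 20],
                   r_thr=0.01, v_thr=500, t_thr=50)
print(codes)  # one symbol in 0..7 per tick
```

Symbol 7 marks ticks where all three extremes coincide; runs of high symbols against a background of zeros are the aggregation-versus-sparsity contrast the compressed code is designed to reveal.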
Noniterative MAP reconstruction using sparse matrix representations.
Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J
2009-09-01
We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude compared to linear iterative reconstruction methods.
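The transform-quantize-apply pipeline can be sketched with an orthonormal DCT standing in for the paper's numerically designed SMT; the kernel, threshold, and sizes below are illustrative assumptions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; a stand-in for the paper's designed
    sparse-matrix transform (SMT), which uses butterflies instead."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

n = 64
idx = np.arange(n)
# A smooth, dense "inverse operator" (an assumed, compressible kernel):
A = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 8.0)

C = dct_matrix(n)
B = C @ A @ C.T                                  # transform rows and columns
B[np.abs(B) < 1e-4 * np.abs(B).max()] = 0.0      # crude quantization stand-in
sparsity = float(np.mean(B != 0))

rng = np.random.default_rng(0)
y = rng.standard_normal(n)
approx = C.T @ (B @ (C @ y))                     # apply via the coded matrix
err = np.linalg.norm(approx - A @ y) / np.linalg.norm(A @ y)
print(f"kept {sparsity:.1%} of entries, relative matvec error {err:.1e}")
```

The point of the distortion metric in the paper is visible even in this toy: what matters is the error of the final matrix-vector product, not how many entries of the transformed matrix were zeroed.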
de Bock, Élodie; Hardouin, Jean-Benoit; Blanchin, Myriam; Le Neel, Tanguy; Kubis, Gildas; Bonnaud-Antignac, Angélique; Dantan, Étienne; Sébille, Véronique
2016-10-01
The objective was to compare classical test theory and Rasch-family models derived from item response theory for the analysis of longitudinal patient-reported outcomes data with possibly informative intermittent missing items. A simulation study was performed to assess and compare the performance of classical test theory and the Rasch model in terms of bias, control of the type I error, and power of the test of the time effect. The type I error was controlled for both classical test theory and the Rasch model, whether data were complete or some items were missing. Both methods were unbiased and displayed similar power with complete data. When items were missing, the Rasch model remained unbiased and displayed higher power than classical test theory. The Rasch model performed better than the classical test theory approach for the analysis of longitudinal patient-reported outcomes with possibly informative intermittent missing items, mainly in terms of power. This study highlights the value of Rasch-based models in clinical research and epidemiology for the analysis of incomplete patient-reported outcomes data. © The Author(s) 2013.
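A minimal sketch of the kind of simulation setup described: generate Rasch-model item responses, introduce intermittent missing items (non-informative here, whereas the paper studies the harder informative case), and score persons CTT-style. All parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
n_persons, n_items = 200, 10
theta = rng.standard_normal(n_persons)       # latent traits (persons)
beta = np.linspace(-1.5, 1.5, n_items)       # item difficulties (assumed)

# Rasch model: P(X_ij = 1) = 1 / (1 + exp(-(theta_i - beta_j)))
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
X = (rng.random((n_persons, n_items)) < p).astype(float)

# Intermittent missing items; MCAR here, not the informative mechanism
# the paper is concerned with.
X[rng.random(X.shape) < 0.2] = np.nan

# CTT-style score: mean over each person's observed items.
ctt_score = np.nanmean(X, axis=1)
corr = float(np.corrcoef(ctt_score, theta)[0, 1])
print(f"corr(CTT score, true trait) = {corr:.2f}")
```

Under informative missingness the observed-item mean is no longer an unbiased proxy for the trait, which is where the full Rasch likelihood, by modeling the item level directly, retains its advantage.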
Transport studies in high-performance field reversed configuration plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, S., E-mail: sgupta@trialphaenergy.com; Barnes, D. C.; Dettrick, S. A.
2016-05-15
A significant improvement of field reversed configuration (FRC) lifetime and plasma confinement times in the C-2 plasma, called the High Performance FRC regime, has been observed with neutral beam injection (NBI), improved edge stability, and better wall conditioning [Binderbauer et al., Phys. Plasmas 22, 056110 (2015)]. A Quasi-1D (Q1D) fluid transport code has been developed and employed to carry out transport analysis of such C-2 plasma conditions. The Q1D code is coupled to a Monte-Carlo code to incorporate the effect of fast ions, due to NBI, on the background FRC plasma. Numerically, the Q1D transport behavior with enhanced transport coefficients (but with otherwise classical parametric dependencies), such as 5 times classical resistive diffusion, classical thermal ion conductivity, 20 times classical electron thermal conductivity, and classical fast ion behavior, fits the experimentally measured time evolution of the excluded flux radius, line-integrated density, and electron/ion temperature. The numerical study shows near sustainment of poloidal flux for nearly 1 ms in the presence of NBI.
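The role of the enhancement multipliers can be illustrated with a toy 1-D diffusion step in which a classical coefficient is scaled by the fitted factor. This is a schematic stand-in for the Q1D code, not the code itself; geometry, units, and profiles are made up:

```python
import numpy as np

def diffuse(u, D, dx, dt, steps):
    """Explicit 1-D diffusion, du/dt = d/dx(D du/dx), fixed boundary values."""
    u = u.copy()
    for _ in range(steps):
        flux = D * np.diff(u) / dx           # flux at cell faces
        u[1:-1] += dt * np.diff(flux) / dx
    return u

nx = 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
Te0 = np.exp(-(((x - 0.5) / 0.1) ** 2))      # assumed initial Te profile

D_classical = 1.0        # arbitrary units
multiplier = 20.0        # anomalous enhancement factor, as in the cited fit
D = multiplier * D_classical
dt = 0.4 * dx * dx / D                       # within the explicit stability limit
Te = diffuse(Te0, D, dx, dt, steps=200)
print(f"peak Te dropped from {Te0.max():.2f} to {Te.max():.2f}")
```

Because the enhanced coefficients keep their classical parametric dependencies, the multiplier is the single free factor adjusted until simulated profiles like this one track the measured time histories.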
Telling and Not-Telling: A Classic Grounded Theory of Sharing Life-Stories
ERIC Educational Resources Information Center
Powers, Trudy Lee
2013-01-01
This study of "Telling and Not-Telling" was conducted using the classic grounded theory methodology (Glaser 1978, 1992, 1998; Glaser & Strauss, 1967). This unique methodology systematically and inductively generates conceptual theories from data. The goal is to discover theory that explains, predicts, and provides practical…
Entanglement-assisted quantum quasicyclic low-density parity-check codes
NASA Astrophysics Data System (ADS)
Hsieh, Min-Hsiu; Brun, Todd A.; Devetak, Igor
2009-03-01
We investigate the construction of quantum low-density parity-check (LDPC) codes from classical quasicyclic (QC) LDPC codes with girth greater than or equal to 6. We have shown that the classical codes in the generalized Calderbank-Shor-Steane construction do not need to satisfy the dual-containing property as long as preshared entanglement is available to both sender and receiver. We can use this to avoid the many four-cycles which typically arise in dual-containing LDPC codes. The advantage of such quantum codes comes from the use of efficient decoding algorithms such as the sum-product algorithm (SPA). It is well known that in the SPA, cycles of length 4 make successive decoding iterations highly correlated and hence limit the decoding performance. We show the principle of constructing quantum QC-LDPC codes which require only small amounts of initial shared entanglement.
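The girth condition is easy to check in code: the Tanner graph of a parity-check matrix H contains a length-4 cycle exactly when two distinct rows of H share ones in two or more columns. A small sketch with hypothetical matrices, not codes from the paper:

```python
import numpy as np

def has_four_cycle(H):
    """The Tanner graph of parity-check matrix H has a length-4 cycle
    iff some pair of distinct rows shares ones in >= 2 columns
    (equivalently, the girth is below 6)."""
    overlap = H @ H.T              # pairwise row-overlap counts
    np.fill_diagonal(overlap, 0)   # ignore each row against itself
    return bool((overlap >= 2).any())

# Hypothetical toy matrices, for illustration only:
H_bad = np.array([[1, 1, 0, 0],
                  [1, 1, 1, 0]])   # rows share columns 0 and 1 -> 4-cycle
H_ok = np.array([[1, 1, 0, 0],
                 [1, 0, 1, 0]])    # any two rows overlap in <= 1 column
print(has_four_cycle(H_bad), has_four_cycle(H_ok))
```

Dual-containing constructions force many such overlapping rows, which is precisely the source of the four-cycles that entanglement assistance lets the construction avoid.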
Combinatorial neural codes from a mathematical coding theory perspective.
Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L
2013-07-01
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
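The error-correction notion used in this comparison is classical minimum-distance decoding, which can be sketched directly. The toy binary code below is illustrative only, not an RF code from the paper:

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary tuples."""
    return sum(x != y for x, y in zip(a, b))

def nearest_codeword(word, code):
    """Minimum-distance decoding (maximum likelihood on a binary symmetric
    channel): return the codeword closest in Hamming distance; ties are
    resolved arbitrarily."""
    return min(code, key=lambda c: hamming(word, c))

# A hypothetical toy binary code with minimum distance 3:
code = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
received = (1, 1, 1, 1, 0)       # (1,1,1,0,0) with one bit flipped
decoded = nearest_codeword(received, code)
print(decoded)
```

The paper's observation is that redundancy alone does not buy this kind of correction: a code whose codewords are chosen to mirror stimulus geometry need not spread them far apart in Hamming distance.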
Bidirectional holographic codes and sub-AdS locality
NASA Astrophysics Data System (ADS)
Yang, Zhao; Hayden, Patrick; Qi, Xiao-Liang
2016-01-01
Tensor networks implementing quantum error correcting codes have recently been used to construct toy models of holographic duality explicitly realizing some of the more puzzling features of the AdS/CFT correspondence. These models reproduce the Ryu-Takayanagi entropy formula for boundary intervals, and allow bulk operators to be mapped to the boundary in a redundant fashion. These exactly solvable, explicit models have provided valuable insight but nonetheless suffer from many deficiencies, some of which we attempt to address in this article. We propose a new class of tensor network models that subsume the earlier advances and, in addition, incorporate additional features of holographic duality, including: (1) a holographic interpretation of all boundary states, not just those in a "code" subspace, (2) a set of bulk states playing the role of "classical geometries" which reproduce the Ryu-Takayanagi formula for boundary intervals, (3) a bulk gauge symmetry analogous to diffeomorphism invariance in gravitational theories, (4) emergent bulk locality for sufficiently sparse excitations, and (5) the ability to describe geometry at sub-AdS resolutions or even flat space.
Koopman-von Neumann formulation of classical Yang-Mills theories: I
NASA Astrophysics Data System (ADS)
Carta, P.; Gozzi, E.; Mauro, D.
2006-03-01
In this paper we present the Koopman-von Neumann (KvN) formulation of classical non-Abelian gauge field theories. In particular we shall explore the functional (or classical path integral) counterpart of the KvN method. In the quantum path integral quantization of Yang-Mills theories, concepts like gauge-fixing and the Faddeev-Popov determinant appear quite naturally. We will prove that these same objects are also needed in this classical path integral formulation of Yang-Mills theories. We shall also explore the classical path integral counterpart of the BFV formalism and build all the associated universal and gauge charges. These latter charges are quite different from their quantum analogs, and we shall show the relation between the two. This paper lays the foundation of this formalism which, due to the many auxiliary fields present, is rather heavy. Applications to specific topics outlined in the paper will appear in later publications.
a Classical Isodual Theory of Antimatter and its Prediction of Antigravity
NASA Astrophysics Data System (ADS)
Santilli, Ruggero Maria
An inspection of the contemporary physics literature reveals that, while matter is treated at all levels of study, from Newtonian mechanics to quantum field theory, antimatter is solely treated at the level of second quantization. For the purpose of initiating the restoration of full equivalence in the treatment of matter and antimatter in due time, and as the classical foundations of an axiomatically consistent inclusion of gravitation in unified gauge theories recently appeared elsewhere, in this paper we present a classical representation of antimatter which begins at the primitive Newtonian level with corresponding formulations at all subsequent levels. By recalling that charge conjugation of particles into antiparticles is antiautomorphic, the proposed theory of antimatter is based on a new map, called isoduality, which is also antiautomorphic (and more generally, antiisomorphic), yet it is applicable beginning at the classical level and then persists at the quantum level where it becomes equivalent to charge conjugation. We therefore present, apparently for the first time, the classical isodual theory of antimatter, we identify the physical foundations of the theory as being the novel isodual Galilean, special and general relativities, and we show the compatibility of the theory with all available classical experimental data on antimatter. We identify the classical foundations of the prediction of antigravity for antimatter in the field of matter (or vice-versa) without any claim on its validity, and defer its resolution to specifically identified experiments. We identify the novel, classical, isodual electromagnetic waves which are predicted to be emitted by antimatter, the so-called space-time machine based on a novel non-Newtonian geometric propulsion, and other implications of the theory. 
We also introduce, apparently for the first time, the isodual space and time inversions and show that they are nontrivially different from the conventional ones, thus offering a possible route to resolving in the future whether faraway galaxies and quasars are made up of matter or of antimatter. The paper ends by noting that these studies are in their infancy and by indicating some of the open problems. To avoid a prohibitive length, the paper is restricted to the classical treatment, while studies on operator profiles are treated elsewhere.
Neural Elements for Predictive Coding.
Shipp, Stewart
2016-01-01
Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backward in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many 'illusory' instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forward and backward pathways should be completely separate, given their functional distinction; this aspect of circuitry - that neurons with extrinsically bifurcating axons do not project in both directions - has only recently been confirmed. 
Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic 'canonical microcircuit' and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made possible by transgenic neural engineering in the mouse. The exercise highlights a number of recurring themes, amongst them the consideration of interneuron diversity as a spur to theoretical development and the potential for specifying a pyramidal neuron's function by its individual 'connectome,' combining its extrinsic projection (forward, backward or subcortical) with evaluation of its intrinsic network (e.g., unidirectional versus bidirectional connections with other pyramidal neurons).
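The hierarchical exchange of prediction (backward) and prediction error (forward) can be sketched as a toy linear predictive-coding loop. The dimensions, weights, and learning rate below are arbitrary assumptions, not parameters from any model in the article:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_rep = 8, 3
W = 0.5 * rng.standard_normal((n_in, n_rep))  # generative (backward) weights

# A sensory input actually produced by hidden causes, plus a little noise:
true_cause = np.array([1.0, -0.5, 2.0])
x = W @ true_cause + 0.01 * rng.standard_normal(n_in)

r = np.zeros(n_rep)            # higher-level representation (the percept)
lr = 0.1
for _ in range(500):
    prediction = W @ r         # backward pathway: top-down prediction
    error = x - prediction     # forward pathway: bottom-up prediction error
    r += lr * (W.T @ error)    # adjust the representation to cancel the error

residual = float(np.linalg.norm(x - W @ r))
print(f"final prediction-error norm = {residual:.3f}")
```

The loop performs gradient descent on the squared prediction error, so the percept `r` settles on the causes that best explain the input; the separate `prediction` and `error` variables mirror the claim that backward and forward pathways should originate from distinct populations.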
Neural Elements for Predictive Coding
Shipp, Stewart
2016-01-01
Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backward in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many ‘illusory’ instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forward and backward pathways should be completely separate, given their functional distinction; this aspect of circuitry – that neurons with extrinsically bifurcating axons do not project in both directions – has only recently been confirmed. 
Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic ‘canonical microcircuit’ and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made possible by transgenic neural engineering in the mouse. The exercise highlights a number of recurring themes, amongst them the consideration of interneuron diversity as a spur to theoretical development and the potential for specifying a pyramidal neuron’s function by its individual ‘connectome,’ combining its extrinsic projection (forward, backward or subcortical) with evaluation of its intrinsic network (e.g., unidirectional versus bidirectional connections with other pyramidal neurons). PMID:27917138
MAC/GMC 4.0 User's Manual: Keywords Manual. Volume 2
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2002-01-01
This document is the second volume in the three volume set of User's Manuals for the Micromechanics Analysis Code with Generalized Method of Cells Version 4.0 (MAC/GMC 4.0). Volume 1 is the Theory Manual, this document is the Keywords Manual, and Volume 3 is the Example Problem Manual. MAC/GMC 4.0 is a composite material and laminate analysis software program developed at the NASA Glenn Research Center. It is based on the generalized method of cells (GMC) micromechanics theory, which provides access to the local stress and strain fields in the composite material. This access grants GMC the ability to accommodate arbitrary local models for inelastic material behavior and various types of damage and failure analysis. MAC/GMC 4.0 has been built around GMC to provide the theory with a user-friendly framework, along with a library of local inelastic, damage, and failure models. Further, applications of simulated thermo-mechanical loading, generation of output results, and selection of architectures to represent the composite material have been automated in MAC/GMC 4.0. Finally, classical lamination theory has been implemented within MAC/GMC 4.0 wherein GMC is used to model the composite material response of each ply. Consequently, the full range of GMC composite material capabilities is available for analysis of arbitrary laminate configurations as well. This volume describes the basic information required to use the MAC/GMC 4.0 software, including a 'Getting Started' section, and an in-depth description of each of the 22 keywords used in the input file to control the execution of the code.
The Prediction of Item Parameters Based on Classical Test Theory and Latent Trait Theory
ERIC Educational Resources Information Center
Anil, Duygu
2008-01-01
In this study, the predictive power of experts' judgments of item characteristics, for conditions in which try-out practices cannot be applied, was examined against item characteristics computed under classical test theory and the two-parameter logistic model of latent trait theory. The study was carried out on 9914 randomly selected students…
Any Ontological Model of the Single Qubit Stabilizer Formalism must be Contextual
NASA Astrophysics Data System (ADS)
Lillystone, Piers; Wallman, Joel J.
Quantum computers allow us to easily solve some problems classical computers find hard. Non-classical improvements in computational power should be due to some non-classical property of quantum theory. Contextuality, a more general notion of non-locality, is a necessary, but not sufficient, resource for quantum speed-up. Proofs of contextuality can be constructed for the classically simulable stabilizer formalism. Previous proofs of stabilizer contextuality are known for 2 or more qubits, for example the Mermin-Peres magic square. In the work presented we extend these results and prove that any ontological model of the single qubit stabilizer theory must be contextual, as defined by R. Spekkens, and give a relation between our result and the Mermin-Peres square. By demonstrating that contextuality is present in the qubit stabilizer formalism we provide further insight into the contextuality present in quantum theory. Understanding the contextuality of classical sub-theories will allow us to better identify the physical properties of quantum theory required for computational speed up. This research was supported by CIFAR, the Government of Ontario, and the Government of Canada through NSERC and Industry Canada.
NASA Astrophysics Data System (ADS)
Oblow, E. M.
1982-10-01
An evaluation was made of the mathematical and economic basis for conversion processes in the Long-term Energy Analysis Program (LEAP) energy economy model. Conversion processes are the main modeling subunit in LEAP used to represent energy conversion industries and are supposedly based on the classical economic theory of the firm. Questions about uniqueness and existence of LEAP solutions and their relation to classical equilibrium economic theory prompted the study. An analysis of classical theory and LEAP model equations was made to determine their exact relationship. The conclusions drawn from this analysis were that LEAP theory is not consistent with the classical theory of the firm. Specifically, the capacity factor formalism used by LEAP does not support a classical interpretation in terms of a technological production function for energy conversion processes. The economic implications of this inconsistency are suboptimal process operation and short term negative profits in years where plant operation should be terminated. A new capacity factor formalism, which retains the behavioral features of the original model, is proposed to resolve these discrepancies.
ERIC Educational Resources Information Center
Lange, Elizabeth
2015-01-01
This article argues that sociology has been a foundational discipline for the field of adult education, but it has been largely implicit, until recently. This article contextualizes classical theories of sociology within contemporary critiques, reviews the historical roots of sociology and then briefly introduces the classical theories…
Stott, Clifford; Drury, John
2016-04-01
This article explores the origins and ideology of classical crowd psychology, a body of theory reflected in contemporary popularised understandings such as those of the 2011 English 'riots'. This article argues that during the nineteenth century, the crowd came to symbolise a fear of 'mass society' and that 'classical' crowd psychology was a product of these fears. Classical crowd psychology pathologised, reified and decontextualised the crowd, offering the ruling elites a perceived opportunity to control it. We contend that classical theory misrepresents crowd psychology and survives in contemporary understanding because it is ideological. We conclude by discussing how classical theory has been supplanted in academic contexts by an identity-based crowd psychology that restores the meaning to crowd action, replaces it in its social context and in so doing transforms theoretical understanding of 'riots' and the nature of the self. © The Author(s) 2016.
Influence of an asymmetric ring on the modeling of an orthogonally stiffened cylindrical shell
NASA Technical Reports Server (NTRS)
Rastogi, Naveen; Johnson, Eric R.
1994-01-01
Structural models are examined for the influence of a ring with an asymmetrical cross section on the linear elastic response of an orthogonally stiffened cylindrical shell subjected to internal pressure. The first structural model employs classical theory for the shell and stiffeners. The second model employs transverse shear deformation theories for the shell and stringer and classical theory for the ring. Closed-end pressure vessel effects are included. Interacting line load intensities are computed in the stiffener-to-skin joints for an example problem having the dimensions of the fuselage of a large transport aircraft. Classical structural theory is found to exaggerate the asymmetric response compared to the transverse shear deformation theory.
NASA Astrophysics Data System (ADS)
Ohba, Nobuko; Ogata, Shuji; Tamura, Tomoyuki; Kobayashi, Ryo; Yamakawa, Shunsuke; Asahi, Ryoji
2012-02-01
Enhancing the diffusivity of the Li ion in a Li-graphite intercalation compound that has been used as a negative electrode in the Li-ion rechargeable battery, is important in improving both the recharging speed and power of the battery. In the compound, the Li ion creates a long-range stress field around itself by expanding the interlayer spacing of graphite. We advance the hybrid quantum-classical simulation code to include the external electric field in addition to the long-range stress field by first-principles simulation. In the hybrid code, the quantum region selected adaptively around the Li ion is treated using the real-space density-functional theory for electrons. The rest of the system is described with an empirical interatomic potential that includes the term relating to the dispersion force between the C atoms in different layers. Hybrid simulation runs for Li dynamics in graphite are performed at 423 K under various settings of the amplitude and frequency of alternating electric fields perpendicular to C-layers. We find that the in-plane diffusivity of the Li ion is enhanced significantly by the electric field if the amplitude is larger than 0.2 V/Å within its order and the frequency is as high as 1.7 THz. The microscopic mechanisms of the enhancement are explained.
ERIC Educational Resources Information Center
Magno, Carlo
2009-01-01
The present report demonstrates the difference between the classical test theory (CTT) and item response theory (IRT) approaches using actual test data for chemistry junior high school students. The CTT and IRT were compared across two samples and two forms of test on their item difficulty, internal consistency, and measurement errors. The specific…
ERIC Educational Resources Information Center
Guler, Nese; Gelbal, Selahattin
2010-01-01
In this study, classical test theory and generalizability theory were used to determine the reliability of scores obtained from a measurement tool of mathematics success. The 24 open-ended mathematics questions of TIMSS-1999 were applied to 203 students in the 2007 spring semester. Internal consistency of scores was found to be 0.92. For…
ERIC Educational Resources Information Center
Kohli, Nidhi; Koran, Jennifer; Henn, Lisa
2015-01-01
There are well-defined theoretical differences between the classical test theory (CTT) and item response theory (IRT) frameworks. It is understood that in the CTT framework, person and item statistics are test- and sample-dependent. This is not the perception with IRT. For this reason, the IRT framework is considered to be theoretically superior…
Cappelleri, Joseph C; Jason Lundy, J; Hays, Ron D
2014-05-01
The US Food and Drug Administration's guidance for industry document on patient-reported outcomes (PRO) defines content validity as "the extent to which the instrument measures the concept of interest" (FDA, 2009, p. 12). According to Strauss and Smith (2009), construct validity "is now generally viewed as a unifying form of validity for psychological measurements, subsuming both content and criterion validity" (p. 7). Hence, both qualitative and quantitative information are essential in evaluating the validity of measures. We review classical test theory and item response theory (IRT) approaches to evaluating PRO measures, including frequency of responses to each category of the items in a multi-item scale, the distribution of scale scores, floor and ceiling effects, the relationship between item response options and the total score, and the extent to which hypothesized "difficulty" (severity) order of items is represented by observed responses. If a researcher has few qualitative data and wants to get preliminary information about the content validity of the instrument, then descriptive assessments using classical test theory should be the first step. As the sample size grows during subsequent stages of instrument development, confidence in the numerical estimates from Rasch and other IRT models (as well as those of classical test theory) would also grow. Classical test theory and IRT can be useful in providing a quantitative assessment of items and scales during the content-validity phase of PRO-measure development. Depending on the particular type of measure and the specific circumstances, the classical test theory and/or the IRT should be considered to help maximize the content validity of PRO measures. Copyright © 2014 Elsevier HS Journals, Inc. All rights reserved.
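The classical-test-theory quantities recurring in the abstracts above (item difficulty, internal consistency) reduce to simple descriptive statistics. A minimal sketch in Python, using an invented 0/1-scored response matrix; all data and names here are illustrative, not taken from any of the cited studies:

```python
# Sketch: classical test theory (CTT) item statistics on a small
# hypothetical response matrix (rows = persons, columns = items, 0/1 scored).
# All data below are invented for illustration.

def item_difficulty(responses):
    """Proportion of correct responses per item (the CTT 'difficulty')."""
    n = len(responses)
    k = len(responses[0])
    return [sum(r[j] for r in responses) / n for j in range(k)]

def cronbach_alpha(responses):
    """Internal-consistency (reliability) estimate, Cronbach's alpha."""
    k = len(responses[0])
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([r[j] for r in responses]) for j in range(k)]
    totals = [sum(r) for r in responses]
    return (k / (k - 1)) * (1 - sum(item_vars) / var(totals))

data = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]
print(item_difficulty(data))        # per-item proportion correct
print(round(cronbach_alpha(data), 3))
```

These sample-dependent statistics are exactly the test- and sample-dependence that the IRT framework is claimed to avoid.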
Constrained variational calculus for higher order classical field theories
NASA Astrophysics Data System (ADS)
Campos, Cédric M.; de León, Manuel; Martín de Diego, David
2010-11-01
We develop an intrinsic geometrical setting for higher order constrained field theories. As a main tool we use an appropriate generalization of the classical Skinner-Rusk formalism. Some examples of applications are studied, in particular to the geometrical description of optimal control theory for partial differential equations.
Chance, determinism and the classical theory of probability.
Vasudevan, Anubav
2018-02-01
This paper situates the metaphysical antinomy between chance and determinism in the historical context of some of the earliest developments in the mathematical theory of probability. Since Hacking's seminal work on the subject, it has been a widely held view that the classical theorists of probability were guilty of an unwitting equivocation between a subjective, or epistemic, interpretation of probability, on the one hand, and an objective, or statistical, interpretation, on the other. While there is some truth to this account, I argue that the tension at the heart of the classical theory of probability is not best understood in terms of the duality between subjective and objective interpretations of probability. Rather, the apparent paradox of chance and determinism, when viewed through the lens of the classical theory of probability, manifests itself in a much deeper ambivalence on the part of the classical probabilists as to the rational commensurability of causal and probabilistic reasoning. Copyright © 2017 Elsevier Ltd. All rights reserved.
Coherent state coding approaches the capacity of non-Gaussian bosonic channels
NASA Astrophysics Data System (ADS)
Huber, Stefan; König, Robert
2018-05-01
The additivity problem asks if the use of entanglement can boost the information-carrying capacity of a given channel beyond what is achievable by coding with simple product states only. This has recently been shown not to be the case for phase-insensitive one-mode Gaussian channels, but remains unresolved in general. Here we consider two general classes of bosonic noise channels, which include phase-insensitive Gaussian channels as special cases: these are attenuators with general, potentially non-Gaussian environment states and classical noise channels with general probabilistic noise. We show that additivity violations, if existent, are rather minor for all these channels: the maximal gain in classical capacity is bounded by a constant independent of the input energy. Our proof shows that coding by simple classical modulation of coherent states is close to optimal.
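For context, the best-known special case of "classical modulation of coherent states" is the pure-loss Gaussian channel, whose classical capacity is the known expression g(η·n̄), with g(x) = (x+1)log₂(x+1) − x·log₂x. A minimal sketch; the transmissivity and mean photon number below are hypothetical values, not parameters from the paper:

```python
# Sketch: Holevo information of coherent-state coding on a pure-loss
# bosonic channel with transmissivity eta and mean input photon number nbar:
#   chi = g(eta * nbar),  g(x) = (x+1)*log2(x+1) - x*log2(x).
# eta and nbar below are hypothetical illustrative values.
import math

def g(x):
    """Entropy of a thermal state with mean photon number x, in bits."""
    if x <= 0:
        return 0.0
    return (x + 1) * math.log2(x + 1) - x * math.log2(x)

def coherent_state_capacity(eta, nbar):
    return g(eta * nbar)

print(round(coherent_state_capacity(0.5, 4.0), 4))  # bits per channel use
```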
A unified viscous theory of lift and drag of 2-D thin airfoils and 3-D thin wings
NASA Technical Reports Server (NTRS)
Yates, John E.
1991-01-01
A unified viscous theory of 2-D thin airfoils and 3-D thin wings is developed with numerical examples. The viscous theory of the load distribution is unique and tends to the classical inviscid result with Kutta condition in the high Reynolds number limit. A new theory of 2-D section induced drag is introduced with specific applications to three cases of interest: (1) constant angle of attack; (2) parabolic camber; and (3) a flapped airfoil. The first case is also extended to a profiled leading edge foil. The well-known drag due to absence of leading edge suction is derived from the viscous theory. It is independent of Reynolds number for zero thickness and varies inversely with the square root of the Reynolds number based on the leading edge radius for profiled sections. The role of turbulence in the section induced drag problem is discussed. A theory of minimum section induced drag is derived and applied. For low Reynolds number the minimum drag load tends to the constant angle of attack solution and for high Reynolds number to an approximation of the parabolic camber solution. The parabolic camber section induced drag is about 4 percent greater than the ideal minimum at high Reynolds number. Two new concepts, the viscous induced drag angle and the viscous induced separation potential are introduced. The separation potential is calculated for three 2-D cases and for a 3-D rectangular wing. The potential is calculated with input from a standard doublet lattice wing code without recourse to any boundary layer calculations. Separation is indicated in regions where it is observed experimentally. The classical induced drag is recovered in the 3-D high Reynolds number limit with an additional contribution that is Reynolds-number dependent. The 3-D viscous theory of minimum induced drag yields an equation for the optimal spanwise and chordwise load distribution. The design of optimal wing tip planforms and camber distributions is possible with the viscous 3-D wing theory.
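The classical inviscid limit referred to in the abstract is the thin-airfoil result with Kutta condition, Cl = 2πα for a flat plate. A one-line sketch; the angle of attack is an arbitrary illustrative value:

```python
# Sketch: classical inviscid thin-airfoil lift coefficient (flat plate,
# Kutta condition): Cl = 2 * pi * alpha. The 5-degree angle is illustrative.
import math

def cl_thin_airfoil(alpha_deg):
    return 2.0 * math.pi * math.radians(alpha_deg)

print(round(cl_thin_airfoil(5.0), 4))  # lift coefficient at 5 degrees
```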
Dual Coding, Reasoning and Fallacies.
ERIC Educational Resources Information Center
Hample, Dale
1982-01-01
Develops the theory that a fallacy is not a comparison of a rhetorical text to a set of definitions but a comparison of one person's cognition with another's. Reviews Paivio's dual coding theory, relates nonverbal coding to reasoning processes, and generates a limited fallacy theory based on dual coding theory. (PD)
Coding Issues in Grounded Theory
ERIC Educational Resources Information Center
Moghaddam, Alireza
2006-01-01
This paper discusses grounded theory as one of the qualitative research designs. It describes how grounded theory generates from data. Three phases of grounded theory--open coding, axial coding, and selective coding--are discussed, along with some of the issues which are the source of debate among grounded theorists, especially between its…
Dressing the post-Newtonian two-body problem and classical effective field theory
NASA Astrophysics Data System (ADS)
Kol, Barak; Smolkin, Michael
2009-12-01
We apply a dressed perturbation theory to better organize and economize the computation of high orders of the 2-body effective action of an inspiralling post-Newtonian (PN) gravitating binary. We use the effective field theory approach with the nonrelativistic field decomposition (NRG fields). For that purpose we develop quite generally the dressing theory of a nonlinear classical field theory coupled to pointlike sources. We introduce dressed charges and propagators, but unlike the quantum theory there are no dressed bulk vertices. The dressed quantities are found to obey recursive integral equations which succinctly encode parts of the diagrammatic expansion, and are the classical version of the Schwinger-Dyson equations. Actually, the classical equations are somewhat stronger since they involve only finitely many quantities, unlike the quantum theory. Classical diagrams are shown to factorize exactly when they contain nonlinear worldline vertices, and we classify all the possible topologies of irreducible diagrams for low loop numbers. We apply the dressing program to our post-Newtonian case of interest. The dressed charges consist of the dressed energy-momentum tensor after a nonrelativistic decomposition, and we compute all dressed charges (in the harmonic gauge) appearing up to 2PN in the 2-body effective action (and more). We determine the irreducible skeleton diagrams up to 3PN and we employ the dressed charges to compute several terms beyond 2PN.
Bosonic Loop Diagrams as Perturbative Solutions of the Classical Field Equations in ϕ4-Theory
NASA Astrophysics Data System (ADS)
Finster, Felix; Tolksdorf, Jürgen
2012-05-01
Solutions of the classical ϕ4-theory in Minkowski space-time are analyzed in a perturbation expansion in the nonlinearity. Using the language of Feynman diagrams, the solution of the Cauchy problem is expressed in terms of tree diagrams which involve the retarded Green's function and have one outgoing leg. In order to obtain general tree diagrams, we set up a "classical measurement process" in which a virtual observer of a scattering experiment modifies the field and detects suitable energy differences. By adding a classical stochastic background field, we even obtain all loop diagrams. The expansions are compared with the standard Feynman diagrams of the corresponding quantum field theory.
ERIC Educational Resources Information Center
Davis, Colin J.; Bowers, Jeffrey S.
2006-01-01
Five theories of how letter position is coded are contrasted: position-specific slot-coding, Wickelcoding, open-bigram coding (discrete and continuous), and spatial coding. These theories make different predictions regarding the relative similarity of three different types of pairs of letter strings: substitution neighbors,…
Configuration interaction in charge exchange spectra of tin and xenon
NASA Astrophysics Data System (ADS)
D'Arcy, R.; Morris, O.; Ohashi, H.; Suda, S.; Tanuma, H.; Fujioka, S.; Nishimura, H.; Nishihara, K.; Suzuki, C.; Kato, T.; Koike, F.; O'Sullivan, G.
2011-06-01
Charge-state-specific extreme ultraviolet spectra from both tin ions and xenon ions have been recorded at Tokyo Metropolitan University. The electron cyclotron resonance source spectra were produced from charge exchange collisions between the ions and rare gas target atoms. To identify unknown spectral lines of tin and xenon, atomic structure calculations were performed for Sn14+-Sn17+ and Xe16+-Xe20+ using the Hartree-Fock configuration interaction code of Cowan (1981 The Theory of Atomic Structure and Spectra (Berkeley, CA: University of California Press)). The energies of the capture states involved in the single-electron process that occurs in these slow collisions were estimated using the classical over-barrier model.
Suburban queer: reading Grease.
Borgstrom, Michael
2011-01-01
This article examines how nontraditional representations of gender can complicate received social norms, and it examines, in particular, how pre-adolescent suburban youth reconfigure social codes within popular film in order to identify positive queer aesthetics. While several studies have documented the function of classic and mainstream film in the tradition of queer reading, there has been comparatively less analysis devoted to the ways that filmic representations themselves might contribute to theoretical debates regarding sexual identity. As a case in point, this essay analyzes the 1978 musical Grease in order to suggest ways that critics might navigate between strict social constructionist and essentialist theories of sexual identity in order to identify avenues for queer identification within non-queer contexts.
Lattice Truss Structural Response Using Energy Methods
NASA Technical Reports Server (NTRS)
Kenner, Winfred Scottson
1996-01-01
A deterministic methodology is presented for developing closed-form deflection equations for two-dimensional and three-dimensional lattice structures. Four types of lattice structures are studied: beams, plates, shells and soft lattices. Castigliano's second theorem, which entails the total strain energy of a structure, is utilized to generate highly accurate results. Derived deflection equations provide new insight into the bending and shear behavior of the four types of lattices, in contrast to classic solutions of similar structures. Lattice derivations utilizing kinetic energy are also presented, and used to examine the free vibration response of simple lattice structures. Derivations utilizing finite element theory for unique lattice behavior are also presented and validated using the finite element analysis code EAL.
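Castigliano-style energy methods of the kind described reduce, for axially loaded pin-jointed members, to the unit-load sum δ = Σ Nᵢ·nᵢ·Lᵢ/(Aᵢ·Eᵢ). A minimal sketch for a hypothetical symmetric two-bar truss; the geometry, load and material values are invented, not from the thesis:

```python
# Sketch: tip deflection of a pin-jointed truss via the unit-load
# (Castigliano) energy method: delta = sum(N_i * n_i * L_i / (A_i * E_i)).
# All geometry, loads, and section properties below are hypothetical.
import math

def axial_deflection(members):
    """members: iterable of (N, n, L, A, E) with
    N = member force under the real load,
    n = member force under a unit load at the point/direction of interest."""
    return sum(N * n * L / (A * E) for (N, n, L, A, E) in members)

# Symmetric two-bar truss, bars at 45 degrees, vertical load P at the apex.
P = 10e3                 # N
E = 200e9                # Pa (steel)
A = 1e-4                 # m^2
L = 2.0 * math.sqrt(2)   # m, bar length
N = P / math.sqrt(2)     # force in each bar under P (joint equilibrium)
n = 1.0 / math.sqrt(2)   # force in each bar under a unit apex load

delta = axial_deflection([(N, n, L, A, E), (N, n, L, A, E)])
print(delta)  # vertical apex deflection in metres; here equals P*L/(A*E)
```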
How Settings Change People: Applying Behavior Setting Theory to Consumer-Run Organizations
ERIC Educational Resources Information Center
Brown, Louis D.; Shepherd, Matthew D.; Wituk, Scott A.; Meissen, Greg
2007-01-01
Self-help initiatives stand as a classic context for organizational studies in community psychology. Behavior setting theory stands as a classic conception of organizations and the environment. This study explores both, applying behavior setting theory to consumer-run organizations (CROs). Analysis of multiple data sets from all CROs in Kansas…
ERIC Educational Resources Information Center
Langdale, John A.
The construct of "organizational climate" was explicated and various ways of operationalizing it were reviewed. A survey was made of the literature pertinent to the classical-human relations dimension of environmental quality. As a result, it was hypothesized that the appropriateness of the classical and human-relations master plans is moderated…
The evolving Planck mass in classically scale-invariant theories
NASA Astrophysics Data System (ADS)
Kannike, K.; Raidal, M.; Spethmann, C.; Veermäe, H.
2017-04-01
We consider classically scale-invariant theories with non-minimally coupled scalar fields, where the Planck mass and the hierarchy of physical scales are dynamically generated. The classical theories possess a fixed point, where scale invariance is spontaneously broken. In these theories, however, the Planck mass becomes unstable in the presence of explicit sources of scale invariance breaking, such as non-relativistic matter and cosmological constant terms. We quantify the constraints on such classical models from Big Bang Nucleosynthesis that lead to an upper bound on the non-minimal coupling and require trans-Planckian field values. We show that quantum corrections to the scalar potential can stabilise the fixed point close to the minimum of the Coleman-Weinberg potential. The time-averaged motion of the evolving fixed point is strongly suppressed, thus the limits on the evolving gravitational constant from Big Bang Nucleosynthesis and other measurements do not presently constrain this class of theories. Field oscillations around the fixed point, if not damped, contribute to the dark matter density of the Universe.
New GOES satellite synchronized time code generation
NASA Technical Reports Server (NTRS)
Fossler, D. E.; Olson, R. K.
1984-01-01
The TRAK Systems' GOES Satellite Synchronized Time Code Generator is described. TRAK Systems has developed this timing instrument to supply improved accuracy over most existing GOES receiver clocks. A classical time code generator is integrated with a GOES receiver.
NASA Technical Reports Server (NTRS)
McGowan, David M.; Anderson, Melvin S.
1998-01-01
The analytical formulation of curved-plate non-linear equilibrium equations that include transverse-shear-deformation effects is presented. A unified set of non-linear strains that contains terms from both physical and tensorial strain measures is used. Using several simplifying assumptions, linearized stability equations are derived that describe the response of the plate just after bifurcation buckling occurs. These equations are then modified to allow the plate reference surface to be located a distance z(c) from the centroid surface, which is convenient for modeling stiffened-plate assemblies. The implementation of the new theory into the VICONOPT buckling and vibration analysis and optimum design program code is described. Either classical plate theory (CPT) or first-order shear-deformation plate theory (SDPT) may be selected in VICONOPT. Comparisons of numerical results for several example problems with different loading states are made. Results from the new curved-plate analysis compare well with closed-form solution results and with results from known example problems in the literature. Finally, a design-optimization study of two different cylindrical shells subject to uniform axial compression is presented.
DOE R&D Accomplishments Database
Weinberg, Alvin M.; Noderer, L. C.
1951-05-15
The large scale release of nuclear energy in a uranium fission chain reaction involves two essentially distinct physical phenomena. On the one hand there are the individual nuclear processes such as fission, neutron capture, and neutron scattering. These are essentially quantum mechanical in character, and their theory is non-classical. On the other hand, there is the process of diffusion -- in particular, diffusion of neutrons, which is of fundamental importance in a nuclear chain reaction. This process is classical; insofar as the theory of the nuclear chain reaction depends on the theory of neutron diffusion, the mathematical study of chain reactions is an application of classical, not quantum mechanical, techniques.
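The classical diffusion side of this picture can be illustrated by the textbook one-group criticality condition for a bare slab, k_eff = k∞/(1 + L²B²) with geometric buckling B = π/a for (extrapolated) thickness a. A sketch with hypothetical material constants:

```python
# Sketch: one-group neutron-diffusion criticality for a bare slab,
#   k_eff = k_inf / (1 + L2 * B^2),  B = pi / a  (slab geometric buckling).
# The material constants K_INF and L2 below are hypothetical.
import math

K_INF = 1.3    # infinite-medium multiplication factor (hypothetical)
L2 = 4.0e-4    # diffusion area L^2 [m^2] (hypothetical)

def k_eff(thickness):
    buckling2 = (math.pi / thickness) ** 2
    return K_INF / (1.0 + L2 * buckling2)

# Critical thickness: k_eff = 1  =>  B^2 = (k_inf - 1) / L^2.
a_crit = math.pi / math.sqrt((K_INF - 1.0) / L2)
print(round(a_crit, 4), round(k_eff(a_crit), 6))  # k_eff = 1 at a_crit
```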
Deterministic quantum dense coding networks
NASA Astrophysics Data System (ADS)
Roy, Saptarshi; Chanda, Titas; Das, Tamoghna; Sen(De), Aditi; Sen, Ujjwal
2018-07-01
We consider the scenario of deterministic classical information transmission between multiple senders and a single receiver, when they a priori share a multipartite quantum state - an attempt towards building a deterministic dense coding network. Specifically, we prove that in the case of two or three senders and a single receiver, generalized Greenberger-Horne-Zeilinger (gGHZ) states are not beneficial for sending classical information deterministically beyond the classical limit, except when the shared state is the GHZ state itself. On the other hand, three- and four-qubit generalized W (gW) states with specific parameters as well as the four-qubit Dicke states can provide a quantum advantage of sending the information in deterministic dense coding. Interestingly however, numerical simulations in the three-qubit scenario reveal that the percentage of states from the GHZ-class that are deterministic dense codeable is higher than that of states from the W-class.
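The deterministic dense coding that the gGHZ/gW analysis generalizes is, in the bipartite case, the standard Bell-pair protocol: Alice's four Pauli encodings yield four mutually orthogonal two-qubit states, so Bob's Bell measurement recovers two classical bits per transmitted qubit. A minimal numerical check of that orthogonality:

```python
# Sketch: standard two-qubit dense coding. Alice applies I, X, Z, or XZ
# to her half of a Bell pair; the four resulting states are the four
# Bell states, which are pairwise orthogonal and hence perfectly
# distinguishable by Bob's joint measurement.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)

encodings = [np.kron(P, I) @ bell for P in (I, X, Z, X @ Z)]

# Gram matrix of overlaps: identity <=> the four states are orthonormal.
gram = np.abs(np.array([[u.conj() @ v for v in encodings] for u in encodings]))
print(np.round(gram, 12))
```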
Exploring Ultrahigh-Intensity Laser-Plasma Interaction Physics with QED Particle-in-Cell Simulations
NASA Astrophysics Data System (ADS)
Luedtke, S. V.; Yin, L.; Labun, L. A.; Albright, B. J.; Stark, D. J.; Bird, R. F.; Nystrom, W. D.; Hegelich, B. M.
2017-10-01
Next-generation high-intensity lasers are reaching intensity regimes where new physics, namely quantum electrodynamics (QED) corrections to otherwise classical plasma dynamics, becomes important. Modeling laser-plasma interactions in these extreme settings presents a challenge to traditional particle-in-cell (PIC) codes, which either do not have radiation reaction or include only classical radiation reaction. We discuss a semi-classical approach to adding quantum radiation reaction and photon production to the PIC code VPIC. We explore these intensity regimes with VPIC, compare with results from the PIC code PSC, and report on ongoing work to expand the capability of VPIC in these regimes. This work was supported by the U.S. DOE, Los Alamos National Laboratory Science program, LDRD program, NNSA (DE-NA0002008), and AFOSR (FA9550-14-1-0045). HPC resources provided by TACC, XSEDE, and LANL Institutional Computing.
Remote sensing of the solar photosphere: a tale of two methods
NASA Astrophysics Data System (ADS)
Viavattene, G.; Berrilli, F.; Collados Vera, M.; Del Moro, D.; Giovannelli, L.; Ruiz Cobo, B.; Zuccarello, F.
2018-01-01
Solar spectro-polarimetry is a powerful tool to investigate the physical processes occurring in the solar atmosphere. The different polarization states and wavelengths in fact encode information about the thermodynamic state of the solar plasma and the interacting magnetic field. In particular, radiative transfer theory allows us to invert spectro-polarimetric data to obtain the physical parameters of the different atmospheric layers and, especially, of the photosphere. In this work, we present a comparison between two methods used to analyze spectro-polarimetric data: the classical Center of Gravity method in the weak-field approximation and an inversion code that numerically solves the radiative transfer equation. The Center of Gravity method returns reliable values for the magnetic field and the line-of-sight velocity in those regions where the weak-field approximation is valid (field strength below 400 G), while the inversion code is able to return the stratification of many physical parameters in the layers where the spectral line used for the inversion is formed.
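The Center of Gravity method described above amounts to an intensity-weighted centroid of the line profile: λ_cog = Σ λᵢ(I_c − Iᵢ)/Σ(I_c − Iᵢ), then v_los = c·(λ_cog − λ₀)/λ₀. A sketch recovering an imposed Doppler shift from a synthetic absorption line; the rest wavelength and line parameters are invented for illustration:

```python
# Sketch: line-of-sight velocity via the center-of-gravity (COG) method
# on a synthetic Gaussian absorption line. LAM0 and the line shape are
# hypothetical illustrative values.
import math

C_KMS = 299792.458   # speed of light [km/s]
LAM0 = 6302.5        # rest wavelength [Angstrom], hypothetical

def cog_velocity(lams, intensities, continuum):
    depth = [continuum - i for i in intensities]          # line depression
    lam_cog = sum(l * d for l, d in zip(lams, depth)) / sum(depth)
    return C_KMS * (lam_cog - LAM0) / LAM0                # km/s

# Synthetic Gaussian line Doppler-shifted by +2 km/s.
shift = LAM0 * 2.0 / C_KMS
lams = [LAM0 - 1.0 + 0.01 * k for k in range(201)]
prof = [1.0 - 0.5 * math.exp(-((l - LAM0 - shift) / 0.08) ** 2) for l in lams]

print(round(cog_velocity(lams, prof, 1.0), 3))  # recovers ~2.0 km/s
```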
Classical conformality in the Standard Model from Coleman’s theory
NASA Astrophysics Data System (ADS)
Kawana, Kiyoharu
2016-09-01
Classical conformality (CC) is one of the possible candidates for explaining the gauge hierarchy of the Standard Model (SM). We show that it is naturally obtained from Coleman's theory of baby universes.
NASA Astrophysics Data System (ADS)
Baumeler, Ämin; Feix, Adrien; Wolf, Stefan
2014-10-01
Quantum theory in a global spacetime gives rise to nonlocal correlations, which cannot be explained causally in a satisfactory way; this motivates the study of theories with reduced global assumptions. Oreshkov, Costa, and Brukner [Nat. Commun. 3, 1092 (2012), 10.1038/ncomms2076] proposed a framework in which quantum theory is valid locally but where, at the same time, no global spacetime, i.e., predefined causal order, is assumed beyond the absence of logical paradoxes. It was shown for the two-party case, however, that a global causal order always emerges in the classical limit. Quite naturally, it has been conjectured that the same also holds in the multiparty setting. We show that, counter to this belief, classical correlations locally compatible with classical probability theory exist that allow for deterministic signaling between three or more parties incompatible with any predefined causal order.
A post-classical theory of enamel biomineralization… and why we need one.
Simmer, James P; Richardson, Amelia S; Hu, Yuan-Yuan; Smith, Charles E; Ching-Chun Hu, Jan
2012-09-01
Enamel crystals are unique in shape, orientation and organization. They are hundreds of thousands of times longer than they are wide, run parallel to each other, are oriented with respect to the ameloblast membrane at the mineralization front and are organized into rod or interrod enamel. The classical theory of amelogenesis postulates that extracellular matrix proteins shape crystallites by specifically inhibiting ion deposition on the crystal sides, orient them by binding multiple crystallites and establish higher levels of crystal organization. Elements of the classical theory are supported in principle by in vitro studies; however, the classical theory does not explain how enamel forms in vivo. In this review, we describe how amelogenesis is highly integrated with ameloblast cell activities and how the shape, orientation and organization of enamel mineral ribbons are established by a mineralization front apparatus along the secretory surface of the ameloblast cell membrane.
Horodecki, Michał; Oppenheim, Jonathan; Winter, Andreas
2005-08-04
Information--be it classical or quantum--is measured by the amount of communication needed to convey it. In the classical case, if the receiver has some prior information about the messages being conveyed, less communication is needed. Here we explore the concept of prior quantum information: given an unknown quantum state distributed over two systems, we determine how much quantum communication is needed to transfer the full state to one system. This communication measures the partial information one system needs, conditioned on its prior information. We find that it is given by the conditional entropy--a quantity that was known previously, but lacked an operational meaning. In the classical case, partial information must always be positive, but we find that in the quantum world this physical quantity can be negative. If the partial information is positive, its sender needs to communicate this number of quantum bits to the receiver; if it is negative, then sender and receiver instead gain the corresponding potential for future quantum communication. We introduce a protocol that we term 'quantum state merging' which optimally transfers partial information. We show how it enables a systematic understanding of quantum network theory, and discuss several important applications including distributed compression, noiseless coding with side information, multiple access channels and assisted entanglement distillation.
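The negativity of quantum partial information described in this abstract can be checked numerically. A minimal sketch, using the identification of partial information with the conditional entropy S(A|B) = S(AB) - S(B): for a maximally entangled Bell pair the quantity comes out negative, while a classically correlated state gives zero.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits; zero eigenvalues contribute nothing."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def conditional_entropy(rho_ab, dim_a, dim_b):
    """S(A|B) = S(AB) - S(B), the 'partial information' of the abstract."""
    rho = rho_ab.reshape(dim_a, dim_b, dim_a, dim_b)
    rho_b = np.trace(rho, axis1=0, axis2=2)   # partial trace over system A
    return von_neumann_entropy(rho_ab) - von_neumann_entropy(rho_b)

# Maximally entangled Bell pair: S(A|B) = -1 (negative partial information).
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)            # (|00> + |11>)/sqrt(2)
rho_bell = np.outer(bell, bell)
print(conditional_entropy(rho_bell, 2, 2))    # approx -1.0

# Classically correlated state: S(A|B) = 0 (never negative classically).
rho_cc = np.diag([0.5, 0.0, 0.0, 0.5])
print(conditional_entropy(rho_cc, 2, 2))      # approx 0.0
```

The negative value for the Bell pair is exactly the abstract's point: the "sender" owes no qubits and instead gains potential for future quantum communication.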
A network coding based routing protocol for underwater sensor networks.
Wu, Huayang; Chen, Min; Guan, Xin
2012-01-01
Due to the particularities of the underwater environment, several negative factors seriously interfere with data transmission rates, reliability of data communication, communication range, network throughput and energy consumption of underwater sensor networks (UWSNs). Thus, when routing protocols for underwater sensor networks are designed, it is essential to conserve node energy and extend the network life cycle while maintaining quick, correct and effective data transmission. In this paper, we propose a novel routing algorithm for UWSNs. To increase energy efficiency and extend network lifetime, we propose a time-slot based routing algorithm (TSBR) and design a probability-balanced mechanism that is applied to it. The theory of network coding is then introduced to TSBR to further reduce node energy consumption and extend network lifetime; hence, time-slot based balanced network coding (TSBNC) comes into being. We evaluated the proposed algorithm and compared it with other classical underwater routing protocols. The simulation results show that the proposed protocol can reduce the probability of node conflicts, shorten the process of routing construction, balance the energy consumption of each node and effectively prolong the network lifetime.
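The energy saving that network coding brings to a routing protocol can be illustrated with the textbook XOR relay trick: one coded transmission replaces two plain ones. The topology and packet contents below are illustrative, not taken from the TSBNC protocol itself.

```python
# Minimal network-coding sketch: a relay XORs two packets so each
# downstream node recovers the one it is missing, saving a transmission.
# Nodes, packets and overhearing pattern are hypothetical.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a = b"\x12\x34\x56\x78"       # packet from sensor A
pkt_b = b"\x9a\xbc\xde\xf0"       # packet from sensor B

coded = xor_bytes(pkt_a, pkt_b)   # single coded transmission from the relay

# Sink 1 already overheard pkt_a, sink 2 already overheard pkt_b.
recovered_b = xor_bytes(coded, pkt_a)
recovered_a = xor_bytes(coded, pkt_b)
assert recovered_a == pkt_a and recovered_b == pkt_b
print("both packets recovered from one coded transmission")
```

In an energy-constrained UWSN, every transmission avoided at the relay directly extends node lifetime, which is the motivation for combining coding with the time-slot scheduling described above.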
MAC/GMC 4.0 User's Manual: Example Problem Manual. Volume 3
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2002-01-01
This document is the third volume in the three volume set of User's Manuals for the Micromechanics Analysis Code with Generalized Method of Cells Version 4.0 (MAC/GMC 4.0). Volume 1 is the Theory Manual, Volume 2 is the Keywords Manual, and this document is the Example Problems Manual. MAC/GMC 4.0 is a composite material and laminate analysis software program developed at the NASA Glenn Research Center. It is based on the generalized method of cells (GMC) micromechanics theory, which provides access to the local stress and strain fields in the composite material. This access grants GMC the ability to accommodate arbitrary local models for inelastic material behavior and various types of damage and failure analysis. MAC/GMC 4.0 has been built around GMC to provide the theory with a user-friendly framework, along with a library of local inelastic, damage, and failure models. Further, application of simulated thermo-mechanical loading, generation of output results, and selection of architectures to represent the composite material, have been automated in MAC/GMC 4.0. Finally, classical lamination theory has been implemented within MAC/GMC 4.0 wherein GMC is used to model the composite material response of each ply. Consequently, the full range of GMC composite material capabilities is available for analysis of arbitrary laminate configurations as well. This volume provides in-depth descriptions of 43 example problems, which were specially designed to highlight many of the most important capabilities of the code. The actual input files associated with each example problem are distributed with the MAC/GMC 4.0 software; thus providing the user with a convenient starting point for their own specialized problems of interest.
Statistical mechanics in the context of special relativity. II.
Kaniadakis, G
2005-09-01
The special relativity laws emerge as one-parameter (light speed) generalizations of the corresponding laws of classical physics. These generalizations, imposed by the Lorentz transformations, affect both the definition of the various physical observables (e.g., momentum, energy, etc.), as well as the mathematical apparatus of the theory. Here, following the general lines of [Phys. Rev. E 66, 056125 (2002)], we show that the Lorentz transformations impose also a proper one-parameter generalization of the classical Boltzmann-Gibbs-Shannon entropy. The obtained relativistic entropy permits us to construct a coherent and self-consistent relativistic statistical theory, preserving the main features of the ordinary statistical theory, which is recovered in the classical limit. The predicted distribution function is a one-parameter continuous deformation of the classical Maxwell-Boltzmann distribution and has a simple analytic form, showing power law tails in accordance with the experimental evidence. Furthermore, this statistical mechanics can be obtained as the stationary case of a generalized kinetic theory governed by an evolution equation obeying the H theorem and reproducing the Boltzmann equation of the ordinary kinetics in the classical limit.
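The one-parameter deformation described above is built on the Kaniadakis κ-exponential. A quick sketch of its two defining features, the classical limit κ → 0 and the power-law tail; the parameter values are illustrative.

```python
import math

# Kaniadakis kappa-exponential: exp_k(x) = (sqrt(1 + k^2 x^2) + k x)^(1/k).
# As kappa -> 0 it reduces to exp(x); for large x it behaves as a power law
# ~ (2*kappa*x)^(1/kappa), matching the heavy tails noted in the abstract.

def exp_kappa(x, kappa):
    if kappa == 0.0:
        return math.exp(x)
    return (math.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

# Classical limit: small kappa approaches the ordinary exponential.
print(abs(exp_kappa(1.0, 1e-6) - math.e) < 1e-4)        # True

# Power-law tail: log exp_kappa(x) / log x tends to 1/kappa for large x.
x = 1e6
ratio = math.log(exp_kappa(x, 0.5)) / math.log(2 * 0.5 * x)
print(round(ratio, 2))                                   # approx 2.0 = 1/kappa
```

Replacing exp with exp_kappa in the Maxwell-Boltzmann weight yields the deformed distribution whose power-law tails the abstract compares with experiment.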
Giulio, Massimo Di
2018-05-19
A discriminative statistical test among the different theories proposed to explain the origin of the genetic code is presented. The amino acids are gathered into polarity classes, which express the physicochemical theory of the origin of the genetic code, and into biosynthetic classes, which express the coevolution theory; these classes are then used in Fisher's exact test to establish their significance within the genetic code table. By attaching to the rows and columns of the genetic code the probabilities that express the statistical significance of these classes, I was finally able to calculate a χ value for the physicochemical theory and another for the coevolution theory, each expressing the level of corroboration of that theory. The comparison between these two χ values showed that, in this strictly empirical analysis, the coevolution theory explains the origin of the genetic code better than the physicochemical theory does. Copyright © 2018 Elsevier B.V. All rights reserved.
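Fisher's exact test, the statistical engine of this analysis, can be sketched from first principles for a 2×2 table. The counts below are hypothetical, standing in for "members of an amino-acid class inside versus outside one column of the code table"; they are not the paper's actual data.

```python
from math import comb

# Two-sided Fisher's exact test on a 2x2 contingency table
#   [[a, b],
#    [c, d]]
# using the hypergeometric distribution of the top-left cell given the margins.

def fisher_exact_2x2(a, b, c, d):
    """p-value: total probability of tables at least as extreme as observed."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(x):
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo = max(0, row1 + col1 - n)
    hi = min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Hypothetical table: 8 of 10 class members fall in one block of the code
# table, versus 1 of 10 non-members.
p = fisher_exact_2x2(8, 2, 1, 9)
print(round(p, 4))   # small p: the clustering is unlikely by chance
```

A small p-value for a class says its members cluster in the code table more than chance allows, which is exactly the kind of signal the abstract aggregates into its χ scores for the two theories.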
Single-shot secure quantum network coding on butterfly network with free public communication
NASA Astrophysics Data System (ADS)
Owari, Masaki; Kato, Go; Hayashi, Masahito
2018-01-01
Quantum network coding on the butterfly network has been studied as a typical example of a quantum multiple-cast network. We propose a secure quantum network code for the butterfly network with free public classical communication in the multiple-unicast setting, under a restriction on the eavesdropper's power. This protocol certainly transmits quantum states when there is no attack. We also show secrecy, with shared randomness as an additional resource, when the eavesdropper wiretaps one of the channels in the butterfly network and also obtains the information sent through public classical communication. Our protocol does not require a verification process, which ensures single-shot security.
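The role of shared randomness in hiding the bottleneck traffic can be sketched classically. In the plain butterfly network the bottleneck edge carries the XOR of the two messages; masking it with a shared random pad makes that edge useless to a wiretapper. Single-bit messages and a heavily simplified topology are used for illustration; this is not the quantum protocol itself.

```python
import secrets

# Classical sketch of the secrecy idea: the bottleneck edge of the butterfly
# network carries m1 XOR m2, and a pre-shared random pad hides even that.

def butterfly_bottleneck(m1, m2, pad):
    return m1 ^ m2 ^ pad              # what a wiretapper on the edge observes

def sink1_decode(bottleneck, m1_direct, pad):
    return bottleneck ^ m1_direct ^ pad   # sink 1 knows m1 and the pad -> m2

pad = secrets.randbits(1)             # shared randomness (additional resource)
m1, m2 = 1, 0
tap = butterfly_bottleneck(m1, m2, pad)
assert sink1_decode(tap, m1, pad) == m2
# Without the pad, tap = m1 ^ m2 reveals whether the two messages agree;
# with the pad, tap is uniformly random whatever (m1, m2) may be.
print("sink decodes correctly; tapped bit is uniformly random")
```

The quantum protocol in the abstract plays an analogous game: shared randomness plus free public classical communication neutralize a wiretapper on any single channel.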
The polarization signature from the circumstellar disks of classical Be stars
NASA Astrophysics Data System (ADS)
Halonen, R. J.; Jones, C. E.
2012-05-01
The scattering of light in the nonspherical circumstellar envelopes of classical Be stars produces distinct polarimetric properties that can be used to investigate the physical nature of the scattering environment. Both the continuum and emission line polarization are potentially important diagnostic tools in the modeling of these systems. We combine the use of a new multiple scattering code with an established non-LTE radiative transfer code to study the characteristic wavelength-dependence of the intrinsic polarization of classical Be stars. We construct models using realistic chemical composition and self-consistent calculations of the thermal structure of the disk, and then determine the fraction of emergent polarized light. In particular, the aim of this theoretical research project is to investigate the effect of gas density and metallicity on the observed polarization properties of classical Be stars.
Vinck, Martin; Bosman, Conrado A.
2016-01-01
During visual stimulation, neurons in visual cortex often exhibit rhythmic and synchronous firing in the gamma-frequency (30–90 Hz) band. Whether this phenomenon plays a functional role during visual processing is not fully clear and remains heavily debated. In this article, we explore the function of gamma-synchronization in the context of predictive and efficient coding theories. These theories hold that sensory neurons utilize the statistical regularities in the natural world in order to improve the efficiency of the neural code, and to optimize the inference of the stimulus causes of the sensory data. In visual cortex, this relies on the integration of classical receptive field (CRF) data with predictions from the surround. Here we outline two main hypotheses about gamma-synchronization in visual cortex. First, we hypothesize that the precision of gamma-synchronization reflects the extent to which CRF data can be accurately predicted by the surround. Second, we hypothesize that different cortical columns synchronize to the extent that they accurately predict each other’s CRF visual input. We argue that these two hypotheses can account for a large number of empirical observations made on the stimulus dependencies of gamma-synchronization. Furthermore, we show that they are consistent with the known laminar dependencies of gamma-synchronization and the spatial profile of intercolumnar gamma-synchronization, as well as the dependence of gamma-synchronization on experience and development. Based on our two main hypotheses, we outline two additional hypotheses. First, we hypothesize that the precision of gamma-synchronization shows, in general, a negative dependence on RF size. In support, we review evidence showing that gamma-synchronization decreases in strength along the visual hierarchy, and tends to be more prominent in species with small V1 RFs. 
Second, we hypothesize that gamma-synchronized network dynamics facilitate the emergence of spiking output that is particularly information-rich and sparse. PMID:27199684
Towers of generalized divisible quantum codes
NASA Astrophysics Data System (ADS)
Haah, Jeongwan
2018-04-01
A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is 2^ν for a positive integer ν, then one can construct a Calderbank-Shor-Steane (CSS) code, where the X-stabilizer space is the divisible classical code, that admits a transversal gate in the νth level of the Clifford hierarchy. We consider a generalization of the divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor 2^{ν+1} and code distance d from any CSS code of code distance d and divisor 2^ν where the transversal X is a nontrivial logical operator. The encoding rate of the new code is approximately d times smaller than that of the old code. In particular, for large d and ν ≥ 2, our construction yields a CSS code of parameters [[O(d^{ν-1}), Ω(d), d]] admitting a transversal gate at the νth level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal T gates. Our tower contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.
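The divisibility property at the heart of this construction is easy to check for a small classical code. The sketch below enumerates all codeword weights of the [8, 4] first-order Reed-Muller code RM(1,3), every one of which is divisible by 2^2 (the case ν = 2 in the abstract's notation).

```python
from itertools import product

# Enumerate codeword weights of RM(1,3), a standard [8, 4] binary code.
# Its generator matrix consists of the all-ones vector and the three
# coordinate functions on the 3-cube.
G = [
    [1, 1, 1, 1, 1, 1, 1, 1],   # all-ones vector
    [0, 0, 0, 0, 1, 1, 1, 1],   # x1
    [0, 0, 1, 1, 0, 0, 1, 1],   # x2
    [0, 1, 0, 1, 0, 1, 0, 1],   # x3
]

weights = set()
for coeffs in product([0, 1], repeat=len(G)):
    word = [sum(c * g[i] for c, g in zip(coeffs, G)) % 2 for i in range(8)]
    weights.add(sum(word))

print(sorted(weights))   # [0, 4, 8]: every weight divisible by 2**2
```

Since every weight is a multiple of 4 = 2^2, this code is divisible with divisor 2^2; by the statement in the abstract, a CSS code built on such a classical code admits a transversal gate at level 2 of the Clifford hierarchy.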
Leading-order classical Lagrangians for the nonminimal standard-model extension
NASA Astrophysics Data System (ADS)
Reis, J. A. A. S.; Schreck, M.
2018-03-01
In this paper, we derive the general leading-order classical Lagrangian covering all fermion operators of the nonminimal standard-model extension (SME). Such a Lagrangian is considered to be the point-particle analog of the effective field theory description of Lorentz violation that is provided by the SME. At leading order in Lorentz violation, the Lagrangian obtained satisfies the set of five nonlinear equations that govern the map from the field theory to the classical description. This result can be of use for phenomenological studies of classical bodies in gravitational fields.
JOURNAL SCOPE GUIDELINES: Paper classification scheme
NASA Astrophysics Data System (ADS)
2005-06-01
This scheme is used to clarify the journal's scope and enable authors and readers to more easily locate the appropriate section for their work. For each of the sections listed in the scope statement we suggest some more detailed subject areas which help define that subject area. These lists are by no means exhaustive and are intended only as a guide to the type of papers we envisage appearing in each section. We acknowledge that no classification scheme can be perfect and that there are some papers which might be placed in more than one section. We are happy to provide further advice on paper classification to authors upon request (please email jphysa@iop.org).
1. Statistical physics: numerical and computational methods; statistical mechanics, phase transitions and critical phenomena; quantum condensed matter theory; Bose-Einstein condensation; strongly correlated electron systems; exactly solvable models in statistical mechanics; lattice models, random walks and combinatorics; field-theoretical models in statistical mechanics; disordered systems, spin glasses and neural networks; nonequilibrium systems; network theory.
2. Chaotic and complex systems: nonlinear dynamics and classical chaos; fractals and multifractals; quantum chaos; classical and quantum transport; cellular automata; granular systems and self-organization; pattern formation; biophysical models.
3. Mathematical physics: combinatorics; algebraic structures and number theory; matrix theory; classical and quantum groups, symmetry and representation theory; Lie algebras, special functions and orthogonal polynomials; ordinary and partial differential equations; difference and functional equations; integrable systems; soliton theory; functional analysis and operator theory; inverse problems; geometry, differential geometry and topology; numerical approximation and analysis; geometric integration; computational methods.
4. Quantum mechanics and quantum information theory: coherent states; eigenvalue problems; supersymmetric quantum mechanics; scattering theory; relativistic quantum mechanics; semiclassical approximations; foundations of quantum mechanics and measurement theory; entanglement and quantum nonlocality; geometric phases and quantum tomography; quantum tunnelling; decoherence and open systems; quantum cryptography, communication and computation; theoretical quantum optics.
5. Classical and quantum field theory: quantum field theory; gauge and conformal field theory; quantum electrodynamics and quantum chromodynamics; Casimir effect; integrable field theory; random matrix theory applications in field theory; string theory and its developments; classical field theory and electromagnetism; metamaterials.
6. Fluid and plasma theory: turbulence; fundamental plasma physics; kinetic theory; magnetohydrodynamics and multifluid descriptions; strongly coupled plasmas; one-component plasmas; non-neutral plasmas; astrophysical and dusty plasmas.
Quasi-Static Analysis of Round LaRC THUNDER Actuators
NASA Technical Reports Server (NTRS)
Campbell, Joel F.
2007-01-01
An analytic approach is developed to predict the shape and displacement with voltage in the quasi-static limit of round LaRC Thunder Actuators. The problem is treated with classical lamination theory and Von Karman non-linear analysis. In the case of classical lamination theory exact analytic solutions are found. It is shown that classical lamination theory is insufficient to describe the physical situation for large actuators but is sufficient for very small actuators. Numerical results are presented for the non-linear analysis and compared with experimental measurements. Snap-through behavior, bifurcation, and stability are presented and discussed.
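Classical lamination theory, the analytic backbone of this actuator analysis, reduces a laminate to the A, B, D stiffness matrices assembled from rotated ply stiffnesses. A minimal sketch with illustrative ply properties and a symmetric [0/90]s layup (not the actuator's actual layup or materials), showing that symmetry makes the extension-bending coupling matrix B vanish:

```python
import numpy as np

# Classical lamination theory sketch: assemble A, B, D from ply Q-bar
# matrices. Material constants and layup are illustrative placeholders.
E1, E2, G12, nu12 = 140e9, 10e9, 5e9, 0.3      # orthotropic ply (Pa)
nu21 = nu12 * E2 / E1
den = 1 - nu12 * nu21
Q = np.array([[E1 / den, nu12 * E2 / den, 0],
              [nu12 * E2 / den, E2 / den, 0],
              [0, 0, G12]])

def qbar(theta):
    """Rotate the ply stiffness Q into laminate axes: T^-1 Q R T R^-1."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.array([[c * c, s * s, 2 * c * s],
                  [s * s, c * c, -2 * c * s],
                  [-c * s, c * s, c * c - s * s]])
    R = np.diag([1.0, 1.0, 2.0])
    return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

plies = [0.0, np.pi / 2, np.pi / 2, 0.0]       # symmetric [0/90]s layup
t = 0.125e-3                                   # ply thickness (m)
z = np.linspace(-2 * t, 2 * t, len(plies) + 1) # ply interface coordinates

A = sum(qbar(th) * (z[k + 1] - z[k]) for k, th in enumerate(plies))
B = sum(qbar(th) * (z[k + 1]**2 - z[k]**2) / 2 for k, th in enumerate(plies))
D = sum(qbar(th) * (z[k + 1]**3 - z[k]**3) / 3 for k, th in enumerate(plies))

print(np.allclose(B, 0, atol=1e-3))            # True: symmetric => no coupling
```

For the unsymmetric piezocomposite layups of THUNDER actuators, B is nonzero, which is precisely what produces the curved, dome-like shapes whose large-deflection behavior the abstract says exceeds classical lamination theory.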
Quasi-Static Analysis of LaRC THUNDER Actuators
NASA Technical Reports Server (NTRS)
Campbell, Joel F.
2007-01-01
An analytic approach is developed to predict the shape and displacement with voltage in the quasi-static limit of LaRC Thunder Actuators. The problem is treated with classical lamination theory and Von Karman non-linear analysis. In the case of classical lamination theory exact analytic solutions are found. It is shown that classical lamination theory is insufficient to describe the physical situation for large actuators but is sufficient for very small actuators. Numerical results are presented for the non-linear analysis and compared with experimental measurements. Snap-through behavior, bifurcation, and stability are presented and discussed.
Navigating the grounded theory terrain. Part 2.
Hunter, Andrew; Murphy, Kathy; Grealish, Annmarie; Casey, Dympna; Keady, John
2011-01-01
In this paper, the choice of classic grounded theory will be discussed and justified in the context of the first author's PhD research. The methodological discussion takes place within the context of PhD research entitled: Development of a stakeholder-led framework for a structured education programme that will prepare nurses and healthcare assistants to deliver a psychosocial intervention for people with dementia. There is a lack of research on, and limited understanding of, the effect of psychosocial interventions on people with dementia. The first author considered classic grounded theory a suitable research methodology to investigate, as it is held to be ideal for areas of research where there is little understanding of the social processes at work. The literature relating to the practical application of classic grounded theory is illustrated using examples relating to four key grounded theory components: theory development using constant comparison and memoing; methodological rigour; emergence of a core category; and inclusion of self and engagement with participants. Following discussion of the choice and application of classic grounded theory, this paper explores the need for researchers to visit and understand the various grounded theory options. This paper argues that researchers new to grounded theory must be familiar with and understand the various options. The researchers will then be able to apply the methodologies they choose consistently and critically. Doing so will allow them to develop theory rigorously, and they will ultimately be able to better defend their final methodological destinations.
Mathematical model of the SH-3G helicopter
NASA Technical Reports Server (NTRS)
Phillips, J. D.
1982-01-01
A mathematical model of the Sikorsky SH-3G helicopter based on classical nonlinear, quasi-steady rotor theory was developed. The model was validated statically and dynamically by comparison with Navy flight-test data. The model incorporates ad hoc revisions which address the ideal assumptions of classical rotor theory and improve the static trim characteristics to provide a more realistic simulation, while retaining the simplicity of the classical model.
Geometric Algebra for Physicists
NASA Astrophysics Data System (ADS)
Doran, Chris; Lasenby, Anthony
2007-11-01
Preface; Notation; 1. Introduction; 2. Geometric algebra in two and three dimensions; 3. Classical mechanics; 4. Foundations of geometric algebra; 5. Relativity and spacetime; 6. Geometric calculus; 7. Classical electrodynamics; 8. Quantum theory and spinors; 9. Multiparticle states and quantum entanglement; 10. Geometry; 11. Further topics in calculus and group theory; 12. Lagrangian and Hamiltonian techniques; 13. Symmetry and gauge theory; 14. Gravitation; Bibliography; Index.
The Nature of Quantum Truth: Logic, Set Theory, & Mathematics in the Context of Quantum Theory
NASA Astrophysics Data System (ADS)
Frey, Kimberly
The purpose of this dissertation is to construct a radically new type of mathematics whose underlying logic differs from the ordinary classical logic used in standard mathematics, and which we feel may be more natural for applications in quantum mechanics. Specifically, we begin by constructing a first order quantum logic, the development of which closely parallels that of ordinary (classical) first order logic --- the essential differences are in the nature of the logical axioms, which, in our construction, are motivated by quantum theory. After showing that the axiomatic first order logic we develop is sound and complete (with respect to a particular class of models), this logic is then used as a foundation on which to build (axiomatic) mathematical systems --- and we refer to the resulting new mathematics as "quantum mathematics." As noted above, the hope is that this form of mathematics is more natural than classical mathematics for the description of quantum systems, and will enable us to address some foundational aspects of quantum theory which are still troublesome --- e.g. the measurement problem --- as well as possibly even inform our thinking about quantum gravity. After constructing the underlying logic, we investigate properties of several mathematical systems --- e.g. axiom systems for abstract algebras, group theory, linear algebra, etc. --- in the presence of this quantum logic. In the process, we demonstrate that the resulting quantum mathematical systems have some strange, but very interesting features, which indicates a richness in the structure of mathematics that is classically inaccessible. Moreover, some of these features do indeed suggest possible applications to foundational questions in quantum theory. We continue our investigation of quantum mathematics by constructing an axiomatic quantum set theory, which we show satisfies certain desirable criteria. 
Ultimately, we hope that such a set theory will lead to a foundation for quantum mathematics in a sense which parallels the foundational role of classical set theory in classical mathematics. One immediate application of the quantum set theory we develop is to provide a foundation on which to construct quantum natural numbers, which are the quantum analog of the classical counting numbers. It turns out that in a special class of models, there exists a 1-1 correspondence between the quantum natural numbers and bounded observables in quantum theory whose eigenvalues are (ordinary) natural numbers. This 1-1 correspondence is remarkably satisfying, and not only gives us great confidence in our quantum set theory, but indicates the naturalness of such models for quantum theory itself. We go on to develop a Peano-like arithmetic for these new "numbers," as well as consider some of its consequences. Finally, we conclude by summarizing our results, and discussing directions for future work.
Nucleation theory: Is replacement free energy needed? [error analysis of capillary approximation]
NASA Technical Reports Server (NTRS)
Doremus, R. H.
1982-01-01
It has been suggested that the classical theory of nucleation of liquid from its vapor as developed by Volmer and Weber (1926) needs modification with a factor referred to as the replacement free energy and that the capillary approximation underlying the classical theory is in error. Here, the classical nucleation equation is derived from fluctuation theory, Gibbs' result for the reversible work to form a critical nucleus, and the rate of collision of gas molecules with a surface. The capillary approximation is not used in the derivation. The chemical potential of small drops is then considered, and it is shown that the capillary approximation can be derived from thermodynamic equations. The results show that no corrections to Volmer's equation are needed.
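The classical (Volmer-type) quantities discussed above follow from two formulas: Gibbs' reversible work to form a critical nucleus and the Kelvin radius. A sketch with illustrative, water-like parameter values (the numbers are order-of-magnitude assumptions, not results from the paper):

```python
import math

# Classical nucleation sketch: reversible work W* to form a critical droplet
# and the Kelvin critical radius, as functions of the supersaturation S.
K_B = 1.380649e-23          # J/K
sigma = 0.072               # J/m^2, surface tension (water-like, assumed)
v_mol = 3.0e-29             # m^3, molecular volume (assumed)
T = 293.0                   # K

def critical_work(supersaturation):
    """Gibbs' reversible work W* = 16 pi sigma^3 v^2 / (3 (kT ln S)^2)."""
    dmu = K_B * T * math.log(supersaturation)   # chemical-potential gain
    return 16 * math.pi * sigma**3 * v_mol**2 / (3 * dmu**2)

def critical_radius(supersaturation):
    """Kelvin radius r* = 2 sigma v / (kT ln S)."""
    return 2 * sigma * v_mol / (K_B * T * math.log(supersaturation))

for S in (2.0, 4.0):
    print(f"S={S}: r* = {critical_radius(S):.2e} m, "
          f"W*/kT = {critical_work(S) / (K_B * T):.1f}")
```

The nucleation rate is then proportional to exp(-W*/kT) times the molecular collision rate with the nucleus surface; raising S shrinks both r* and W*, which is why nucleation switches on so sharply with supersaturation.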
Effective model hierarchies for dynamic and static classical density functional theories
NASA Astrophysics Data System (ADS)
Majaniemi, S.; Provatas, N.; Nonomura, M.
2010-09-01
The origin and methodology of deriving effective model hierarchies are presented with applications to solidification of crystalline solids. In particular, it is discussed how the form of the equations of motion and the effective parameters on larger scales can be obtained from the more microscopic models. It will be shown that tying together the dynamic structure of the projection operator formalism with static classical density functional theories can lead to incomplete (mass) transport properties even though the linearized hydrodynamics on large scales is correctly reproduced. To facilitate a more natural way of binding together the dynamics of the macrovariables and classical density functional theory, a dynamic generalization of density functional theory based on the nonequilibrium generating functional is suggested.
Using extant literature in a grounded theory study: a personal account.
Yarwood-Ross, Lee; Jack, Kirsten
2015-03-01
To provide a personal account of the factors in a doctoral study that led to the adoption of classic grounded theory principles relating to the use of literature. Novice researchers considering grounded theory methodology will become aware of the contentious issue of how and when extant literature should be incorporated into a study. The three main grounded theory approaches are classic, Straussian and constructivist, and the seminal texts provide conflicting beliefs surrounding the use of literature. A classic approach avoids a pre-study literature review to minimise preconceptions and emphasises the constant comparison method, while the Straussian and constructivist approaches focus more on the beneficial aspects of an initial literature review and researcher reflexivity. The debate also extends into the wider academic community, where no consensus exists. This is a methodological paper detailing the authors' engagement in the debate surrounding the role of the literature in a grounded theory study. In the authors' experience, researchers can best understand the use of literature in grounded theory through immersion in the seminal texts, engaging with wider academic literature, and examining their preconceptions of the substantive area. The authors concluded that classic grounded theory principles were appropriate in the context of their doctoral study. Novice researchers will have their own sets of circumstances when preparing their studies and should become aware of the different perspectives to make decisions that they can ultimately justify. This paper can be used by other novice researchers as an example of the decision-making process that led to delaying a pre-study literature review and identifies the resources used to write a research proposal when using a classic grounded theory approach.
Position-based coding and convex splitting for private communication over quantum channels
NASA Astrophysics Data System (ADS)
Wilde, Mark M.
2017-10-01
The classical-input quantum-output (cq) wiretap channel is a communication model involving a classical sender X, a legitimate quantum receiver B, and a quantum eavesdropper E. The goal of a private communication protocol that uses such a channel is for the sender X to transmit a message in such a way that the legitimate receiver B can decode it reliably, while the eavesdropper E learns essentially nothing about which message was transmitted. The ε-one-shot private capacity of a cq wiretap channel is equal to the maximum number of bits that can be transmitted over the channel, such that the privacy error is no larger than ε ∈ (0,1). The present paper provides a lower bound on the ε-one-shot private classical capacity, by exploiting the recently developed techniques of Anshu, Devabathini, Jain, and Warsi, called position-based coding and convex splitting. The lower bound is equal to a difference of the hypothesis testing mutual information between X and B and the "alternate" smooth max-information between X and E. The one-shot lower bound then leads to a non-trivial lower bound on the second-order coding rate for private classical communication over a memoryless cq wiretap channel.
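Schematically, the lower bound described in this abstract takes the form of a difference of two one-shot information quantities (the notation and the split of the privacy error into ε₁, ε₂ are my gloss, not verbatim from the paper):

```latex
\log M \;\gtrsim\; I_{H}^{\varepsilon_{1}}(X;B)_{\rho} \;-\; \widetilde{I}_{\max}^{\varepsilon_{2}}(X;E)_{\rho},
```

where \(I_{H}^{\varepsilon}\) is the hypothesis-testing mutual information between the sender and the legitimate receiver, \(\widetilde{I}_{\max}^{\varepsilon}\) is the "alternate" smooth max-information between the sender and the eavesdropper, and \(M\) is the number of distinguishable private messages.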
NASA Astrophysics Data System (ADS)
Choudhary, Kamal; Congo, Faical Yannick P.; Liang, Tao; Becker, Chandler; Hennig, Richard G.; Tavazza, Francesca
2017-01-01
Classical empirical potentials/force-fields (FF) provide atomistic insights into material phenomena through molecular dynamics and Monte Carlo simulations. Despite their wide applicability, a systematic evaluation of materials properties using such potentials and, especially, an easy-to-use user-interface for their comparison is still lacking. To address this deficiency, we computed energetics and elastic properties of a variety of materials such as metals and ceramics using a wide range of empirical potentials and compared them to density functional theory (DFT) as well as to experimental data, where available. The database currently consists of 3248 entries including energetics and elastic property calculations, and it is still increasing. We also include computational tools for convex-hull plots for DFT and FF calculations. The data covers 1471 materials and 116 force-fields. In addition, both the complete database and the software coding used in the process have been released for public use online (presently at http://www.ctcms.nist.gov/~knc6/periodic.html) in a user-friendly way designed to enable further material design and discovery.
Equilibrium β-limits in classical stellarators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loizu, Joaquim; Hudson, S. R.; Nuhrenberg, C.
2017-11-17
Here, a numerical investigation is carried out to understand the equilibrium β-limit in a classical stellarator. The stepped-pressure equilibrium code is used in order to assess whether or not magnetic islands and stochastic field-lines can emerge at high β. Two modes of operation are considered: a zero-net-current stellarator and a fixed-iota stellarator. Despite the fact that relaxation is allowed, the former is shown to maintain good flux surfaces up to the equilibrium β-limit predicted by ideal magnetohydrodynamics (MHD), above which a separatrix forms. The latter, which has no ideal equilibrium β-limit, is shown to develop regions of magnetic islands and chaos at sufficiently high β, thereby providing a 'non-ideal β-limit'. Perhaps surprisingly, however, the value of β at which the Shafranov shift of the axis reaches a fraction of the minor radius follows in all cases the scaling laws predicted by ideal MHD. We compare our results to the High-Beta-Stellarator theory of Freidberg and derive a new prediction for the non-ideal equilibrium β-limit above which chaos emerges.
NASA Astrophysics Data System (ADS)
Hall, Michael L.; Doster, J. Michael
1990-03-01
The dynamic behavior of liquid metal heat pipe models is strongly influenced by the choice of evaporation and condensation modeling techniques. Classic kinetic theory descriptions of the evaporation and condensation processes are often inadequate for real situations; empirical accommodation coefficients are commonly utilized to reflect nonideal mass transfer rates. The complex geometries and flow fields found in proposed heat pipe systems cause considerable deviation from the classical models. The THROHPUT code, which has been described in previous works, was developed to model transient liquid metal heat pipe behavior from frozen startup conditions to steady state full power operation. It is used here to evaluate the sensitivity of transient liquid metal heat pipe models to the choice of evaporation and condensation accommodation coefficients. Comparisons are made with experimental liquid metal heat pipe data. It is found that heat pipe behavior can be predicted with the proper choice of the accommodation coefficients. However, the common assumption of spatially constant accommodation coefficients is found to be a limiting factor in the model.
Classical BV Theories on Manifolds with Boundary
NASA Astrophysics Data System (ADS)
Cattaneo, Alberto S.; Mnev, Pavel; Reshetikhin, Nicolai
2014-12-01
In this paper we extend the classical BV framework to gauge theories on spacetime manifolds with boundary. In particular, we connect the BV construction in the bulk with the BFV construction on the boundary and we develop its extension to strata of higher codimension in the case of manifolds with corners. We present several examples including electrodynamics, Yang-Mills theory and topological field theories coming from the AKSZ construction, in particular, the Chern-Simons theory, the BF theory, and the Poisson sigma model. This paper is the first step towards developing the perturbative quantization of such theories on manifolds with boundary in a way consistent with gluing.
Development of a unified constitutive model for an isotropic nickel base superalloy Rene 80
NASA Technical Reports Server (NTRS)
Ramaswamy, V. G.; Vanstone, R. H.; Laflen, J. H.; Stouffer, D. C.
1988-01-01
Accurate analysis of stress-strain behavior is of critical importance in the evaluation of life capabilities of hot section turbine engine components such as turbine blades and vanes. The constitutive equations used in the finite element analysis of such components must be capable of modeling a variety of complex behavior exhibited at high temperatures by cast superalloys. The classical separation of plasticity and creep employed in most of the finite element codes in use today is known to be deficient in modeling elevated temperature time dependent phenomena. Rate dependent, unified constitutive theories can overcome many of these difficulties. A new unified constitutive theory was developed to model the high temperature, time dependent behavior of Rene' 80 which is a cast turbine blade and vane nickel base superalloy. Considerations in model development included the cyclic softening behavior of Rene' 80, rate independence at lower temperatures and the development of a new model for static recovery.
The Kelvin-Helmholtz instability of boundary-layer plasmas in the kinetic regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinbusch, Benedikt, E-mail: b.steinbusch@fz-juelich.de; Gibbon, Paul, E-mail: p.gibbon@fz-juelich.de; Department of Mathematics, Centre for Mathematical Plasma Astrophysics, Katholieke Universiteit Leuven
2016-05-15
The dynamics of the Kelvin-Helmholtz instability are investigated in the kinetic, high-frequency regime with a novel, two-dimensional, mesh-free tree code. In contrast to earlier studies which focused on specially prepared equilibrium configurations in order to compare with fluid theory, a more naturally occurring plasma-vacuum boundary layer is considered here with relevance to both space plasma and linear plasma devices. Quantitative comparisons of the linear phase are made between the fluid and kinetic models. After establishing the validity of this technique via comparison to linear theory and conventional particle-in-cell simulation for classical benchmark problems, a quantitative analysis of the more complex magnetized plasma-vacuum layer is presented and discussed. It is found that in this scenario, the finite Larmor orbits of the ions result in significant departures from the effective shear velocity and width underlying the instability growth, leading to generally slower development and stronger nonlinear coupling between fast growing short-wavelength modes and longer wavelengths.
Wibral, Michael; Priesemann, Viola; Kay, Jim W; Lizier, Joseph T; Phillips, William A
2017-03-01
In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example of such a motif is the canonical microcircuit of six-layered neo-cortex, which is repeated across cortical areas, and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a 'goal function', of information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain specific language (e.g. 'edge filtering', 'working memory'). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon's mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a very recent extension of Shannon information theory, called partial information decomposition (PID). PID allows one to quantify the information that several inputs provide individually (unique information), redundantly (shared information) or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information-theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID allows these goal functions to be compared in a common framework, and also provides a versatile approach to designing new goal functions from first principles. Building on this, we design and analyze a novel goal function, called 'coding with synergy', which builds on combining external input and prior knowledge in a synergistic manner. We suggest that this novel goal function may be highly useful in neural information processing.
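As a minimal, self-contained illustration of the synergy concept this abstract relies on (the toy example is mine, not from the paper): for Y = X1 XOR X2 with fair input bits, each input alone carries zero Shannon information about the output, yet the pair determines it completely, so under the Williams-Beer decomposition the full bit is synergistic.

```python
import itertools
import math

def mutual_information(pxy):
    """I(X;Y) in bits for a joint distribution given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in pxy.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in pxy.items() if p > 0)

# Two fair input bits and the output Y = X1 XOR X2.
joint = {((x1, x2), x1 ^ x2): 0.25
         for x1, x2 in itertools.product((0, 1), repeat=2)}

def marginal(joint, pick):
    """Joint distribution of a single input (pick = 0 or 1) with the output."""
    out = {}
    for ((x1, x2), y), p in joint.items():
        key = ((x1, x2)[pick], y)
        out[key] = out.get(key, 0.0) + p
    return out

i_x1 = mutual_information(marginal(joint, 0))   # 0.0 bits: X1 alone says nothing
i_x2 = mutual_information(marginal(joint, 1))   # 0.0 bits: X2 alone says nothing
i_both = mutual_information(joint)              # 1.0 bit: carried only jointly
```

Since the individual mutual informations vanish, the PID unique and shared terms are zero here and the whole bit of I((X1,X2);Y) is assigned to synergy.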
An Examination of the Flynn Effect in the National Intelligence Test in Estonia
ERIC Educational Resources Information Center
Shiu, William
2012-01-01
This study examined the Flynn Effect (FE; i.e., the rise in IQ scores over time) in Estonia from Scale B of the National Intelligence Test using both classical test theory (CTT) and item response theory (IRT) methods. Secondary data from two cohorts (1934, n = 890 and 2006, n = 913) of students were analyzed, using both classical test theory (CTT)…
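For readers unfamiliar with the IRT side of the CTT/IRT comparison, a two-parameter logistic (2PL) item response function is the standard building block (this is a textbook form, not code from the study):

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL item response function: probability that an examinee with
    latent ability theta answers an item with discrimination a and
    difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When ability equals item difficulty, the probability is exactly 0.5,
# regardless of discrimination. Unlike CTT sum scores, IRT places
# cohorts on a common latent scale, which is what makes cross-cohort
# (e.g. Flynn-effect) comparisons meaningful at the item level.
p_mid = p_correct_2pl(theta=0.0, a=1.5, b=0.0)
p_easy = p_correct_2pl(theta=1.0, a=1.5, b=-1.0)
```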
Bukhvostov-Lipatov model and quantum-classical duality
NASA Astrophysics Data System (ADS)
Bazhanov, Vladimir V.; Lukyanov, Sergei L.; Runov, Boris A.
2018-02-01
The Bukhvostov-Lipatov model is an exactly soluble model of two interacting Dirac fermions in 1 + 1 dimensions. The model describes weakly interacting instantons and anti-instantons in the O(3) non-linear sigma model. In our previous work [arXiv:1607.04839] we have proposed an exact formula for the vacuum energy of the Bukhvostov-Lipatov model in terms of special solutions of the classical sinh-Gordon equation, which can be viewed as an example of a remarkable duality between integrable quantum field theories and integrable classical field theories in two dimensions. Here we present a complete derivation of this duality based on the classical inverse scattering transform method, traditional Bethe ansatz techniques and the analytic theory of ordinary differential equations. In particular, we show that the Bethe ansatz equations defining the vacuum state of the quantum theory also define connection coefficients of an auxiliary linear problem for the classical sinh-Gordon equation. Moreover, we also present details of the derivation of the non-linear integral equations determining the vacuum energy and other spectral characteristics of the model in the case when the vacuum state is filled by 2-string solutions of the Bethe ansatz equations.
Whitley, Heather D.; Scullard, Christian R.; Benedict, Lorin X.; ...
2014-12-04
Here, we present a discussion of kinetic theory treatments of linear electrical and thermal transport in hydrogen plasmas, for a regime of interest to inertial confinement fusion applications. In order to assess the accuracy of one of the more involved of these approaches, classical Lenard-Balescu theory, we perform classical molecular dynamics simulations of hydrogen plasmas using 2-body quantum statistical potentials and compute both electrical and thermal conductivity from our particle trajectories using the Kubo approach. Our classical Lenard-Balescu results employing the identical statistical potentials agree well with the simulations.
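The Kubo (Green-Kubo) route mentioned above obtains a transport coefficient from the time integral of a current autocorrelation function. A schematic, self-contained sketch of that construction (the synthetic AR(1) "current" series and the unit prefactor are illustrative assumptions, not the paper's setup):

```python
import random

def autocorrelation(series, max_lag):
    """Unnormalized time autocorrelation <J(0) J(t)> estimated from one series."""
    n = len(series)
    mean = sum(series) / n
    centered = [x - mean for x in series]
    return [sum(centered[i] * centered[i + lag] for i in range(n - lag))
            / (n - lag)
            for lag in range(max_lag)]

# Synthetic stand-in for a microscopic current J(t): an AR(1) process,
# whose autocorrelation decays roughly exponentially in time.
random.seed(0)
phi, j = 0.9, 0.0
current = []
for _ in range(5000):
    j = phi * j + random.gauss(0.0, 1.0)
    current.append(j)

acf = autocorrelation(current, max_lag=100)
# Green-Kubo: sigma ~ prefactor * integral of <J(0)J(t)> dt.
# The physical prefactor (e.g. V / (k_B T)) is set to 1 here, dt = 1.
sigma = acf[0] / 2.0 + sum(acf[1:])  # trapezoid-style quadrature
```

In a real molecular dynamics calculation, `current` would be the electrical or heat current recorded along the trajectory, and the prefactor would carry the volume and temperature dependence.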
A comparative study of diffraction of shallow-water waves by high-level IGN and GN equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, B. B.; Ertekin, R. C.; Duan, W. Y.
2015-02-15
This work is on the nonlinear diffraction analysis of shallow-water waves, impinging on submerged obstacles, by two related theories, namely the classical Green-Naghdi (GN) equations and the Irrotational Green-Naghdi (IGN) equations, both sets of equations being at high levels and derived for incompressible and inviscid flows. Recently, the high-level Green-Naghdi equations have been applied to some wave transformation problems. The high-level IGN equations have also been used in the last decade to study certain wave propagation problems. However, past works on these theories used different numerical methods to solve these nonlinear and unsteady sets of differential equations and at different levels. Moreover, different physical problems have been solved in the past. Therefore, it has not been possible to understand the differences produced by these two sets of theories and their range of applicability so far. We are thus motivated to make a direct comparison of the results produced by these theories by use of the same numerical method to solve physically the same wave diffraction problems. We focus on comparing these two theories by using similar codes; only the equations used are different, but other parts of the codes, such as the wave-maker, damping zone, discretization method, matrix solver, etc., are exactly the same. This way, we eliminate many potential sources of differences that could be produced by the solution of different equations. The physical problems include the presence of various submerged obstacles that can be used for example as breakwaters or to represent the continental shelf. A numerical wave tank is created by placing a wavemaker on one end and a wave-absorbing beach on the other. The nonlinear and unsteady sets of differential equations are solved by the finite-difference method. The results are compared with different equations as well as with the available experimental data.
Further Development of an Optimal Design Approach Applied to Axial Magnetic Bearings
NASA Technical Reports Server (NTRS)
Bloodgood, V. Dale, Jr.; Groom, Nelson J.; Britcher, Colin P.
2000-01-01
Classical design methods for magnetic bearings and magnetic suspension systems have always had limitations. Because of this, the overall effectiveness of a design has always relied heavily on the skill and experience of the individual designer. This paper combines two approaches that have been developed to aid the accuracy and efficiency of magnetostatic design. The first approach integrates classical magnetic circuit theory with modern optimization theory to increase design efficiency. The second approach uses loss factors to increase the accuracy of classical magnetic circuit theory. As an example, an axial magnetic thrust bearing is designed for minimum power.
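A minimal flavor of the classical magnetic-circuit relations such an optimization builds on (idealized: iron reluctance, fringing, and the paper's loss factors are all neglected, and the numbers are illustrative, not from the study):

```python
import math

MU0 = 4.0e-7 * math.pi  # permeability of free space, H/m

def ampere_turns_for_gap(b_gap, gap):
    """Idealized magnetic circuit: with iron reluctance neglected, the
    required magnetomotive force N*I is set by the air gap alone."""
    return b_gap * gap / MU0

def axial_force(b_gap, pole_area):
    """Maxwell-stress pull on one pole face: F = B^2 * A / (2 * mu0)."""
    return b_gap**2 * pole_area / (2.0 * MU0)

# Illustrative operating point for one pole of an axial thrust bearing:
# 0.5 T in a 0.5 mm gap over a 1 cm^2 pole face.
ni = ampere_turns_for_gap(b_gap=0.5, gap=0.5e-3)    # ampere-turns, ~199
force = axial_force(b_gap=0.5, pole_area=1.0e-4)    # newtons, ~9.9
```

An optimizer would then trade coil power (which grows with N*I) against force capacity and geometry, which is essentially the minimum-power design problem the abstract describes.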
Evaluation of Dental Shade Guide Variability Using Cross-Polarized Photography.
Gurrea, Jon; Gurrea, Marta; Bruguera, August; Sampaio, Camila S; Janal, Malvin; Bonfante, Estevam; Coelho, Paulo G; Hirata, Ronaldo
2016-01-01
This study evaluated color variability in the A hue between the VITA Classical (VITA Zahnfabrik) shade guide and four other VITA-coded ceramic shade guides using a Canon EOS 60D camera and software (Photoshop CC, Adobe). A total of 125 photographs were taken, 5 per shade tab for each of 5 shades (A1 to A4) from the following shade guides: VITA Classical (control), IPS e.max Ceram (Ivoclar Vivadent), IPS d.SIGN (Ivoclar Vivadent), Initial ZI (GC), and Creation CC (Creation Willi Geller). Photos were processed with Adobe Photoshop CC to allow standardized evaluation of hue, chroma, and value between shade tabs. None of the VITA-coded shade tabs fully matched the VITA Classical shade tab for hue, chroma, or value. The VITA-coded shade guides evaluated herein showed an overall unmatched shade in all tabs when compared with the control, suggesting that shade selection should be made using the guide produced by the manufacturer of the ceramic intended for the final restoration.
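The hue/chroma/value comparison described above can be mimicked in a few lines with Python's standard colorsys module (the sample RGB triples are hypothetical stand-ins for sampled shade-tab pixels, not measured data from the study):

```python
import colorsys

def hsv_degrees(r, g, b):
    """Convert 8-bit sRGB to (hue in degrees, saturation, value)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v

def hue_difference(rgb1, rgb2):
    """Smallest angular difference between two hues, in degrees."""
    h1, _, _ = hsv_degrees(*rgb1)
    h2, _, _ = hsv_degrees(*rgb2)
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)

# Hypothetical sampled pixels from two "A2" tabs of different guides.
tab_control = (229, 204, 153)
tab_other = (226, 198, 141)
delta_h = hue_difference(tab_control, tab_other)  # small but nonzero
```

In the study's workflow the sampling happens in Photoshop on cross-polarized photographs; the point of the sketch is only that hue, chroma (saturation), and value can be compared numerically once tab colors are sampled.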
Hoare, Karen J; Mills, Jane; Francis, Karen
2012-12-01
The terminology used to analyse data in a grounded theory study can be confusing. Different grounded theorists use a variety of terms which all have similar meanings. In the following study, we use terms adopted by Charmaz including: initial, focused and axial coding. Initial codes are used to analyse data with an emphasis on identifying gerunds, a verb acting as a noun. If initial codes are relevant to the developing theory, they are grouped with similar codes into categories. Categories become saturated when there are no new codes identified in the data. Axial codes are used to link categories together into a grounded theory process. Memo writing accompanies this data sifting and sorting. The following article explains how one initial code became a category providing a worked example of the grounded theory method of constant comparative analysis. The interplay between coding and categorization is facilitated by the constant comparative method. © 2012 Wiley Publishing Asia Pty Ltd.
New Methods in Non-Perturbative QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unsal, Mithat
2017-01-31
In this work, we investigate the properties of quantum chromodynamics (QCD) by using newly developing mathematics and physics formalisms. Almost all of the mass in the visible universe emerges from quantum chromodynamics, which has a completely negligible microscopic mass content. An intimately related issue in QCD is the quark confinement problem. Answers to non-perturbative questions in QCD remained largely elusive despite much effort over the years. It is also believed that the usual perturbation theory is inadequate to address these kinds of problems. Perturbation theory gives a divergent asymptotic series (even when the theory is properly renormalized), and there are non-perturbative phenomena which never appear at any order in perturbation theory. Recently, a fascinating bridge between perturbation theory and non-perturbative effects has been found: a formalism called resurgence theory in mathematics tells us that perturbative data and non-perturbative data are intimately related. Translating this to the language of quantum field theory, it turns out that non-perturbative information is present in a coded form in perturbation theory and it can be decoded. We take advantage of this feature, which is particularly useful to understand some unresolved mysteries of QCD from first principles. In particular, we use: a) circle compactifications, which provide a semi-classical window to study confinement and mass gap problems, and calculable prototypes of the deconfinement phase transition; b) resurgence theory and transseries, which provide a unified framework for perturbative and non-perturbative expansion; c) analytic continuation of path integrals and Lefschetz thimbles, which may be useful to address the sign problem in QCD at finite density.
Ethical and Stylistic Implications in Delivering Conference Papers.
ERIC Educational Resources Information Center
Enos, Theresa
1986-01-01
Analyzes shortcomings of conference papers intended for the eye rather than the ear. Referring to classical oratory, speech act theory, and cognitive theory, recommends revising papers for oral presentation by using classical disposition; deductive rather than inductive argument; formulaic repetition of words and phrases; non-inverted clause…
Quantum theory for 1D X-ray free electron laser
NASA Astrophysics Data System (ADS)
Anisimov, Petr M.
2018-06-01
Classical 1D X-ray Free Electron Laser (X-ray FEL) theory has stood the test of time by guiding FEL design and development prior to any full-scale analysis. Future X-ray FELs and inverse-Compton sources, where photon recoil approaches an electron energy spread value, push the classical theory to its limits of applicability. After substantial efforts by the community to find what those limits are, there is no universally agreed upon quantum approach to design and development of future X-ray sources. We offer a new approach to formulate the quantum theory for 1D X-ray FELs that has an obvious connection to the classical theory, which allows for immediate transfer of knowledge between the two regimes. We exploit this connection in order to draw quantum mechanical conclusions about the quantum nature of electrons and generated radiation in terms of FEL variables.
Topological order and memory time in marginally-self-correcting quantum memory
NASA Astrophysics Data System (ADS)
Siva, Karthik; Yoshida, Beni
2017-03-01
We examine two proposals for marginally-self-correcting quantum memory: the cubic code by Haah and the welded code by Michnicki. In particular, we prove explicitly that they exhibit no topological order above zero temperature, as their Gibbs ensembles can be prepared via a short-depth quantum circuit from classical ensembles. Our proof technique naturally gives rise to the notion of free energy associated with excitations. Further, we develop a framework for an ergodic decomposition of Davies generators in CSS codes which enables formal reduction to simpler classical memory problems. We then show that memory time in the welded code is doubly exponential in inverse temperature via the Peierls argument. These results introduce further connections between thermal topological order and self-correction from the viewpoint of free energy and quantum circuit depth.
Plasmon mass scale and quantum fluctuations of classical fields on a real time lattice
NASA Astrophysics Data System (ADS)
Kurkela, Aleksi; Lappi, Tuomas; Peuron, Jarkko
2018-03-01
Classical real-time lattice simulations play an important role in understanding non-equilibrium phenomena in gauge theories and are used in particular to model the prethermal evolution of heavy-ion collisions. Above the Debye scale the classical Yang-Mills (CYM) theory can be matched smoothly to kinetic theory. First we study the limits of the quasiparticle picture of the CYM fields by determining the plasmon mass of the system using three different methods. Then we argue that one needs a numerical calculation of a system of classical gauge fields and small linearized fluctuations, which correspond to quantum fluctuations, in a way that keeps the separation between the two manifest. We demonstrate and test an implementation of an algorithm with the linearized fluctuation showing that the linearization indeed works and that Gauss's law is conserved.
Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation
NASA Astrophysics Data System (ADS)
Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.
2018-03-01
Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes by classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^-2) to O(1) in practice for an [[n, k, d = 2t+1]] code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, Yasunori; Salzetta, Nico; Sanches, Fabio
We study the Hilbert space structure of classical spacetimes under the assumption that entanglement in holographic theories determines semiclassical geometry. We show that this simple assumption has profound implications; for example, a superposition of classical spacetimes may lead to another classical spacetime. Despite its unconventional nature, this picture admits the standard interpretation of superpositions of well-defined semiclassical spacetimes in the limit that the number of holographic degrees of freedom becomes large. We illustrate these ideas using a model for the holographic theory of cosmological spacetimes.
Classical theory of radiating strings
NASA Technical Reports Server (NTRS)
Copeland, Edmund J.; Haws, D.; Hindmarsh, M.
1990-01-01
The divergent part of the self-force of a radiating string coupled to gravity, an antisymmetric tensor and a dilaton in four dimensions is calculated to first order in classical perturbation theory. While this divergence can be absorbed into a renormalization of the string tension, demanding that both it and the divergence in the energy-momentum tensor vanish forces the string to have the couplings of compactified N = 1, D = 10 supergravity. In effect, supersymmetry cures the classical infinities.
Emergence of a classical Universe from quantum gravity and cosmology.
Kiefer, Claus
2012-09-28
I describe how we can understand the classical appearance of our world from a universal quantum theory. The essential ingredient is the process of decoherence. I start with a general discussion in ordinary quantum theory and then turn to quantum gravity and quantum cosmology. There is a whole hierarchy of classicality from the global gravitational field to the fluctuations in the cosmic microwave background, which serve as the seeds for the structure in the Universe.
Classical gluon and graviton radiation from the bi-adjoint scalar double copy
NASA Astrophysics Data System (ADS)
Goldberger, Walter D.; Prabhu, Siddharth G.; Thompson, Jedidiah O.
2017-09-01
We find double-copy relations between classical radiating solutions in Yang-Mills theory coupled to dynamical color charges and their counterparts in a cubic bi-adjoint scalar field theory which interacts linearly with particles carrying bi-adjoint charge. The particular color-to-kinematics replacements we employ are motivated by the Bern-Carrasco-Johansson double-copy correspondence for on-shell amplitudes in gauge and gravity theories. They are identical to those recently used to establish relations between classical radiating solutions in gauge theory and in dilaton gravity. Our explicit bi-adjoint solutions are constructed to second order in a perturbative expansion, and map under the double copy onto gauge theory solutions which involve at most cubic gluon self-interactions. If the correspondence is found to persist to higher orders in perturbation theory, our results suggest the possibility of calculating gravitational radiation from colliding compact objects, directly from a scalar field with vastly simpler (purely cubic) Feynman vertices.
Combinatorial Market Processing for Multilateral Coordination
2005-09-01
In the classical auction theory literature, most of the attention is focused on one-sided, single-item auctions [86]. There is now a growing body of... Programming in Infinite-dimensional Spaces: Theory and Applications, Wiley, 1987. [3] K. J. Arrow, "An extension of the basic theorems of classical... Commodities, Princeton University Press, 1969. [43] D. Friedman and J. Rust, The Double Auction Market: Institutions, Theories, and Evidence, Addison
ERIC Educational Resources Information Center
Boyer, Timothy H.
1985-01-01
The classical vacuum of physics is not empty, but contains a distinctive pattern of electromagnetic fields. Discovery of the vacuum, thermal spectrum, classical electron theory, zero-point spectrum, and effects of acceleration are discussed. Connection between thermal radiation and the classical vacuum reveals unexpected unity in the laws of…
Random walk in generalized quantum theory
NASA Astrophysics Data System (ADS)
Martin, Xavier; O'Connor, Denjoe; Sorkin, Rafael D.
2005-01-01
One can view quantum mechanics as a generalization of classical probability theory that provides for pairwise interference among alternatives. Adopting this perspective, we “quantize” the classical random walk by finding, subject to a certain condition of “strong positivity”, the most general Markovian, translationally invariant “decoherence functional” with nearest neighbor transitions.
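The classical random walk that serves as the starting point for this quantization can be sketched in a few lines; the diffusive scaling ⟨x²⟩ ∝ steps it exhibits is the behaviour a coherent quantum walk departs from (spreading ballistically instead). A minimal illustration, not the paper's decoherence-functional construction:

```python
import random

def classical_random_walk(steps, trials=10000, seed=1):
    """Simulate a symmetric nearest-neighbor random walk on the integers
    and return the mean squared displacement after `steps` steps."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x = 0
        for _ in range(steps):
            x += rng.choice((-1, 1))
        total += x * x
    return total / trials

# Classical diffusion: <x^2> grows linearly with the number of steps.
msd = classical_random_walk(100)
```

For 100 steps the mean squared displacement clusters around 100, the signature of Markovian, interference-free dynamics.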
Creating Semantic Waves: Using Legitimation Code Theory as a Tool to Aid the Teaching of Chemistry
ERIC Educational Resources Information Center
Blackie, Margaret A. L.
2014-01-01
This is a conceptual paper aimed at chemistry educators. The purpose of this paper is to illustrate the use of the semantic code of Legitimation Code Theory in chemistry teaching. Chemistry is an abstract subject which many students struggle to grasp. Legitimation Code Theory provides a way of separating out abstraction from complexity both of…
Neo-classical theory of competition or Adam Smith's hand as mathematized ideology
NASA Astrophysics Data System (ADS)
McCauley, Joseph L.
2001-10-01
Orthodox economic theory (utility maximization, rational agents, efficient markets in equilibrium) is based on arbitrarily postulated, nonempiric notions. The disagreement between economic reality and a key feature of neo-classical economic theory was criticized empirically by Osborne. I show that the orthodox theory is internally self-inconsistent for the very reason suggested by Osborne: lack of invertibility of demand and supply as functions of price to obtain price as functions of supply and demand. The reason for the noninvertibility arises from nonintegrable excess demand dynamics, a feature of their theory completely ignored by economists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banik, Manik, E-mail: manik11ju@gmail.com
Steering is one of the most counterintuitive non-classical features of bipartite quantum systems, first noticed by Schrödinger in the early days of quantum theory. On the other hand, measurement incompatibility is another non-classical feature of quantum theory, initially pointed out by Bohr. Recently, Quintino et al. [Phys. Rev. Lett. 113, 160402 (2014)] and Uola et al. [Phys. Rev. Lett. 113, 160403 (2014)] have investigated the relation between these two distinct non-classical features. They have shown that a set of measurements is not jointly measurable (i.e., incompatible) if and only if it can be used for demonstrating Schrödinger-Einstein-Podolsky-Rosen steering. The concept of steering has been generalized to more general abstract tensor product theories rather than just Hilbert space quantum mechanics. In this article, we discuss how the notion of measurement incompatibility can be extended to general probability theories. Further, we show that the connection between steering and measurement incompatibility holds in a broader class of tensor product theories rather than just quantum theory.
What is Quantum Mechanics? A Minimal Formulation
NASA Astrophysics Data System (ADS)
Friedberg, R.; Hohenberg, P. C.
2018-03-01
This paper presents a minimal formulation of nonrelativistic quantum mechanics, by which is meant a formulation which describes the theory in a succinct, self-contained, clear, unambiguous and of course correct manner. The bulk of the presentation is the so-called "microscopic theory", applicable to any closed system S of arbitrary size N, using concepts referring to S alone, without resort to external apparatus or external agents. An example of a similar minimal microscopic theory is the standard formulation of classical mechanics, which serves as the template for a minimal quantum theory. The only substantive assumption required is the replacement of the classical Euclidean phase space by Hilbert space in the quantum case, with the attendant all-important phenomenon of quantum incompatibility. Two fundamental theorems of Hilbert space, the Kochen-Specker-Bell theorem and Gleason's theorem, then lead inevitably to the well-known Born probability rule. For both classical and quantum mechanics, questions of physical implementation and experimental verification of the predictions of the theories are the domain of the macroscopic theory, which is argued to be a special case or application of the more general microscopic theory.
Theory of mind deficit in adult patients with congenital heart disease.
Chiavarino, Claudia; Bianchino, Claudia; Brach-Prever, Silvia; Riggi, Chiara; Palumbo, Luigi; Bara, Bruno G; Bosco, Francesca M
2015-10-01
This article provides the first assessment of theory of mind, that is, the ability to reason about mental states, in adult patients with congenital heart disease. Patients with congenital heart disease and matched healthy controls were administered classical theory of mind tasks and a semi-structured interview which provides a multidimensional evaluation of theory of mind (Theory of Mind Assessment Scale). The patients with congenital heart disease performed worse than the controls on the Theory of Mind Assessment Scale, whereas they did as well as the control group on the classical theory-of-mind tasks. These findings provide the first evidence that adults with congenital heart disease may display specific impairments in theory of mind. © The Author(s) 2013.
Classical Physics and the Bounds of Quantum Correlations.
Frustaglia, Diego; Baltanás, José P; Velázquez-Ahumada, María C; Fernández-Prieto, Armando; Lujambio, Aintzane; Losada, Vicente; Freire, Manuel J; Cabello, Adán
2016-06-24
A unifying principle explaining the numerical bounds of quantum correlations remains elusive, despite the efforts devoted to identifying it. Here, we show that these bounds are indeed not exclusive to quantum theory: for any abstract correlation scenario with compatible measurements, models based on classical waves produce probability distributions indistinguishable from those of quantum theory and, therefore, share the same bounds. We demonstrate this finding by implementing classical microwaves that propagate along meter-size transmission-line circuits and reproduce the probabilities of three emblematic quantum experiments. Our results show that the "quantum" bounds would also occur in a classical universe without quanta. The implications of this observation are discussed.
Open or closed? Dirac, Heisenberg, and the relation between classical and quantum mechanics
NASA Astrophysics Data System (ADS)
Bokulich, Alisa
2004-09-01
This paper describes a long-standing, though little known, debate between Dirac and Heisenberg over the nature of scientific methodology, theory change, and intertheoretic relations. Following Heisenberg's terminology, their disagreements can be summarized as a debate over whether the classical and quantum theories are "open" or "closed." A close examination of this debate sheds new light on the philosophical views of two of the great founders of quantum theory.
The role of a posteriori mathematics in physics
NASA Astrophysics Data System (ADS)
MacKinnon, Edward
2018-05-01
The calculus that co-evolved with classical mechanics relied on definitions of functions and differentials that accommodated physical intuitions. In the early nineteenth century mathematicians began the rigorous reformulation of calculus and eventually succeeded in putting almost all of mathematics on a set-theoretic foundation. Physicists traditionally ignore this rigorous mathematics. Physicists often rely on a posteriori math, a practice of using physical considerations to determine mathematical formulations. This is illustrated by examples from classical and quantum physics. A justification of such practice stems from a consideration of the role of phenomenological theories in classical physics and effective theories in contemporary physics. This relates to the larger question of how physical theories should be interpreted.
NASA Technical Reports Server (NTRS)
Paquette, John A.; Nuth, Joseph A., III
2011-01-01
Classical nucleation theory has been used in models of dust nucleation in circumstellar outflows around oxygen-rich asymptotic giant branch stars. One objection to the application of classical nucleation theory (CNT) to astrophysical systems of this sort is that an equilibrium distribution of clusters (assumed by CNT) is unlikely to exist in such conditions due to a low collision rate of condensable species. A model of silicate grain nucleation and growth was modified to evaluate the effect of a nucleation flux orders of magnitude below the equilibrium value. The results show that a lack of chemical equilibrium has only a small effect on the ultimate grain distribution.
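The CNT quantities at issue can be made concrete with the standard critical-radius and barrier formulas, r* = 2σv/(kT ln S) and ΔG* = 16πσ³v²/(3(kT ln S)²); all numerical inputs below (surface energy, monomer volume, temperature, supersaturation) are hypothetical placeholders, not values from the study:

```python
import math

# Illustrative classical nucleation theory (CNT) estimate; every input
# value here is an assumed placeholder, not data from the paper.
k_B = 1.380649e-23   # J/K, Boltzmann constant
sigma = 0.5          # J/m^2, assumed condensate surface energy
v_m = 2.0e-29        # m^3, assumed monomer volume
T = 1000.0           # K, assumed gas temperature
S = 10.0             # assumed supersaturation ratio

ln_S = math.log(S)
# Critical cluster radius: clusters larger than this tend to grow.
r_crit = 2.0 * sigma * v_m / (k_B * T * ln_S)
# Free-energy barrier to nucleation.
dG_crit = 16.0 * math.pi * sigma**3 * v_m**2 / (3.0 * (k_B * T * ln_S) ** 2)
# The nucleation rate J is suppressed by exp(-dG*/kT).
boltzmann_factor = math.exp(-dG_crit / (k_B * T))
```

The exponential factor is what makes the equilibrium-cluster assumption so consequential: small changes in the barrier shift the predicted rate by many orders of magnitude.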
A Non Local Electron Heat Transport Model for Multi-Dimensional Fluid Codes
NASA Astrophysics Data System (ADS)
Schurtz, Guy
2000-10-01
Apparent inhibition of thermal heat flow is one of the most ancient problems in computational Inertial Fusion, and flux-limited Spitzer-Harm conduction has been a mainstay in multi-dimensional hydrodynamic codes for more than 25 years. Theoretical investigation of the problem indicates that heat transport in laser-produced plasmas has to be considered as a non local process. Various authors contributed to the non local theory and proposed convolution formulas designed for practical implementation in one-dimensional fluid codes. Though the theory, confirmed by kinetic calculations, actually predicts a reduced heat flux, it fails to explain the very small limiters required in two-dimensional simulations. Fokker-Planck simulations by Epperlein, Rickard and Bell [PRL 61, 2453 (1988)] demonstrated that non local effects could lead to a strong reduction of heat flow in two dimensions, even in situations where a one-dimensional analysis suggests that the heat flow is nearly classical. We developed at CEA/DAM a non local electron heat transport model suitable for implementation in our two-dimensional radiation hydrodynamic code FCI2. This model may be envisioned as the first step of an iterative solution of the Fokker-Planck equations; it takes the mathematical form of multigroup diffusion equations, the solution of which yields both the heat flux and the departure of the electron distribution function from the Maxwellian. Although direct implementation of the model is straightforward, formal solutions of it can be expressed in convolution form, exhibiting a three-dimensional tensor propagator. Reduction to one dimension retrieves the original formula of Luciani, Mora and Virmont [PRL 51, 1664 (1983)]. Intense magnetic fields may be generated by thermal effects in laser targets; these fields, as well as non local effects, will inhibit electron conduction. We present simulations where both effects are taken into account and briefly discuss the coupling strategy between them.
S-Duality, Deconstruction and Confinement for a Marginal Deformation of N=4 SUSY Yang-Mills
NASA Astrophysics Data System (ADS)
Dorey, Nick
2004-08-01
We study an exactly marginal deformation of N = 4 SUSY Yang-Mills with gauge group U(N) using field theory and string theory methods. The classical theory has a Higgs branch for rational values of the deformation parameter. We argue that the quantum theory also has an S-dual confining branch which cannot be seen classically. The low-energy effective theory on these branches is a six-dimensional non-commutative gauge theory with sixteen supercharges. Confinement of magnetic and electric charges, on the Higgs and confining branches respectively, occurs due to the formation of BPS-saturated strings in the low energy theory. The results also suggest a new way of deconstructing Little String Theory as a large-N limit of a confining gauge theory in four dimensions.
High-pressure phase transitions - Examples of classical predictability
NASA Astrophysics Data System (ADS)
Celebonovic, Vladan
1992-09-01
The applicability of the Savic and Kasanin (1962-1967) classical theory of dense matter to laboratory experiments requiring estimates of high-pressure phase transitions was examined by determining phase transition pressures for a set of 19 chemical substances (including elements, hydrocarbons, metal oxides, and salts) for which experimental data were available. A comparison between experimental transition points and those predicted by the Savic-Kasanin theory showed that the theory can be used for estimating values of transition pressures. The results also support conclusions obtained in previous astronomical applications of the Savic-Kasanin theory.
Kraus, Wayne A; Wagner, Albert F
1986-04-01
A triatomic classical trajectory code has been modified by extensive vectorization of the algorithms to achieve much improved performance on an FPS 164 attached processor. Extensive timings on both the FPS 164 and a VAX 11/780 with floating point accelerator are presented as a function of the number of trajectories simultaneously run. The timing tests involve a potential energy surface of the LEPS variety and trajectories with 1000 time steps. The results indicate that vectorization results in timing improvements on both the VAX and the FPS. For larger numbers of trajectories run simultaneously, up to a factor of 25 improvement in speed occurs between VAX and FPS vectorized code. Copyright © 1986 John Wiley & Sons, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khrennikov, Andrei
We present fundamentals of a prequantum model with hidden variables of the classical field type. In some sense this is the comeback of classical wave mechanics. Our approach also can be considered as incorporation of quantum mechanics into classical signal theory. All quantum averages (including correlations of entangled systems) can be represented as classical signal averages and correlations.
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1986-01-01
An explicit-implicit and an implicit two-dimensional Navier-Stokes code along with various grid generation capabilities were developed. A series of classical benchmark cases were simulated using these codes.
Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.
2015-01-01
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200
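The contrast drawn with classical bit-flip detection can be illustrated by the simplest classical syndrome measurement: a single even-parity check over a block of bits, which detects (without locating) any odd number of flips. This toy sketch is illustrative and is not the superconducting-qubit protocol itself:

```python
# Minimal classical analogue of syndrome-based error detection: one
# even-parity check bit appended to a data block. Any odd number of
# bit flips makes the parity inconsistent, flagging an error.

def encode(bits):
    """Append one even-parity check bit to the data bits."""
    return bits + [sum(bits) % 2]

def syndrome(codeword):
    """0 if parity is consistent, 1 if an odd number of flips occurred."""
    return sum(codeword) % 2

word = encode([1, 0, 1, 1])
assert syndrome(word) == 0   # no error: trivial syndrome
word[2] ^= 1                 # inject a single bit-flip error
assert syndrome(word) == 1   # the flip is detected, though not located
```

The quantum protocol must additionally guard against phase flips, which is why it needs parity measurements in two complementary bases on a two-dimensional lattice rather than a single linear check.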
Belief propagation decoding of quantum channels by passing quantum messages
NASA Astrophysics Data System (ADS)
Renes, Joseph M.
2017-07-01
The belief propagation (BP) algorithm is a powerful tool in a wide range of disciplines from statistical physics to machine learning to computational biology, and is ubiquitous in decoding classical error-correcting codes. The algorithm works by passing messages between nodes of the factor graph associated with the code and enables efficient decoding of the channel, in some cases even up to the Shannon capacity. Here we construct the first BP algorithm which passes quantum messages on the factor graph and is capable of decoding the classical-quantum channel with pure state outputs. This gives explicit decoding circuits whose number of gates is quadratic in the code length. We also show that this decoder can be modified to work with polar codes for the pure state channel and as part of a decoder for transmitting quantum information over the amplitude damping channel. These represent the first explicit capacity-achieving decoders for non-Pauli channels.
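In its simplest classical setting, a cycle-free factor graph, the sum-product (BP) decoder is exact and reduces to adding per-bit log-likelihood ratios. A minimal sketch for a repetition code on a binary symmetric channel, illustrating the classical message-passing idea rather than the paper's quantum-message algorithm:

```python
import math

def bsc_llr(received_bit, p):
    """Log-likelihood ratio log P(y|x=0)/P(y|x=1) for a binary
    symmetric channel with crossover probability p."""
    llr = math.log((1 - p) / p)
    return llr if received_bit == 0 else -llr

def decode_repetition(received, p):
    """BP on the (trivial, cycle-free) factor graph of a repetition
    code: aggregate all channel messages, then take the sign."""
    total = sum(bsc_llr(y, p) for y in received)
    return 0 if total >= 0 else 1

# One transmitted 0; the channel flips two of the five copies:
decoded = decode_repetition([0, 1, 0, 1, 0], p=0.1)
```

On graphs with cycles (as in LDPC codes) the same local update rules are iterated and are no longer exact, yet still decode remarkably close to capacity.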
Uniting the Spheres: Modern Feminist Theory and Classic Texts in AP English
ERIC Educational Resources Information Center
Drew, Simao J. A.; Bosnic, Brenda G.
2008-01-01
High school teachers Simao J. A. Drew and Brenda G. Bosnic help familiarize students with gender role analysis and feminist theory. Students examine classic literature and contemporary texts, considering characters' historical, literary, and social contexts while expanding their understanding of how patterns of identity and gender norms exist and…
Aesthetic Creativity: Insights from Classical Literary Theory on Creative Learning
ERIC Educational Resources Information Center
Hellstrom, Tomas Georg
2011-01-01
This paper addresses the subject of textual creativity by drawing on work done in classical literary theory and criticism, specifically new criticism, structuralism and early poststructuralism. The question of how readers and writers engage creatively with the text is closely related to educational concerns, though they are often thought of as…
ERIC Educational Resources Information Center
Bazaldua, Diego A. Luna; Lee, Young-Sun; Keller, Bryan; Fellers, Lauren
2017-01-01
The performance of various classical test theory (CTT) item discrimination estimators has been compared in the literature using both empirical and simulated data, resulting in mixed results regarding the preference of some discrimination estimators over others. This study analyzes the performance of various item discrimination estimators in CTT:…
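One commonly compared CTT discrimination estimator is the item-total point-biserial correlation; a minimal sketch with made-up scores (for simplicity the total score includes the item itself, a choice the literature often corrects for):

```python
import math

def point_biserial(item_scores, total_scores):
    """Point-biserial correlation between a dichotomous item (0/1)
    and the total test score, using the population SD."""
    n = len(item_scores)
    mean_t = sum(total_scores) / n
    sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in total_scores) / n)
    p = sum(item_scores) / n   # proportion answering the item correctly
    mean_1 = sum(t for i, t in zip(item_scores, total_scores) if i == 1) / (n * p)
    return (mean_1 - mean_t) / sd_t * math.sqrt(p / (1 - p))

items = [1, 1, 0, 1, 0, 0, 1, 0]    # one item, eight examinees (hypothetical)
totals = [9, 8, 4, 7, 5, 3, 9, 2]   # total test scores (hypothetical)
r_pb = point_biserial(items, totals)
```

A value near 1 indicates the item sharply separates high scorers from low scorers; simulation studies of the kind described vary sample size and item difficulty to see which estimators stay stable.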
Louis Guttman's Contributions to Classical Test Theory
ERIC Educational Resources Information Center
Zimmerman, Donald W.; Williams, Richard H.; Zumbo, Bruno D.; Ross, Donald
2005-01-01
This article focuses on Louis Guttman's contributions to the classical theory of educational and psychological tests, one of the lesser known of his many contributions to quantitative methods in the social sciences. Guttman's work in this field provided a rigorous mathematical basis for ideas that, for many decades after Spearman's initial work,…
Generalization of the Activated Complex Theory of Reaction Rates. II. Classical Mechanical Treatment
DOE R&D Accomplishments Database
Marcus, R. A.
1964-01-01
In its usual classical form activated complex theory assumes a particular expression for the kinetic energy of the reacting system -- one associated with a rectilinear motion along the reaction coordinate. The derivation of the rate expression given in the present paper is based on the general kinetic energy expression.
NASA Astrophysics Data System (ADS)
Yang, Chen
2018-05-01
The transition from classical theories to quantum theories has attracted much interest. This paper demonstrates the analogy between the electromagnetic potentials and wave-like dynamic variables, with their connections to quantum theory, for audiences at advanced undergraduate level and above. In the first part, the counterpart relations in classical electrodynamics (e.g. gauge transformation and Lorenz condition) and classical mechanics (e.g. Legendre transform and free particle condition) are presented. These relations lead to similar governing equations for the field variables and dynamic variables. The Lorenz gauge, scalar potential and vector potential manifest a one-to-one similarity to the action, Hamiltonian and momentum, respectively. In the second part, the connections between the classical pictures of electromagnetic field and particle and the quantum picture are presented. By characterising the states of electromagnetic field and particle via their (corresponding) variables, their evolution pictures manifest the same algebraic structure (they are isomorphic). Subsequently, the pictures of the electromagnetic field and particle are compared to the quantum picture and their interconnections are given. A brief summary of the obtained results is presented at the end of the paper.
Beyond Valence and Magnitude: a Flexible Evaluative Coding System in the Brain
Gu, Ruolei; Lei, Zhihui; Broster, Lucas; Wu, Tingting; Jiang, Yang; Luo, Yue-jia
2013-01-01
Outcome evaluation is a cognitive process that plays an important role in our daily lives. In most paradigms utilized in the field of experimental psychology, outcome valence and outcome magnitude are the two major features investigated. The classical “independent coding model” suggests that outcome valence and outcome magnitude are evaluated by separate neural mechanisms that may be mapped onto discrete event-related potential (ERP) components: feedback-related negativity (FRN) and the P3, respectively. To examine this model, we presented outcome valence and magnitude sequentially rather than simultaneously. The results reveal that when only outcome valence or magnitude is known, both the FRN and the P3 encode that outcome feature; when both aspects of outcome are known, the cognitive functions of the two components dissociate: the FRN responds to the information available in the current context, while the P3 pattern depends on outcome presentation sequence. The current study indicates that the human evaluative system, indexed in part by the FRN and the P3, is more flexible than previous theories suggested. PMID:22019775
One-way quantum repeaters with quantum Reed-Solomon codes
NASA Astrophysics Data System (ADS)
Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang
2018-05-01
We show that quantum Reed-Solomon codes constructed from classical Reed-Solomon codes can approach the capacity on the quantum erasure channel of d -level systems for large dimension d . We study the performance of one-way quantum repeaters with these codes and obtain a significant improvement in key generation rate compared to previously investigated encoding schemes with quantum parity codes and quantum polynomial codes. We also compare the three generations of quantum repeaters using quantum Reed-Solomon codes and identify parameter regimes where each generation performs the best.
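The classical ingredient underlying these quantum codes is the erasure-correcting property of Reed-Solomon codes: any k of the n codeword symbols determine the degree-(k-1) message polynomial by interpolation, so up to n - k erasures are correctable. A small sketch over a prime field, with illustrative parameters:

```python
P = 13   # prime field size GF(P); illustrative choice, not from the paper

def rs_encode(message, n):
    """Evaluate the degree-(k-1) message polynomial at x = 0..n-1 over GF(P)."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(message)) % P
            for x in range(n)]

def lagrange_eval(points, z):
    """Evaluate the unique interpolating polynomial through `points` at z (mod P)."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (z - xj) % P
                den = den * (xi - xj) % P
        # Modular inverse via Fermat's little theorem (P is prime).
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

codeword = rs_encode([5, 7, 3], n=6)               # k = 3, n = 6
survivors = [(x, codeword[x]) for x in (0, 2, 5)]  # 3 symbols erased elsewhere
recovered = [lagrange_eval(survivors, x) for x in range(6)]
```

Any three surviving evaluation points reconstruct the whole codeword, which is the maximum-distance-separable property the quantum construction inherits for the erasure channel.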
Quantum-correlation breaking channels, quantum conditional probability and Perron-Frobenius theory
NASA Astrophysics Data System (ADS)
Chruściński, Dariusz
2013-03-01
Using the quantum analog of conditional probability and classical Bayes theorem we discuss some aspects of particular entanglement breaking channels: quantum-classical and classical-classical channels. Applying the quantum analog of Perron-Frobenius theorem we generalize the recent result of Korbicz et al. (2012) [8] on full and spectrum broadcasting from quantum-classical channels to arbitrary quantum channels.
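The classical Perron-Frobenius theorem invoked here guarantees, for a primitive stochastic matrix, a unique stationary distribution reachable by iteration from any starting state. A toy classical illustration (the transition matrix is made up for the example):

```python
# Power iteration on a column-stochastic 2x2 matrix; by the classical
# Perron-Frobenius theorem its unique fixed point is the stationary
# distribution, here (2/3, 1/3).
T = [[0.9, 0.2],
     [0.1, 0.8]]

def step(p):
    """One application of the transition matrix: p' = T p."""
    return [sum(T[i][j] * p[j] for j in range(2)) for i in range(2)]

p = [1.0, 0.0]          # arbitrary initial distribution
for _ in range(200):    # subdominant eigenvalue 0.7 => fast convergence
    p = step(p)
```

The quantum analog replaces the stochastic matrix with a channel and the stationary vector with a fixed-point state, which is the structure exploited in the broadcasting results.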
Operator Formulation of Classical Mechanics.
ERIC Educational Resources Information Center
Cohn, Jack
1980-01-01
Discusses the construction of an operator formulation of classical mechanics which is directly concerned with wave packets in configuration space and is more similar to that of conventional quantum theory than other extant operator formulations of classical mechanics. (Author/HM)
Epistemic View of Quantum States and Communication Complexity of Quantum Channels
NASA Astrophysics Data System (ADS)
Montina, Alberto
2012-09-01
The communication complexity of a quantum channel is the minimal amount of classical communication required for classically simulating a process of state preparation, transmission through the channel and subsequent measurement. It establishes a limit on the power of quantum communication in terms of classical resources. We show that classical simulations employing a finite amount of communication can be derived from a special class of hidden variable theories where quantum states represent statistical knowledge about the classical state and not an element of reality. This special class has attracted strong interest very recently. The communication cost of each derived simulation is given by the mutual information between the quantum state and the classical state of the parent hidden variable theory. Finally, we find that the communication complexity for single qubits is smaller than 1.28 bits. The previous known upper bound was 1.85 bits.
Extended theory of harmonic maps connects general relativity to chaos and quantum mechanism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Gang; Duan, Yi-Shi
General relativity and quantum mechanism are two separate rules of modern physics explaining how nature works. Both theories are accurate, but the direct connection between the two theories has not yet been clarified. Recently, researchers have blurred the line between classical and quantum physics by connecting chaos and entanglement. Here, we showed that Duan's extended HM theory, which has the solution of general relativity, can also have the solutions of the classic chaos equations and even the solution of the Schrödinger equation in quantum physics, suggesting that the extended theory of harmonic maps may act as a universal theory of physics.
Computation in generalised probabilistic theories
NASA Astrophysics Data System (ADS)
Lee, Ciarán M.; Barrett, Jonathan
2015-08-01
From the general difficulty of simulating quantum systems using classical systems, and in particular the existence of an efficient quantum algorithm for factoring, it is likely that quantum computation is intrinsically more powerful than classical computation. At present, the best upper bound known for the power of quantum computation is that BQP ⊆ AWPP, where AWPP is a classical complexity class (known to be included in PP, hence PSPACE). This work investigates limits on computational power that are imposed by simple physical, or information theoretic, principles. To this end, we define a circuit-based model of computation in a class of operationally-defined theories more general than quantum theory, and ask: what is the minimal set of physical assumptions under which the above inclusions still hold? We show that given only an assumption of tomographic locality (roughly, that multipartite states and transformations can be characterized by local measurements), efficient computations are contained in AWPP. This inclusion still holds even without assuming a basic notion of causality (where the notion is, roughly, that probabilities for outcomes cannot depend on future measurement choices). Following Aaronson, we extend the computational model by allowing post-selection on measurement outcomes. Aaronson showed that the corresponding quantum complexity class, PostBQP, is equal to PP. Given only the assumption of tomographic locality, the inclusion in PP still holds for post-selected computation in general theories. Hence in a world with post-selection, quantum theory is optimal for computation in the space of all operational theories. We then consider whether one can obtain relativized complexity results for general theories. It is not obvious how to define a sensible notion of a computational oracle in the general framework that reduces to the standard notion in the quantum case. Nevertheless, it is possible to define computation relative to a ‘classical oracle’. Then, we show there exists a classical oracle relative to which efficient computation in any theory satisfying the causality assumption does not include NP.
Quantum correlations and dynamics from classical random fields valued in complex Hilbert spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khrennikov, Andrei
2010-08-15
One of the crucial differences between mathematical models of classical and quantum mechanics (QM) is the use of the tensor product of the state spaces of subsystems as the state space of the corresponding composite system. (To describe an ensemble of classical composite systems, one uses random variables taking values in the Cartesian product of the state spaces of subsystems.) We show that, nevertheless, it is possible to establish a natural correspondence between the classical and the quantum probabilistic descriptions of composite systems. Quantum averages for composite systems (including entangled) can be represented as averages with respect to classical random fields. It is essentially what Albert Einstein dreamed of. QM is represented as classical statistical mechanics with infinite-dimensional phase space. While the mathematical construction is completely rigorous, its physical interpretation is a complicated problem. We present the basic physical interpretation of prequantum classical statistical field theory in Sec. II. However, this is only the first step toward real physical theory.
Opening Switch Research on a Plasma Focus VI.
1988-02-26
Sausage Instability in the Plasma Focus. In this section the classical Kruskal-Schwarzschild theory for the sausage mode is applied to the pinch phase...on 1) the shape of the pinch, 2) axial flow of plasma, and 3) self-generated magnetic fields are also presented. The Kruskal-Schwarzschild Theory. The...classical MHD theory for the m=0 mode in a plasma supported by a magnetic field against gravity; this is the well-known Kruskal-Schwarzschild
Nanoscale Capillary Flows in Alumina: Testing the Limits of Classical Theory.
Lei, Wenwen; McKenzie, David R
2016-07-21
Anodic aluminum oxide (AAO) membranes have well-formed cylindrical channels, as small as 10 nm in diameter, in a close packed hexagonal array. The channels in AAO membranes simulate very small leaks that may be present for example in an aluminum oxide device encapsulation. The 10 nm alumina channel is the smallest that has been studied to date for its moisture flow properties and provides a stringent test of classical capillary theory. We measure the rate at which moisture penetrates channels with diameters in the range of 10 to 120 nm with moist air present at 1 atm on one side and dry air at the same total pressure on the other. We extend classical theory for water leak rates at high humidities by allowing for variable meniscus curvature at the entrance and show that the extended theory explains why the flow increases greatly when capillary filling occurs and enables the contact angle to be determined. At low humidities our measurements for air-filled channels agree well with theory for the interdiffusive flow of water vapor in air. The flow rate of water-filled channels is one order of magnitude less than expected from classical capillary filling theory and is coincidentally equal to the helium flow rate, validating the use of helium leak testing for evaluating moisture flows in aluminum oxide leaks.
Aeroelastic stability of wind turbine blade/aileron systems
NASA Technical Reports Server (NTRS)
Strain, J. C.; Mirandy, L.
1995-01-01
Aeroelastic stability analyses have been performed for the MOD-5A blade/aileron system. Various configurations having different aileron torsional stiffness, mass unbalance, and control system damping have been investigated. The analysis was conducted using a code recently developed by the General Electric Company - AILSTAB. The code extracts eigenvalues for a three degree of freedom system, consisting of: (1) a blade flapwise mode; (2) a blade torsional mode; and (3) an aileron torsional mode. Mode shapes are supplied as input and the aileron can be specified over an arbitrary length of the blade span. Quasi-steady aerodynamic strip theory is used to compute aerodynamic derivatives of the wing-aileron combination as a function of spanwise position. Equations of motion are summarized herein. The program provides rotating blade stability boundaries for torsional divergence, classical flutter (bending/torsion) and wing/aileron flutter. It has been checked out against fixed-wing results published by Theodorsen and Garrick. The MOD-5A system is stable with respect to divergence and classical flutter for all practical rotor speeds. Aileron torsional stiffness must exceed a minimum critical value to prevent aileron flutter. The nominal control system stiffness greatly exceeds this minimum during normal operation. The basic system, however, is unstable for the case of a free (or floating) aileron. The instability can be removed either by the addition of torsional damping or mass-balancing the ailerons. The MOD-5A design was performed by the General Electric Company, Advanced Energy Program Department under Contract DEN3-153 with NASA Lewis Research Center and sponsored by the Department of Energy.
Mechanism on brain information processing: Energy coding
NASA Astrophysics Data System (ADS)
Wang, Rubin; Zhang, Zhikang; Jiao, Xianfa
2006-09-01
According to the experimental result that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, the authors present a new scientific theory that offers a unique mechanism for brain information processing. They demonstrate that the neural coding produced by the activity of the brain is well described by the theory of energy coding. Because the energy coding model reveals mechanisms of brain information processing based upon known biophysical properties, they can not only reproduce various experimental results of neuroelectrophysiology but also quantitatively explain recent experimental results from neuroscientists at Yale University by means of the principle of energy coding. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, they estimate that the theory has very important consequences for quantitative research on cognitive function.
Energy coding in biological neural networks
Zhang, Zhikang
2007-01-01
According to the experimental result that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, we present a new scientific theory that offers a unique mechanism for brain information processing. We demonstrate that the neural coding produced by the activity of the brain is well described by our theory of energy coding. Because the energy coding model reveals mechanisms of brain information processing based upon known biophysical properties, we can not only reproduce various experimental results of neuro-electrophysiology, but also quantitatively explain recent experimental results from neuroscientists at Yale University by means of the principle of energy coding. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, we estimate that the theory has very important consequences for quantitative research on cognitive function. PMID:19003513
Noninvasive fetal QRS detection using an echo state network and dynamic programming.
Lukoševičius, Mantas; Marozas, Vaidotas
2014-08-01
We address a classical fetal QRS detection problem from abdominal ECG recordings with a data-driven statistical machine learning approach. Our goal is to have a powerful, yet conceptually clean, solution. There are two novel key components at the heart of our approach: an echo state recurrent neural network that is trained to indicate fetal QRS complexes, and several increasingly sophisticated versions of statistics-based dynamic programming algorithms, which are derived from and rooted in probability theory. We also employ a standard technique for preprocessing and removing maternal ECG complexes from the signals, but do not take this as the main focus of this work. The proposed approach is quite generic and can be extended to other types of signals and annotations. Open-source code is provided.
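The echo state component described above can be sketched in a few lines: a fixed random reservoir whose states are fit to annotation targets by ridge regression. This is a generic illustration on a toy threshold-marking task (reservoir size, spectral radius, and signal are assumed parameters), not the authors' open-source code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9, seed=0):
    """Random input and recurrent weights; W is rescaled so its largest
    eigenvalue magnitude equals `spectral_radius` (echo state property)."""
    r = np.random.default_rng(seed)
    W_in = r.uniform(-0.5, 0.5, (n_res, n_in))
    W = r.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def run_reservoir(W_in, W, inputs):
    """Drive the reservoir with `inputs` (T x n_in); return states (T x n_res)."""
    x = np.zeros(W.shape[0])
    states = []
    for u_t in inputs:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Ridge-regression readout mapping reservoir states to targets."""
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ targets)

# Toy demonstration: indicate samples where a noisy signal exceeds a threshold,
# loosely analogous to marking QRS complexes in an ECG trace.
T = 500
u = np.sin(np.linspace(0, 40 * np.pi, T)) + 0.1 * rng.standard_normal(T)
target = (u > 0.8).astype(float)
W_in, W = make_reservoir(1, 100)
states = run_reservoir(W_in, W, u[:, None])
w_out = train_readout(states, target)
pred = states @ w_out
print(np.mean((pred > 0.5) == (target > 0.5)))  # fraction of correct indications
```

In the paper's setting the readout indicates fetal QRS locations, and a dynamic-programming pass then selects a physiologically plausible sequence of peaks; the sketch covers only the first stage.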
Application of quasi-distributions for solving inverse problems of neutron and γ-ray transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogosbekyan, L.R.; Lysov, D.A.
The considered inverse problems deal with the calculation of unknown parameters of nuclear installations from known (goal) functionals of neutron/γ-ray distributions. Examples of these problems are the calculation of the automatic control rod position as a function of neutron sensor readings, or the calculation of experimentally corrected values of cross-sections, isotope concentrations, or fuel enrichment via the measured functionals. The authors have developed a new method to solve the inverse problem. It finds the flux density as a quasi-solution of the particle-conservation linear system adjoined to the equalities for the functionals. The method is more effective than the one based on classical perturbation theory. It is suitable for vectorization and can be used successfully in optimization codes.
Estimation of homogeneous nucleation flux via a kinetic model
NASA Technical Reports Server (NTRS)
Wilcox, C. F.; Bauer, S. H.
1991-01-01
The proposed kinetic model for condensation under homogeneous conditions, and for the onset of unidirectional cluster growth in supersaturated gases, does not suffer from the conceptual flaws that characterize classical nucleation theory. When a full set of simultaneous rate equations is solved, a characteristic time emerges for each cluster size, at which the production rate of that size and its rate of conversion to the next size (n + 1) are equal. Procedures for estimating the essential parameters are proposed, and steady-state condensation fluxes J_kin^ss are evaluated. Since there are practical limits to the cluster size that can be incorporated in the set of simultaneous first-order differential equations, a code was developed for computing an approximate J_th^ss based on estimates of a 'constrained equilibrium' distribution and identification of its minimum.
Hamilton-Jacobi theory in multisymplectic classical field theories
NASA Astrophysics Data System (ADS)
de León, Manuel; Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso; Vilariño, Silvia
2017-09-01
The geometric framework for the Hamilton-Jacobi theory developed in the studies of Cariñena et al. [Int. J. Geom. Methods Mod. Phys. 3(7), 1417-1458 (2006)], Cariñena et al. [Int. J. Geom. Methods Mod. Phys. 13(2), 1650017 (2015)], and de León et al. [Variations, Geometry and Physics (Nova Science Publishers, New York, 2009)] is extended for multisymplectic first-order classical field theories. The Hamilton-Jacobi problem is stated for the Lagrangian and the Hamiltonian formalisms of these theories as a particular case of a more general problem, and the classical Hamilton-Jacobi equation for field theories is recovered from this geometrical setting. Particular and complete solutions to these problems are defined and characterized in several equivalent ways in both formalisms, and the equivalence between them is proved. The use of distributions in jet bundles that represent the solutions to the field equations is the fundamental tool in this formulation. Some examples are analyzed and, in particular, the Hamilton-Jacobi equation for non-autonomous mechanical systems is obtained as a special case of our results.
NASA Astrophysics Data System (ADS)
Jara, Daniel; de Dreuzy, Jean-Raynald; Cochepin, Benoit
2017-12-01
Reactive transport modeling contributes to understanding geophysical and geochemical processes in subsurface environments. Operator-splitting methods have been proposed as non-intrusive coupling techniques that optimize the use of existing chemistry and transport codes. In this spirit, we propose a coupler relying on external geochemical and transport codes, with appropriate operator segmentation that enables possible development of additional splitting methods. We provide an object-oriented implementation in TReacLab, developed in the MATLAB environment in a free, open-source frame with an accessible repository. TReacLab contains classical coupling methods, template interfaces, and calling functions for two classical transport and reactive codes (PHREEQC and COMSOL). It is tested on four classical benchmarks with homogeneous and heterogeneous reactions, at equilibrium or kinetically controlled. We show that full decoupling down to the implementation level has a cost in terms of accuracy compared to more integrated and optimized codes. Use of non-intrusive implementations like TReacLab is still justified for coupling independent transport and chemical software at a minimal development effort, but should be systematically and carefully assessed.
Properties of the Boltzmann equation in the classical approximation
Epelbaum, Thomas; Gelis, François; Tanji, Naoto; ...
2014-12-30
We examine the Boltzmann equation with elastic point-like scalar interactions in two different versions of the classical approximation. Solving the Boltzmann equation numerically with the unapproximated collision term poses no problem, and this allows one to study the effect of the ultraviolet cutoff in these approximations. This cutoff dependence in the classical approximations of the Boltzmann equation is closely related to the non-renormalizability of the classical statistical approximation of the underlying quantum field theory. The kinetic-theory setup that we consider here allows one to study the dependence on the ultraviolet cutoff in a much simpler way, since one also has access to the non-approximated result for comparison.
Experimental Observation of Two Features Unexpected from the Classical Theories of Rubber Elasticity
NASA Astrophysics Data System (ADS)
Nishi, Kengo; Fujii, Kenta; Chung, Ung-il; Shibayama, Mitsuhiro; Sakai, Takamasa
2017-12-01
Although the elastic modulus of a Gaussian chain network is thought to be successfully described by classical theories of rubber elasticity, such as the affine and phantom models, verification experiments are largely lacking owing to difficulties in precisely controlling the network structure. We prepared well-defined model polymer networks experimentally, and measured the elastic modulus G for a broad range of polymer concentrations and connectivity probabilities p. In our experiment, we observed two features that were distinct from those predicted by classical theories. First, we observed the critical behavior G ∼ |p − p_c|^1.95 near the sol-gel transition, where p_c is the sol-gel transition point. This scaling law differs from the prediction of classical theories, but can be explained by an analogy between the electrical conductivity of resistor networks and the elasticity of polymer networks. Furthermore, we found that the experimental G-p relations in the region above C* did not follow the affine or phantom theories. Instead, all the G/G_0-p curves fell onto a single master curve when G was normalized by the elastic modulus at p = 1, G_0. We show that the effective medium approximation for Gaussian chain networks explains this master curve.
ERIC Educational Resources Information Center
MacMillan, Peter D.
2000-01-01
Compared classical test theory (CTT), generalizability theory (GT), and multifaceted Rasch model (MFRM) approaches to detecting and correcting for rater variability using responses of 4,930 high school students graded by 3 raters on 9 scales. The MFRM approach identified far more raters as different than did the CTT analysis. GT and Rasch…
Marshaling Resources: A Classic Grounded Theory Study of Online Learners
ERIC Educational Resources Information Center
Yalof, Barbara
2012-01-01
Students who enroll in online courses comprise one quarter of an increasingly diverse student body in higher education today. Yet, it is not uncommon for an online program to lose over 50% of its enrolled students prior to graduation. This study used a classic grounded theory qualitative methodology to investigate the persistent problem of…
ERIC Educational Resources Information Center
Gotsch-Thomson, Susan
1990-01-01
Describes how gender is integrated into a classical social theory course by including a female theorist in the reading assignments and using "The Handmaid's Tale" by Margaret Atwood as the basis for class discussion. Reviews the course objectives and readings; describes the process of the class discussions; and provides student…
The Development of Bayesian Theory and Its Applications in Business and Bioinformatics
NASA Astrophysics Data System (ADS)
Zhang, Yifei
2018-03-01
Bayesian Theory originated from an essay of the British mathematician Thomas Bayes in 1763, and after its development in the 20th century, Bayesian Statistics has taken a significant part in statistical studies across fields. Due to recent breakthroughs in high-dimensional integration, Bayesian Statistics has been improved and perfected, and it can now be used to solve problems that Classical Statistics failed to solve. This paper summarizes the history, concepts, and applications of Bayesian Statistics in five parts: the history of Bayesian Statistics, the weakness of Classical Statistics, Bayesian Theory, and its development and applications. The first two parts compare Bayesian Statistics and Classical Statistics in a macroscopic aspect, and the last three parts focus on Bayesian Theory specifically, from introducing particular Bayesian concepts to their development and finally their applications.
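The contrast this survey draws between Classical and Bayesian estimation can be made concrete with the textbook beta-binomial conjugate update (a generic illustration, not an example taken from the paper):

```python
from fractions import Fraction

# A Beta(a, b) prior on a success probability; after observing k successes in
# n trials the posterior is Beta(a + k, b + n - k). The posterior mean shrinks
# the sample frequency toward the prior mean, which is the Classical/Bayesian
# contrast the survey describes.
def beta_binomial_update(a, b, k, n):
    return a + k, b + (n - k)

def beta_mean(a, b):
    return Fraction(a, a + b)

a, b = 1, 1                              # uniform prior, Beta(1, 1)
a, b = beta_binomial_update(a, b, k=7, n=10)
print(beta_mean(a, b))                   # 8/12 = 2/3, vs. the classical estimate 7/10
```

With more data the posterior mean approaches the classical maximum-likelihood estimate, which is why the two schools agree asymptotically but can differ markedly at small sample sizes.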
NASA Astrophysics Data System (ADS)
Mojahedi, Mahdi; Shekoohinejad, Hamidreza
2018-02-01
In this paper, the temperature distribution in a continuous- and pulsed-end-pumped Nd:YAG rod crystal is determined using nonclassical and classical heat conduction theories. In order to find the temperature distribution in the crystal, heat transfer differential equations with boundary conditions are derived based on the non-Fourier model, and the temperature distribution of the crystal is obtained by an analytical method. Then, by transferring the non-Fourier differential equations to matrix equations and using the finite element method, the temperature and stress at every point of the crystal are calculated in the time domain. According to the results, a comparison between classical and nonclassical theories is presented to investigate rupture power values. In continuous end pumping with equal input powers, non-Fourier theory predicts greater temperature and stress than Fourier theory. It also shows that with an increase in relaxation time, crystal rupture power decreases. In contrast, under single rectangular pulsed end pumping with an equal input power, Fourier theory indicates higher temperature and stress than non-Fourier theory. It is also observed that, when the relaxation time increases, the maximum temperature and stress decrease.
Question 6: coevolution theory of the genetic code: a proven theory.
Wong, Jeffrey Tze-Fei
2007-10-01
The coevolution theory proposes that primordial proteins consisted only of those amino acids readily obtainable from the prebiotic environment, representing about half the twenty encoded amino acids of today, and the missing amino acids entered the system as the code expanded along with pathways of amino acid biosynthesis. The isolation of genetic code mutants, and the antiquity of pretran synthesis revealed by the comparative genomics of tRNAs and aminoacyl-tRNA synthetases, have combined to provide a rigorous proof of the four fundamental tenets of the theory, thus solving the riddle of the structure of the universal genetic code.
Gambini, R; Pullin, J
2000-12-18
We consider general relativity with a cosmological constant as a perturbative expansion around a completely solvable diffeomorphism-invariant field theory. This theory is the λ → ∞ limit of general relativity. This allows an explicit perturbative computational setup in which the quantum states of the theory and the classical observables can be explicitly computed. An unexpected relationship arises at the quantum level between the discrete spectrum of the volume operator and the allowed values of the cosmological constant.
NASA Astrophysics Data System (ADS)
Wild, Walter James
1988-12-01
External nuclear medicine diagnostic imaging of early primary and metastatic lung cancer tumors is difficult due to the poor sensitivity and resolution of existing gamma cameras. Nonimaging counting detectors used for internal tumor detection give ambiguous results because distant background variations are difficult to discriminate from neighboring tumor sites. This suggests that an internal imaging nuclear medicine probe, particularly an esophageal probe, may be advantageously used to detect small tumors because of the ability to discriminate against background variations and the capability to get close to sites neighboring the esophagus. The design, theory of operation, preliminary bench tests, characterization of noise behavior, and optimization of such an imaging probe are the central theme of this work. The central concept lies in the representation of the aperture shell by a sequence of binary digits. This, coupled with the mode of operation (data encoding within an axial slice of space), leads to the fundamental imaging equation, in which the coding operation is conveniently described by a circulant matrix operator. The coding/decoding process is a classic coded-aperture problem, and various estimators to achieve decoding are discussed. Some estimators require a priori information about the object (or object class) being imaged; the only unbiased estimator that does not impose this requirement is the simple inverse-matrix operator. The effects of noise on the estimate (or reconstruction) are discussed for general noise models and various codes/decoding operators. The choice of an optimal aperture for detector count times of clinical relevance is examined using a statistical class-separability formalism.
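The fundamental imaging equation described above, with the coding operation as a circulant matrix and decoding by the unbiased inverse-matrix estimator, can be sketched as follows. The 7-element binary aperture sequence and the toy object are hypothetical, chosen only so that the circulant matrix is invertible; they are not taken from the dissertation.

```python
import numpy as np

def circulant(first_row):
    """Build the circulant coding matrix whose rows are cyclic shifts
    of the binary aperture sequence."""
    n = len(first_row)
    return np.array([np.roll(first_row, k) for k in range(n)])

# Hypothetical 7-element binary aperture; the imaging equation of the
# abstract is d = C f (plus noise, omitted in this noiseless sketch).
aperture = np.array([1, 1, 0, 1, 0, 0, 0], dtype=float)
C = circulant(aperture)

f = np.array([0, 5, 0, 0, 2, 0, 0], dtype=float)  # toy object: two "hot" sites
d = C @ f                                          # encoded detector counts

# Unbiased decoding with the simple inverse-matrix operator, as in the text.
f_hat = np.linalg.solve(C, d)
print(np.allclose(f_hat, f))  # -> True
```

With detector noise added to d, the inverse-matrix estimate remains unbiased but its variance depends on the conditioning of C, which is why the dissertation compares apertures and decoding operators under explicit noise models.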
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lusanna, Luca
2004-08-19
The four (electromagnetic, weak, strong and gravitational) interactions are described by singular Lagrangians and by the Dirac-Bergmann theory of Hamiltonian constraints. As a consequence a subset of the original configuration variables are gauge variables, not determined by the equations of motion. Only at the Hamiltonian level is it possible to separate the gauge variables from the deterministic physical degrees of freedom, the Dirac observables, and to formulate a well-posed Cauchy problem for them, both in special and general relativity. Then the requirement of causality dictates the choice of retarded solutions at the classical level. However, both the problems of the classical theory of the electron, leading to the choice of (1/2)(retarded + advanced) solutions, and the regularization of quantum field theory, leading to the Feynman propagator, introduce anticipatory aspects. The determination of the relativistic Darwin potential as a semi-classical approximation to the Liénard-Wiechert solution for particles with Grassmann-valued electric charges, regularizing the Coulomb self-energies, shows that these anticipatory effects live beyond the semi-classical approximation (tree level) in the form of radiative corrections, at least for the electromagnetic interaction. Talk and 'best contribution' at The Sixth International Conference on Computing Anticipatory Systems CASYS'03, Liege, August 11-16, 2003.
The dynamical mass of a classical Cepheid variable star in an eclipsing binary system.
Pietrzyński, G; Thompson, I B; Gieren, W; Graczyk, D; Bono, G; Udalski, A; Soszyński, I; Minniti, D; Pilecki, B
2010-11-25
Stellar pulsation theory provides a means of determining the masses of pulsating classical Cepheid supergiants: it is the pulsation that causes their luminosity to vary. Such pulsational masses are found to be smaller than the masses derived from stellar evolution theory: this is the Cepheid mass discrepancy problem, for which a solution is missing. An independent, accurate dynamical mass determination for a classical Cepheid variable star (as opposed to type-II Cepheids, low-mass stars with a very different evolutionary history) in a binary system is needed in order to determine which is correct. The accuracy of previous efforts to establish a dynamical Cepheid mass from Galactic single-lined non-eclipsing binaries was typically about 15-30% (refs 6, 7), which is not good enough to resolve the mass discrepancy problem. In spite of many observational efforts, no firm detection of a classical Cepheid in an eclipsing double-lined binary has hitherto been reported. Here we report the discovery of a classical Cepheid in a well-detached, double-lined eclipsing binary in the Large Magellanic Cloud. We determine the mass to a precision of 1% and show that it agrees with its pulsation mass, providing strong evidence that pulsation theory correctly and precisely predicts the masses of classical Cepheids.
Phase-Sensitive Coherence and the Classical-Quantum Boundary in Ghost Imaging
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Hardy, Nicholas D.; Venkatraman, Dheera; Wong, Franco N. C.; Shapiro, Jeffrey H.
2011-01-01
The theory of partial coherence has a long and storied history in classical statistical optics. The vast majority of this work addresses fields that are statistically stationary in time, hence their complex envelopes only have phase-insensitive correlations. The quantum optics of squeezed-state generation, however, depends on nonlinear interactions producing baseband field operators with both phase-insensitive and phase-sensitive correlations. Utilizing quantum light to enhance imaging has been a topic of considerable current interest, much of it involving biphotons, i.e., streams of entangled-photon pairs. Biphotons have been employed for quantum versions of optical coherence tomography, ghost imaging, holography, and lithography. However, their seemingly quantum features have been mimicked with classical-state light, raising the question of where the classical-quantum boundary lies. We have shown, for the case of Gaussian-state light, that this boundary is intimately connected to the theory of phase-sensitive partial coherence. Here we present that theory, contrasting it with the familiar case of phase-insensitive partial coherence, and use it to elucidate the classical-quantum boundary of ghost imaging. We show, both theoretically and experimentally, that classical phase-sensitive light produces ghost images most closely mimicking those obtained with biphotons, and we derive the spatial resolution, image contrast, and signal-to-noise ratio of a standoff-sensing ghost imager, taking into account target-induced speckle.
Long distance quantum communication with quantum Reed-Solomon codes
NASA Astrophysics Data System (ADS)
Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang; Jianggroup Team
We study the construction of quantum Reed-Solomon codes from classical Reed-Solomon codes and show that they achieve the capacity of the quantum erasure channel for multi-level quantum systems. We extend the application of quantum Reed-Solomon codes to long distance quantum communication, investigate the local resource overhead needed for the functioning of one-way quantum repeaters with these codes, and numerically identify the parameter regime where these codes perform better than the known quantum polynomial codes and quantum parity codes. Finally, we discuss the implementation of these codes into time-bin photonic states of qubits and qudits, respectively, and optimize the performance for one-way quantum repeaters.
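The classical ingredient of this construction, an [n, k] Reed-Solomon code that recovers its message from any k surviving symbols (mirroring the quantum erasure-channel statement), can be sketched over a small prime field. The field size, code parameters, and message below are illustrative choices, not values from the abstract.

```python
# Classical Reed-Solomon erasure correction over GF(P). An [n, k] RS code
# evaluates a degree-<k message polynomial at n distinct points; any k
# surviving (x, y) pairs recover the message by Lagrange interpolation.
P = 13  # prime field size, chosen for the toy example

def rs_encode(msg, n):
    """Evaluate the message polynomial at points 1..n (mod P)."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in range(1, n + 1)]

def rs_decode_erasures(points, k):
    """Recover the k message coefficients from >= k surviving (x, y) pairs."""
    xs, ys = zip(*points[:k])
    coeffs = [0] * k
    for j in range(k):
        basis = [1]          # coefficients of prod_{m != j} (x - x_m)
        denom = 1
        for m in range(k):
            if m == j:
                continue
            new = [0] * (len(basis) + 1)   # multiply basis by (x - x_m)
            for i, c in enumerate(basis):
                new[i] = (new[i] - c * xs[m]) % P
                new[i + 1] = (new[i + 1] + c) % P
            basis = new
            denom = denom * (xs[j] - xs[m]) % P
        scale = ys[j] * pow(denom, P - 2, P) % P   # Fermat inverse of denom
        for i in range(k):
            coeffs[i] = (coeffs[i] + scale * basis[i]) % P
    return coeffs

msg = [3, 1, 4]                       # k = 3 message symbols
code = rs_encode(msg, n=7)            # [7, 3] code: tolerates 4 erasures
survivors = list(zip(range(1, 8), code))[2:5]  # only 3 symbols survive
print(rs_decode_erasures(survivors, k=3) == msg)  # -> True
```

The quantum construction in the abstract builds on this property: erased qudits play the role of erased evaluation points, and the code saturates the quantum erasure-channel capacity.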
Di Giulio, Massimo
2017-02-07
Whereas it is extremely easy to prove that "if the biosynthetic relationships between amino acids were fundamental in the structuring of the genetic code, then their physico-chemical properties might also be revealed in the genetic code table"; it is, on the contrary, impossible to prove that "if the physico-chemical properties of amino acids were fundamental in the structuring of the genetic code, then the presence of the biosynthetic relationships between amino acids should not be revealed in the genetic code". And, given that in the genetic code table are mirrored both the biosynthetic relationships between amino acids and their physico-chemical properties, all this would be a test that would falsify the physico-chemical theories of the origin of the genetic code. That is to say, if the physico-chemical properties of amino acids had a fundamental role in organizing the genetic code, then we would not have duly revealed the presence - in the genetic code - of the biosynthetic relationships between amino acids, and on the contrary this has been observed. Therefore, this falsifies the physico-chemical theories of genetic code origin. Whereas, the coevolution theory of the origin of the genetic code would be corroborated by this analysis, because it would be able to give a description of evolution of the genetic code more coherent with the indisputable empirical observations that link both the biosynthetic relationships of amino acids and their physico-chemical properties to the evolutionary organization of the genetic code.
Bertrand's theorem and virial theorem in fractional classical mechanics
NASA Astrophysics Data System (ADS)
Yu, Rui-Yan; Wang, Towe
2017-09-01
Fractional classical mechanics is the classical counterpart of fractional quantum mechanics. The central force problem in this theory is investigated. Bertrand's theorem is generalized, and the virial theorem is revisited, both in three spatial dimensions. In order to produce stable, closed, non-circular orbits, the inverse-square law and Hooke's law should be modified in fractional classical mechanics.
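For reference, the classical (integer-order) Bertrand theorem that this paper generalizes singles out exactly two central potentials for which every bounded orbit is closed:

```latex
V(r) = -\frac{k}{r} \quad \text{(inverse-square force law, Kepler problem)},
\qquad
V(r) = \tfrac{1}{2}\,k\,r^{2} \quad \text{(Hooke's law, isotropic oscillator)}.
```

The abstract's statement is that in fractional classical mechanics these two special force laws must themselves be modified for the closed-orbit property to survive.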
Equilibrium β-limits in classical stellarators
NASA Astrophysics Data System (ADS)
Loizu, J.; Hudson, S. R.; Nührenberg, C.; Geiger, J.; Helander, P.
2017-12-01
A numerical investigation is carried out to understand the equilibrium β-limit in a classical stellarator. The stepped-pressure equilibrium code (Hudson et al., Phys. Plasmas, vol. 19 (11), 2012) is used in order to assess whether or not magnetic islands and stochastic field-lines can emerge at high β. Two modes of operation are considered: a zero-net-current stellarator and a fixed-iota stellarator. Despite the fact that relaxation is allowed (Taylor, Rev. Mod. Phys., vol. 58 (3), 1986, pp. 741-763), the former is shown to maintain good flux surfaces up to the equilibrium β-limit predicted by ideal magnetohydrodynamics (MHD), above which a separatrix forms. The latter, which has no ideal equilibrium β-limit, is shown to develop regions of magnetic islands and chaos at sufficiently high β, thereby providing a `non-ideal β-limit'. Perhaps surprisingly, however, the value of β at which the Shafranov shift of the axis reaches a fraction of the minor radius follows in all cases the scaling laws predicted by ideal MHD. We compare our results to the High-Beta-Stellarator theory of Freidberg (Ideal MHD, 2014, Cambridge University Press) and derive a new prediction for the non-ideal equilibrium β-limit above which chaos emerges.
The Tensile Strength of Liquid Nitrogen
NASA Astrophysics Data System (ADS)
Huang, Jian
1992-01-01
The tensile strength of liquids has been a puzzling subject. On the one hand, the classical nucleation theory has met great success in predicting the nucleation rates of superheated liquids. On the other hand, most reported experimental values of the tensile strength for different liquids are far below the prediction of the classical nucleation theory. In this study, homogeneous nucleation in liquid nitrogen and its tensile strength have been investigated. Different approaches for determining the pressure amplitude were studied carefully. It is shown that Raman-Nath theory, as modified by the introduction of an effective interaction length, can be used to determine the pressure amplitude in the focal plane of a focusing ultrasonic transducer. The results obtained from different diffraction orders are consistent and in good agreement with other approaches, including Debye's theory and solving the KZK equation. The measurement of the tensile strength was carried out in a high-pressure stainless steel dewar. A high-intensity ultrasonic wave was focused into a small volume of liquid nitrogen in a short time period. A probe laser beam passes through the focal region of a concave spherical transducer with a small aperture angle, and the transmitted light is detected with a photodiode. The pressure amplitude at the focus is calculated based on the acoustic power radiated into the liquid. In the experiment, the electrical signal on the transducer is gated at its resonance frequency with gate widths of 20 μs to 0.2 ms and temperatures ranging from 77 K to near 100 K. The calculated pressure amplitude is in agreement with the prediction of classical nucleation theory for nucleation rates from 10^6 to 10^11 bubbles/(cm^3 s). This work provides experimental evidence that the validity of the classical nucleation theory can be extended to negative pressures down to -90 atm.
This is only the second cryogenic liquid to reach the tensile strength predicted from the classical nucleation theory.
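The steep pressure dependence of the classical nucleation rate invoked in this abstract can be sketched numerically. The snippet below is illustrative only: the prefactor is an order-of-magnitude guess and the surface tension value (about 8.9 mN/m for liquid nitrogen near 77 K) is an assumption, not a number taken from the paper.

```python
import math

def nucleation_rate(sigma, delta_p, temperature, prefactor=1e38):
    """Classical homogeneous nucleation rate J = J0 * exp(-W / (kB*T)),
    with energy barrier W = 16*pi*sigma^3 / (3*delta_p^2) for a liquid
    under tension delta_p (Pa). Returns J in bubbles/(m^3 s)."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    barrier = 16.0 * math.pi * sigma ** 3 / (3.0 * delta_p ** 2)
    return prefactor * math.exp(-barrier / (k_b * temperature))

# Assumed, illustrative values: sigma ~ 8.9e-3 N/m near 77 K.
for tension_atm in (60, 75, 90):
    j = nucleation_rate(8.9e-3, tension_atm * 101325.0, 77.0)
    print(f"{tension_atm} atm tension -> J = {j:.2e} bubbles/(m^3 s)")
```

Because J rises so steeply with tension, the onset of detectable bubble nucleation behaves like a sharp "tensile strength" threshold.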
Recent advances in coding theory for near error-free communications
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
Nonequilibrium dynamics of the O(N) model on dS3 and AdS crunches
NASA Astrophysics Data System (ADS)
Kumar, S. Prem; Vaganov, Vladislav
2018-03-01
We study the nonperturbative quantum evolution of the interacting O(N) vector model at large N, formulated on a spatial two-sphere, with time-dependent couplings which diverge at finite time. This model, the so-called "E-frame" theory, is related via a conformal transformation to the interacting O(N) model in three-dimensional global de Sitter spacetime with time-independent couplings. We show that with a purely quartic, relevant deformation the quantum evolution of the E-frame model is regular even when the classical theory is rendered singular at the end of time by the diverging coupling. Time evolution drives the E-frame theory to the large-N Wilson-Fisher fixed point when the classical coupling diverges. We study the quantum evolution numerically for a variety of initial conditions and demonstrate the finiteness of the energy at the classical "end of time". With an additional (time-dependent) mass deformation, quantum backreaction lowers the mass, with a putative smooth time evolution only possible in the limit of infinite quartic coupling. We discuss the relevance of these results for the resolution of crunch singularities in AdS geometries dual to E-frame theories with a classical gravity dual.
Semenov, Alexander; Babikov, Dmitri
2015-12-17
The mixed quantum/classical theory, MQCT, for inelastic scattering of two molecules is developed, in which the internal (rotational, vibrational) motion of both collision partners is treated with quantum mechanics, and the molecule-molecule scattering (translational motion) is described by classical trajectories. The resultant MQCT formalism includes a system of coupled differential equations for the quantum probability amplitudes and the classical equations of motion in the mean-field potential. Numerical tests of this theory are carried out for several of the most important rotational state-to-state transitions in the N2 + H2 system, in a broad range of collision energies. Apart from scattering resonances (at low collision energies), excellent agreement with full-quantum results is obtained, including the excitation thresholds, the maxima of cross sections, and even some smaller features, such as slight oscillations of the energy dependencies. Most importantly, at higher energies the results of MQCT are nearly identical to the full-quantum results, which makes this approach a good alternative to full-quantum calculations, which become computationally expensive at higher collision energies and for heavier collision partners. Extensions of this theory to include vibrational transitions or general asymmetric-top rotor (polyatomic) molecules are relatively straightforward.
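The general idea of propagating quantum amplitudes alongside a classical trajectory in a mean-field force can be caricatured with a two-level Ehrenfest-style toy model. This is an illustrative sketch under assumed parameters, not the authors' full MQCT equations.

```python
import numpy as np

def mqct_step(a, x, p, dt, omega10=1.0, mass=1.0, coupling=0.1):
    """One explicit Euler step of a toy mixed quantum/classical scheme:
    two quantum amplitudes a[0], a[1] coupled by V(x) = coupling*exp(-x^2),
    while the classical coordinate x moves in the mean-field force (hbar = 1)."""
    v = coupling * np.exp(-x ** 2)
    h = np.array([[0.0, v], [v, omega10]])   # two-level Hamiltonian
    a_new = a - 1j * dt * h @ a              # i da/dt = H a
    dvdx = -2.0 * x * v
    mean_force = -dvdx * 2.0 * np.real(np.conj(a[0]) * a[1])  # Ehrenfest-like
    p_new = p + dt * mean_force
    x_new = x + dt * p_new / mass
    return a_new / np.linalg.norm(a_new), x_new, p_new

# Launch the classical coordinate toward the coupling region.
a, x, p = np.array([1.0 + 0j, 0.0 + 0j]), -3.0, 1.0
for _ in range(2000):
    a, x, p = mqct_step(a, x, p, dt=0.01)
```

The renormalization of the amplitudes compensates for the drift of the explicit Euler integrator; a production scheme would use a symplectic or higher-order method instead.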
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Ray Alden; Zou, Ling; Zhao, Haihua
This document summarizes the physical models and mathematical formulations used in the RELAP-7 code. The MOOSE-based RELAP-7 code development is an ongoing effort. The MOOSE framework enables rapid development of the RELAP-7 code, and the developmental efforts and results demonstrate that the RELAP-7 project is on a path to success. This theory manual documents the main features implemented in the RELAP-7 code. Because the code is an ongoing development effort, this RELAP-7 Theory Manual will evolve with periodic updates to keep it current with the state of the development, implementation, and model additions/revisions.
Classical theory of atom-surface scattering: The rainbow effect
NASA Astrophysics Data System (ADS)
Miret-Artés, Salvador; Pollak, Eli
2012-07-01
The scattering of heavy atoms and molecules from surfaces is oftentimes dominated by classical mechanics. A large body of experiments has gathered data on the angular distributions of the scattered species, their energy loss distributions, sticking probability, dependence on surface temperature and more. For many years these phenomena were considered theoretically in the framework of the “washboard model”, in which the interaction of the incident particle with the surface is described in terms of hard-wall potentials. Although this class of models has helped in elucidating some of the features, it leaves open many questions: true potentials are clearly not hard-wall potentials; it does not provide a realistic framework for phonon scattering; and it cannot explain the incident-angle and incident-energy dependence of rainbow scattering, nor provide a consistent theory of sticking. In recent years we have been developing a classical perturbation theory approach which has provided new insight into the dynamics of atom-surface scattering. The theory includes both surface corrugation and interaction with surface phonons, in terms of harmonic baths which are linearly coupled to the system coordinates. This model has been successful in elucidating many new features of rainbow scattering in terms of frictions and bath fluctuations or noise. It has also given new insight into the origins of asymmetry in atomic scattering from surfaces. New phenomena deduced from the theory include friction-induced rainbows, energy-loss rainbows, a theory of super-rainbows, and more. In this review we present the classical theory of atom-surface scattering as well as extensions and implications for semiclassical scattering and the further development of a quantum theory of surface scattering. Special emphasis is given to the inversion of scattering data into information on the particle-surface interactions.
Bojowald, Martin
2008-01-01
Quantum gravity is expected to be necessary in order to understand situations in which classical general relativity breaks down. In particular in cosmology one has to deal with initial singularities, i.e., the fact that the backward evolution of a classical spacetime inevitably comes to an end after a finite amount of proper time. This presents a breakdown of the classical picture and requires an extended theory for a meaningful description. Since small length scales and high curvatures are involved, quantum effects must play a role. Not only the singularity itself but also the surrounding spacetime is then modified. One particular theory is loop quantum cosmology, an application of loop quantum gravity to homogeneous systems, which removes classical singularities. Its implications can be studied at different levels. The main effects are introduced into effective classical equations, which allow one to avoid the interpretational problems of quantum theory. They give rise to new kinds of early-universe phenomenology with applications to inflation and cyclic models. To resolve classical singularities and to understand the structure of geometry around them, the quantum description is necessary. Classical evolution is then replaced by a difference equation for a wave function, which allows an extension of quantum spacetime beyond classical singularities. One main question is how these homogeneous scenarios are related to full loop quantum gravity, which can be dealt with at the level of distributional symmetric states. Finally, the new structure of spacetime arising in loop quantum gravity and its application to cosmology sheds light on more general issues, such as the nature of time. Supplementary material is available for this article at 10.12942/lrr-2008-4.
Accommodating interruptions: A grounded theory of young people with asthma.
Hughes, Mary; Savage, Eileen; Andrews, Tom
2018-01-01
The aim of this study was to develop an explanatory theory on the lives of young people with asthma, the issues affecting them and the impact of asthma on their day-to-day lives. Accommodating Interruptions is a theory that explains young people's concerns about living with asthma. Although national and international asthma management guidelines exist, it is accepted that symptom control of asthma among young people is poor. This study was undertaken using Classic Grounded Theory. Data were collected through in-depth interviews and clinic consultations with young people aged 11-16 years who had had asthma for over 1 year. Data were also collected from participant diaries. Constant comparative analysis, theoretical coding and memo writing were used to develop the substantive theory. The theory explains how young people resolve their main concern of being restricted by accommodating the interruptions in their lives. They do this by assimilating behaviours (balance finding, moderating influence, fitting in and assuming control), minimising the effects of asthma on their everyday lives. The theory of Accommodating Interruptions explains young people's asthma management behaviours in a new way. It allows us to understand how and why young people behave the way they do: they want to participate and be included in everyday activities, events and relationships. The theory adds to the body of knowledge on how young people with asthma live their day-to-day lives, and it challenges some existing viewpoints in the literature regarding their behaviours. The findings have implications for developing services to support young people in a more meaningful way as they accommodate the interruptions associated with asthma in their lives. © 2017 John Wiley & Sons Ltd.
Quantum Kronecker sum-product low-density parity-check codes with finite rate
NASA Astrophysics Data System (ADS)
Kovalev, Alexey A.; Pryadko, Leonid P.
2013-07-01
We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and generalized bicycle codes by MacKay as limiting cases. The construction allows for both the lower and the upper bounds on the minimum distance; they scale as a square root of the block length. Many thus defined codes have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.
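The hypergraph-product construction that this abstract generalizes can be written down concretely. The sketch below builds the CSS check matrices from two classical parity-check matrices and verifies the commutation condition; it uses a toy repetition code as input and is the standard Tillich-Zémor construction, not the hyperbicycle ansatz itself.

```python
import numpy as np

def hypergraph_product(h1, h2):
    """Hypergraph-product (Tillich-Zemor) construction: from two classical
    parity-check matrices h1 (m1 x n1) and h2 (m2 x n2), build CSS check
    matrices H_X, H_Z acting on n1*n2 + m1*m2 qubits."""
    m1, n1 = h1.shape
    m2, n2 = h2.shape
    hx = np.hstack([np.kron(h1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), h2.T)])
    hz = np.hstack([np.kron(np.eye(n1, dtype=int), h2),
                    np.kron(h1.T, np.eye(m2, dtype=int))])
    return hx % 2, hz % 2

# Toy input: parity checks of the classical [3, 1] repetition code.
h_rep = np.array([[1, 1, 0], [0, 1, 1]])
hx, hz = hypergraph_product(h_rep, h_rep)
# CSS condition: X and Z stabilizers must commute, i.e. Hx Hz^T = 0 (mod 2).
assert not np.any(hx @ hz.T % 2)
print("qubits:", hx.shape[1])  # -> qubits: 13  (3*3 + 2*2)
```

The stabilizer weights of the product code are bounded by the row and column weights of the input matrices, which is what makes the result an LDPC-type quantum code when the inputs are sparse.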
Aging Theories for Establishing Safe Life Spans of Airborne Critical Structural Components
NASA Technical Reports Server (NTRS)
Ko, William L.
2003-01-01
New aging theories have been developed to establish the safe life span of airborne critical structural components such as B-52B aircraft pylon hooks for carrying air-launch drop-test vehicles. The new aging theories use the equivalent-constant-amplitude loading spectrum to represent the actual random loading spectrum with the same damaging effect. The crack growth due to random loading cycling of the first flight is calculated using the half-cycle theory, and then extrapolated to all the crack growths of the subsequent flights. The predictions of the new aging theories (finite difference aging theory and closed-form aging theory) are compared with the classical flight-test life theory and the previously developed Ko first- and Ko second-order aging theories. The new aging theories predict the number of safe flights as considerably lower than that predicted by the classical aging theory, and slightly lower than those predicted by the Ko first- and Ko second-order aging theories due to the inclusion of all the higher order terms.
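The cycle-by-cycle crack-growth integration underlying such life predictions can be illustrated with a textbook Paris-law sketch. This is not Ko's half-cycle formulation, and all constants below are assumed for illustration only.

```python
import math

def cycles_to_failure(a0, a_crit, d_sigma, c=1e-8, m=3.0, y=1.0):
    """Integrates the Paris crack-growth law da/dN = c * (dK)^m, with
    stress-intensity range dK = y * d_sigma * sqrt(pi * a), one cycle at
    a time, until the crack grows from length a0 to a_crit (metres)."""
    a, n = a0, 0
    while a < a_crit:
        dk = y * d_sigma * math.sqrt(math.pi * a)
        a += c * dk ** m
        n += 1
    return n

# Assumed, illustrative constants; equivalent constant-amplitude stress
# range d_sigma in MPa-consistent units.
print(cycles_to_failure(a0=1e-3, a_crit=10e-3, d_sigma=50.0))
```

Doubling the equivalent stress range cuts the predicted life by roughly 2^m, which is why replacing a random spectrum with an equivalent constant-amplitude one with "the same damaging effect" is the critical modelling step.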
Bowers, Jeffrey S
2009-01-01
A fundamental claim associated with parallel distributed processing (PDP) theories of cognition is that knowledge is coded in a distributed manner in mind and brain. This approach rejects the claim that knowledge is coded in a localist fashion, with words, objects, and simple concepts (e.g. "dog") coded with their own dedicated representations. One of the putative advantages of this approach is that the theories are biologically plausible. Indeed, advocates of the PDP approach often highlight the close parallels between distributed representations learned in connectionist models and neural coding in the brain, and often dismiss localist (grandmother cell) theories as biologically implausible. The author reviews a range of data that strongly challenge this claim and shows that localist models provide a better account of single-cell recording studies. The author also contrasts local and alternative distributed coding schemes (sparse and coarse coding) and argues that the common rejection of grandmother cell theories in neuroscience is due to a misunderstanding about how localist models behave. The author concludes that the localist representations embedded in theories of perception and cognition are consistent with neuroscience; biology only calls into question the distributed representations often learned in PDP models.
On the co-creation of classical and modern physics.
Staley, Richard
2005-12-01
While the concept of "classical physics" has long framed our understanding of the environment from which modern physics emerged, it has consistently been read back into a period in which the physicists concerned initially considered their work in quite other terms. This essay explores the shifting currency of the rich cultural image of the classical/modern divide by tracing empirically different uses of "classical" within the physics community from the 1890s to 1911. A study of fin-de-siècle addresses shows that the earliest general uses of the concept proved controversial. Our present understanding of the term was in large part shaped by its incorporation (in different ways) within the emerging theories of relativity and quantum theory--where the content of "classical" physics was defined by proponents of the new. Studying the diverse ways in which Boltzmann, Larmor, Poincaré, Einstein, Minkowski, and Planck invoked the term "classical" will help clarify the critical relations between physicists' research programs and their use of worldview arguments in fashioning modern physics.
Contact stresses in gear teeth: A new method of analysis
NASA Technical Reports Server (NTRS)
Somprakit, Paisan; Huston, Ronald L.; Oswald, Fred B.
1991-01-01
A new, innovative procedure called point load superposition is presented for determining the contact stresses in mating gear teeth. It is believed that this procedure will greatly extend both the range of applicability and the accuracy of gear contact stress analysis. Point load superposition is based upon fundamental solutions from the theory of elasticity. It is an iterative numerical procedure which has distinct advantages over the classical Hertz method, the finite element method, and existing applications of the boundary element method. Specifically, friction and sliding effects, which are either excluded from or difficult to study with the classical methods, are routinely handled with the new procedure. Presented here are the basic theory and the algorithms. Several examples are given. Results are consistent with those of the classical theories. Applications to spur gears are discussed.
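For comparison, the classical Hertz solution against which such procedures are benchmarked can be computed directly. The sketch below uses the standard line-contact formulas for two parallel cylinders (a common approximation for mating gear teeth), with assumed, illustrative steel-on-steel values.

```python
import math

def hertz_line_contact(force, length, r1, r2, e1, e2, nu1=0.3, nu2=0.3):
    """Classical Hertz solution for two parallel cylinders in line contact.
    Returns the contact half-width b (m) and peak pressure p_max (Pa)."""
    e_star = 1.0 / ((1 - nu1 ** 2) / e1 + (1 - nu2 ** 2) / e2)  # effective modulus
    r_eff = 1.0 / (1.0 / r1 + 1.0 / r2)                          # effective radius
    b = math.sqrt(4.0 * force * r_eff / (math.pi * length * e_star))
    p_max = 2.0 * force / (math.pi * b * length)
    return b, p_max

# Assumed values: 10 kN over a 20 mm face width, 30/50 mm tooth radii, steel.
b, p_max = hertz_line_contact(force=1e4, length=0.02,
                              r1=0.03, r2=0.05, e1=210e9, e2=210e9)
print(f"half-width b = {b * 1e3:.2f} mm, p_max = {p_max / 1e6:.0f} MPa")
```

Note that the Hertz solution is frictionless by construction, which is exactly the limitation that motivates methods able to handle friction and sliding.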
System Design Description for the TMAD Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finfrock, S.H.
This document serves as the System Design Description (SDD) for the TMAD Code System, which includes the TMAD code and the LIBMAKR code. The SDD provides a detailed description of the theory behind the code, and the implementation of that theory. It is essential for anyone who is attempting to review or modify the code or who otherwise needs to understand the internal workings of the code. In addition, this document includes, in Appendix A, the System Requirements Specification for the TMAD System.
Di Giulio, Massimo
2017-11-07
The coevolution theory of the origin of the genetic code suggests that the organization of the genetic code coevolved with the biosynthetic relationships between amino acids. The mechanism that allowed this coevolution was based on tRNA-like molecules, on which, according to this theory, the biosynthetic transformations between amino acids occurred. This mechanism makes a prediction about the role the aminoacyl-tRNA synthetases (ARSs) should have played in the origin of the genetic code. Indeed, if the biosynthetic transformations between amino acids occurred on tRNA-like molecules, then there was no need to link amino acids to these molecules, because amino acids were already charged on tRNA-like molecules, as the coevolution theory suggests. Even though the ARSs are responsible for the first interaction between a component of nucleic acids and a component of proteins, under the coevolution theory the role of the ARSs in the origin of the genetic code should have been entirely marginal. Therefore, I have conducted a further analysis of the distribution of the two classes of ARSs, and of their subclasses, in the genetic code table, in order to perform a falsification test of the coevolution theory. Indeed, if the distribution of the ARSs within the genetic code were highly significant, the coevolution theory would be falsified, since the mechanism on which it is based does not predict a fundamental role for the ARSs in the origin of the genetic code. I found that the statistical significance of the distribution of the two classes of ARSs in the table of the genetic code is low or marginal, whereas that of the subclasses of ARSs is statistically significant. However, this is in perfect agreement with the postulates of the coevolution theory. Indeed, the only case of statistical significance regarding the classes of ARSs is appreciable for the CAG code, whereas for its complement, the UNN/NUN code, only a marginal significance is measurable. These two codes code roughly for the two ARS classes: in particular, the CAG code for class II and the UNN/NUN code for class I. Furthermore, the subclasses of ARSs show a statistically significant distribution in the genetic code table. Nevertheless, the more sensible explanation for these observations would be the following. The observation linking the two classes of ARSs to the CAG and UNN/NUN codes, and the statistical significance of the distribution of the subclasses of ARSs in the genetic code table, would be only a secondary effect of the highly significant distribution of the polarity of amino acids and of their biosynthetic relationships in the genetic code. That is to say, the polarity of amino acids and their biosynthetic relationships would have conditioned the evolution of the ARSs, so that their presence in the genetic code would be detectable even if the ARSs had not, on their own, directly influenced the evolutionary organization of the genetic code. In other words, the role that the ARSs had in the origin of the genetic code would have been entirely marginal. This conclusion is in perfect accord with the predictions of the coevolution theory. Conversely, it contrasts, at least partially, with the physicochemical theories of the origin of the genetic code, because they foresee a much more active role for the ARSs in the origin of the organization of the genetic code. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Survey of Progress in Coding Theory in the Soviet Union. Final Report.
ERIC Educational Resources Information Center
Kautz, William H.; Levitt, Karl N.
The results of a comprehensive technical survey of all published Soviet literature in coding theory and its applications--over 400 papers and books appearing before March 1967--are described in this report. Noteworthy Soviet contributions are discussed, including codes for the noiseless channel, codes that correct asymmetric errors, decoding for…
Dual coding theory, word abstractness, and emotion: a critical review of Kousta et al. (2011).
Paivio, Allan
2013-02-01
Kousta, Vigliocco, Del Campo, Vinson, and Andrews (2011) questioned the adequacy of dual coding theory and the context availability model as explanations of representational and processing differences between concrete and abstract words. They proposed an alternative approach that focuses on the role of emotional content in the processing of abstract concepts. Their dual coding critique is, however, based on impoverished and, in some respects, incorrect interpretations of the theory and its implications. This response corrects those gaps and misinterpretations and summarizes research findings that show predicted variations in the effects of dual coding variables in different tasks and contexts. Especially emphasized is an empirically supported dual coding theory of emotion that goes beyond the Kousta et al. emphasis on emotion in abstract semantics. 2013 APA, all rights reserved
Brassey, Charlotte A.; Margetts, Lee; Kitchener, Andrew C.; Withers, Philip J.; Manning, Phillip L.; Sellers, William I.
2013-01-01
Classic beam theory is frequently used in biomechanics to model the stress behaviour of vertebrate long bones, particularly when creating intraspecific scaling models. Although methodologically straightforward, classic beam theory requires complex irregular bones to be approximated as slender beams, and the errors associated with simplifying complex organic structures to such an extent are unknown. Alternative approaches, such as finite element analysis (FEA), while much more time-consuming to perform, require no such assumptions. This study compares the results obtained using classic beam theory with those from FEA to quantify the beam theory errors and to provide recommendations about when a full FEA is essential for reasonable biomechanical predictions. High-resolution computed tomographic scans of eight vertebrate long bones were used to calculate diaphyseal stress owing to various loading regimes. Under compression, FEA values of minimum principal stress (σmin) were on average 142 per cent (±28% s.e.) larger than those predicted by beam theory, with deviation between the two models correlated to shaft curvature (two-tailed p = 0.03, r2 = 0.56). Under bending, FEA values of maximum principal stress (σmax) and beam theory values differed on average by 12 per cent (±4% s.e.), with deviation between the models significantly correlated to cross-sectional asymmetry at midshaft (two-tailed p = 0.02, r2 = 0.62). In torsion, assuming maximum stress values occurred at the location of minimum cortical thickness brought beam theory and FEA values closest in line, and in this case FEA values of τtorsion were on average 14 per cent (±5% s.e.) higher than beam theory. Therefore, FEA is the preferred modelling solution when estimates of absolute diaphyseal stress are required, although values calculated by beam theory for bending may be acceptable in some situations. PMID:23173199
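The classic beam theory estimate that the study compares against FEA is the textbook formula sigma = M*c/I. A minimal sketch for a hollow circular diaphysis follows; the dimensions and bending moment are assumed for illustration and are not taken from the study.

```python
import math

def bending_stress(moment, r_out, r_in):
    """Classic beam theory peak bending stress for a hollow circular
    section: sigma = M * c / I, with c = r_out and
    I = pi * (r_out**4 - r_in**4) / 4. SI units (result in Pa)."""
    second_moment = math.pi * (r_out ** 4 - r_in ** 4) / 4.0
    return moment * r_out / second_moment

# Assumed, illustrative long-bone dimensions: 15 mm outer radius,
# 5 mm medullary radius, 100 N*m bending moment.
sigma = bending_stress(moment=100.0, r_out=0.015, r_in=0.005)
print(f"peak bending stress ~ {sigma / 1e6:.1f} MPa")
```

The comparison in the study amounts to asking how far this one-line idealization, which assumes a slender, straight, symmetric beam, drifts from the FEA result as curvature and cross-sectional asymmetry grow.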
Generalized Quantum Theory of Bianchi IX Cosmologies
NASA Astrophysics Data System (ADS)
Craig, David; Hartle, James
2003-04-01
We apply sum-over-histories generalized quantum theory to the closed homogeneous minisuperspace Bianchi IX cosmological model. We sketch how the probabilities in decoherent sets of alternative, coarse-grained histories of this model universe are calculated. We consider, in particular, the probabilities for classical evolution in a suitable coarse-graining. For a restricted class of initial conditions and coarse-grainings, we exhibit the approximate decoherence of alternative histories in which the universe behaves classically and those in which it does not, illustrating the prediction that these universes will evolve in an approximately classical manner with a probability near unity.
Generalized mutual information and Tsirelson's bound
NASA Astrophysics Data System (ADS)
Wakakuwa, Eyuri; Murao, Mio
2014-12-01
We introduce a generalization of the quantum mutual information between a classical system and a quantum system into the mutual information between a classical system and a system described by general probabilistic theories. We apply this generalized mutual information (GMI) to a derivation of Tsirelson's bound from information causality, and prove that Tsirelson's bound can be derived from the chain rule of the GMI. By using the GMI, we formulate the "no-supersignalling condition" (NSS), that the assistance of correlations does not enhance the capability of classical communication. We prove that NSS is never violated in any no-signalling theory.
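In the fully classical special case, the mutual information that the GMI generalizes is straightforward to compute from a joint distribution. The sketch below evaluates the standard I(X;Y); it illustrates the quantity being generalized, not the GMI itself.

```python
import math

def mutual_information(p_xy):
    """Classical mutual information I(X;Y) in bits, from a joint
    distribution given as a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in p_xy.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in p_xy.items() if p > 0)

# Perfectly correlated bits carry one bit of mutual information.
joint = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information(joint))  # -> 1.0
```

The chain rule used in the derivation, I(X;YZ) = I(X;Y) + I(X;Z|Y), holds identically for this classical quantity; the paper's point is that an analogous rule for the GMI is strong enough to yield Tsirelson's bound.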
Budiyono, Agung; Rohrlich, Daniel
2017-11-03
Where does quantum mechanics part ways with classical mechanics? How does quantum randomness differ fundamentally from classical randomness? We cannot fully explain how the theories differ until we can derive them within a single axiomatic framework, allowing an unambiguous account of how one theory is the limit of the other. Here we derive non-relativistic quantum mechanics and classical statistical mechanics within a common framework. The common axioms include conservation of average energy and conservation of probability current. But two axioms distinguish quantum mechanics from classical statistical mechanics: an "ontic extension" defines a nonseparable (global) random variable that generates physical correlations, and an "epistemic restriction" constrains allowed phase space distributions. The ontic extension and epistemic restriction, with strength on the order of Planck's constant, imply quantum entanglement and uncertainty relations. This framework suggests that the wave function is epistemic, yet it does not provide an ontic dynamics for individual systems.
"Fathers" and "sons" of theories in cell physiology: the membrane theory.
Matveev, V V; Wheatley, D N
2005-12-16
The last 50 years in the history of the life sciences are remarkable for a new and important feature that looks like a great threat to their future. The profound specialization dominating quickly developing fields of science is causing a crisis of the scientific method. The essence of the method is the unity of two elements: the experimental data and the theory that explains them. Classically, the "fathers" of science were the creators of new ideas and theories. They were the true experts of their own theories. It is only they who had the right to say: "I am the theory". In other words, they were the carriers of theories, of theoretical knowledge. The fathers provided the necessary logical integrity to their theories, since theories in biology have yet to be grounded in strict mathematical proofs. This is not true for the "sons". As a result of massive specialization, modern experts operate in very confined spaces. They formulate particular rules far from the level of theory. The main theories of science are known to them only at the textbook level. Nowadays, nobody can say: "I am the theory". With whom, then, is it possible to discuss on a broader theoretical level today? How can a classical theory--for example, the membrane one--be changed or even disproved under these conditions? How can the "sons", with their narrow education, catch sight of the membrane theory's defects? As a result, "global" theories have few critics and little control. Due to specialization, we have lost the ability to work at the experimental level of biology within the correct or appropriate theoretical context. The scientific method in its classic form is now being rapidly eroded. A good case can be made for the "Membrane Theory", to which we will largely refer throughout this article.
The Coding of Biological Information: From Nucleotide Sequence to Protein Recognition
NASA Astrophysics Data System (ADS)
Štambuk, Nikola
The paper reviews the classic results of Swanson, Dayhoff, Grantham, Blalock and Root-Bernstein, which link genetic code nucleotide patterns to the protein structure, evolution and molecular recognition. Symbolic representation of the binary addresses defining particular nucleotide and amino acid properties is discussed, with consideration of: structure and metric of the code, direct correspondence between amino acid and nucleotide information, and molecular recognition of the interacting protein motifs coded by the complementary DNA and RNA strands.
Berthelsen, Connie Bøttcher; Lindhardt, Tove; Frederiksen, Kirsten
2017-06-01
This paper presents a discussion of the differences in using participant observation as a data collection method, comparing the classic grounded theory methodology of Barney Glaser with the constructivist grounded theory methodology of Kathy Charmaz. Participant observations allow nursing researchers to experience activities and interactions directly in situ. However, using participant observation as a data collection method can be done in many ways, depending on the chosen grounded theory methodology, and may produce different results. This discussion shows that the differences between using participant observations in classic and constructivist grounded theory can be considerable, and that grounded theory researchers should adhere to the method descriptions for performing participant observations of the selected grounded theory methodology to enhance the quality of research. © 2016 Nordic College of Caring Science.
The Institution of Sociological Theory in Canada.
Guzman, Cinthya; Silver, Daniel
2018-02-01
Using theory syllabi and departmental data collected for three academic years, this paper investigates the institutional practice of theory in sociology departments across Canada. In particular, it examines the position of theory within the sociological curriculum, and how this varies among universities. Taken together, our analyses indicate that theory remains deeply institutionalized at the core of sociological education and Canadian sociologists' self-understanding; that theorists as a whole show some coherence in how they define themselves, but differ in various ways, especially along lines of region, intellectual background, and gender; that despite these differences, the classical versus contemporary heuristic largely cuts across these divides, as does the strongly ingrained position of a small group of European authors as classics of the discipline as a whole. Nevertheless, who is a classic remains an unsettled question, alternatives to the "classical versus contemporary" heuristic do exist, and theorists' syllabi reveal diverse "others" as potential candidates. Our findings show that the field of sociology is neither marked by universal agreement nor by absolute division when it comes to its theoretical underpinnings. To the extent that they reveal a unified field, the findings suggest that unity lies more in a distinctive form than in a distinctive content, which defines the space and structure of the field of sociology. © 2018 Canadian Sociological Association/La Société canadienne de sociologie.
On the effective field theory of intersecting D3-branes
NASA Astrophysics Data System (ADS)
Abbaspur, Reza
2018-05-01
We study the effective field theory of two intersecting D3-branes with one common dimension along the lines recently proposed in ref. [1]. We introduce a systematic way of deriving the classical effective action to arbitrary orders in perturbation theory. Using a proper renormalization prescription to handle logarithmic divergences arising at all orders in the perturbation series, we recover the first order renormalization group equation of ref. [1] plus an infinite set of higher order equations. We show the consistency of the higher order equations with the first order one and hence interpret the first order result as an exact RG flow equation in the classical theory.
NASA Technical Reports Server (NTRS)
Zeng, X. C.; Stroud, D.
1989-01-01
The previously developed Ginzburg-Landau theory for calculating the crystal-melt interfacial tension of bcc elements is extended to treat the classical one-component plasma (OCP), the charged fermion system, and the Bose crystal. For the OCP, a direct application of the theory of Shih et al. (1987) yields for the surface tension 0.0012(Z²e²/a³), where Ze is the ionic charge and a is the radius of the ionic sphere. The Bose crystal-melt interface is treated by a quantum extension of the classical density-functional theory, using the Feynman formalism to estimate the relevant correlation functions. The theory is applied to the metastable He-4 solid-superfluid interface at T = 0, with a resulting surface tension of 0.085 erg/sq cm, in reasonable agreement with the value extrapolated from the measured surface tension of the bcc solid in the range 1.46-1.76 K. These results suggest that the density-functional approach is a satisfactory mean-field theory for estimating the equilibrium properties of liquid-solid interfaces, given knowledge of the uniform phases.
The infinite medium Green's function for neutron transport in plane geometry 40 years later
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.D.
1993-01-01
In 1953, the first of what was supposed to be two volumes on neutron transport theory was published. The monograph, entitled "Introduction to the Theory of Neutron Diffusion" by Case et al., appeared as a Los Alamos National Laboratory report and was to be followed by a second volume, which never appeared as intended because of the death of Placzek. Instead, Case and Zweifel collaborated on the now classic work entitled Linear Transport Theory, in which the underlying mathematical theory of linear transport was presented. The initial monograph, however, represented the coming of age of neutron transport theory, which had its roots in radiative transfer and kinetic theory. In addition, it provided the first benchmark results, along with the mathematical development, for several fundamental neutron transport problems. In particular, one-dimensional infinite medium Green's functions for the monoenergetic transport equation in plane and spherical geometries were considered, complete with numerical results to be used as standards to guide code development for applications. Unfortunately, because of the limited computational resources of the day, some numerical results were incorrect. Also, only conventional mathematics and numerical methods were used, because the transport theorists of the day were just becoming acquainted with more modern mathematical approaches. In this paper, the Green's function solution is revisited in light of modern numerical benchmarking methods, with an emphasis on evaluation rather than theoretical results. The primary motivation for considering the Green's function at this time is its emerging use in solving finite and heterogeneous media transport problems.
NASA Astrophysics Data System (ADS)
Brynjolfsson, Ari
2002-04-01
Einstein's general theory of relativity assumes that photons do not change frequency as they move from the Sun to the Earth. This assumption is correct in classical physics. All experiments proving general relativity are in the domain of classical physics. These include the tests by Pound et al. of the gravitational redshift of 14.4 keV photons; the rocket experiments by Vessot et al.; the Galileo solar redshift experiments by Krisher et al.; the gravitational deflection of light experiments by Riveros and Vucetich; and the delay of echoes of radar signals passing close to the Sun as observed by Shapiro et al. Bohr's correspondence principle assures that a quantum mechanical theory of general relativity agrees with Einstein's classical theory when frequency and gravitational field gradient approach zero, or when photons cannot interact with the gravitational field. When we treat photons as quantum mechanical particles, we find that the gravitational force on photons is reversed (antigravity). This modified theory contradicts the equivalence principle, but is consistent with all experiments. Solar lines and distant stars are redshifted in accordance with the author's plasma redshift theory. These changes result in a beautifully consistent cosmology.
An optimization program based on the method of feasible directions: Theory and users guide
NASA Technical Reports Server (NTRS)
Belegundu, Ashok D.; Berke, Laszlo; Patnaik, Surya N.
1994-01-01
The theory and user instructions for an optimization code based on the method of feasible directions are presented. The code was written for wide distribution and ease of attachment to other simulation software. Although the theory of the method of feasible directions was developed in the 1960s, many considerations are involved in its actual implementation as a computer code. Included in the code are a number of features to improve robustness in optimization. The search direction is obtained by solving a quadratic program using an interior method based on Karmarkar's algorithm. The theory is discussed with a focus on the important and often overlooked role played by the various parameters guiding the iterations within the program. Also discussed is a robust approach for handling infeasible starting points. The code was validated by solving a variety of structural optimization test problems that have known solutions obtained by other optimization codes. It has been observed that this code is robust: it has solved a variety of problems from different starting points. However, the code is inefficient in that it takes considerable CPU time compared with certain other available codes. Further work is required to improve its efficiency while retaining its robustness.
ERIC Educational Resources Information Center
Wilson, Mark; Allen, Diane D.; Li, Jun Corser
2006-01-01
This paper compares the approach and resultant outcomes of item response models (IRMs) and classical test theory (CTT). First, it reviews basic ideas of CTT, and compares them to the ideas about using IRMs introduced in an earlier paper. It then applies a comparison scheme based on the AERA/APA/NCME "Standards for Educational and…
ERIC Educational Resources Information Center
Culpepper, Steven Andrew
2013-01-01
A classic topic in the fields of psychometrics and measurement has been the impact of the number of scale categories on test score reliability. This study builds on previous research by further articulating the relationship between item response theory (IRT) and classical test theory (CTT). Equations are presented for comparing the reliability and…
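The attenuation effect this abstract studies can be illustrated numerically. The sketch below is not from the paper: the latent-trait model, item count, sample size, and discretization rule are all illustrative assumptions. It simulates Likert-style items by discretizing a noisy continuous trait into k categories and computes Cronbach's alpha, the basic CTT reliability estimate, which typically rises as the number of scale categories grows.

```python
import random
import statistics

def cronbach_alpha(items):
    # items: list of per-item score lists, respondents in the same order.
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]
    item_var_sum = sum(statistics.pvariance(i) for i in items)
    return k / (k - 1) * (1 - item_var_sum / statistics.pvariance(totals))

def simulate_alpha(n_cats, n_items=10, n_resp=2000, seed=1):
    # Each item = latent trait + unit noise, cut into n_cats equal-width bins.
    rng = random.Random(seed)
    thetas = [rng.gauss(0, 1) for _ in range(n_resp)]
    items = []
    for _ in range(n_items):
        raw = [t + rng.gauss(0, 1) for t in thetas]
        lo, hi = min(raw), max(raw)
        step = (hi - lo) / n_cats
        items.append([min(int((x - lo) / step), n_cats - 1) for x in raw])
    return cronbach_alpha(items)

for k in (2, 3, 5, 7):
    print(k, "categories -> alpha =", round(simulate_alpha(k), 3))
```

Under these assumptions, the two-category (dichotomized) version loses the most information and yields the lowest alpha; gains flatten out beyond roughly five to seven categories.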
ERIC Educational Resources Information Center
Mason, Brandon; Smithey, Martha
2012-01-01
This study examines Merton's Classical Strain Theory (1938) as a causative factor in intimate partner violence among college students. We theorize that college students experience general life strain and cumulative strain as they pursue the goal of a college degree. We test this strain on the likelihood of using intimate partner violence. Strain…
ERIC Educational Resources Information Center
Schlingman, Wayne M.; Prather, Edward E.; Wallace, Colin S.; Brissenden, Gina; Rudolph, Alexander L.
2012-01-01
This paper is the first in a series of investigations into the data from the recent national study using the Light and Spectroscopy Concept Inventory (LSCI). In this paper, we use classical test theory to form a framework of results that will be used to evaluate individual item difficulties, item discriminations, and the overall reliability of the…
Classical closure theory and Lam's interpretation of epsilon-RNG
NASA Technical Reports Server (NTRS)
Zhou, YE
1995-01-01
Lam's phenomenological epsilon-renormalization group (RNG) model is quite different from the other members of that group. It does not make use of the correspondence principle and the epsilon-expansion procedure. We demonstrate that Lam's epsilon-RNG model is essentially the physical space version of the classical closure theory in spectral space and consider the corresponding treatment of the eddy viscosity and energy backscatter.
New variables for classical and quantum gravity
NASA Technical Reports Server (NTRS)
Ashtekar, Abhay
1986-01-01
A Hamiltonian formulation of general relativity based on certain spinorial variables is introduced. These variables simplify the constraints of general relativity considerably and enable one to imbed the constraint surface in the phase space of Einstein's theory into that of Yang-Mills theory. The imbedding suggests new ways of attacking a number of problems in both classical and quantum gravity. Some illustrative applications are discussed.
ERIC Educational Resources Information Center
Sussman, Joshua; Beaujean, A. Alexander; Worrell, Frank C.; Watson, Stevie
2013-01-01
Item response models (IRMs) were used to analyze Cross Racial Identity Scale (CRIS) scores. Rasch analysis scores were compared with classical test theory (CTT) scores. The partial credit model demonstrated a high goodness of fit and correlations between Rasch and CTT scores ranged from 0.91 to 0.99. CRIS scores are supported by both methods.…
Conveying the Complex: Updating U.S. Joint Systems Analysis Doctrine with Complexity Theory
2013-12-10
Quantum kinetic theory of the filamentation instability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bret, A.; Haas, F.
2011-07-15
The quantum electromagnetic dielectric tensor for a multi-species plasma is re-derived from the gauge-invariant Wigner-Maxwell system and presented in a form very similar to the classical one. The resulting expression is then applied to a quantum kinetic theory of the electromagnetic filamentation instability. Comparison is made with the quantum fluid theory including a Bohm pressure term and with the cold classical plasma result. A number of analytical expressions are derived for the cutoff wave vector, the largest growth rate, and the most unstable wave vector.
A classical density-functional theory for describing water interfaces.
Hughes, Jessica; Krebs, Eric J; Roundy, David
2013-01-14
We develop a classical density functional for water which combines the White Bear fundamental-measure theory (FMT) functional for the hard sphere fluid with attractive interactions based on the statistical associating fluid theory variable range (SAFT-VR). This functional reproduces the properties of water at both long and short length scales over a wide range of temperatures and is computationally efficient, comparable to the cost of FMT itself. We demonstrate our functional by applying it to systems composed of two hard rods, four hard rods arranged in a square, and hard spheres in water.
Geometry, topology, and string theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varadarajan, Uday
A variety of scenarios are considered which shed light upon the uses and limitations of classical geometric and topological notions in string theory. The primary focus is on situations in which D-brane or string probes of a given classical space-time see the geometry quite differently than one might naively expect. In particular, situations in which extra dimensions, non-commutative geometries as well as other non-local structures emerge are explored in detail. Further, a preliminary exploration of such issues in Lorentzian space-times with non-trivial causal structures within string theory is initiated.
ERIC Educational Resources Information Center
Rupley, William H.; Paige, David D.; Rasinski, Timothy V.; Slough, Scott W.
2015-01-01
Paivio's Dual-Coding Theory (1991) and Mayer's Multimedia Principle (2000) form the foundation for proposing a multi-coding theory centered around Multi-Touch Tablets and the newest generation of e-textbooks to scaffold struggling readers in reading and learning from science textbooks. Using E. O. Wilson's "Life on Earth: An Introduction"…
An analysis of the metabolic theory of the origin of the genetic code
NASA Technical Reports Server (NTRS)
Amirnovin, R.; Bada, J. L. (Principal Investigator)
1997-01-01
A computer program was used to test Wong's coevolution theory of the genetic code. The codon correlations between the codons of biosynthetically related amino acids in the universal genetic code and in randomly generated genetic codes were compared. It was determined that many codon correlations are also present within random genetic codes and that among the random codes there are always several which have many more correlations than that found in the universal code. Although the number of correlations depends on the choice of biosynthetically related amino acids, the probability of choosing a random genetic code with the same or greater number of codon correlations as the universal genetic code was found to vary from 0.1% to 34% (with respect to a fairly complete listing of related amino acids). Thus, Wong's theory that the genetic code arose by coevolution with the biosynthetic pathways of amino acids, based on codon correlations between biosynthetically related amino acids, is statistical in nature.
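The statistical test described above can be sketched as a small Monte Carlo experiment. Everything below except the standard codon table is an illustrative assumption: the RELATED list is a toy subset of biosynthetically related pairs, not the listings used in the study, and the randomization scheme (permuting amino-acid labels over the standard code's codon blocks) is one of several reasonable choices.

```python
import random
from itertools import product

BASES = "TCAG"
# Standard genetic code, one letter per codon, first base slowest (TCAG order).
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AA)}

# Toy subset of precursor-product amino-acid pairs (illustrative only).
RELATED = [("S", "C"), ("S", "W"), ("E", "Q"), ("D", "N"),
           ("T", "I"), ("V", "L"), ("F", "Y"), ("E", "P")]

def one_base_apart(c1, c2):
    return sum(a != b for a, b in zip(c1, c2)) == 1

def correlations(code):
    # Count related pairs with at least one single-base-change codon pair.
    n = 0
    for a, b in RELATED:
        ca = [c for c, x in code.items() if x == a]
        cb = [c for c, x in code.items() if x == b]
        if any(one_base_apart(x, y) for x in ca for y in cb):
            n += 1
    return n

def random_code():
    # Keep the standard code's codon blocks intact, but shuffle which
    # amino acid labels each block; stop codons stay fixed.
    aas = sorted(set(AA) - {"*"})
    perm = dict(zip(aas, random.sample(aas, len(aas))))
    return {c: perm.get(x, x) for c, x in CODE.items()}

obs = correlations(CODE)
trials = 2000
hits = sum(correlations(random_code()) >= obs for _ in range(trials))
print(f"observed correlations: {obs}, P(random >= observed) ~ {hits / trials:.3f}")
```

As in the abstract, the interesting quantity is the fraction of random codes matching or exceeding the universal code's correlation count; with a different RELATED list the estimate shifts, which is exactly the sensitivity the study reports.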
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foucar, James G.
The version of STK (Sierra ToolKit) that has long been provided with Trilinos is no longer supported by the core development team. With the introduction of the new STK library into Trilinos, the old STK has been renamed to stk_classic. This document contains a rough guide to porting stk_classic code to the new STK.
Semiclassical theory of electronically nonadiabatic transitions in molecular collision processes
NASA Technical Reports Server (NTRS)
Lam, K. S.; George, T. F.
1979-01-01
An introductory account of the semiclassical theory of the S-matrix for molecular collision processes is presented, with special emphasis on electronically nonadiabatic transitions. This theory is based on the incorporation of classical mechanics with quantum superposition, and in practice makes use of the analytic continuation of classical mechanics into complex space and time domains. The relevant concepts of molecular scattering theory and related dynamical models are described, and the formalism is developed and illustrated with simple examples - collinear collisions of the A+BC type. The theory is then extended to include the effects of laser-induced nonadiabatic transitions. Two bound-continuum processes, collisional ionization and collision-induced emission, also amenable to the same general semiclassical treatment, are discussed.
The general theory of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Stanley, R. P.
1993-01-01
This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.
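As a concrete reference point for readers new to the subject, here is a minimal rate-1/2 feedforward convolutional encoder using the classic (7,5) octal generator pair. This is standard textbook material, not a construction from the article; the generator choice and constraint length are illustrative.

```python
def conv_encode(bits, gens=(0b111, 0b101), k=3):
    """Rate-1/n feedforward convolutional encoder.

    bits: input bit sequence (0/1).
    gens: generator polynomials as bit masks over the shift register
          (default is the classic (7,5) octal pair).
    k:    constraint length (register holds the current and k-1 past bits).
    Returns the interleaved output bits, n per input bit.
    """
    state = 0
    out = []
    for b in bits:
        # Shift the new bit in, keeping only the last k bits.
        state = ((state << 1) | b) & ((1 << k) - 1)
        for g in gens:
            # Each output bit is the parity of the tapped register bits.
            out.append(bin(state & g).count("1") % 2)
    return out

print(conv_encode([1, 0, 1, 1]))
```

Each input bit produces two output bits, so a length-4 message encodes to 8 bits; the algebraic theory in the article studies exactly such encoders as polynomial matrices over a field of rational functions.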
NASA Astrophysics Data System (ADS)
Indi Sriprisan, Sirikul; Townsend, Lawrence; Cucinotta, Francis A.; Miller, Thomas M.
Purpose: An analytical knockout-ablation-coalescence model capable of making quantitative predictions of the neutron spectra from high-energy nucleon-nucleus and nucleus-nucleus collisions is being developed for use in space radiation protection studies. The FORTRAN computer code that implements this model is called UBERNSPEC. The knockout or abrasion stage of the model is based on Glauber multiple scattering theory. The ablation part of the model uses the classical evaporation model of Weisskopf-Ewing. In earlier work, the knockout-ablation model was extended to incorporate important coalescence effects into the formalism. Recently, alpha coalescence has been incorporated, adding the ability to predict light ion spectra with the coalescence model. The earlier versions were limited to nuclei with mass numbers less than 69. In this work, the UBERNSPEC code has been extended to make predictions of secondary neutron and light ion production from the interactions of heavy charged particles with higher mass numbers (as large as 238). The predictions are compared with published measurements of neutron and light ion energy spectra for a variety of collision pairs. Furthermore, the predicted spectra from this work are compared with the predictions from the recently developed heavy ion event generator incorporated in the Monte Carlo radiation transport code HETC-HEDS.
The mathematical theory of signal processing and compression-designs
NASA Astrophysics Data System (ADS)
Feria, Erlan H.
2006-05-01
The mathematical theory of signal processing, named processor coding, will be shown to arise inherently as the computational-time dual of Shannon's mathematical theory of communication, which is also known as source coding. Source coding is concerned with signal source memory space compression, while processor coding deals with signal processor computational time compression. Their combination is named compression-designs and is referred to as Conde for short. A compelling and pedagogically appealing diagram will be discussed, highlighting Conde's remarkably successful application to real-world knowledge-aided (KA) airborne moving target indicator (AMTI) radar.
NASA Technical Reports Server (NTRS)
Ioannou, Petros J.; Lindzen, Richard S.
1993-01-01
Classical tidal theory is applied to the atmospheres of the outer planets. The tidal geopotential due to satellites of the outer planets is discussed, and the solution of Laplace's tidal equation for Hough modes appropriate to tides on the outer planets is examined. The vertical structure of tidal modes is described, noting that only relatively high-order meridional mode numbers can propagate vertically with growing amplitude. Expected magnitudes for tides in the visible atmosphere of Jupiter are discussed. The classical theory is extended to planetary interiors, taking the effects of sphericity and self-gravity into account. The thermodynamic structure of Jupiter is described and the WKB theory of the vertical structure equation is presented. The regions in which inertial, gravity, and acoustic oscillations are possible are delineated. The case of a planet with a neutral interior is treated, discussing the various atmospheric boundary conditions and showing that the tidal response is small.
Physics of automated driving in framework of three-phase traffic theory.
Kerner, Boris S
2018-04-01
We have revealed physical features of automated driving in the framework of the three-phase traffic theory for which there is no fixed time headway to the preceding vehicle. A comparison with the classical model approach to automated driving for which an automated driving vehicle tries to reach a fixed (desired or "optimal") time headway to the preceding vehicle has been made. It turns out that automated driving in the framework of the three-phase traffic theory can exhibit the following advantages in comparison with the classical model of automated driving: (i) The absence of string instability. (ii) Considerably smaller speed disturbances at road bottlenecks. (iii) Automated driving vehicles based on the three-phase theory can decrease the probability of traffic breakdown at the bottleneck in mixed traffic flow consisting of human driving and automated driving vehicles; on the contrary, even a single automated driving vehicle based on the classical approach can provoke traffic breakdown at the bottleneck in mixed traffic flow.
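The "classical model approach" the abstract contrasts with can be sketched as a constant-time-headway adaptive cruise control law. This is a generic textbook ACC rule with illustrative gains, not Kerner's three-phase model or the specific classical model analyzed in the paper.

```python
def acc_accel(gap, v, v_lead, T=1.5, k1=0.3, k2=0.6):
    """Classical ACC rule: drive the gap toward the fixed desired
    time headway T (gap = v*T) and the speed toward the leader's.
    Gains k1, k2 are illustrative values, not from the paper."""
    return k1 * (gap - v * T) + k2 * (v_lead - v)

def simulate(steps=500, dt=0.1):
    # Leader cruises at a constant 20 m/s; follower starts slower
    # with a large initial gap and closes in under the ACC rule.
    x_l, v_l = 100.0, 20.0
    x_f, v_f = 0.0, 15.0
    for _ in range(steps):
        a = acc_accel(x_l - x_f, v_f, v_l)
        v_f = max(0.0, v_f + a * dt)  # forward Euler, no reversing
        x_f += v_f * dt
        x_l += v_l * dt
    return v_f, (x_l - x_f) / v_f  # final speed and realized time headway

v, h = simulate()
print(f"follower speed ~ {v:.1f} m/s, time headway ~ {h:.2f} s")
```

The point of the comparison in the paper is that this fixed-headway equilibrium is exactly what three-phase automated driving gives up: instead of regulating toward one desired headway, the vehicle accepts any headway within a range, which is what suppresses string instability and bottleneck disturbances.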
Urns and Chameleons: two metaphors for two different types of measurements
NASA Astrophysics Data System (ADS)
Accardi, Luigi
2013-09-01
The awareness of the physical possibility of models of space alternative to the Euclidean one began to emerge towards the end of the 19th century. At the end of the 20th century a similar awareness emerged concerning the physical possibility of models of the laws of chance alternative to the classical probabilistic model (the Kolmogorov model). In geometry, the mathematical construction of several non-Euclidean models of space preceded by about a century their application in physics, which came with the theory of relativity. In physics the opposite situation took place. While the first examples of non-Kolmogorov probabilistic models emerged in quantum physics approximately one century ago, at the beginning of the 1900s, the awareness that this new mathematical formalism reflected a new mathematical model of the laws of chance had to wait until the early 1980s. In this long interval the classical and the new probabilistic models were both used in the description and interpretation of quantum phenomena and interfered negatively with each other because of the absence, for many decades, of a mathematical theory that clearly delimited their respective domains of application. The result of this interference was the emergence of the so-called "paradoxes of quantum theory". For several decades there have been many attempts to solve these paradoxes, giving rise to what K. Popper baptized "the great quantum muddle": a debate that has been at the core of the philosophy of science for more than 50 years. These attempts, however, have led to contradictions between the two fundamental theories of contemporary physics: quantum theory and the theory of relativity.
Quantum probability identifies the reason for the emergence of non-Kolmogorov models, and therefore of the so-called paradoxes of quantum theory, in the difference between passive measurements that "read pre-existent properties" (the urn metaphor) and measurements that read "a response to an interaction" (the chameleon metaphor). The non-trivial point is that one can prove that, while the urn scheme cannot lead to empirical data outside classical probability, response-based measurements can give rise to non-classical statistics. The talk will include entirely classical examples of non-classical statistics and potential applications to economic, sociological or biomedical phenomena.
A personalist approach to public-health ethics.
Petrini, Carlo; Gainotti, Sabina
2008-08-01
First, we give an overview of the historical development of public health. We then present some public-health deontology codes and some ethical principles. We highlight difficulties in defining an ethics for public health, with specific reference to three of them, concerning: (i) the adaptability to public health of the classical principles of bioethics; (ii) the duty to respect and safeguard the individual while acting within the community perspective that is typical of public health; and (iii) the application-oriented nature of public health and the general lack of attention to the ethical implications of collective interventions (compared with research). We then mention some proposals drawn from North American bioethical "principles" and from utilitarian, liberal and communitarian views. Drawing on other approaches, personalism is outlined as a theory that offers a consistent set of values and alternative principles relevant to public health.
Simulation of Plasma Transport in a Toroidal Annulus with TEMPEST
NASA Astrophysics Data System (ADS)
Xiong, Z.
2005-10-01
TEMPEST is an edge gyro-kinetic continuum code currently under development at LLNL to study boundary plasma transport over a region extending from inside the H-mode pedestal across the separatrix to the divertor plates. Here we report simulation results from the 4D (θ, ψ, E, μ) TEMPEST, for benchmarking purposes, in an annulus region immediately inside the separatrix of a large-aspect-ratio, circular cross-section tokamak. Besides the normal poloidal trapping regions, there are radially inaccessible regions at fixed poloidal angle, energy, and magnetic moment due to the radial variation of the B field. To handle such cases, a fifth-order WENO differencing scheme is used in the radial direction. The particle and heat transport coefficients are obtained for different collisional regimes and compared with neoclassical transport theory.
De Tiège, Alexis; Van de Peer, Yves; Braeckman, Johan; Tanghe, Koen B
2017-11-22
Although classical evolutionary theory, i.e., population genetics and the Modern Synthesis, was already implicitly 'gene-centred', the organism was, in practice, still generally regarded as the individual unit of which a population is composed. The gene-centred approach to evolution only reached a logical conclusion with the advent of the gene-selectionist or gene's eye view in the 1960s and 1970s. Whereas classical evolutionary theory can only work with (genotypically represented) fitness differences between individual organisms, gene-selectionism is capable of working with fitness differences among genes within the same organism and genome. Here, we explore the explanatory potential of 'intra-organismic' and 'intra-genomic' gene-selectionism, i.e., of a behavioural-ecological 'gene's eye view' on genetic, genomic and organismal evolution. First, we give a general outline of the framework and how it complements the (to some extent still 'organism-centred') approach of classical evolutionary theory. Secondly, we give a more in-depth assessment of its explanatory potential for biological evolution, i.e., for Darwin's 'common descent with modification' or, more specifically, for 'historical continuity or homology with modular evolutionary change' as it has been studied by evolutionary developmental biology (evo-devo) during the last few decades. In contrast with classical evolutionary theory, evo-devo focuses on 'within-organism' developmental processes. Given the capacity of gene-selectionism to adopt an intra-organismal gene's eye view, we outline the relevance of the latter model for evo-devo. Overall, we aim for the conceptual integration between the gene's eye view on the one hand, and more organism-centred evolutionary models (both classical evolutionary theory and evo-devo) on the other.
Clerc, Daryl G
2016-07-21
An ab initio approach was used to study the molecular-level interactions that connect gene-mutation to changes in an organism's phenotype. The study provides new insights into the evolutionary process and presents a simplification whereby changes in phenotypic properties may be studied in terms of the binding affinities of the chemical interactions affected by mutation, rather than by correlation to the genes. The study also reports the role that nonlinear effects play in the progression of organs, and how those effects relate to the classical theory of evolution. Results indicate that the classical theory of evolution occurs as a special case within the ab initio model - a case having two attributes. The first attribute: proteins and promoter regions are not shared among organs. The second attribute: continuous limiting behavior exists in the physical properties of organs as well as in the binding affinity of the associated chemical interactions, with respect to displacements in the chemical properties of proteins and promoter regions induced by mutation. Outside of the special case, second-order coupling contributions are significant and nonlinear effects play an important role, a result corroborated by analyses of published activity levels in binding and transactivation assays. Further, gradations in the state of perfection of an organ may be small or large depending on the type of mutation, and not necessarily closely separated as maintained by the classical theory. Results also indicate that organs progress with varying degrees of interdependence, the likelihood of successful mutation decreases with increasing complexity of the affected chemical system, and differences between the ab initio model and the classical theory increase with increasing complexity of the organism. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
The Basics: What's Essential about Theory for Community Development Practice?
ERIC Educational Resources Information Center
Hustedde, Ronald J.; Ganowicz, Jacek
2002-01-01
Relates three classical theories (structural functionalism, conflict theory, symbolic interactionism) to fundamental concerns of community development (structure, power, and shared meaning). Links these theories to Giddens' structuration theory, which connects macro and micro structures and community influence on change through cultural norms.…
Stable and Dynamic Coding for Working Memory in Primate Prefrontal Cortex
Watanabe, Kei; Funahashi, Shintaro; Stokes, Mark G.
2017-01-01
Working memory (WM) provides the stability necessary for high-level cognition. Influential theories typically assume that WM depends on the persistence of stable neural representations, yet increasing evidence suggests that neural states are highly dynamic. Here we apply multivariate pattern analysis to explore the population dynamics in primate lateral prefrontal cortex (PFC) during three variants of the classic memory-guided saccade task (recorded in four animals). We observed the hallmark of dynamic population coding across key phases of a working memory task: sensory processing, memory encoding, and response execution. Throughout both these dynamic epochs and the memory delay period, however, the neural representational geometry remained stable. We identified two characteristics that jointly explain these dynamics: (1) time-varying changes in the subpopulation of neurons coding for task variables (i.e., dynamic subpopulations); and (2) time-varying selectivity within neurons (i.e., dynamic selectivity). These results indicate that even in a very simple memory-guided saccade task, PFC neurons display complex dynamics to support stable representations for WM. SIGNIFICANCE STATEMENT Flexible, intelligent behavior requires the maintenance and manipulation of incoming information over various time spans. For short time spans, this faculty is labeled “working memory” (WM). Dominant models propose that WM is maintained by stable, persistent patterns of neural activity in prefrontal cortex (PFC). However, recent evidence suggests that neural activity in PFC is dynamic, even while the contents of WM remain stably represented. Here, we explored the neural dynamics in PFC during a memory-guided saccade task. We found evidence for dynamic population coding in various task epochs, despite striking stability in the neural representational geometry of WM. Furthermore, we identified two distinct cellular mechanisms that contribute to dynamic population coding. PMID:28559375
Non-US data compression and coding research. FASAC Technical Assessment Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, R.M.; Cohn, M.; Craver, L.W.
1993-11-01
This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years--outside or inside the United States--there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.
An extension of the coevolution theory of the origin of the genetic code
Di Giulio, Massimo
2008-01-01
Background The coevolution theory of the origin of the genetic code suggests that the genetic code is an imprint of the biosynthetic relationships between amino acids. However, this theory does not seem to attribute a role to the biosynthetic relationships between the earliest amino acids that evolved along the pathways of energetic metabolism. As a result, the coevolution theory is unable to clearly define the very earliest phases of genetic code origin. In order to remove this difficulty, I here suggest an extension of the coevolution theory that attributes a crucial role to the first amino acids that evolved along these biosynthetic pathways and to their biosynthetic relationships, even when defined by the non-amino acid molecules that are their precursors. Results It is re-observed that the first amino acids to evolve along these biosynthetic pathways are predominantly those codified by codons of the type GNN, and this observation is found to be statistically significant. Furthermore, the close biosynthetic relationships between the sibling amino acids Ala-Ser, Ser-Gly, Asp-Glu, and Ala-Val are not random in the genetic code table and reinforce the hypothesis that the biosynthetic relationships between these six amino acids played a crucial role in defining the very earliest phases of genetic code origin. Conclusion All this leads to the hypothesis that there existed a code, GNS, reflecting the biosynthetic relationships between these six amino acids which, as it defines the very earliest phases of genetic code origin, removes the main difficulty of the coevolution theory. Furthermore, it is here discussed how this code might have naturally led to the code codifying only for the domains of the codons of precursor amino acids, as predicted by the coevolution theory. 
Finally, the hypothesis here suggested also removes other problems of the coevolution theory, such as the existence for certain pairs of amino acids with an unclear biosynthetic relationship between the precursor and product amino acids and the collocation of Ala between the amino acids Val and Leu belonging to the pyruvate biosynthetic family, which the coevolution theory considered as belonging to different biosyntheses. Reviewers This article was reviewed by Rob Knight, Paul Higgs (nominated by Laura Landweber), and Eugene Koonin. PMID:18775066
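The GNN observation at the heart of this extension can be checked directly against the standard codon table. A minimal sketch (codon assignments are the standard genetic code; the set of six "early" amino acids is taken from the abstract above):

```python
# Standard genetic code entries for codons beginning with G (GNN).
GNN = {
    "GUU": "Val", "GUC": "Val", "GUA": "Val", "GUG": "Val",
    "GCU": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
    "GAU": "Asp", "GAC": "Asp", "GAA": "Glu", "GAG": "Glu",
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
}
gnn_aas = set(GNN.values())
early = {"Ala", "Ser", "Gly", "Asp", "Glu", "Val"}  # the six amino acids discussed

print(sorted(gnn_aas))          # ['Ala', 'Asp', 'Glu', 'Gly', 'Val']
print(sorted(early - gnn_aas))  # ['Ser'] -- only Ser lacks a GNN codon
```

Five of the six early amino acids are GNN-coded, consistent with the observation the extension builds on.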
On classical and quantum dynamics of tachyon-like fields and their cosmological implications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimitrijević, Dragoljub D., E-mail: ddrag@pmf.ni.ac.rs; Djordjević, Goran S.; Milošević, Milan
2014-11-24
We consider a class of tachyon-like potentials, motivated by string theory, D-brane dynamics and inflation theory, in the context of classical and quantum mechanics. A formalism for describing the dynamics of tachyon fields in the spatially homogeneous and one-dimensional case, in both the classical and quantum mechanical limits, is proposed. A few models with concrete potentials are considered. Additionally, possibilities for p-adic and adelic generalization of these models are discussed. Classical actions and the corresponding quantum propagators, in the Feynman path integral approach, are calculated in a form invariant under a change of the background number fields, i.e. on both archimedean and nonarchimedean spaces. Looking for a quantum origin of inflation, the relevance of p-adic and adelic generalizations is briefly discussed.
The Classical Theory of Light Colors: a Paradigm for Description of Particle Interactions
NASA Astrophysics Data System (ADS)
Mazilu, Nicolae; Agop, Maricel; Gatu, Irina; Iacob, Dan Dezideriu; Butuc, Irina; Ghizdovat, Vlad
2016-06-01
Color is an interaction property: a property of the interaction of light with matter. Classically speaking, it is therefore akin to the forces. But while forces engendered the mechanical view of the world, colors generated the optical view. One of the modern concepts of interaction between the fundamental particles of matter - quantum chromodynamics - aims to fill the gap between mechanics and optics in a specific description of strong interactions. We show here that this modern description of particle interactions has ties with both the classical and quantum theories of light, regardless of the connection between forces and colors. In a word, light is a universal model in the description of matter. The description involves classical Yang-Mills fields related to color.
NASA Astrophysics Data System (ADS)
Ivanov, Sergey V.; Buzykin, Oleg G.
2016-12-01
A classical approach is applied to calculate pressure broadening coefficients of CO2 vibration-rotational spectral lines perturbed by Ar. Three types of spectra are examined: electric dipole (infrared) absorption, and isotropic and anisotropic Raman Q branches. Simple and explicit formulae of the classical impact theory are used along with exact 3D Hamilton equations for CO2-Ar molecular motion. The calculations utilize the accurate, vibrationally independent ab initio potential energy surface (PES) of Hutson et al., expanded in a Legendre polynomial series up to lmax = 24. A new, improved algorithm for classical rotational frequency selection is applied. The dependences of CO2 half-widths on rotational quantum number J up to J = 100 are computed for temperatures between 77 and 765 K and compared with available experimental data as well as with the results of fully quantum dynamical calculations performed on the same PES. To make the picture complete, the predictions of two independent variants of the semi-classical Robert-Bonamy formalism for dipole absorption lines are included. This method, however, demonstrates poor accuracy at almost all temperatures. On the contrary, the classical broadening coefficients are in excellent agreement with both measurements and quantum results at all temperatures. The classical impact theory in its present variant is capable of quickly and accurately producing the pressure broadening coefficients of spectral lines of linear molecules for any J value (including high J) using a full-dimensional ab initio based PES in cases where other computational methods are either extremely time consuming (like the quantum close coupling method) or give erroneous results (like semi-classical methods).
The polymer physics of single DNA confined in nanochannels.
Dai, Liang; Renner, C Benjamin; Doyle, Patrick S
2016-06-01
In recent years, applications and experimental studies of DNA in nanochannels have stimulated the investigation of the polymer physics of DNA in confinement. Recent advances in the physics of confined polymers, using DNA as a model polymer, have moved beyond the classic Odijk theory for strong confinement and the classic blob theory for weak confinement. In this review, we present the current understanding of the behaviors of confined polymers while briefly reviewing the classic theories. Three aspects of confined DNA are presented: static, dynamic, and topological properties. The relevant simulation methods are also summarized. In addition, comparisons of confined DNA with DNA under tension and DNA in semidilute solution are made to emphasize universal behaviors. Finally, an outlook of possible future research for confined DNA is given. Copyright © 2015 Elsevier B.V. All rights reserved.
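The Odijk strong-confinement regime mentioned above predicts a nearly fully extended chain whose extension deficit scales as (D/P)^(2/3). A minimal sketch of that scaling form; the prefactor alpha and the parameter values are illustrative assumptions, not taken from this review:

```python
import math

def odijk_extension(D, P, alpha=0.18):
    """Fractional extension <X>/L of a stiff chain in a square nanochannel
    of width D and persistence length P (Odijk regime, D << P).
    alpha is a geometry-dependent prefactor; 0.18 is an assumed
    literature-like value for square channels."""
    return 1.0 - alpha * (D / P) ** (2.0 / 3.0)

# DNA-like numbers: persistence length ~50 nm, channel width 30 nm.
print(round(odijk_extension(30.0, 50.0), 3))
```

Narrower channels give higher fractional extension, as the scaling requires.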
Classical and non-classical effective medium theories: New perspectives
NASA Astrophysics Data System (ADS)
Tsukerman, Igor
2017-05-01
Future research in electrodynamics of periodic electromagnetic composites (metamaterials) can be expected to produce sophisticated homogenization theories valid for any composition and size of the lattice cell. The paper outlines a promising path in that direction, leading to non-asymptotic and nonlocal homogenization models, and highlights aspects of homogenization that are often overlooked: the finite size of the sample and the role of interface boundaries. Classical theories (e.g. Clausius-Mossotti, Maxwell Garnett), while originally derived from a very different set of ideas, fit well into the proposed framework. Nonlocal effects can be included in the model, making order-of-magnitude accuracy improvements possible. One future challenge is to determine what effective parameters can or cannot be obtained for a given set of constituents of a metamaterial lattice cell, thereby delineating the possible from the impossible in metamaterial design.
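The classical mixing rules named above are simple closed forms. As a sketch, the Maxwell Garnett rule for spherical inclusions of permittivity eps_i at volume fraction f in a host eps_h (the numbers in the example are illustrative):

```python
def maxwell_garnett(eps_i, eps_h, f):
    """Maxwell Garnett effective permittivity for spherical inclusions
    (permittivity eps_i, volume fraction f) embedded in a host eps_h."""
    beta = (eps_i - eps_h) / (eps_i + 2.0 * eps_h)  # Clausius-Mossotti factor
    return eps_h * (1.0 + 2.0 * f * beta) / (1.0 - f * beta)

# Dilute glass-like spheres (eps = 4) in vacuum: eps_eff slightly above 1.
print(round(maxwell_garnett(4.0, 1.0, 0.1), 4))  # 1.1579
```

The formula interpolates correctly between the limits f = 0 (pure host) and f = 1 (pure inclusion), which is a quick sanity check on any implementation.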
A Critique of Schema Theory in Reading and a Dual Coding Alternative (Commentary).
ERIC Educational Resources Information Center
Sadoski, Mark; And Others
1991-01-01
Evaluates schema theory and presents dual coding theory as a theoretical alternative. Argues that schema theory is encumbered by lack of a consistent definition, its roots in idealist epistemology, and mixed empirical support. Argues that results of many empirical studies used to demonstrate the existence of schemata are more consistently…
Numerical Modeling of Poroelastic-Fluid Systems Using High-Resolution Finite Volume Methods
NASA Astrophysics Data System (ADS)
Lemoine, Grady
Poroelasticity theory models the mechanics of porous, fluid-saturated, deformable solids. It was originally developed by Maurice Biot to model geophysical problems, such as seismic waves in oil reservoirs, but has also been applied to modeling living bone and other porous media. Poroelastic media often interact with fluids, such as in ocean bottom acoustics or propagation of waves from soft tissue into bone. This thesis describes the development and testing of high-resolution finite volume numerical methods, and simulation codes implementing these methods, for modeling systems of poroelastic media and fluids in two and three dimensions. These methods operate on both rectilinear grids and logically rectangular mapped grids. To allow the use of these methods, Biot's equations of poroelasticity are formulated as a first-order hyperbolic system with a source term; this source term is incorporated using operator splitting. Some modifications are required to the classical high-resolution finite volume method. Obtaining correct solutions at interfaces between poroelastic media and fluids requires a novel transverse propagation scheme and the removal of the classical second-order correction term at the interface, and in three dimensions a new wave limiting algorithm is also needed to correctly limit shear waves. The accuracy and convergence rates of the methods of this thesis are examined for a variety of analytical solutions, including simple plane waves, reflection and transmission of waves at an interface between different media, and scattering of acoustic waves by a poroelastic cylinder. Solutions are also computed for a variety of test problems from the computational poroelasticity literature, as well as some original test problems designed to mimic possible applications for the simulation code.
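The source-term splitting described in the thesis can be illustrated on a toy problem: alternate a homogeneous hyperbolic update with an ODE step for the source term. The model equation (linear advection with a linear relaxation source, first-order upwind) and all parameters below are illustrative, not from the thesis:

```python
import math

def step(q, a, k, dx, dt):
    """One split step for q_t + a q_x = -k q on a periodic grid."""
    # 1) homogeneous hyperbolic step: first-order upwind (a > 0);
    #    q[i-1] with i = 0 wraps around, giving periodic boundaries.
    c = a * dt / dx
    qh = [q[i] - c * (q[i] - q[i - 1]) for i in range(len(q))]
    # 2) source step: dq/dt = -k q, integrated exactly over dt.
    decay = math.exp(-k * dt)
    return [v * decay for v in qh]

n = 100
dx = 1.0 / n
q = [1.0 if 0.25 <= i * dx < 0.5 else 0.0 for i in range(n)]  # square pulse
for _ in range(50):
    q = step(q, a=1.0, k=2.0, dx=dx, dt=0.5 * dx)  # CFL number 0.5
print(round(max(q), 3))
```

Because the upwind step conserves the discrete sum exactly on a periodic grid, the total "mass" decays by exactly exp(-k t), which makes the splitting easy to verify.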
ERIC Educational Resources Information Center
Kim, Sooyeon; Livingston, Samuel A.
2017-01-01
The purpose of this simulation study was to assess the accuracy of a classical test theory (CTT)-based procedure for estimating the alternate-forms reliability of scores on a multistage test (MST) having 3 stages. We generated item difficulty and discrimination parameters for 10 parallel, nonoverlapping forms of the complete 3-stage test and…
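The kind of simulation described can be sketched in miniature under the basic CTT decomposition X = T + E, where alternate-forms reliability is the correlation between two parallel forms. The variances below are illustrative assumptions and nothing here reflects the actual MST design of the study:

```python
import random

random.seed(1)
n, sd_true, sd_err = 20000, 10.0, 5.0
true = [random.gauss(50, sd_true) for _ in range(n)]         # true scores
form_a = [t + random.gauss(0, sd_err) for t in true]          # parallel form A
form_b = [t + random.gauss(0, sd_err) for t in true]          # parallel form B

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

theory = sd_true**2 / (sd_true**2 + sd_err**2)  # reliability = 100/125 = 0.8
print(round(pearson(form_a, form_b), 3), theory)
```

With these variances the theoretical reliability is 0.8, and the simulated correlation should land close to it for large n.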
Wang, Wei; Takeda, Mitsuo
2006-09-01
A new concept of vector and tensor densities is introduced into the general coherence theory of vector electromagnetic fields that is based on energy and energy-flow coherence tensors. Related coherence conservation laws are presented in the form of continuity equations that provide new insights into the propagation of second-order correlation tensors associated with stationary random classical electromagnetic fields.
Application of ply level analysis to flexural wave propagation
NASA Astrophysics Data System (ADS)
Valisetty, R. R.; Rehfield, L. W.
1988-10-01
A brief survey is presented of the shear deformation theories of laminated plates. It indicates that there are certain non-classical influences that affect bending-related behavior in the same way as do the transverse shear stresses. These include bending- and stretching-related section warping, the concomitant non-classical surface-parallel stress contributions, and the transverse normal stress. A bending theory gives significantly improved performance if these non-classical effects are incorporated. The heterogeneous shear deformations that are characteristic of laminates with highly dissimilar materials, however, require that attention be paid to the modeling of local rotations. In this paper, it is shown that a ply level analysis can be used to model such disparate shear deformations. Here, equilibrium of each layer is analyzed separately. Earlier applications of this analysis include free-edge laminate stresses. It is now extended to the study of flexural wave propagation in laminates. A recently developed homogeneous plate theory is used as the ply level model. Due consideration is given to the non-classical influences, and no shear correction factors are introduced extraneously in this theory. The results for the lowest flexural mode of travelling planar harmonic waves indicate that this approach is competitive and yields better results for certain laminates.
Geometric Theory of Reduction of Nonlinear Control Systems
NASA Astrophysics Data System (ADS)
Elkin, V. I.
2018-02-01
The foundations of a differential geometric theory of nonlinear control systems are described on the basis of categorical concepts (isomorphism, factorization, restrictions) by analogy with classical mathematical theories (of linear spaces, groups, etc.).
Comment on Gallistel: behavior theory and information theory: some parallels.
Nevin, John A
2012-05-01
In this article, Gallistel proposes information theory as an approach to some enduring problems in the study of operant and classical conditioning. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Alves, J. L.; Oliveira, M. C.; Menezes, L. F.
2004-06-01
Two constitutive models used to describe the plastic behavior of sheet metals in the numerical simulation of sheet metal forming processes are studied: a recently proposed advanced constitutive model, based on the Teodosiu microstructural model and the Cazacu Barlat yield criterion, is compared with a more classical one, based on the Swift law and the Hill 1948 yield criterion. These constitutive models are implemented into DD3IMP, a finite element home code specifically developed to simulate sheet metal forming processes, which, generically, is a 3-D elastoplastic finite element code with an updated Lagrangian formulation and a fully implicit time integration scheme, allowing large elastoplastic strains and rotations. Solid finite elements and parametric surfaces are used to model the blank sheet and the tool surfaces, respectively. Some details of the numerical implementation of the constitutive models are given. Finally, the theory is illustrated with the numerical simulation of the deep drawing of a cylindrical cup. The results show that the proposed advanced constitutive model predicts the final shape (mean height and earing profile) of the formed part more accurately, as one can conclude from the comparison with the experimental results.
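The hardening component of the classical model, the Swift law, is a simple closed form: flow stress sigma_Y = K (eps0 + eps_p)^n. A sketch with assumed, illustrative mild-steel-like parameters (not taken from the paper):

```python
def swift_stress(eps_p, K=500.0, eps0=0.01, n=0.25):
    """Swift hardening law: flow stress [MPa] versus equivalent plastic
    strain eps_p. K, eps0, n are material parameters (assumed values)."""
    return K * (eps0 + eps_p) ** n

# Flow stress rises monotonically with accumulated plastic strain.
for e in (0.0, 0.1, 0.2):
    print(e, round(swift_stress(e), 1))
```

The exponent n controls the hardening rate; fitting (K, eps0, n) to tensile test data is the usual calibration route for such a model.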
Trevors, J T; Masson, L
2011-01-01
During his famous 1943 lecture series at Trinity College Dublin, the reknown physicist Erwin Schrodinger discussed the failure and challenges of interpreting life by classical physics alone and that a new approach, rooted in Quantum principles, must be involved. Quantum events are simply a level of organization below the molecular level. This includes the atomic and subatomic makeup of matter in microbial metabolism and structures, as well as the organic, genetic information code of DNA and RNA. Quantum events at this time do not elucidate, for example, how specific genetic instructions were first encoded in an organic genetic code in microbial cells capable of growth and division, and its subsequent evolution over 3.6 to 4 billion years. However, due to recent technological advances, biologists and physicists are starting to demonstrate linkages between various quantum principles like quantum tunneling, entanglement and coherence in biological processes illustrating that nature has exerted some level quantum control to optimize various processes in living organisms. In this article we explore the role of quantum events in microbial processes and endeavor to show that after nearly 67 years, Schrödinger was prophetic and visionary in his view of quantum theory and its connection with some of the fundamental mechanisms of life.
Evolution of Functional Diversification within Quasispecies
Colizzi, Enrico Sandro; Hogeweg, Paulien
2014-01-01
According to quasispecies theory, high mutation rates limit the amount of information genomes can store (Eigen’s Paradox), whereas genomes with higher degrees of neutrality may be selected even at the expense of higher replication rates (the “survival of the flattest” effect). Introducing a complex genotype-to-phenotype map, such as RNA folding, epitomizes this effect because of the existence of neutral networks and their exploitation by evolution, affecting both population structure and genome composition. We reexamine these classical results in the light of an RNA-based system that can evolve its own ecology. Contrary to expectations, we find that quasispecies evolving at high mutation rates are steep and characterized by one master sequence. Importantly, the analysis of the system and the characterization of the evolved quasispecies reveal the emergence of functionalities as phenotypes of nonreplicating genotypes, whose presence is crucial for the overall viability and stability of the system. In other words, the master sequence codes for the information of the entire ecosystem, whereas the decoding happens, stochastically, through mutations. We show that this solution quickly outcompetes strategies based on genomes with a high degree of neutrality. In conclusion, individually coded but ecosystem-based diversity evolves and persists indefinitely close to the Information Threshold. PMID:25056399
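Eigen's Paradox has a compact single-peak formulation: the master sequence persists only while sigma * q^L > 1, capping genome length at roughly ln(sigma) / (1 - q). A minimal sketch, neglecting back mutations; the values of sigma and q are illustrative:

```python
import math

def master_frequency(sigma, q, L):
    """Equilibrium master-sequence frequency in the single-peak
    quasispecies model (back mutations neglected).
    sigma: selective superiority of the master; q: per-site copying
    fidelity; L: genome length."""
    Q = q ** L  # probability of an error-free copy of the whole genome
    return max(0.0, (sigma * Q - 1.0) / (sigma - 1.0))

sigma, q = 10.0, 0.999            # illustrative parameters
L_max = math.log(sigma) / (1.0 - q)
print(round(L_max))                          # maximum maintainable length
print(round(master_frequency(sigma, q, 1000), 3))  # 0.297: below threshold
print(master_frequency(sigma, q, 5000))            # 0.0: beyond threshold
```

Crossing the threshold (L > L_max) drives the master frequency to zero, which is the information loss the abstract's "Information Threshold" refers to.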
Representational Realism, Closed Theories and the Quantum to Classical Limit
NASA Astrophysics Data System (ADS)
de Ronde, Christian
In this chapter, we discuss the representational realist stance as a pluralist ontic approach to inter-theoretic relationships. Our stance stresses the fact that physical theories require the necessary consideration of a conceptual level of discourse which determines and configures the specific field of phenomena discussed by each particular theory. We criticize the orthodox line of research which has grounded the analysis of QM in two (Bohrian) metaphysical presuppositions, accepted in the present as dogmas that all interpretations must follow. We also examine how the orthodox project of "bridging the gap" between the quantum and the classical domains has constrained the possibilities of research, producing only a limited set of interpretational problems which focus solely on the justification of "classical reality" and exclude the analysis of non-classical conceptual representations of QM. The representational realist stance introduces two new problems, namely, the superposition problem and the contextuality problem, which consider explicitly the conceptual representation of orthodox QM beyond the mere reference to mathematical structures and measurement outcomes. In the final part of the chapter, we revisit, from the representational realist perspective, the quantum to classical limit and the orthodox claim that this inter-theoretic relation can be explained through the principle of decoherence.
[Discussion on six errors of formulas corresponding to syndromes in using the classic formulas].
Bao, Yan-ju; Hua, Bao-jin
2012-12-01
The theory of formulas corresponding to syndromes is one of the characteristics of the Treatise on Cold Damage and Miscellaneous Diseases (Shanghan Zabing Lun) and one of the main principles in applying classic prescriptions. It is important to achieve effect by following the principle of formulas corresponding to syndromes. However, some medical practitioners find that the actual clinical effect is far less than expected. Six errors in the use of classic prescriptions under the theory of formulas corresponding to syndromes are the most important causes to be considered: attending only to local syndromes while neglecting the whole; attending only to formulas corresponding to syndromes while neglecting the pathogenesis; attending only to syndromes while neglecting pulse diagnosis; attending only to a unilateral prescription while neglecting combined prescriptions; attending only to classic prescriptions while neglecting modern formulas; and attending only to the formulas while neglecting drug dosage. Therefore, the clinical application of classic prescriptions and the theory of formulas corresponding to syndromes requires attention not only to the patient's clinical syndromes but also to the combination of the main syndrome and the pathogenesis. In addition, comprehensive syndrome differentiation, modern formulas, current prescriptions, combined prescriptions, and drug dosage all contribute to avoiding clinical errors and improving clinical effects.
Evaluation of Persons of Varying Ages.
ERIC Educational Resources Information Center
Stolte, John F.
1996-01-01
Reviews two experiments that strongly support dual coding theory. Dual coding theory holds that communicating concretely (tactile, auditory, or visual stimuli) affects evaluative thinking stronger than communicating abstractly through words and numbers. The experiments applied this theory to the realm of age and evaluation. (MJP)
NASA Astrophysics Data System (ADS)
Cavaglieri, Daniele; Bewley, Thomas; Mashayek, Ali
2015-11-01
We present a new code, Diablo 2.0, for the simulation of the incompressible NSE in channel and duct flows with strong grid stretching near walls. The code leverages the fractional step approach with a few twists. New low-storage IMEX (implicit-explicit) Runge-Kutta time-marching schemes are tested which are superior to the traditional and widely-used CN/RKW3 (Crank-Nicolson/Runge-Kutta-Wray) approach; the new schemes tested are L-stable in their implicit component, and offer improved overall order of accuracy and stability with, remarkably, similar computational cost and storage requirements. For duct flow simulations, our new code also introduces a new smoother for the multigrid solver for the pressure Poisson equation. The classic approach, involving alternating-direction zebra relaxation, is replaced by a new scheme, dubbed tweed relaxation, which achieves the same convergence rate with roughly half the computational cost. The code is then tested on the simulation of a shear flow instability in a duct, a classic problem in fluid mechanics which has been the object of extensive numerical modelling for its role as a canonical pathway to energetic turbulence in several fields of science and engineering.
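The implicit-explicit idea behind these schemes can be sketched at first order on a scalar model problem: treat the stiff linear term implicitly and the rest explicitly. Diablo 2.0's actual schemes are higher-order low-storage Runge-Kutta versions of the same split; everything below is illustrative:

```python
def imex_euler_step(u, dt, lam, f):
    """One first-order IMEX Euler step for du/dt = -lam*u + f(u):
    the stiff linear term -lam*u is implicit, f(u) is explicit.
    Rearranging (1 + dt*lam) u_new = u + dt*f(u) gives the update."""
    return (u + dt * f(u)) / (1.0 + dt * lam)

u, dt, lam = 1.0, 0.1, 1000.0  # dt is far above the explicit limit 2/lam
for _ in range(100):
    u = imex_euler_step(u, dt, lam, f=lambda v: 1.0)  # constant forcing
print(round(u, 4))  # 0.001: converges to the steady state u = 1/lam
```

An explicit scheme at this step size would blow up immediately; the implicit treatment of the stiff term is what buys the large stable time step, at the cost of a (here trivial) implicit solve.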
Influences on and Limitations of Classical Test Theory Reliability Estimates.
ERIC Educational Resources Information Center
Arnold, Margery E.
It is incorrect to say "the test is reliable" because reliability is a function not only of the test itself, but of many factors. The present paper explains how different factors affect classical reliability estimates such as test-retest, interrater, internal consistency, and equivalent forms coefficients. Furthermore, the limits of classical test…
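One of the coefficients listed, internal consistency, can be computed directly from item responses. A sketch of Cronbach's alpha on a small, made-up item-response matrix (rows are examinees, columns are items; the data are purely illustrative):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha from an examinee-by-item score matrix."""
    k = len(scores[0])  # number of items
    def var(xs):        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1.0 - sum(item_vars) / total_var)

data = [[3, 4, 3], [2, 2, 3], [5, 4, 4], [1, 2, 2], [4, 5, 4]]
print(round(cronbach_alpha(data), 3))  # 0.914
```

As the abstract stresses, this number characterizes these scores from these examinees on these items, not "the test" in the abstract sense.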
A Comparison of Kinetic Energy and Momentum in Special Relativity and Classical Mechanics
ERIC Educational Resources Information Center
Riggs, Peter J.
2016-01-01
Kinetic energy and momentum are indispensable dynamical quantities in both the special theory of relativity and in classical mechanics. Although momentum and kinetic energy are central to understanding dynamics, the differences between their relativistic and classical notions have not always received adequate treatment in undergraduate teaching.…
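The difference between the two notions is easy to exhibit numerically: classical KE = m v^2 / 2 versus relativistic KE = (gamma - 1) m c^2, which agree at low speed and diverge as v approaches c. A sketch with unit mass and illustrative speeds:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def ke_classical(m, v):
    return 0.5 * m * v * v

def ke_relativistic(m, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C * C

m = 1.0  # kg
for frac in (0.01, 0.5, 0.9):
    v = frac * C
    print(frac, ke_relativistic(m, v) / ke_classical(m, v))
```

At 1% of c the ratio is within about 0.01% of unity; at 0.9c the relativistic value is more than three times the classical one, which is exactly the pedagogical point the paper makes.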
Computational Nuclear Physics and Post Hartree-Fock Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lietz, Justin; Sam, Novario; Hjorth-Jensen, M.
We present a computational approach to infinite nuclear matter employing Hartree-Fock theory, many-body perturbation theory and coupled cluster theory. These lectures are closely linked with those of chapters 9, 10 and 11 and serve as input for the correlation functions employed in Monte Carlo calculations in chapter 9, the in-medium similarity renormalization group theory of dense fermionic systems of chapter 10 and the Green's function approach in chapter 11. We provide extensive code examples and benchmark calculations, thereby allowing an eventual reader to start writing her/his own codes. We start with an object-oriented serial code and end with discussions on strategies for porting the code to present and planned high-performance computing facilities.
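As a flavor of such code examples, the second-order many-body perturbation energy has the generic form E2 = (1/4) sum over holes i,j and particles a,b of |<ij||ab>|^2 / (e_i + e_j - e_a - e_b). The loop below only illustrates that structure: the orbital energies and antisymmetrized two-body elements are random placeholders, not a real nuclear interaction:

```python
import random

random.seed(0)
holes, parts = range(2), range(2, 6)  # 2 hole and 4 particle states
eps = sorted(random.uniform(-5, 5) for _ in range(6))  # holes lowest in energy
V = {}  # placeholder antisymmetrized elements <ij||ab>
for i in holes:
    for j in holes:
        for a in parts:
            for b in parts:
                V[i, j, a, b] = random.uniform(-0.5, 0.5)

e2 = 0.25 * sum(
    V[i, j, a, b] ** 2 / (eps[i] + eps[j] - eps[a] - eps[b])
    for i in holes for j in holes for a in parts for b in parts
)
print(e2)  # negative: every denominator e_i + e_j - e_a - e_b is < 0
```

Because the hole energies sit below the particle energies, each term is negative, reproducing the qualitative fact that the second-order correction lowers the energy.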
A Comparative Analysis of Three Unique Theories of Organizational Learning
ERIC Educational Resources Information Center
Leavitt, Carol C.
2011-01-01
The purpose of this paper is to present three classical theories on organizational learning and conduct a comparative analysis that highlights their strengths, similarities, and differences. Two of the theories -- experiential learning theory and adaptive-generative learning theory -- represent the thinking of the cognitive perspective, while…
NASA Technical Reports Server (NTRS)
Chambers, Lin Hartung
1994-01-01
The theory for radiation emission, absorption, and transfer in a thermochemical nonequilibrium flow is presented. The expressions developed reduce correctly to the limit at equilibrium. To implement the theory in a practical computer code, some approximations are used, particularly the smearing of molecular radiation. Details of these approximations are presented and helpful information is included concerning the use of the computer code. This user's manual should benefit both occasional users of the Langley Optimized Radiative Nonequilibrium (LORAN) code and those who wish to use it to experiment with improved models or properties.
NASA Astrophysics Data System (ADS)
Hwang, Jai-Chan; Noh, Hyerim
2005-03-01
We present cosmological perturbation theory based on generalized gravity theories including string theory correction terms and a tachyonic complication. The classical evolution as well as the quantum generation processes in these varieties of gravity theories are presented in unified forms. These apply both to the scalar- and tensor-type perturbations. Analyses are made based on the curvature variable in two different gauge conditions often used in the literature in Einstein’s gravity; these are the curvature variables in the comoving (or uniform-field) gauge and the zero-shear gauge. Applications to generalized slow-roll inflation and its consequent power spectra are derived in unified forms which include a wide range of inflationary scenarios based on Einstein’s gravity and others.
Infinite derivative gravity: non-singular cosmology & blackhole solutions
NASA Astrophysics Data System (ADS)
Mazumdar, A.
Both Einstein’s theory of General Relativity and Newton’s theory of gravity possess a short-distance and small-time-scale catastrophe. The black-hole singularity and the cosmological Big Bang singularity highlight that current theories of gravity are an incomplete description at early times and small distances. I will discuss how one can potentially resolve these fundamental problems at both the classical and the quantum level. In particular, I will discuss infinite derivative theories of gravity, in which gravitational interactions become weaker in the ultraviolet, thereby resolving some of the classical singularities, such as the Big Bang and the Schwarzschild singularity, for compact non-singular objects with mass up to 10^25 grams. In this lecture, I will discuss quantum aspects of infinite derivative gravity and a few aspects that can make the theory asymptotically free in the UV.
Psychodrama: group psychotherapy through role playing.
Kipper, D A
1992-10-01
The theory and the therapeutic procedure of classical psychodrama are described along with brief illustrations. Classical psychodrama and sociodrama stemmed from role theory, enactments, "tele," the reciprocity of choices, and the theory of spontaneity-robopathy and creativity. The discussion focuses on key concepts such as the therapeutic team, the structure of the session, transference and reality, countertransference, the here-and-now and the encounter, the group-as-a-whole, resistance and difficult clients, and affect and cognition. Also described are the neoclassical approaches of psychodrama, action methods, and clinical role playing, and the significance of the concept of behavioral simulation in group psychotherapy.
Phenotypic Graphs and Evolution Unfold the Standard Genetic Code as the Optimal
NASA Astrophysics Data System (ADS)
Zamudio, Gabriel S.; José, Marco V.
2018-03-01
In this work, we explicitly consider the evolution of the Standard Genetic Code (SGC) by assuming two evolutionary stages, to wit, the primeval RNY code and two intermediate codes in between. We used network theory and graph theory to measure the connectivity of each phenotypic graph. The connectivity values are compared to the values of the codes under different randomization scenarios. An error-correcting optimal code is one in which the algebraic connectivity is minimized. We show that the SGC is optimal in regard to its robustness and error-tolerance when compared to all random codes under different assumptions.
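The optimality measure named above, the algebraic connectivity, is the second-smallest eigenvalue of a graph's Laplacian. A minimal sketch of how it can be computed follows; the toy graphs are illustrative, not the actual phenotypic graphs of the study, and NumPy is assumed to be available.

```python
# Algebraic connectivity (second-smallest Laplacian eigenvalue) of a small
# undirected graph. The example graphs are illustrative only.
import numpy as np

def algebraic_connectivity(edges, n):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    eigenvalues = np.linalg.eigvalsh(L)  # returned in ascending order
    return eigenvalues[1]

# Path graph P3 (0-1-2): Laplacian spectrum {0, 1, 3} -> connectivity 1
path = algebraic_connectivity([(0, 1), (1, 2)], 3)
# Cycle graph C4: spectrum {0, 2, 2, 4} -> connectivity 2
cycle = algebraic_connectivity([(0, 1), (1, 2), (2, 3), (3, 0)], 4)
print(path, cycle)
```

In this framing, a code whose phenotypic graph minimizes this quantity is the most error-tolerant among the candidates compared.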
Cappelleri, Joseph C.; Lundy, J. Jason; Hays, Ron D.
2014-01-01
Introduction The U.S. Food and Drug Administration’s patient-reported outcome (PRO) guidance document defines content validity as “the extent to which the instrument measures the concept of interest” (FDA, 2009, p. 12). “Construct validity is now generally viewed as a unifying form of validity for psychological measurements, subsuming both content and criterion validity” (Strauss & Smith, 2009, p. 7). Hence both qualitative and quantitative information are essential in evaluating the validity of measures. Methods We review classical test theory and item response theory approaches to evaluating PRO measures including frequency of responses to each category of the items in a multi-item scale, the distribution of scale scores, floor and ceiling effects, the relationship between item response options and the total score, and the extent to which hypothesized “difficulty” (severity) order of items is represented by observed responses. Conclusion Classical test theory and item response theory can be useful in providing a quantitative assessment of items and scales during the content validity phase of patient-reported outcome measures. Depending on the particular type of measure and the specific circumstances, either one or both approaches should be considered to help maximize the content validity of PRO measures. PMID:24811753
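One standard classical-test-theory statistic for the kind of multi-item scale evaluation described above is Cronbach's alpha, an internal-consistency index. A minimal sketch, with invented response data (the abstract does not give any):

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(totals)).
# The response data below are invented for illustration.
from statistics import pvariance

def cronbach_alpha(items):
    """items: one inner list of respondent scores per item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Four respondents answering a three-item scale (higher = more severe).
responses = [
    [2, 4, 3, 5],   # item 1
    [1, 4, 3, 4],   # item 2
    [2, 5, 4, 5],   # item 3
]
alpha = cronbach_alpha(responses)
print(round(alpha, 3))
```

High alpha here simply reflects that the invented items co-vary strongly; in practice the statistic is read alongside the floor/ceiling and item-ordering checks the abstract lists.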
Spinning particles, axion radiation, and the classical double copy
NASA Astrophysics Data System (ADS)
Goldberger, Walter D.; Li, Jingping; Prabhu, Siddharth G.
2018-05-01
We extend the perturbative double copy between radiating classical sources in gauge theory and gravity to the case of spinning particles. We construct, to linear order in spins, perturbative radiating solutions to the classical Yang-Mills equations sourced by a set of interacting color charges with chromomagnetic dipole spin couplings. Using a color-to-kinematics replacement rule proposed earlier by one of the authors, these solutions map onto radiation in a theory of interacting particles coupled to massless fields that include the graviton, a scalar (dilaton) ϕ and the Kalb-Ramond axion field Bμν. Consistency of the double copy imposes constraints on the parameters of the theory on both the gauge and gravity sides of the correspondence. In particular, the color charges carry a chromomagnetic interaction which, in d = 4, corresponds to a gyromagnetic ratio equal to Dirac's value g = 2. The color-to-kinematics map implies that on the gravity side, the bulk theory of the fields (ϕ, gμν, Bμν) has interactions which match those of d-dimensional "string gravity," as is the case both in the BCJ double copy of pure gauge theory scattering amplitudes and the KLT relations between the tree-level S-matrix elements of open and closed string theory.
Lamb wave extraction of dispersion curves in micro/nano-plates using couple stress theories
NASA Astrophysics Data System (ADS)
Ghodrati, Behnam; Yaghootian, Amin; Ghanbar Zadeh, Afshin; Mohammad-Sedighi, Hamid
2018-01-01
In this paper, Lamb wave propagation in homogeneous and isotropic non-classical micro/nano-plates is investigated. To consider the effect of material microstructure on the wave propagation, three size-dependent models, namely the indeterminate, modified, and consistent couple stress theories, are used to derive the dispersion equations. In these theories, a parameter called the 'characteristic length' accounts for the size of the material microstructure in the governing equations. To generalize the parametric studies and examine the effect of thickness, propagation wavelength, and characteristic length on the behavior of miniature plate structures, the governing equations are nondimensionalized by defining appropriate dimensionless parameters. The dispersion curves for phase and group velocities are then plotted over a wide frequency-thickness range to study Lamb wave propagation with microstructure effects at very high frequencies. According to the illustrated results, the couple stress theories predict more rigidity in a Cosserat-type material than the classical theory does; in a plate of constant thickness, as the thickness-to-characteristic-length ratio increases, the results approach the classical theory, and as this ratio decreases, the wave propagation speed in the plate increases significantly. In addition, it is demonstrated that high-frequency Lamb waves converge to the dispersive Rayleigh wave velocity.
D'Ariano, Giacomo Mauro
2018-07-13
Causality has never gained the status of a 'law' or 'principle' in physics. Some recent literature has even popularized the false idea that causality is a notion that should be banned from theory. Such a misconception relies on an alleged universality of the reversibility of the laws of physics, based either on the determinism of classical theory or on the multiverse interpretation of quantum theory, in both cases motivated by mere interpretational requirements for realism of the theory. Here, I will show that a properly defined unambiguous notion of causality is a theorem of quantum theory, which is also a falsifiable proposition of the theory. Such a notion of causality appeared in the literature within the framework of operational probabilistic theories. It is a genuinely theoretical notion, corresponding to establishing a definite partial order among events, in the same way as we do by using the future causal cone on Minkowski space. The notion of causality is logically completely independent of the misidentified concept of 'determinism' and, being a consequence of quantum theory, is ubiquitous in physics. In addition, as classical theory can be regarded as a restriction of quantum theory, causality holds also in the classical case, although the determinism of the theory trivializes it. I then conclude by arguing that causality naturally establishes an arrow of time. This implies that the scenario of the 'block Universe' and the connected 'past hypothesis' are incompatible with causality, and thus with quantum theory: they are both doomed to remain mere interpretations and, as such, are not falsifiable, similar to the hypothesis of 'super-determinism'. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Author(s).
A Compilation of MATLAB Scripts and Functions for MACGMC Analyses
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Bednarcyk, Brett A.; Mital, Subodh K.
2017-01-01
The primary aim of the current effort is to provide scripts that automate many of the repetitive pre- and post-processing tasks associated with composite materials analyses using the Micromechanics Analysis Code with the Generalized Method of Cells (MACGMC). This document consists of a compilation of hundreds of scripts that were developed in the MATLAB (The MathWorks, Inc., Natick, MA) programming language and consolidated into 16 MATLAB functions. MACGMC is a composite material and laminate analysis software code developed at NASA Glenn Research Center. The software package has been built around the generalized method of cells (GMC) family of micromechanics theories. The computer code is developed with a user-friendly framework, along with a library of local inelastic, damage, and failure models. Further, application of simulated thermo-mechanical loading, generation of output results, and selection of architectures to represent the composite material have been automated to increase the user friendliness, as well as to make the code more robust in terms of input preparation and execution. Finally, classical lamination theory has been implemented within the software, wherein GMC is used to model the composite material response of each ply. Thus, the full range of GMC composite material capabilities is available for analysis of arbitrary laminate configurations as well. The pre-processing tasks include generation of a multitude of different repeating unit cells (RUCs) for CMCs and PMCs, visualization of RUCs from MACGMC input and output files, and generation of the RUC section of a MACGMC input file. The post-processing tasks include visualization of the predicted composite response, such as local stress and strain contours, damage initiation and progression, stress-strain behavior, and fatigue response.
In addition to the above, several miscellaneous scripts have been developed that can be used to perform repeated Monte-Carlo simulations to enable probabilistic simulations with minimal manual intervention. This document is formatted to provide MATLAB source files and descriptions of how to utilize them. It is assumed that the user has a basic understanding of how MATLAB scripts work and some MATLAB programming experience.
Soliton Gases and Generalized Hydrodynamics
NASA Astrophysics Data System (ADS)
Doyon, Benjamin; Yoshimura, Takato; Caux, Jean-Sébastien
2018-01-01
We show that the equations of generalized hydrodynamics (GHD), a hydrodynamic theory for integrable quantum systems at the Euler scale, emerge in full generality in a family of classical gases, which generalize the gas of hard rods. In this family, the particles, upon colliding, jump forward or backward by a distance that depends on their velocities, reminiscent of classical soliton scattering. This provides a "molecular dynamics" for GHD: a numerical solver which is efficient, flexible, and which applies to the presence of external force fields. GHD also describes the hydrodynamics of classical soliton gases. We identify the GHD of any quantum model with that of the gas of its solitonlike wave packets, thus providing a remarkable quantum-classical equivalence. The theory is directly applicable, for instance, to integrable quantum chains and to the Lieb-Liniger model realized in cold-atom experiments.
Topological and Orthomodular Modeling of Context in Behavioral Science
NASA Astrophysics Data System (ADS)
Narens, Louis
2017-02-01
Two non-Boolean methods are discussed for modeling context in behavioral data and theory. The first is based on intuitionistic logic, which is similar to classical logic except that not every event has a complement. Its probability theory is also similar to classical probability theory except that the definition of a probability function needs to be generalized to unions of events instead of applying only to unions of disjoint events. The generalization is needed because intuitionistic event spaces may not contain enough disjoint events for the classical definition to be effective. The second method develops a version of quantum logic for its underlying probability theory. It differs from the Hilbert space logic used in quantum mechanics as a foundation for quantum probability theory in a variety of ways. John von Neumann and others have commented on the lack of a relative frequency approach and a rational foundation for this probability theory. This article argues that its version of quantum probability theory does not have such issues. The method based on intuitionistic logic is useful for modeling cognitive interpretations that vary with context, for example, the mood of the decision maker, the context produced by the influence of other items in a choice experiment, etc. The method based on this article's quantum logic is useful for modeling probabilities across contexts, for example, how probabilities of events from different experiments are related.
A classical density functional theory of ionic liquids.
Forsman, Jan; Woodward, Clifford E; Trulsson, Martin
2011-04-28
We present a simple, classical density functional approach to the study of simple models of room temperature ionic liquids. Dispersion attractions as well as ion correlation effects and excluded volume packing are taken into account. The oligomeric structure, common to many ionic liquid molecules, is handled by a polymer density functional treatment. The theory is evaluated by comparisons with simulations, with an emphasis on the differential capacitance, an experimentally measurable quantity of significant practical interest.
Generalized quantum theory of recollapsing homogeneous cosmologies
NASA Astrophysics Data System (ADS)
Craig, David; Hartle, James B.
2004-06-01
A sum-over-histories generalized quantum theory is developed for homogeneous minisuperspace type A Bianchi cosmological models, focusing on the particular example of the classically recollapsing Bianchi type-IX universe. The decoherence functional for such universes is exhibited. We show how the probabilities of decoherent sets of alternative, coarse-grained histories of these model universes can be calculated. We consider in particular the probabilities for classical evolution defined by a suitable coarse graining. For a restricted class of initial conditions and coarse grainings we exhibit the approximate decoherence of alternative histories in which the universe behaves classically and those in which it does not. For these situations we show that the probability is near unity for the universe to recontract classically if it expands classically. We also determine the relative probabilities of quasiclassical trajectories for initial states of WKB form, recovering for such states a precise form of the familiar heuristic “J·dΣ” rule of quantum cosmology, as well as a generalization of this rule to generic initial states.
Ciliates learn to diagnose and correct classical error syndromes in mating strategies
Clark, Kevin B.
2013-01-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. 
PMID:23966987
NASA Astrophysics Data System (ADS)
Huyskens, P.; Kapuku, F.; Colemonts-Vandevyvere, C.
1990-09-01
In liquids the partners of H bonds constantly change. As a consequence the entities observed by IR spectroscopy are not the same as those considered for thermodynamic properties. For the latter, the H-bonds are shared by all the molecules. The thermodynamic "monomeric fraction", γ, the time fraction during which an alcohol molecule is vaporizable, is the square root of the spectroscopic monomeric fraction, and is the fraction of molecules which, during a time interval of 10^-14 s, have their hydroxylic proton and their lone pairs free. The classical thermodynamic treatments of Mecke and Prigogine consider the spectroscopic entities as real thermodynamic entities. Opposed to this, the mobile order theory considers all the formal molecules as equal but with a reduction of the entropy due to the fact that during a fraction 1-γ of the time, the OH proton follows a neighbouring oxygen atom on its journey through the liquid. Mobile order theory and the classic multicomponent treatment lead, in binary mixtures of the associated substance A with the inert substance S, to expressions of the chemical potentials μ_A and μ_S that are fundamentally different. However, the differences become very important only when the molar volumes V̄_S and V̄_A differ by a factor larger than 2. As a consequence the equations of the classic theory can still fit the experimental vapour pressure data of mixtures of liquid alcohols and liquid alkanes. However, the solubilities of solid alkanes in water, for which V̄_S > 3 V̄_A, are only correctly predicted by the mobile order theory.
From Foucault to Freire through Facebook: Toward an Integrated Theory of mHealth
ERIC Educational Resources Information Center
Bull, Sheana; Ezeanochie, Nnamdi
2016-01-01
Objective: To document the integration of social science theory in literature on mHealth (mobile health) and consider opportunities for integration of classic theory, health communication theory, and social networking to generate a relevant theory for mHealth program design. Method: A secondary review of research syntheses and meta-analyses…
KEWPIE: A dynamical cascade code for decaying excited compound nuclei
NASA Astrophysics Data System (ADS)
Bouriquet, Bertrand; Abe, Yasuhisa; Boilley, David
2004-05-01
A new dynamical cascade code for decaying hot nuclei is proposed and specially adapted to the synthesis of super-heavy nuclei. In such a case, the channel of interest is the tiny fraction that decays through particle emission; the code therefore avoids classical Monte-Carlo methods and proposes a new numerical scheme. The time dependence is explicitly taken into account in order to cope with the fact that the fission decay rate might not be constant. The code allows one to evaluate both statistical and dynamical observables. Results are successfully compared to experimental data.
Generalizability Theory and Classical Test Theory
ERIC Educational Resources Information Center
Brennan, Robert L.
2011-01-01
Broadly conceived, reliability involves quantifying the consistencies and inconsistencies in observed scores. Generalizability theory, or G theory, is particularly well suited to addressing such matters in that it enables an investigator to quantify and distinguish the sources of inconsistencies in observed scores that arise, or could arise, over…
Theories of the Alcoholic Personality.
ERIC Educational Resources Information Center
Cox, W. Miles
Several theories of the alcoholic personality have been devised to determine the relationship between the clusters of personality characteristics of alcoholics and their abuse of alcohol. The oldest and probably best known theory is the dependency theory, formulated in the tradition of classical psychoanalysis, which associates the alcoholic's…
The Giffen Effect: A Note on Economic Purposes.
ERIC Educational Resources Information Center
Williams, William D.
1990-01-01
Describes the Giffen effect: demand for a commodity increases as price increases. Explains how applying control theory eliminates the paradox that the Giffen effect presents to classic economics supply and demand theory. Notes the differences in how conventional demand theory and control theory treat consumer behavior. (CH)
Personality Theories for the 21st Century
ERIC Educational Resources Information Center
McCrae, Robert R.
2011-01-01
Classic personality theories, although intriguing, are outdated. The five-factor model of personality traits reinvigorated personality research, and the resulting findings spurred a new generation of personality theories. These theories assign a central place to traits and acknowledge the crucial role of evolved biology in shaping human…
Continuous Time in Consistent Histories
NASA Astrophysics Data System (ADS)
Savvidou, Konstantina
1999-12-01
We discuss the case of histories labelled by a continuous time parameter in the History Projection Operator consistent-histories quantum theory. We describe how the appropriate representation of the history algebra may be chosen by requiring the existence of projection operators that represent propositions about time averages of the energy. We define the action operator for the consistent histories formalism, as the quantum analogue of the classical action functional, for the simple harmonic oscillator case. We show that the action operator is the generator of two types of time transformations that may be related to the two laws of time evolution of standard quantum theory: the 'state-vector reduction' and the unitary time evolution. We construct the corresponding classical histories and demonstrate their relevance to the quantum histories; we show how the requirement of a temporal logic structure for the theory suffices for the definition of classical histories. Furthermore, we show the relation of the action operator to the decoherence functional which describes the dynamics of the system. Finally, the discussion is extended to give a preliminary account of quantum field theory in this approach to the consistent histories formalism.
Enhancing Undergraduate Mathematics Curriculum via Coding Theory and Cryptography
ERIC Educational Resources Information Center
Aydin, Nuh
2009-01-01
The theory of error-correcting codes and cryptography are two relatively recent applications of mathematics to information and communication systems. The mathematical tools used in these fields generally come from algebra, elementary number theory, and combinatorics, including concepts from computational complexity. It is possible to introduce the…
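A classroom-sized example of the error-correcting codes such a course would introduce is the Hamming(7,4) code, which encodes 4 data bits into 7 and corrects any single bit flip. A minimal sketch (the bit layout is one common convention, not anything specified by the abstract):

```python
# Hamming(7,4): encode 4 data bits into 7; the syndrome of a corrupted
# codeword equals the 1-based position of a single flipped bit.

def encode(d):
    """d = [d1, d2, d3, d4]; codeword layout [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    """Recompute parities; the syndrome locates (and repairs) one bit flip."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3
    c = c[:]
    if pos:
        c[pos - 1] ^= 1              # flip the offending bit back
    return c

word = encode([1, 0, 1, 1])
corrupted = word[:]
corrupted[5] ^= 1                    # single bit flip in transit
assert correct(corrupted) == word
print(correct(corrupted))
```

The parity-check structure uses only XOR arithmetic over GF(2), which is exactly the algebra/number-theory toolkit the abstract says these courses draw on.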
Effects of Extrinsic Mortality on the Evolution of Aging: A Stochastic Modeling Approach
Shokhirev, Maxim Nikolaievich; Johnson, Adiv Adam
2014-01-01
The evolutionary theories of aging are useful for gaining insights into the complex mechanisms underlying senescence. Classical theories argue that high levels of extrinsic mortality should select for the evolution of shorter lifespans and earlier peak fertility. Non-classical theories, in contrast, posit that an increase in extrinsic mortality could select for the evolution of longer lifespans. Although numerous studies support the classical paradigm, recent data challenge classical predictions, finding that high extrinsic mortality can select for the evolution of longer lifespans. To further elucidate the role of extrinsic mortality in the evolution of aging, we implemented a stochastic, agent-based, computational model. We used a simulated annealing optimization approach to predict which model parameters predispose populations to evolve longer or shorter lifespans in response to increased levels of predation. We report that longer lifespans evolved in the presence of rising predation if the cost of mating is relatively high and if energy is available in excess. Conversely, we found that dramatically shorter lifespans evolved when mating costs were relatively low and food was relatively scarce. We also analyzed the effects of increased predation on various parameters related to density dependence and energy allocation. Longer and shorter lifespans were accompanied by increased and decreased investments of energy into somatic maintenance, respectively. Similarly, earlier and later maturation ages were accompanied by increased and decreased energetic investments into early fecundity, respectively. Higher predation significantly decreased the total population size, enlarged the shared resource pool, and redistributed energy reserves for mature individuals. 
These results both corroborate and refine classical predictions, demonstrating a population-level trade-off between longevity and fecundity and identifying conditions that produce both classical and non-classical lifespan effects. PMID:24466165
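The simulated-annealing optimization the study uses can be sketched generically. Below is a minimal sketch on a toy one-dimensional objective; the study's actual objective (fit of the agent-based model's parameters) is far richer, and the cooling schedule and step size here are illustrative choices.

```python
# Generic simulated annealing: accept downhill moves always, uphill moves
# with probability exp(-delta/t), cooling t toward zero. Toy objective only.
import math
import random

def simulated_annealing(f, x0, steps=20000, t0=1.0, seed=0):
    rng = random.Random(seed)
    x, best = x0, x0
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9      # linear cooling schedule
        cand = x + rng.gauss(0, 0.5)         # random neighbour of x
        delta = f(cand) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand                         # accept the candidate
        if f(x) < f(best):
            best = x                         # track the best state seen
    return best

toy = lambda x: (x - 3.0) ** 2               # minimum at x = 3
best = simulated_annealing(toy, x0=-10.0)
print(round(best, 2))
```

Early high temperatures let the search escape local minima; as t falls, the walk settles into the best basin found, which is why the method suits the rugged parameter landscapes of evolutionary simulations.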
Assessing the quantum physics impacts on future x-ray free-electron lasers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmitt, Mark J.; Anisimov, Petr Mikhaylovich
A new quantum mechanical theory of x-ray free electron lasers (XFELs) has been successfully developed that has placed LANL at the forefront of the understanding of quantum effects in XFELs. Our quantum theory describes the interaction of relativistic electrons with x-ray radiation in the periodic magnetic field of an undulator using the same mathematical formalism as classical XFEL theory. This places classical and quantum treatments on the same footing and allows for a continuous transition from one regime to the other, eliminating the disparate analytical approaches previously used. Moreover, Dr. Anisimov, the architect of this new theory, is now considered a resource in the international FEL community for assessing quantum effects in XFELs.
Factors Affecting Christian Parents' School Choice Decision Processes: A Grounded Theory Study
ERIC Educational Resources Information Center
Prichard, Tami G.; Swezey, James A.
2016-01-01
This study identifies factors affecting the decision processes for school choice by Christian parents. Grounded theory design incorporated interview transcripts, field notes, and a reflective journal to analyze themes. Comparative analysis, including open, axial, and selective coding, was used to reduce the coded statements to five code families:…
NASA Astrophysics Data System (ADS)
Medley, S. S.; Budny, R. V.; Mansfield, D. K.; Redi, M. H.; Roquemore, A. L.; Fisher, R. K.; Duong, H. H.; McChesney, J. M.; Parks, P. B.; Petrov, M. P.; Gorelenkov, N. N.
1996-10-01
The energy distributions and radial density profiles of the fast confined trapped alpha particles in DT experiments on TFTR are being measured in the energy range 0.5-3.5 MeV using the pellet charge exchange (PCX) diagnostic. A brief description of the measurement technique, which involves active neutral particle analysis using the ablation cloud surrounding an injected impurity pellet as the neutralizer, is presented. This paper focuses on alpha and triton measurements in the core of MHD quiescent TFTR discharges where the expected classical slowing-down and pitch angle scattering effects are not complicated by stochastic ripple diffusion and sawtooth activity. In particular, the first measurement of the alpha slowing-down distribution up to the birth energy, obtained using boron pellet injection, is presented. The measurements are compared with predictions using either the TRANSP Monte Carlo code and/or a Fokker-Planck post-TRANSP processor code, which assumes that the alphas and tritons are well confined and slow down classically. Both the shape of the measured alpha and triton energy distributions and their density ratios are in good agreement with the code calculations. We can conclude that the PCX measurements are consistent with classical thermalization of the fusion-generated alphas and tritons.
Key Reconciliation for High Performance Quantum Key Distribution
Martinez-Mateo, Jesus; Elkouss, David; Martin, Vicente
2013-01-01
Quantum Key Distribution is carving its place among the tools used to secure communications. While a difficult technology, it enjoys benefits that set it apart from the rest, the most prominent being its provable security based on the laws of physics. QKD requires not only the mastering of signals at the quantum level, but also classical processing to extract a secret key from them. This postprocessing has customarily been studied in terms of the efficiency, a figure of merit that offers a biased view of the performance of real devices. Here we argue that it is the throughput that is the significant quantity in practical QKD, especially in the case of high-speed devices, where the differences are more marked, and give some examples contrasting the usual postprocessing schemes with new ones from modern coding theory. A good understanding of its implications is very important for the design of modern QKD devices. PMID:23546440
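To see how the reconciliation efficiency figure of merit enters the key rate, consider the standard asymptotic BB84 secret-key fraction r = 1 - f·h(e) - h(e), where h is the binary entropy, e the quantum bit-error rate, and f ≥ 1 the reconciliation efficiency. A minimal sketch with illustrative numbers (not taken from the paper):

```python
# Secret-key fraction vs reconciliation efficiency in asymptotic BB84.
# f*h(e) is the error-correction leakage, h(e) the privacy-amplification cost.
import math

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def secret_fraction(qber, f):
    """Asymptotic BB84 secret fraction; f >= 1 is the reconciliation efficiency."""
    return 1 - f * h(qber) - h(qber)

r = secret_fraction(qber=0.02, f=1.1)   # illustrative QBER and efficiency
print(round(r, 3))
```

The paper's point is that a slightly worse f can still win in practice if the decoder processes sifted bits faster: the delivered secret-key rate is roughly (sifted-key throughput) × r, so throughput, not f alone, is what matters for high-speed devices.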
NASA Technical Reports Server (NTRS)
Hunter, Craig A.
1995-01-01
An analytical/numerical method has been developed to predict the static thrust performance of non-axisymmetric, two-dimensional convergent-divergent exhaust nozzles. Thermodynamic nozzle performance effects due to over- and underexpansion are modeled using one-dimensional compressible flow theory. Boundary layer development and skin friction losses are calculated using an approximate integral momentum method based on the classic Kármán-Pohlhausen solution. Angularity effects are included with these two models in a computational Nozzle Performance Analysis Code, NPAC. In four different case studies, results from NPAC are compared to experimental data obtained from subscale nozzle testing to demonstrate the capabilities and limitations of the NPAC method. In several cases, the NPAC prediction matched experimental gross thrust efficiency data to within 0.1 percent at the design NPR, and to within 0.5 percent at off-design conditions.
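A representative building block of the one-dimensional compressible flow theory mentioned above is the isentropic area-Mach relation, solved here for the supersonic exit Mach number of a convergent-divergent nozzle by bisection. This is a generic textbook sketch, not NPAC's implementation; gamma and the area ratio are illustrative.

```python
# Isentropic area-Mach relation for a perfect gas:
#   A/A* = (1/M) * [ (2/(g+1)) * (1 + (g-1)/2 * M^2) ]^((g+1)/(2(g-1)))
# Solved on the supersonic branch, where A/A* increases with M.

def area_ratio(M, g=1.4):
    return (1.0 / M) * ((2.0 / (g + 1)) * (1 + (g - 1) / 2 * M * M)) ** ((g + 1) / (2 * (g - 1)))

def supersonic_mach(ratio, g=1.4):
    """Bisection for M > 1 such that area_ratio(M) == ratio."""
    lo, hi = 1.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, g) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

M_exit = supersonic_mach(2.0)   # divergent section with A_exit/A_throat = 2
print(round(M_exit, 3))
```

With the exit Mach number known, the isentropic pressure ratio follows, and comparing exit to ambient pressure is what classifies the nozzle as over- or underexpanded.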
Progressive Failure Analysis Methodology for Laminated Composite Structures
NASA Technical Reports Server (NTRS)
Sleight, David W.
1999-01-01
A progressive failure analysis method has been developed for predicting the failure of laminated composite structures under geometrically nonlinear deformations. The progressive failure analysis uses C(exp 1) shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms and several options are available to degrade the material properties after failures. The progressive failure analysis method is implemented in the COMET finite element analysis code and can predict the damage and response of laminated composite structures from initial loading to final failure. The different failure criteria and material degradation methods are compared and assessed by performing analyses of several laminated composite structures. Results from the progressive failure method indicate good correlation with the existing test data except in structural applications where interlaminar stresses are important which may cause failure mechanisms such as debonding or delaminations.
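Of the failure criteria listed, the maximum strain criterion is the simplest to state: a ply is declared failed as soon as any strain component exceeds its allowable. A minimal sketch (the function and field names are our own illustration, not the COMET implementation):

```python
def max_strain_failure(strains, allowables):
    """Maximum-strain failure criterion for a single ply.

    strains: in-plane strains in material axes: eps1, eps2, gam12.
    allowables: positive allowable magnitudes e1t/e1c, e2t/e2c
    (tension/compression) and g12 (shear).
    Returns (failed, failure_index); failure_index >= 1 signals ply failure.
    """
    e1, e2, g12 = strains["eps1"], strains["eps2"], strains["gam12"]
    r1 = e1 / allowables["e1t"] if e1 >= 0 else -e1 / allowables["e1c"]
    r2 = e2 / allowables["e2t"] if e2 >= 0 else -e2 / allowables["e2c"]
    r12 = abs(g12) / allowables["g12"]
    fi = max(r1, r2, r12)
    return fi >= 1.0, fi
```

In a progressive failure loop, a ply whose index reaches 1 would have its material properties degraded before the next load step, which is the role the abstract assigns to the degradation options.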
Orbital tori for non-axisymmetric galaxies
NASA Astrophysics Data System (ADS)
Binney, James
2018-02-01
Our Galaxy's bar makes the Galaxy's potential distinctly non-axisymmetric. All orbits are affected by non-axisymmetry, and significant numbers are qualitatively changed by being trapped at a resonance with the bar. Orbital tori are used to compute these effects. Thick-disc orbits are no less likely to be trapped by corotation or a Lindblad resonance than thin-disc orbits. Perturbation theory is used to create non-axisymmetric orbital tori from standard axisymmetric tori, and both trapped and untrapped orbits are recovered to surprising accuracy. Code is added to the TorusModeller library that makes it as easy to manipulate non-axisymmetric tori as axisymmetric ones. The augmented TorusModeller is used to compute the velocity structure of the solar neighbourhood for bars of different pattern speeds and a simple action-based distribution function. The technique developed here can be applied to any non-axisymmetric potential that is stationary in a rotating frame - hence also to classical spiral structure.
NASA Technical Reports Server (NTRS)
Gotsis, Pascal K.; Chamis, Christos C.
1992-01-01
The nonlinear behavior of a high-temperature metal-matrix composite (HT-MMC) was simulated by using the metal matrix composite analyzer (METCAN) computer code. The simulation started with the fabrication process, proceeded to thermomechanical cyclic loading, and ended with the application of a monotonic load. Classical laminate theory and composite micromechanics and macromechanics are used in METCAN, along with a multifactor interaction model for the constituents' behavior. The simulation of the stress-strain behavior from the macromechanical and the micromechanical points of view, as well as the initiation and final failure of the constituents and the plies in the composite, was examined in detail. It was shown that, when the fibers and the matrix were perfectly bonded, the fracture started in the matrix and then propagated with increasing load to the fibers. After the fibers fractured, the composite lost its capacity to carry additional load and fractured.
NASA Technical Reports Server (NTRS)
Kaufman, A.; Laflen, J. H.; Lindholm, U. S.
1985-01-01
Unified constitutive material models were developed for structural analyses of aircraft gas turbine engine components with particular application to isotropic materials used for high-pressure stage turbine blades and vanes. Forms or combinations of models independently proposed by Bodner and Walker were considered. These theories combine time-dependent and time-independent aspects of inelasticity into a continuous spectrum of behavior. This is in sharp contrast to previous classical approaches that partition inelastic strain into uncoupled plastic and creep components. Predicted stress-strain responses from these models were evaluated against monotonic and cyclic test results for uniaxial specimens of two cast nickel-base alloys, B1900+Hf and Rene' 80. Previously obtained tension-torsion test results for Hastelloy X alloy were used to evaluate multiaxial stress-strain cycle predictions. The unified models, as well as appropriate algorithms for integrating the constitutive equations, were implemented in finite-element computer codes.
Employment of CB models for non-linear dynamic analysis
NASA Technical Reports Server (NTRS)
Klein, M. R. M.; Deloo, P.; Fournier-Sicre, A.
1990-01-01
The non-linear dynamic analysis of large structures is always very costly in time, effort, and CPU usage. Whenever possible, reducing the size of the mathematical model involved is of main importance to speed up the computational procedures. Such reduction can be performed for the parts of the structure which behave linearly. Most of the time, the classical Guyan reduction process is used. For non-linear dynamic processes where the non-linearity is present at interfaces between different structures, Craig-Bampton models can provide very rich information and allow easy selection of the relevant modes with respect to the phenomenon driving the non-linearity. The paper presents the employment of Craig-Bampton models combined with Newmark direct integration for solving non-linear friction problems appearing at the interface between the Hubble Space Telescope and its solar arrays during in-orbit maneuvers. Theory, implementation in the FEM code ASKA, and practical results are shown.
NASA Technical Reports Server (NTRS)
Mikellides, Ioannis G.; Katz, Ira; Goebel, Dan M.; Jameson, Kristina K.
2006-01-01
Numerical simulations with the time-dependent Orificed Cathode (OrCa2D-II) computer code show that classical enhancements of the plasma resistivity cannot account for the elevated electron temperatures and steep plasma potential gradients measured in the plume of a 25-27.5 A discharge hollow cathode. The cathode, which employs a 0.11-in diameter orifice, was operated at 5.5 sccm without an applied magnetic field using two different anode geometries. It is found that anomalous resistivity based on electron-driven instabilities improves the comparison between theory and experiment. It is also estimated that other effects, such as the Hall effect from the self-induced magnetic field, not presently included in OrCa2D-II, may contribute to the constriction of the current density streamlines, thus explaining the higher plasma densities observed along the centerline.
NASA Technical Reports Server (NTRS)
Gotsis, Pascal K.
1991-01-01
The nonlinear behavior of a high-temperature metal-matrix composite (HT-MMC) was simulated by using the metal matrix composite analyzer (METCAN) computer code. The simulation started with the fabrication process, proceeded to thermomechanical cyclic loading, and ended with the application of a monotonic load. Classical laminate theory and composite micromechanics and macromechanics are used in METCAN, along with a multifactor interaction model for the constituents' behavior. The simulation of the stress-strain behavior from the macromechanical and the micromechanical points of view, as well as the initiation and final failure of the constituents and the plies in the composite, was examined in detail. It was shown that, when the fibers and the matrix were perfectly bonded, the fracture started in the matrix and then propagated with increasing load to the fibers. After the fibers fractured, the composite lost its capacity to carry additional load and fractured.
Enhanced auditory temporal gap detection in listeners with musical training.
Mishra, Srikanta K; Panda, Manas R; Herbert, Carolyn
2014-08-01
Many features of auditory perception are positively altered in musicians. Traditionally, auditory mechanisms in musicians are investigated using the Western-classical musician model. The objective of the present study was to adopt an alternative model, Indian-classical music, to further investigate auditory temporal processing in musicians. This study shows that musicians have significantly lower across-channel gap detection thresholds compared to nonmusicians. Use of the South Indian musician model provides increased external validity for the prediction, from studies on Western-classical musicians, that auditory temporal coding is enhanced in musicians.
Dual coding: a cognitive model for psychoanalytic research.
Bucci, W
1985-01-01
Four theories of mental representation derived from current experimental work in cognitive psychology have been discussed in relation to psychoanalytic theory. These are: verbal mediation theory, in which language determines or mediates thought; perceptual dominance theory, in which imagistic structures are dominant; common code or propositional models, in which all information, perceptual or linguistic, is represented in an abstract, amodal code; and dual coding, in which nonverbal and verbal information are each encoded, in symbolic form, in separate systems specialized for such representation, and connected by a complex system of referential relations. The weight of current empirical evidence supports the dual code theory. However, psychoanalysis has implicitly accepted a mixed model-perceptual dominance theory applying to unconscious representation, and verbal mediation characterizing mature conscious waking thought. The characterization of psychoanalysis, by Schafer, Spence, and others, as a domain in which reality is constructed rather than discovered, reflects the application of this incomplete mixed model. The representations of experience in the patient's mind are seen as without structure of their own, needing to be organized by words, thus vulnerable to distortion or dissolution by the language of the analyst or the patient himself. In these terms, hypothesis testing becomes a meaningless pursuit; the propositions of the theory are no longer falsifiable; the analyst is always more or less "right." This paper suggests that the integrated dual code formulation provides a more coherent theoretical framework for psychoanalysis than the mixed model, with important implications for theory and technique. In terms of dual coding, the problem is not that the nonverbal representations are vulnerable to distortion by words, but that the words that pass back and forth between analyst and patient will not affect the nonverbal schemata at all. 
Using the dual code formulation, and applying an investigative methodology derived from experimental cognitive psychology, a new approach to the verification of interpretations is possible. Some constructions of a patient's story may be seen as more accurate than others, by virtue of their linkage to stored perceptual representations in long-term memory. We can demonstrate that such linking has occurred in functional or operational terms--through evaluating the representation of imagistic content in the patient's speech.
Collisionless stellar hydrodynamics as an efficient alternative to N-body methods
NASA Astrophysics Data System (ADS)
Mitchell, Nigel L.; Vorobyov, Eduard I.; Hensler, Gerhard
2013-01-01
The dominant constituents of the Universe's matter are believed to be collisionless in nature and thus their modelling in any self-consistent simulation is extremely important. For simulations that deal only with dark matter or stellar systems, the conventional N-body technique is fast, memory efficient and relatively simple to implement. However, when extending simulations to include the effects of gas physics, mesh codes are at a distinct disadvantage compared to Smoothed Particle Hydrodynamics (SPH) codes. Whereas implementing the N-body approach into SPH codes is fairly trivial, the particle-mesh technique used in mesh codes to couple collisionless stars and dark matter to the gas on the mesh has a series of significant scientific and technical limitations. These include spurious entropy generation resulting from discreteness effects, poor load balancing and increased communication overhead which spoil the excellent scaling in massively parallel grid codes. In this paper we propose the use of the collisionless Boltzmann moment equations as a means to model the collisionless material as a fluid on the mesh, implementing it into the massively parallel FLASH Adaptive Mesh Refinement (AMR) code. This approach which we term `collisionless stellar hydrodynamics' enables us to do away with the particle-mesh approach and since the parallelization scheme is identical to that used for the hydrodynamics, it preserves the excellent scaling of the FLASH code already demonstrated on petaflop machines. We find that the classic hydrodynamic equations and the Boltzmann moment equations can be reconciled under specific conditions, allowing us to generate analytic solutions for collisionless systems using conventional test problems. We confirm the validity of our approach using a suite of demanding test problems, including the use of a modified Sod shock test. 
By deriving the relevant eigenvalues and eigenvectors of the Boltzmann moment equations, we are able to use high order accurate characteristic tracing methods with Riemann solvers to generate numerical solutions which show excellent agreement with our analytic solutions. We conclude by demonstrating the ability of our code to model complex phenomena by simulating the evolution of a two-armed spiral galaxy whose properties agree with those predicted by the swing amplification theory.
Heuristic rules embedded genetic algorithm for in-core fuel management optimization
NASA Astrophysics Data System (ADS)
Alim, Fatih
The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement in order to achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GA) have already been used to solve this problem for LP optimization for both PWR and Boiling Water Reactor (BWR) cores. The GA, which is a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population by using the evolution operators. To solve this optimization problem, a LP optimization package, the GARCO (Genetic Algorithm Reactor Code Optimization) code, was developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures, with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA was developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation but also the algorithm was changed, to use in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis, and preliminary results are shown for the VVER-1000 reactor hexagonal geometry core and the TMI-1 PWR. 
The core physics code used for VVER in this research is Moby-Dick, which was developed to analyze the VVER by SKODA Inc. The SIMULATE-3 code, which is an advanced two-group nodal code, is used to analyze the TMI-1.
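The GA loop the abstract describes (create an initial population, evaluate it, improve it with selection and variation operators) can be sketched generically. The toy fitness below (OneMax, counting 1-bits) is purely illustrative and has nothing to do with GARCO's reactor physics or its modified genotype:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=40, generations=60,
                      p_mut=0.02, seed=0):
    """Generic GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < p_mut else b for b in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm(sum)  # OneMax: fitness is the number of 1-bits
```

In an in-core fuel management setting, the bitstring would be replaced by a loading-pattern encoding and the fitness by a core physics evaluation (such as the Moby-Dick or SIMULATE-3 runs mentioned below), with heuristic rules constraining which children are admissible.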
A Nonvolume Preserving Plasticity Theory with Applications to Powder Metallurgy
NASA Technical Reports Server (NTRS)
Cassenti, B. N.
1983-01-01
A plasticity theory has been developed to predict the mechanical response of powder metals during hot isostatic pressing. The theory parameters were obtained through an experimental program consisting of hydrostatic pressure tests, uniaxial compression and uniaxial tension tests. A nonlinear finite element code was modified to include the theory, and the results of the modified code compared favorably to the results from a verification experiment.
On quantum effects in a theory of biological evolution.
Martin-Delgado, M A
2012-01-01
We construct a descriptive toy model that considers quantum effects on biological evolution starting from Chaitin's classical framework. There are smart evolution scenarios in which a quantum world is as favorable as classical worlds for evolution to take place. However, in more natural scenarios, the rate of evolution depends on the degree of entanglement present in quantum organisms with respect to classical organisms. If the entanglement is maximal, classical evolution turns out to be more favorable.
Experimental evaluation of a flat wake theory for predicting rotor inflow-wake velocities
NASA Technical Reports Server (NTRS)
Wilson, John C.
1992-01-01
The theory for predicting helicopter inflow-wake velocities, called flat wake theory, was correlated with several sets of experimental data. The theory was developed by V. E. Baskin of the USSR, and a computer code known as DOWN was developed at Princeton University to implement the theory. The theory treats the wake geometry as rigid, without interaction between induced velocities and wake structure. The wake structure is assumed to be a flat sheet of vorticity composed of trailing elements whose strength depends on the azimuthal and radial distributions of circulation on a rotor blade. The code predicts the three orthogonal components of flow velocity in the field surrounding the rotor. The predictions can be utilized in rotor performance and helicopter real-time flight-path simulation. The predictive capability of the coded version of flat wake theory provides vertical inflow patterns similar to experimental patterns.
The Value of Item Response Theory in Clinical Assessment: A Review
ERIC Educational Resources Information Center
Thomas, Michael L.
2011-01-01
Item response theory (IRT) and related latent variable models represent modern psychometric theory, the successor to classical test theory in psychological assessment. Although IRT has become prevalent in the measurement of ability and achievement, its contributions to clinical domains have been less extensive. Applications of IRT to clinical…
NASA Astrophysics Data System (ADS)
Rincón, Ángel; Panotopoulos, Grigoris
2018-01-01
We study for the first time the stability against scalar perturbations, and we compute the spectrum of quasinormal modes of three-dimensional charged black holes in Einstein-power-Maxwell nonlinear electrodynamics assuming running couplings. Adopting the sixth-order Wentzel-Kramers-Brillouin (WKB) approximation, we investigate how the running of the couplings changes the spectrum of the classical theory. Our results show that all modes corresponding to nonvanishing angular momentum are unstable both in the classical theory and with the running of the couplings, while the fundamental mode can be stable or unstable depending on the running parameter and the electric charge.
Concreteness effects in semantic processing: ERP evidence supporting dual-coding theory.
Kounios, J; Holcomb, P J
1994-07-01
Dual-coding theory argues that processing advantages for concrete over abstract (verbal) stimuli result from the operation of 2 systems (i.e., imaginal and verbal) for concrete stimuli, rather than just 1 (for abstract stimuli). These verbal and imaginal systems have been linked with the left and right hemispheres of the brain, respectively. Context-availability theory argues that concreteness effects result from processing differences in a single system. The merits of these theories were investigated by examining the topographic distribution of event-related brain potentials in 2 experiments (lexical decision and concrete-abstract classification). The results were most consistent with dual-coding theory. In particular, different scalp distributions of an N400-like negativity were elicited by concrete and abstract words.
Sedimentation from Particle-Laden Plumes in Stratified Fluid
NASA Astrophysics Data System (ADS)
Sutherland, Bruce; Hong, Youn Sub
2015-11-01
Laboratory experiments are performed in which a mixture of particles, water and a small amount of dye is continuously injected upwards from a localized source into a uniformly stratified ambient. The particle-fluid mixture initially rises as a forced plume (which in most cases is buoyant, though in some cases, due to high particle concentration, is negatively buoyant at the source), reaches a maximum height, collapses upon itself and then spreads as a radial intrusion. The particles are observed to rain out of the descending intrusion and settle upon the floor of the tank. Using light attenuation, the depth of the particle mound is measured after the experiment has run for a fixed amount of time. In most experiments the distribution of particles is found to be approximately axisymmetric about the source with a near-Gaussian structure for height as a function of radius. The results are compared with a code that combines classical plume theory with an adaptation to stratified fluids of the theory of Carey, Sigurdsson and Sparks (JGR, 1988) for the spread and fall of particles from a particle-laden plume impacting a rigid ceiling. Re-entrainment of particles into the plume is also taken into account.
Generalization Through the Recurrent Interaction of Episodic Memories
Kumaran, Dharshan; McClelland, James L.
2012-01-01
In this article, we present a perspective on the role of the hippocampal system in generalization, instantiated in a computational model called REMERGE (recurrency and episodic memory results in generalization). We expose a fundamental, but neglected, tension between prevailing computational theories that emphasize the function of the hippocampus in pattern separation (Marr, 1971; McClelland, McNaughton, & O'Reilly, 1995), and empirical support for its role in generalization and flexible relational memory (Cohen & Eichenbaum, 1993; Eichenbaum, 1999). Our account provides a means by which to resolve this conflict, by demonstrating that the basic representational scheme envisioned by complementary learning systems theory (McClelland et al., 1995), which relies upon orthogonalized codes in the hippocampus, is compatible with efficient generalization—as long as there is recurrence rather than unidirectional flow within the hippocampal circuit or, more widely, between the hippocampus and neocortex. We propose that recurrent similarity computation, a process that facilitates the discovery of higher-order relationships between a set of related experiences, expands the scope of classical exemplar-based models of memory (e.g., Nosofsky, 1984) and allows the hippocampus to support generalization through interactions that unfold within a dynamically created memory space. PMID:22775499
Constraints on primordial magnetic fields from inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Daniel; Kobayashi, Takeshi, E-mail: drgreen@cita.utoronto.ca, E-mail: takeshi.kobayashi@sissa.it
2016-03-01
We present generic bounds on magnetic fields produced from cosmic inflation. By investigating field bounds on the vector potential, we constrain both the quantum mechanical production of magnetic fields and their classical growth in a model-independent way. For classical growth, we show that only if the reheating temperature is as low as T_reh ≲ 10^2 MeV can magnetic fields of 10^-15 G be produced on Mpc scales in the present universe. For purely quantum mechanical scenarios, even stronger constraints are derived. Our bounds on classical and quantum mechanical scenarios apply to generic theories of inflationary magnetogenesis with a two-derivative time kinetic term for the vector potential. In both cases, the magnetic field strength is limited by the gravitational back-reaction of the electric fields that are produced simultaneously. As an example of quantum mechanical scenarios, we construct vector field theories whose time diffeomorphisms are spontaneously broken, and explore magnetic field generation in theories with a variable speed of light. Transitions of quantum vector field fluctuations into classical fluctuations are also analyzed in the examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurvits, L.
2002-01-01
Classical matching theory can be defined in terms of matrices with nonnegative entries. The notion of a positive operator, central in quantum theory, is a natural generalization of matrices with nonnegative entries. Based on this point of view, we introduce a definition of perfect quantum (operator) matching. We show that the new notion inherits many 'classical' properties, but not all of them. This new notion goes somewhere beyond matroids. For separable bipartite quantum states this new notion coincides with the full rank property of the intersection of two corresponding geometric matroids. In the classical situation, permanents are naturally associated with perfect matchings. We introduce an analog of permanents for positive operators, called the Quantum Permanent, and show how this generalization of the permanent is related to quantum entanglement. Besides many other things, Quantum Permanents provide new rational inequalities necessary for the separability of bipartite quantum states. Using Quantum Permanents, we give a deterministic poly-time algorithm to solve the Hidden Matroids Intersection Problem and indicate some 'classical' complexity difficulties associated with quantum entanglement. Finally, we prove that the weak membership problem for the convex set of separable bipartite density matrices is NP-HARD.
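The classical side of this correspondence is easy to make concrete: the permanent of a 0/1 matrix counts the perfect matchings of the bipartite graph it encodes. A sketch using Ryser's inclusion-exclusion formula (standard material on the classical permanent, not the paper's quantum generalization):

```python
from itertools import combinations

def permanent(a):
    """Permanent of a square matrix via Ryser's formula, O(2^n * n^2)."""
    n = len(a)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in a:                      # product over rows of column sums
                prod *= sum(row[c] for c in cols)
            total += (-1) ** r * prod          # inclusion-exclusion over column subsets
    return (-1) ** n * total
```

For the all-ones 2x2 matrix (the biadjacency matrix of K_{2,2}) the permanent is 2, matching its two perfect matchings; unlike the determinant, the permanent has no known polynomial-time algorithm, which is one source of the complexity difficulties the abstract alludes to.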
Giles, Tracey M; de Lacey, Sheryl; Muir-Cochrane, Eimear
2016-01-01
Grounded theory method has been described extensively in the literature. Yet, the varying processes portrayed can be confusing for novice grounded theorists. This article provides a worked example of the data analysis phase of a constructivist grounded theory study that examined family presence during resuscitation in acute health care settings. Core grounded theory methods are exemplified, including initial and focused coding, constant comparative analysis, memo writing, theoretical sampling, and theoretical saturation. The article traces the construction of the core category "Conditional Permission" from initial and focused codes, subcategories, and properties, through to its position in the final substantive grounded theory.
A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong
2013-01-01
Based on the optimization and improvement of the construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low-density parity-check (SCG-LDPC) (3969, 3720) code suitable for optical transmission systems is constructed. The novel SCG-LDPC (6561, 6240) code with a code rate of 95.1% is constructed by increasing the length of the SCG-LDPC (3969, 3720) code, so that the code rate of LDPC codes can better meet the high requirements of optical transmission systems. The novel concatenated code is then constructed by concatenating the SCG-LDPC (6561, 6240) code and the BCH (127, 120) code with a code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH (127, 120) + SCG-LDPC (6561, 6240) concatenated code is respectively 2.28 dB and 0.48 dB more than those of the classic RS (255, 239) code and the SCG-LDPC (6561, 6240) code at a bit error rate (BER) of 10^-7.
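The quoted rates can be sanity-checked with elementary arithmetic; for a serial concatenation, the overall rate is the product of the component rates (a standard fact about concatenated codes, not specific to this paper):

```python
from fractions import Fraction

def code_rate(n, k):
    """Rate k/n of an (n, k) block code, kept exact as a fraction."""
    return Fraction(k, n)

r_ldpc = code_rate(6561, 6240)   # SCG-LDPC(6561, 6240): ~95.1%
r_bch = code_rate(127, 120)      # BCH(127, 120): ~94.5%
r_concat = r_ldpc * r_bch        # serial concatenation multiplies the rates
```

Note that the overall rate of the concatenated scheme, about 89.9%, is necessarily lower than either component's rate; the design trades that redundancy for the extra coding gain reported in the abstract.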
Ghost imaging of phase objects with classical incoherent light
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirai, Tomohiro; Setaelae, Tero; Friberg, Ari T.
2011-10-15
We describe an optical setup for performing spatial Fourier filtering in ghost imaging with classical incoherent light. This is achieved by a modification of the conventional geometry for lensless ghost imaging. It is shown on the basis of classical coherence theory that with this technique one can realize what we call phase-contrast ghost imaging to visualize pure phase objects.
Quantum-mechanical machinery for rational decision-making in classical guessing game
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Ryu, Junghee; Pawłowski, Marcin; Ham, Byoung S.; Lee, Jinhyoung
2016-02-01
In quantum game theory, one of the most intriguing and important questions is, “Is it possible to get quantum advantages without any modification of the classical game?” The answer to this question so far has largely been negative. So far, it has usually been thought that a change of the classical game setting appears to be unavoidable for getting the quantum advantages. However, we give an affirmative answer here, focusing on the decision-making process (we call ‘reasoning’) to generate the best strategy, which may occur internally, e.g., in the player’s brain. To show this, we consider a classical guessing game. We then define a one-player reasoning problem in the context of the decision-making theory, where the machinery processes are designed to simulate classical and quantum reasoning. In such settings, we present a scenario where a rational player is able to make better use of his/her weak preferences due to quantum reasoning, without any altering or resetting of the classically defined game. We also argue in further analysis that the quantum reasoning may make the player fail, and even make the situation worse, due to any inappropriate preferences.
Quantum-mechanical machinery for rational decision-making in classical guessing game
Bang, Jeongho; Ryu, Junghee; Pawłowski, Marcin; Ham, Byoung S.; Lee, Jinhyoung
2016-01-01
In quantum game theory, one of the most intriguing and important questions is, “Is it possible to get quantum advantages without any modification of the classical game?” The answer to this question so far has largely been negative. So far, it has usually been thought that a change of the classical game setting appears to be unavoidable for getting the quantum advantages. However, we give an affirmative answer here, focusing on the decision-making process (we call ‘reasoning’) to generate the best strategy, which may occur internally, e.g., in the player’s brain. To show this, we consider a classical guessing game. We then define a one-player reasoning problem in the context of the decision-making theory, where the machinery processes are designed to simulate classical and quantum reasoning. In such settings, we present a scenario where a rational player is able to make better use of his/her weak preferences due to quantum reasoning, without any altering or resetting of the classically defined game. We also argue in further analysis that the quantum reasoning may make the player fail, and even make the situation worse, due to any inappropriate preferences. PMID:26875685
ERIC Educational Resources Information Center
Sadoski, Mark; And Others
1993-01-01
The comprehensibility, interestingness, familiarity, and memorability of concrete and abstract instructional texts were studied in 4 experiments involving 221 college students. Results indicate that concreteness (ease of imagery) is the variable overwhelmingly most related to comprehensibility and recall. Dual coding theory and schema theory are…
Toda theories as contractions of affine Toda theories
NASA Astrophysics Data System (ADS)
Aghamohammadi, A.; Khorrami, M.; Shariati, A.
1996-02-01
Using a contraction procedure, we obtain Toda theories and their structures from affine Toda theories and their corresponding structures. By structures, we mean the equation of motion, the classical Lax pair, the boundary term for half-line theories, and the quantum transfer matrix. The Lax pair and the transfer matrix so obtained depend nontrivially on the spectral parameter.
Comparing the Effectiveness of SPSS and EduG Using Different Designs for Generalizability Theory
ERIC Educational Resources Information Center
Teker, Gulsen Tasdelen; Guler, Nese; Uyanik, Gulden Kaya
2015-01-01
Generalizability theory (G theory) provides a broad conceptual framework for social sciences such as psychology and education, and a comprehensive construct for numerous measurement events by using analysis of variance, a strong statistical method. G theory, as an extension of both classical test theory and analysis of variance, is a model which…
An Approach to Biased Item Identification Using Latent Trait Measurement Theory.
ERIC Educational Resources Information Center
Rudner, Lawrence M.
Because it is a true score model employing item parameters which are independent of the examined sample, item characteristic curve theory (ICC) offers several advantages over classical measurement theory. In this paper an approach to biased item identification using ICC theory is described and applied. The ICC theory approach is attractive in that…
Unitals and ovals of symmetric block designs in LDPC and space-time coding
NASA Astrophysics Data System (ADS)
Andriamanalimanana, Bruno R.
2004-08-01
An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.
Khrennikov, Andrei
2011-09-01
We propose a model of quantum-like (QL) processing of mental information. This model is based on quantum information theory. However, in contrast to models of the "quantum physical brain" reducing mental activity (at least at the highest level) to quantum physical phenomena in the brain, our model matches well with the basic neuronal paradigm of cognitive science. QL information processing is based (surprisingly) on classical electromagnetic signals induced by the joint activity of neurons. This novel approach to quantum information rests on a representation of quantum mechanics as a version of classical signal theory, recently elaborated by the author. The brain uses the QL representation (QLR) for working with abstract concepts; concrete images are described by classical information theory. The two processes, classical and QL, are performed in parallel. Moreover, information is actively transmitted from one representation to the other. A QL concept, given in our model by a density operator, can generate a variety of concrete images given by temporal realizations of the corresponding (Gaussian) random signal. This signal has a covariance operator coinciding with the density operator encoding the abstract concept under consideration. The presence of various temporal scales in the brain plays a crucial role in the creation of the QLR in the brain. Moreover, in our model electromagnetic noise produced by neurons is a source of superstrong QL correlations between processes in different spatial domains of the brain; the binding problem is solved on the QL level, but with the aid of the classical background fluctuations. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
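The key formal ingredient of this model, a density operator doubling as the covariance of a classical Gaussian signal, can be sketched numerically. This is an illustrative toy (a real 2x2 density matrix and a hypothetical sample count), not the paper's construction:

```python
import numpy as np

# A density matrix is Hermitian, positive semidefinite, and trace one --
# exactly the properties of a normalized covariance matrix. We sample a
# classical Gaussian signal whose covariance equals a chosen density
# matrix and recover that matrix from the realizations.
rng = np.random.default_rng(0)

rho = np.array([[0.7, 0.3],
                [0.3, 0.3]])          # a valid 2x2 density matrix (real case)
assert np.isclose(np.trace(rho), 1.0)

# Temporal realizations of the corresponding Gaussian random signal
samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=rho, size=200_000)

rho_est = samples.T @ samples / len(samples)   # empirical covariance
print(np.round(rho_est, 2))                    # close to rho
```

Because a density matrix is a legitimate covariance matrix, the empirical covariance of the simulated signal converges to it, which is the sense in which the abstract concept "generates" the ensemble of concrete images.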
Charging and shielding of a non-spherical dust grain in a plasma
NASA Astrophysics Data System (ADS)
Zhao, L.; Delzanno, G.
2013-12-01
The interaction of objects with a plasma is a classic problem of plasma physics. Originally, it was investigated in the framework of probe theory, but more recently its interest has grown in connection with space and complex or dusty plasmas. It is customary to assume that the dust grains are spherical, and theories such as the Orbital Motion Limited (OML) theory are usually applied to calculate the dust charge. However, in nature dust grains have a variety of sizes and shapes. It is therefore natural to ask about the influence of the dust shape on the charging and shielding process. In order to answer this question, we study the charging and shielding of a non-spherical dust grain immersed in a Maxwellian plasma at rest. We consider prolate ellipsoids, varying parametrically the aspect ratio while keeping the surface area constant. The study is conducted with CPIC [1], a newly developed Particle-In-Cell code in curvilinear geometry that conforms to objects of arbitrary shape. For a plasma with temperature ratio equal to unity and for a dust grain with characteristic size of the order of the Debye length, it is shown that the floating potential has a very weak dependence on the geometry, while the charge on the grain increases by a factor of three when the aspect ratio changes from one (a sphere) to one hundred (a needle-like ellipsoid). These results are consistent with the higher capacitance of ellipsoidal dust grains, but also indicate that the screening length depends on the geometry. Scaling studies of the dependence of the charging time and screening length on the aspect ratio and plasma conditions are presented, including theoretical considerations to support the numerical results. [1] G.L. Delzanno et al., "CPIC: a curvilinear Particle-In-Cell code for plasma-material interaction studies," under review.
Research on compressive sensing reconstruction algorithm based on total variation model
NASA Astrophysics Data System (ADS)
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing, by breaking through the Nyquist sampling theorem, provides a strong theoretical foundation for carrying out compression and sampling of image signals simultaneously. Compared with traditional imaging procedures, using compressed sensing theory not only reduces the storage space but also greatly reduces the demand for detector resolution. By exploiting the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and largely determines the accuracy of the reconstructed image. Reconstruction algorithms based on the total variation (TV) model are well suited to the compressive reconstruction of two-dimensional images and recover edge information well. To verify the performance and stability of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes, and typical reconstruction algorithms are compared under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages and can quickly and accurately recover the target image even at low measurement rates.
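A minimal numerical sketch of TV-regularized compressive-sensing reconstruction follows. It uses plain gradient descent on a smoothed TV penalty rather than the augmented-Lagrangian/alternating-direction scheme described above, and all sizes and parameters are hypothetical:

```python
import numpy as np

# Recover a piecewise-constant signal from m < n random measurements by
# minimizing ||Ax - y||^2 + lam * TV(x), with TV smoothed for
# differentiability. Illustrative only; parameters are hypothetical.
rng = np.random.default_rng(1)
n, m = 100, 40
x_true = np.zeros(n)
x_true[20:60] = 1.0                       # piecewise-constant, TV-sparse

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix
y = A @ x_true                            # compressed measurements

lam, eps, step = 0.01, 1e-4, 0.1
x = np.zeros(n)
for _ in range(20000):
    d = np.diff(x)
    g = d / np.sqrt(d**2 + eps)           # derivative of smoothed |d|
    tv_grad = np.zeros(n)
    tv_grad[:-1] -= g
    tv_grad[1:] += g
    x -= step * (2 * A.T @ (A @ x - y) + lam * tv_grad)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

The signal is recovered from fewer measurements than samples because its gradient is sparse, which is exactly the property the TV model exploits.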
Decoding communities in networks
NASA Astrophysics Data System (ADS)
Radicchi, Filippo
2018-02-01
According to a recent information-theoretical proposal, the problem of defining and identifying communities in networks can be interpreted as a classical communication task over a noisy channel: memberships of nodes are information bits erased by the channel, edges and nonedges in the network are parity bits introduced by the encoder but degraded through the channel, and a community identification algorithm is a decoder. The interpretation is perfectly equivalent to the one at the basis of well-known statistical inference algorithms for community detection. The only difference in the interpretation is that a noisy channel replaces a stochastic network model. However, the different perspective gives the opportunity to take advantage of the rich set of tools of coding theory to generate novel insights on the problem of community detection. In this paper, we illustrate two main applications of standard coding-theoretical methods to community detection. First, we leverage a state-of-the-art decoding technique to generate a family of quasioptimal community detection algorithms. Second and more important, we show that the Shannon's noisy-channel coding theorem can be invoked to establish a lower bound, here named as decodability bound, for the maximum amount of noise tolerable by an ideal decoder to achieve perfect detection of communities. When computed for well-established synthetic benchmarks, the decodability bound explains accurately the performance achieved by the best community detection algorithms existing on the market, telling us that only little room for their improvement is still potentially left.
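The decodability-bound idea rests on Shannon's noisy-channel coding theorem. As a hedged illustration with the simplest channel (a binary symmetric channel, not the paper's community-membership channel), reliable decoding at rate R is possible only while R < C(p):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with flip probability p."""
    return 1.0 - h2(p)

# Shannon's theorem: at code rate R, an ideal decoder tolerates noise
# only up to the p* solving C(p*) = R -- the analogue of the
# decodability bound for this toy channel.
print(bsc_capacity(0.11))  # ~0.5: rate-1/2 codes tolerate ~11% flips
```

Solving C(p*) = R for p* gives the largest tolerable noise level, mirroring how the decodability bound caps the noise an ideal community-detection decoder can survive.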
Android application for handwriting segmentation using PerTOHS theory
NASA Astrophysics Data System (ADS)
Akouaydi, Hanen; Njah, Sourour; Alimi, Adel M.
2017-03-01
The paper handles the problem of segmentation of handwriting on mobile devices. Many applications have been developed to facilitate the recognition of handwriting, to overcome the limited number of keys on keyboards, and to introduce a drawing space for writing instead of using keyboards. Here we present a mobile application for handwriting segmentation that uses PerTOHS theory, the Perceptual Theory of On-line Handwriting Segmentation, in which handwriting is defined as a sequence of elementary and perceptual codes. The theory analyzes the written script and tries to learn the visual code features of the handwriting in order to generate new ones via the generated perceptual sequences. To obtain this classification we apply the Beta-elliptic model, a fuzzy detector, and genetic algorithms in order to extract the EPCs (Elementary Perceptual Codes) and GPCs (Global Perceptual Codes) that compose the script. We then present our Android application M-PerTOHS for the segmentation of handwriting.
Transfer function modeling of damping mechanisms in viscoelastic plates
NASA Technical Reports Server (NTRS)
Slater, J. C.; Inman, D. J.
1991-01-01
This work formulates a method for the modeling of material damping characteristics in plates. The Sophie Germain equation of classical plate theory is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes (1985). However, this procedure is not limited to this representation. The governing characteristic equation is decoupled through separation of variables, yielding a solution similar to that of undamped classical plate theory, allowing solution of the steady-state as well as the transient response problem.
Information Theory, Inference and Learning Algorithms
NASA Astrophysics Data System (ADS)
Mackay, David J. C.
2003-10-01
Information theory and inference, often taught separately, are here united in one entertaining textbook. These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, is developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes -- the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.
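As a concrete taste of the error-correction material the book covers, here is the classic (7,4) Hamming code, which corrects any single bit flip. This is a standard textbook construction, not code from the book itself:

```python
import numpy as np

# Generator and parity-check matrices of the systematic (7,4) Hamming
# code (a standard construction; conventions vary across textbooks).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(bits):
    return (bits @ G) % 2

def decode(word):
    """Correct any single bit flip via the syndrome."""
    syndrome = (H @ word) % 2
    if syndrome.any():
        # The syndrome equals the column of H at the error position.
        err = int(np.where((H.T == syndrome).all(axis=1))[0][0])
        word = word.copy()
        word[err] ^= 1
    return word[:4]   # systematic code: first 4 bits are the message

msg = np.array([1, 0, 1, 1])
cw = encode(msg)
cw_noisy = cw.copy()
cw_noisy[5] ^= 1                # flip one bit in transit
print(decode(cw_noisy))         # recovers the message [1 0 1 1]
```

Every nonzero syndrome matches exactly one column of H, so the decoder pinpoints and flips the corrupted bit, whichever of the seven positions it hit.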
Insights into HLA-G Genetics Provided by Worldwide Haplotype Diversity
Castelli, Erick C.; Ramalho, Jaqueline; Porto, Iane O. P.; Lima, Thálitta H. A.; Felício, Leandro P.; Sabbagh, Audrey; Donadi, Eduardo A.; Mendes-Junior, Celso T.
2014-01-01
Human leukocyte antigen G (HLA-G) belongs to the family of non-classical HLA class I genes, located within the major histocompatibility complex (MHC). HLA-G has been the target of most recent research regarding the function of class I non-classical genes. The main features that distinguish HLA-G from classical class I genes are (a) limited protein variability, (b) alternative splicing generating several membrane bound and soluble isoforms, (c) short cytoplasmic tail, (d) modulation of immune response (immune tolerance), and (e) restricted expression to certain tissues. In the present work, we describe the HLA-G gene structure and address the HLA-G variability and haplotype diversity among several populations around the world, considering each of its major segments [promoter, coding, and 3′ untranslated region (UTR)]. For this purpose, we developed a pipeline to reevaluate the 1000Genomes data and recover miscalled or missing genotypes and haplotypes. It became clear that the overall structure of the HLA-G molecule has been maintained during the evolutionary process and that most of the variation sites found in the HLA-G coding region are either coding synonymous or intronic mutations. In addition, only a few frequent and divergent extended haplotypes are found when the promoter, coding, and 3′UTRs are evaluated together. The divergence is particularly evident for the regulatory regions. The population comparisons confirmed that most of the HLA-G variability has originated before human dispersion from Africa and that the allele and haplotype frequencies have probably been shaped by strong selective pressures. PMID:25339953
Qualitative Data Analysis for Health Services Research: Developing Taxonomy, Themes, and Theory
Bradley, Elizabeth H; Curry, Leslie A; Devers, Kelly J
2007-01-01
Objective To provide practical strategies for conducting and evaluating analyses of qualitative data applicable for health services researchers. Data Sources and Design We draw on extant qualitative methodological literature to describe practical approaches to qualitative data analysis. Approaches to data analysis vary by discipline and analytic tradition; however, we focus on qualitative data analysis that has as a goal the generation of taxonomy, themes, and theory germane to health services research. Principle Findings We describe an approach to qualitative data analysis that applies the principles of inductive reasoning while also employing predetermined code types to guide data analysis and interpretation. These code types (conceptual, relationship, perspective, participant characteristics, and setting codes) define a structure that is appropriate for generation of taxonomy, themes, and theory. Conceptual codes and subcodes facilitate the development of taxonomies. Relationship and perspective codes facilitate the development of themes and theory. Intersectional analyses with data coded for participant characteristics and setting codes can facilitate comparative analyses. Conclusions Qualitative inquiry can improve the description and explanation of complex, real-world phenomena pertinent to health services research. Greater understanding of the processes of qualitative data analysis can be helpful for health services researchers as they use these methods themselves or collaborate with qualitative researchers from a wide range of disciplines. PMID:17286625
NASA Astrophysics Data System (ADS)
Wang, Hai; Kumar, Asutosh; Cho, Minhyung; Wu, Junde
2018-04-01
Physical quantities are assumed to take real values, which stems from the fact that an ordinary measuring instrument that measures a physical observable always yields a real number. Here we consider the question of what would happen if physical observables were allowed to assume complex values. In this paper, we show that by allowing observables in the Bell inequality to take complex values, a classical physical theory can actually attain the same upper bound of the Bell expression as quantum theory. Also, by extending the real field to the quaternionic field, we can resolve the GHZ problem using a local hidden-variable model. Furthermore, we try to build a new type of hidden-variable theory of a single qubit based on this result.
From integrability to conformal symmetry: Bosonic superconformal Toda theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bo-Yu Hou; Liu Chao
In this paper the authors study the conformal integrable models obtained from conformal reductions of WZNW theory associated with second order constraints. These models are called bosonic superconformal Toda models due to their conformal spectra and their resemblance to the usual Toda theories. From the reduction procedure they get the equations of motion and the linearized Lax equations in a generic Z gradation of the underlying Lie algebra. Then, in the special case of principal gradation, they derive the classical r matrix, fundamental Poisson relation, exchange algebra of chiral operators and find out the classical vertex operators. The result shows that their model is very similar to the ordinary Toda theories in that one can obtain various conformal properties of the model from its integrability.
Sound and Vision: Using Progressive Rock To Teach Social Theory.
ERIC Educational Resources Information Center
Ahlkvist, Jarl A.
2001-01-01
Describes a teaching technique that utilizes progressive rock music to educate students about sociological theories in introductory sociology courses. Discusses the use of music when teaching about classical social theory and offers an evaluation of this teaching strategy. Includes references. (CMK)
Paivio, Allan; Sadoski, Mark
2011-01-01
Elman (2009) proposed that the traditional role of the mental lexicon in language processing can largely be replaced by a theoretical model of schematic event knowledge founded on dynamic context-dependent variables. We evaluate Elman's approach and propose an alternative view, based on dual coding theory and evidence that modality-specific cognitive representations contribute strongly to word meaning and language performance across diverse contexts which also have effects predictable from dual coding theory. Copyright © 2010 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Camilleri, Kristian; Schlosshauer, Maximilian
2015-02-01
Niels Bohr's doctrine of the primacy of "classical concepts" is arguably his most criticized and misunderstood view. We present a new, careful historical analysis that makes clear that Bohr's doctrine was primarily an epistemological thesis, derived from his understanding of the functional role of experiment. A hitherto largely overlooked disagreement between Bohr and Heisenberg about the movability of the "cut" between measuring apparatus and observed quantum system supports the view that, for Bohr, such a cut did not originate in dynamical (ontological) considerations, but rather in functional (epistemological) considerations. As such, both the motivation and the target of Bohr's doctrine of classical concepts are of a fundamentally different nature than what is understood as the dynamical problem of the quantum-to-classical transition. Our analysis suggests that, contrary to claims often found in the literature, Bohr's doctrine is not, and cannot be, at odds with proposed solutions to the dynamical problem of the quantum-classical transition that were pursued by several of Bohr's followers and culminated in the development of decoherence theory.
Force-field functor theory: classical force-fields which reproduce equilibrium quantum distributions
Babbush, Ryan; Parkhill, John; Aspuru-Guzik, Alán
2013-01-01
Feynman and Hibbs were the first to variationally determine an effective potential whose associated classical canonical ensemble approximates the exact quantum partition function. We examine the existence of a map between the local potential and an effective classical potential which matches the exact quantum equilibrium density and partition function. The usefulness of such a mapping rests in its ability to readily improve Born-Oppenheimer potentials for use with classical sampling. We show that such a map is unique and must exist. To explore the feasibility of using this result to improve classical molecular mechanics, we numerically produce a map from a library of randomly generated one-dimensional potential/effective potential pairs then evaluate its performance on independent test problems. We also apply the map to simulate liquid para-hydrogen, finding that the resulting radial pair distribution functions agree well with path integral Monte Carlo simulations. The surprising accessibility and transferability of the technique suggest a quantitative route to adapting Born-Oppenheimer potentials, with a motivation similar in spirit to the powerful ideas and approximations of density functional theory. PMID:24790954
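The flavor of the effective-potential idea can be seen already for the harmonic oscillator, where the lowest-order Feynman-Hibbs correction to the classical partition function is known in closed form. A sketch in reduced units (hbar = omega = k_B = 1), for illustration only; it is not the paper's learned map:

```python
import math

# Exact quantum partition function of the harmonic oscillator vs. the
# bare classical one, and the classical one computed with the
# lowest-order Feynman-Hibbs effective potential, which multiplies
# Z_classical by exp(-(beta*hbar*omega)^2 / 24).
def z_quantum(beta):
    return 1.0 / (2.0 * math.sinh(beta / 2.0))

def z_classical(beta):
    return 1.0 / beta

def z_feynman_hibbs(beta):
    return (1.0 / beta) * math.exp(-beta**2 / 24.0)

beta = 1.0
for name, z in [("classical", z_classical),
                ("Feynman-Hibbs", z_feynman_hibbs),
                ("exact quantum", z_quantum)]:
    print(f"{name:14s} Z = {z(beta):.4f}")
```

At beta = 1 the corrected classical value lands far closer to the exact quantum result than the bare classical one, which is the kind of improvement the effective-potential map aims to deliver for general potentials.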
Quantum theory for 1D X-ray free electron laser
Anisimov, Petr Mikhaylovich
2017-09-19
Classical 1D X-ray Free Electron Laser (X-ray FEL) theory has stood the test of time by guiding FEL design and development prior to any full-scale analysis. Future X-ray FELs and inverse-Compton sources, where photon recoil approaches an electron energy spread value, push the classical theory to its limits of applicability. After substantial efforts by the community to find what those limits are, there is no universally agreed upon quantum approach to design and development of future X-ray sources. We offer a new approach to formulate the quantum theory for 1D X-ray FELs that has an obvious connection to the classicalmore » theory, which allows for immediate transfer of knowledge between the two regimes. In conclusion, we exploit this connection in order to draw quantum mechanical conclusions about the quantum nature of electrons and generated radiation in terms of FEL variables.« less
NASA Astrophysics Data System (ADS)
Rezaei Kivi, Araz; Azizi, Saber; Norouzi, Peyman
2017-12-01
In this paper, the nonlinear size-dependent static and dynamic behavior of an electrostatically actuated nano-beam is investigated. A fully clamped nano-beam is considered in modeling the deformable electrode of the NEMS. The governing differential equation of motion is derived using Hamilton's principle based on the modified couple stress theory (MCST), a non-classical theory that accounts for length-scale effects. The nonlinear partial differential equation of motion is discretized into nonlinear Duffing-type ODEs using the Galerkin method. Static and dynamic pull-in instabilities obtained by the classical theory and by the MCST are compared. At the second stage of the analysis, a shooting technique is utilized to obtain the frequency-response curve and to capture the periodic solutions of the motion; the stability of the periodic solutions is determined by Floquet theory. The nonlinear dynamic behavior of the deformable electrode under harmonic AC actuation, together with the size dependency, is investigated.
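For orientation, static pull-in can be sketched with the textbook lumped parallel-plate model (spring constant k, gap g, electrode area A). This is a classical, size-independent estimate, not the distributed MCST beam model of the paper, and the numbers are hypothetical:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """Static pull-in voltage of a lumped parallel-plate actuator.

    Balancing the spring force k*x against the electrostatic force
    eps0*A*V^2 / (2*(g - x)^2) loses stability at x = g/3, giving
    V_PI = sqrt(8*k*g^3 / (27*eps0*A)).
    """
    return math.sqrt(8.0 * k * gap**3 / (27.0 * EPS0 * area))

# Hypothetical numbers, for illustration only (not the paper's device):
k, gap, area = 0.5, 1e-6, 100e-6 * 10e-6   # N/m, m, m^2
print(f"pull-in voltage: {pull_in_voltage(k, gap, area):.2f} V")
```

Beyond this voltage no stable static equilibrium exists and the electrode snaps to the fixed plate; the paper's point is that MCST shifts this threshold relative to the classical prediction at small scales.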
Finite conformal quantum gravity and spacetime singularities
NASA Astrophysics Data System (ADS)
Modesto, Leonardo; Rachwał, Lesław
2017-12-01
We show that a class of finite quantum non-local gravitational theories is conformally invariant at classical as well as at quantum level. This is actually a range of conformal anomaly-free theories in the spontaneously broken phase of the Weyl symmetry. At classical level we show how the Weyl conformal invariance is able to tame all the spacetime singularities that plague not only Einstein gravity, but also local and weakly non-local higher derivative theories. The latter statement is proved by a singularity theorem that applies to a large class of weakly non-local theories. Therefore, we are entitled to look for a solution of the spacetime singularity puzzle in a missed symmetry of nature, namely the Weyl conformal symmetry. Following the seminal paper by Narlikar and Kembhavi, we provide an explicit construction of singularity-free black hole exact solutions in a class of conformally invariant theories.
NASA Astrophysics Data System (ADS)
Argurio, Riccardo
1998-07-01
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes, and how the four-dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th/9701042, hep-th/9704190, hep-th/9710027 and hep-th/9801053.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, R. G., E-mail: rgh@doe.carleton.ca
2014-01-21
A positive-feedback mean-field modification of the classical Brillouin magnetization theory provides an explanation of the apparent persistence of the spontaneous magnetization beyond the conventional Curie temperature, the little-understood "tail" phenomenon that occurs in many ferromagnetic materials. The classical theory is unable to resolve this apparent anomaly. The modified theory incorporates the temperature-dependent quantum-scale hysteretic and mesoscopic domain-scale anhysteretic magnetization processes and includes the effects of demagnetizing and exchange fields. It is found that the thermal behavior of the reversible and irreversible segments of the hysteresis loops, as predicted by the theory, is a key to the presence or absence of the "tails." The theory, which permits arbitrary values of the quantum spin number J, generally provides quantitative agreement with the thermal variations of both the spontaneous magnetization and the shape of the hysteresis loop.
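For contrast, the unmodified classical mean-field Brillouin theory that the record above modifies can be sketched in a few lines; the reduced-temperature parametrization and the fixed-point solver below are illustrative choices, not taken from the paper:

```python
import numpy as np

def brillouin(J, x):
    """Classical Brillouin function B_J(x)."""
    a = (2 * J + 1) / (2 * J)
    b = 1.0 / (2 * J)
    return a / np.tanh(a * x) - b / np.tanh(b * x)

def spontaneous_magnetization(J, t, m0=0.5, iters=500):
    """Reduced spontaneous magnetization m(t) from the mean-field
    self-consistency m = B_J((3J/(J+1)) m / t), with t = T/Tc.
    Solved by plain fixed-point iteration (an illustrative choice)."""
    m = m0
    for _ in range(iters):
        x = (3.0 * J / (J + 1.0)) * m / t
        if abs(x) < 1e-8:   # only the trivial solution m = 0 remains
            return 0.0
        m = brillouin(J, x)
    return m
```

Below the Curie point (t < 1) the iteration settles on a finite spontaneous magnetization; for t ≥ 1 only m = 0 survives, which is exactly the classical behavior that cannot reproduce the observed "tails."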
Item Response Modeling with Sum Scores
ERIC Educational Resources Information Center
Johnson, Timothy R.
2013-01-01
One of the distinctions between classical test theory and item response theory is that the former focuses on sum scores and their relationship to true scores, whereas the latter concerns item responses and their relationship to latent scores. Although item response theory is often viewed as the richer of the two theories, sum scores are still…
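The distinction drawn in this record can be made concrete with a toy example; the response pattern, item difficulties, and the choice of the one-parameter (Rasch) model below are illustrative assumptions, not taken from the article:

```python
import math

def sum_score(responses):
    """Classical-test-theory observed score: the number of correct items."""
    return sum(responses)

def rasch_prob(theta, b):
    """Rasch (one-parameter IRT) probability of a correct response
    for latent ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical 5-item binary response pattern and item difficulties.
responses = [1, 1, 0, 1, 0]
difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]

print(sum_score(responses))  # 3
print([round(rasch_prob(0.0, b), 3) for b in difficulties])
```

The sum score collapses the pattern to a single count, while the item-response side models each item separately as a function of a latent score.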
Test Theories, Educational Priorities and Reliability of Public Examinations in England
ERIC Educational Resources Information Center
Baird, Jo-Anne; Black, Paul
2013-01-01
Much has already been written on the controversies surrounding the use of different test theories in educational assessment. Other authors have noted the prevalence of classical test theory over item response theory in practice. This Special Issue draws together articles based upon work conducted on the Reliability Programme for England's…
Recent developments in bimetric theory
NASA Astrophysics Data System (ADS)
Schmidt-May, Angnis; von Strauss, Mikael
2016-05-01
This review is dedicated to recent progress in the field of classical, interacting, massive spin-2 theories, with a focus on ghost-free bimetric theory. We will outline its history and its development as a nontrivial extension and generalisation of nonlinear massive gravity. We present a detailed discussion of the consistency proofs of both theories, before we review Einstein solutions to the bimetric equations of motion in vacuum as well as the resulting mass spectrum. We introduce couplings to matter and then discuss the general relativity and massive gravity limits of bimetric theory, which correspond to decoupling the massive or the massless spin-2 field from the matter sector, respectively. More general classical solutions are reviewed and the present status of bimetric cosmology is summarised. An interesting corner in the bimetric parameter space which could potentially give rise to a nonlinear theory for partially massless spin-2 fields is also discussed. Relations to higher-curvature theories of gravity are explained and finally we give an overview of possible extensions of the theory and review its formulation in terms of vielbeins.
A Very Fast and Angular Momentum Conserving Tree Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcello, Dominic C., E-mail: dmarce504@gmail.com
There are many methods used to compute the classical gravitational field in astrophysical simulation codes. With the exception of the typically impractical method of direct computation, none ensure conservation of angular momentum to machine precision. Under uniform time-stepping, the Cartesian fast multipole method of Dehnen (also known as the very fast tree code) conserves linear momentum to machine precision. We show that it is possible to modify this method in a way that conserves both angular and linear momenta.
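The modified fast multipole method itself is beyond a short example, but the conservation principle at stake is easy to demonstrate with the "typically impractical" direct method the record mentions: antisymmetric central pairwise forces make the total force and total torque cancel to near machine precision. A minimal sketch (the Plummer-style softening length is an illustrative assumption):

```python
import numpy as np

def pairwise_accelerations(pos, mass, G=1.0, eps=0.1):
    """Direct O(N^2) gravitational accelerations with antisymmetric,
    central pairwise forces (softening eps is an illustrative choice)."""
    acc = np.zeros_like(pos)
    n = len(mass)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            f = G * d / (d @ d + eps**2) ** 1.5  # central: parallel to d
            acc[i] += mass[j] * f                # action ...
            acc[j] -= mass[i] * f                # ... equals reaction
    return acc

rng = np.random.default_rng(0)
pos = rng.standard_normal((16, 3))
mass = rng.random(16) + 0.1
acc = pairwise_accelerations(pos, mass)

total_force = (mass[:, None] * acc).sum(axis=0)               # ~ 0
total_torque = np.cross(pos, mass[:, None] * acc).sum(axis=0)  # ~ 0
```

Both sums vanish up to floating-point rounding; an approximate far-field expansion, as in a tree code, generally loses this exactness unless, as in the record above, it is deliberately built in.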
NASA Technical Reports Server (NTRS)
Wigton, Larry
1996-01-01
Improvements to the numerical linear algebra routines used in new Navier-Stokes codes, specifically Tim Barth's unstructured grid code, with spin-offs to TRANAIR, are reported. A fast distance calculation routine for Navier-Stokes codes using the new one-equation turbulence models was written. The primary focus of this work was on improving matrix-iterative methods. New algorithms have been developed which activate the full potential of classical Cray-class computers as well as distributed-memory parallel computers.
NASA Astrophysics Data System (ADS)
Honnell, Kevin; Burnett, Sarah; Yorke, Chloe'; Howard, April; Ramsey, Scott
2017-06-01
The Noh problem is a classic verification problem in the field of compressible flows. Simple to conceptualize, it is nonetheless difficult for numerical codes to predict correctly, making it an ideal code-verification test bed. In its original incarnation, the fluid is a simple ideal gas; once validated, however, these codes are often used to study highly non-ideal fluids and solids. In this work the classic Noh problem is extended beyond the commonly studied polytropic ideal gas to more realistic equations of state (EOS), including the stiff gas, the Noble-Abel gas, and the Carnahan-Starling hard-sphere fluid, thus enabling verification studies to be performed on more physically realistic fluids. Exact solutions are compared with numerical results obtained from the Lagrangian hydrocode FLAG, developed at Los Alamos. For these more realistic EOSs, the simulation errors decreased in magnitude both at the origin and at the shock, but also spread more broadly about these points compared to the ideal EOS. The overall spatial convergence rate remained first order.
A Theoretical Analysis of Learning with Graphics--Implications for Computer Graphics Design.
ERIC Educational Resources Information Center
ChanLin, Lih-Juan
This paper reviews the literature pertinent to learning with graphics. Dual coding theory provides an explanation of how graphics are stored and processed in semantic memory. The level of processing theory suggests how graphics can be employed in learning to encourage deeper processing. In addition to dual coding theory and level of processing…
ERIC Educational Resources Information Center
Bowers, Jeffrey S.
2009-01-01
A fundamental claim associated with parallel distributed processing (PDP) theories of cognition is that knowledge is coded in a distributed manner in mind and brain. This approach rejects the claim that knowledge is coded in a localist fashion, with words, objects, and simple concepts (e.g. "dog"), that is, coded with their own dedicated…
A new theory of development: the generation of complexity in ontogenesis.
Barbieri, Marcello
2016-03-13
Today there is a very wide consensus on the idea that embryonic development is the result of a genetic programme and of epigenetic processes. Many models have been proposed in this theoretical framework to account for the various aspects of development, and virtually all of them have one thing in common: they do not acknowledge the presence of organic codes (codes between organic molecules) in ontogenesis. Here it is argued instead that embryonic development is a convergent increase in complexity that necessarily requires organic codes and organic memories, and a few examples of such codes are described. This is the code theory of development, a theory originally inspired by an algorithm that is capable of reconstructing structures from incomplete information; the algorithm is briefly summarized here because it makes intuitively clear how a convergent increase in complexity can be achieved. The main thesis of the new theory is that the presence of organic codes in ontogenesis is not only a theoretical necessity but, first and foremost, an idea that can be tested and that has already been found to be in agreement with the evidence. © 2016 The Author(s).
NASA Technical Reports Server (NTRS)
Jones, R. T. (Compiler)
1979-01-01
A collection of papers on modern theoretical aerodynamics is presented. Included are theories of incompressible potential flow and research on the aerodynamic forces on wing and wing sections of aircraft and on airship hulls.
Contextual Advantage for State Discrimination
NASA Astrophysics Data System (ADS)
Schmid, David; Spekkens, Robert W.
2018-02-01
Finding quantitative aspects of quantum phenomena which cannot be explained by any classical model has foundational importance for understanding the boundary between classical and quantum theory. It also has practical significance for identifying information processing tasks for which those phenomena provide a quantum advantage. Using the framework of generalized noncontextuality as our notion of classicality, we find one such nonclassical feature within the phenomenology of quantum minimum-error state discrimination. Namely, we identify quantitative limits on the success probability for minimum-error state discrimination in any experiment described by a noncontextual ontological model. These constraints constitute noncontextuality inequalities that are violated by quantum theory, and this violation implies a quantum advantage for state discrimination relative to noncontextual models. Furthermore, our noncontextuality inequalities are robust to noise and are operationally formulated, so that any experimental violation of the inequalities is a witness of contextuality, independently of the validity of quantum theory. Along the way, we introduce new methods for analyzing noncontextuality scenarios and demonstrate a tight connection between our minimum-error state discrimination scenario and a Bell scenario.
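The quantum success probability that such noncontextuality inequalities bound is given by the standard Helstrom formula for minimum-error discrimination of two states, p_succ = (1 + ||p1·rho1 - p2·rho2||_1)/2. A minimal sketch for two equiprobable qubit states (the example states |0> and |+> are an illustrative choice):

```python
import numpy as np

def helstrom_success(rho1, rho2, p1=0.5):
    """Optimal (Helstrom) success probability for minimum-error
    discrimination of two density matrices with prior p1."""
    p2 = 1.0 - p1
    gamma = p1 * rho1 - p2 * rho2
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
    return 0.5 * (1.0 + trace_norm)

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(ket0, ket0)
rhop = np.outer(ketp, ketp)
print(helstrom_success(rho0, rhop))  # ≈ cos^2(pi/8) ≈ 0.8536
```

Noncontextual models bound this success probability more tightly; the gap between the noncontextual bound and the Helstrom value is the quantum advantage the record identifies.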
Scalar gravitational waves in the effective theory of gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mottola, Emil
As a low energy effective field theory, classical General Relativity receives an infrared relevant modification from the conformal trace anomaly of the energy-momentum tensor of massless, or nearly massless, quantum fields. The local form of the effective action associated with the trace anomaly is expressed in terms of a dynamical scalar field that couples to the conformal factor of the spacetime metric, allowing it to propagate over macroscopic distances. Linearized around flat spacetime, this semi-classical EFT admits scalar gravitational wave solutions in addition to the transversely polarized tensor waves of the classical Einstein theory. The amplitude of the scalar wave modes, as well as their energy and energy flux, which are positive and contain a monopole moment, are computed. As a result, astrophysical sources for scalar gravitational waves are considered, with the excited gluonic condensates in the interiors of neutron stars in merger events with other compact objects likely to provide the strongest burst signals.
NASA Astrophysics Data System (ADS)
Zamani Kouhpanji, Mohammad Reza; Behzadirad, Mahmoud; Busani, Tito
2017-12-01
We used the stable strain gradient theory, including acceleration gradients, to investigate the classical and nonclassical mechanical properties of gallium nitride (GaN) nanowires (NWs). We predicted the static length scales, Young's modulus, and shear modulus of the GaN NWs from the experimental data. Combining these results with atomic simulations, we also found the dynamic length scale of the GaN NWs. The Young's modulus, shear modulus, static length scale, and dynamic length scale were found to be 318 GPa, 131 GPa, 8 nm, and 8.9 nm, respectively, usable for describing the static and dynamic behaviors of GaN NWs with diameters from a few nm to bulk dimensions. Furthermore, the experimental data were analyzed with classical continuum theory (CCT) and compared with the available literature to illustrate the size-dependency of the mechanical properties of GaN NWs. This resolves previously published discrepancies that arose from the limitations of CCT in determining the mechanical properties of GaN NWs and their size-dependency.
Banik, Suman Kumar; Bag, Bidhan Chandra; Ray, Deb Shankar
2002-05-01
Traditionally, quantum Brownian motion is described by Fokker-Planck or diffusion equations in terms of quasiprobability distribution functions, e.g., Wigner functions. These often become singular or negative in the full quantum regime. In this paper a simple approach to non-Markovian theory of quantum Brownian motion using true probability distribution functions is presented. Based on an initial coherent state representation of the bath oscillators and an equilibrium canonical distribution of the quantum mechanical mean values of their coordinates and momenta, we derive a generalized quantum Langevin equation in c-numbers and show that the latter is amenable to a theoretical analysis in terms of the classical theory of non-Markovian dynamics. The corresponding Fokker-Planck, diffusion, and Smoluchowski equations are the exact quantum analogs of their classical counterparts. The present work is independent of path integral techniques. The theory as developed here is a natural extension of its classical version and is valid for arbitrary temperature and friction (the Smoluchowski equation being considered in the overdamped limit).
Density-functional theory simulation of large quantum dots
NASA Astrophysics Data System (ADS)
Jiang, Hong; Baranger, Harold U.; Yang, Weitao
2003-10-01
Kohn-Sham spin-density functional theory provides an efficient and accurate model to study electron-electron interaction effects in quantum dots, but its application to large systems is a challenge. Here an efficient method for the simulation of quantum dots using density-functional theory is developed; it includes the particle-in-the-box representation of the Kohn-Sham orbitals, an efficient conjugate-gradient method to directly minimize the total energy, a Fourier convolution approach for the calculation of the Hartree potential, and a simplified multigrid technique to accelerate the convergence. We test the methodology in a two-dimensional model system and show that numerical studies of large quantum dots with several hundred electrons become computationally affordable. In the noninteracting limit, the classical dynamics of the system we study can be continuously varied from integrable to fully chaotic. The qualitative difference in the noninteracting classical dynamics has an effect on the quantum properties of the interacting system: integrable classical dynamics leads to higher-spin states and a broader distribution of spacing between Coulomb blockade peaks.
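The full Kohn-Sham machinery is beyond a snippet, but the particle-in-the-box representation the record builds on can be illustrated with a one-dimensional finite-difference Hamiltonian; the grid size and well width below are arbitrary illustrative choices (hbar = m = 1):

```python
import numpy as np

def box_eigenvalues(n_grid=400, L=1.0, k=3):
    """Lowest k eigenvalues of -(1/2) d^2/dx^2 in an infinite well of
    width L (hbar = m = 1), via a three-point finite-difference Laplacian
    with Dirichlet boundaries."""
    h = L / (n_grid + 1)
    main = np.full(n_grid, 1.0 / h**2)       # diagonal of -(1/2) Laplacian
    off = np.full(n_grid - 1, -0.5 / h**2)   # off-diagonals
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:k]         # eigvalsh sorts ascending

exact = [0.5 * (n * np.pi) ** 2 for n in (1, 2, 3)]  # n^2 pi^2 / 2
print(box_eigenvalues())  # close to [4.935, 19.74, 44.41]
```

The computed eigenvalues approach the analytic n²π²/2 values as the grid is refined, with second-order accuracy in the grid spacing.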
Scalar gravitational waves in the effective theory of gravity
Mottola, Emil
2017-07-10
As a low energy effective field theory, classical General Relativity receives an infrared relevant modification from the conformal trace anomaly of the energy-momentum tensor of massless, or nearly massless, quantum fields. The local form of the effective action associated with the trace anomaly is expressed in terms of a dynamical scalar field that couples to the conformal factor of the spacetime metric, allowing it to propagate over macroscopic distances. Linearized around flat spacetime, this semi-classical EFT admits scalar gravitational wave solutions in addition to the transversely polarized tensor waves of the classical Einstein theory. The amplitude of the scalar wave modes, as well as their energy and energy flux, which are positive and contain a monopole moment, are computed. As a result, astrophysical sources for scalar gravitational waves are considered, with the excited gluonic condensates in the interiors of neutron stars in merger events with other compact objects likely to provide the strongest burst signals.
An appraisal of the classic forest succession paradigm with the shade tolerance index
Jean Lienard; Ionut Florescu; Nikolay Strigul
2015-01-01
We revisit the classic theory of forest succession that relates shade tolerance and species replacement and assess its validity to understand patch-mosaic patterns of forested ecosystems of the USA. We introduce a macroscopic parameter called the "shade tolerance index" and compare it to the classic continuum index in southern Wisconsin forests. We exemplify shade...
Noncommutative gauge theory for Poisson manifolds
NASA Astrophysics Data System (ADS)
Jurčo, Branislav; Schupp, Peter; Wess, Julius
2000-09-01
A noncommutative gauge theory is associated to every Abelian gauge theory on a Poisson manifold. The semi-classical and full quantum version of the map from the ordinary gauge theory to the noncommutative gauge theory (Seiberg-Witten map) is given explicitly to all orders for any Poisson manifold in the Abelian case. In the quantum case the construction is based on Kontsevich's formality theorem.
The Foundations of Einstein's Theory of Gravitation
NASA Astrophysics Data System (ADS)
Freundlich, Erwin; Brose, Translated by Henry L.; Einstein, Preface by Albert; Turner, Introduction by H. H.
2011-06-01
Introduction; 1. The special theory of relativity as a stepping-stone to the general theory of relativity; 2. Two fundamental postulates in the mathematical formulation of physical laws; 3. Concerning the fulfilment of the two postulates; 4. The difficulties in the principles of classical mechanics; 5. Einstein's theory of gravitation; 6. The verification of the new theory by actual experience; Appendix; Index.
ERIC Educational Resources Information Center
Besson, Ugo
2013-01-01
This paper presents a history of research and theories on sliding friction between solids. This history is divided into four phases: from Leonardo da Vinci to Coulomb and the establishment of classical laws of friction; the theories of lubrication and Tomlinson's theory of friction (1850-1930); the theories of wear, the Bowden and Tabor's…
NASA Technical Reports Server (NTRS)
Stein, Manuel; Sydow, P. Daniel; Librescu, Liviu
1990-01-01
Buckling and postbuckling results are presented for compression-loaded simply supported aluminum plates and composite plates with a symmetric lay-up of thin + or - 45 deg plies composed of many layers. Buckling results for aluminum plates of finite length are given for various length-to-width ratios. Asymptotes to the curves based on buckling results give the critical load N(sub xcr) for plates of infinite length. Postbuckling results for plates with transverse shearing flexibility are compared to results from classical theory for various width-to-thickness ratios. Characteristic curves indicating the average longitudinal direct stress resultant as a function of the applied displacements are calculated based on four different theories: classical von Karman theory using the Kirchhoff assumptions, first-order shear deformation theory, higher-order shear deformation theory, and 3-D flexibility theory. Present results indicate that the 3-D flexibility theory gives the lowest buckling loads. The higher-order shear deformation theory has fewer unknowns than the 3-D flexibility theory but does not account for through-the-thickness effects. The figures presented show that small differences occur among the average longitudinal direct stress resultants from the four theories as functions of applied end-shortening displacement.
ERIC Educational Resources Information Center
Green, Crystal D.
2010-01-01
This action research study investigated the perceptions that student participants had on the development of a career exploration model and a career exploration project. The Holland code theory was the primary assessment used for this research study, in addition to the Multiple Intelligences theory and the identification of a role model for the…
Using grounded theory to create a substantive theory of promoting schoolchildren's mental health.
Puolakka, Kristiina; Haapasalo-Pesu, Kirsi-Maria; Kiikkala, Irma; Astedt-Kurki, Päivi; Paavilainen, Eija
2013-01-01
This article discusses the creation of a substantive theory using grounded theory, providing an example of generating theory from a study of mental health promotion at a high school in Finland. Grounded theory is a method for creating explanatory theory. It is a valuable tool for health professionals when studying phenomena that affect patients' health, offering a deeper understanding of nursing methods and knowledge. The data comprised interviews with school employees, students and parents; verbal responses to the 'school wellbeing profile' survey; and working group memos related to the development activities. Participating children were aged between 12 and 15. The analysis was conducted by applying the grounded theory method and involved open coding of the material, constant comparison, axial coding and selective coding after identifying the core category. The analysis produced concepts about mental health promotion in school and assumptions about relationships. Grounded theory proved to be an effective means of eliciting people's viewpoints on mental health promotion. The personal views of different parties make it easier to identify actions applicable to practice.
Cognitive-Behavioral Therapy. Second Edition. Theories of Psychotherapy Series
ERIC Educational Resources Information Center
Craske, Michelle G.
2017-01-01
In this revised edition of "Cognitive-Behavioral Therapy," Michelle G. Craske discusses the history, theory, and practice of this commonly practiced therapy. Cognitive-behavioral therapy (CBT) originated in the science and theory of classical and instrumental conditioning when cognitive principles were adopted following dissatisfaction…
A Transferrable Belief Model Representation for Physical Security of Nuclear Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Gerts
This work analyzed various probabilistic methods such as classical statistics, Bayesian inference, possibilistic theory, and the Dempster-Shafer theory of belief functions for the potential insight they offer into the physical security of nuclear materials, as well as their broader application to automated decision-making theory for nuclear non-proliferation. A review of the fundamental heuristics and basic limitations of each of these methods suggested that the Dempster-Shafer theory of belief functions may offer significant capability. Further examination of the various interpretations of Dempster-Shafer theory, such as random set, generalized Bayesian, and upper/lower probability, demonstrates some limitations. Compared to the other heuristics, the transferable belief model (TBM), one of the leading interpretations of Dempster-Shafer theory, can improve the automated detection of violations of physical security using sensors and human judgment. The improvement is shown to give a significant heuristic advantage over the other probabilistic options, demonstrated by significant successes in several classic gedanken experiments.
NASA Astrophysics Data System (ADS)
Frič, Roman; Papčo, Martin
2017-12-01
Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one (a quantum phenomenon) and, dually, an observable can map a crisp random event to a genuine fuzzy random event (a fuzzy phenomenon). The dual equivalence means that operational probability theory and fuzzy probability theory coincide, and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.
There is no fitness but fitness, and the lineage is its bearer
2016-01-01
Inclusive fitness has been the cornerstone of social evolution theory for more than a half-century and has matured as a mathematical theory in the past 20 years. Yet surprisingly for a theory so central to an entire field, some of its connections to evolutionary theory more broadly remain contentious or underappreciated. In this paper, we aim to emphasize the connection between inclusive fitness and modern evolutionary theory through the following fact: inclusive fitness is simply classical Darwinian fitness, averaged over social, environmental and demographic states that members of a gene lineage experience. Therefore, inclusive fitness is neither a generalization of classical fitness, nor does it belong exclusively to the individual. Rather, the lineage perspective emphasizes that evolutionary success is determined by the effect of selection on all biological and environmental contexts that a lineage may experience. We argue that this understanding of inclusive fitness based on gene lineages provides the most illuminating and accurate picture and avoids pitfalls in interpretation and empirical applications of inclusive fitness theory. PMID:26729925
First Test of Long-Range Collisional Drag via Plasma Wave Damping
NASA Astrophysics Data System (ADS)
Affolter, Matthew
2017-10-01
In magnetized plasmas, the rate of particle collisions is enhanced over classical predictions when the cyclotron radius rc is less than the Debye length λD. Classical theories describe local velocity scattering collisions with impact parameters ρ
The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason L. Wright; Milos Manic
2010-05-01
This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.
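Of the three techniques compared, principal component analysis is the most standard; a minimal covariance-eigendecomposition sketch (the synthetic data below stand in for the paper's object-code features and are purely illustrative):

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto their top-k principal components."""
    Xc = X - X.mean(axis=0)                  # center the features
    cov = np.cov(Xc, rowvar=False)           # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues ascending
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k directions
    return Xc @ top

# Synthetic data: variance concentrated along the first feature.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
X[:, 0] *= 10.0
Z = pca_reduce(X, 2)
```

The first retained component captures the dominant variance direction; the classifier then works in the reduced k-dimensional space.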
NASA Astrophysics Data System (ADS)
Surana, K. S.; Joy, A. D.; Reddy, J. N.
2017-03-01
This paper presents a non-classical continuum theory in Lagrangian description for solids in which the conservation and balance laws are derived by incorporating both the internal rotations arising from the Jacobian of deformation and the rotations of Cosserat theories at a material point. In particular, in this non-classical continuum theory we have (i) the usual displacements u; (ii) three internal rotations _iΘ about the axes of a triad whose axes are parallel to the x-frame, arising from the Jacobian of deformation (and completely defined by its skew-symmetric part); and (iii) three additional rotations _eΘ about the axes of the same triad located at each material point, treated as three additional degrees of freedom and referred to as Cosserat rotations. This gives u and _eΘ as the six degrees of freedom at a material point. The internal rotations _iΘ, often neglected in classical continuum mechanics, exist in all deforming solid continua because they arise from the Jacobian of deformation. When the internal rotations _iΘ are resisted by the deforming matter, a conjugate moment tensor arises that, together with _iΘ, may result in energy storage and/or dissipation, which must be accounted for in the conservation and balance laws. The Cosserat rotations _eΘ likewise give rise to a conjugate moment tensor which, together with _eΘ, may also result in energy storage and/or dissipation. The main focus of the paper is a consistent derivation of conservation and balance laws that incorporate the aforementioned physics, together with the associated constitutive theories for thermoelastic solids.
The mathematical model derived here has closure, and the constitutive theories derived using two alternate approaches are in agreement with each other as well as with the condition resulting from the entropy inequality. Material coefficients introduced in the constitutive theories are clearly defined and discussed.
Action and entanglement in gravity and field theory.
Neiman, Yasha
2013-12-27
In nongravitational quantum field theory, the entanglement entropy across a surface depends on the short-distance regularization. Quantum gravity should not require such regularization, and it has been conjectured that the entanglement entropy there is always given by the black hole entropy formula evaluated on the entangling surface. We show that these statements have precise classical counterparts at the level of the action. Specifically, we point out that the action can have a nonadditive imaginary part. In gravity, the latter is fixed by the black hole entropy formula, while in nongravitating theories it is arbitrary. From these classical facts, the entanglement entropy conjecture follows by heuristically applying the relation between actions and wave functions.
Inelastic black hole scattering from charged scalar amplitudes
NASA Astrophysics Data System (ADS)
Luna, Andrés; Nicholson, Isobel; O'Connell, Donal; White, Chris D.
2018-03-01
We explain how the lowest-order classical gravitational radiation produced during the inelastic scattering of two Schwarzschild black holes in General Relativity can be obtained from a tree scattering amplitude in gauge theory coupled to scalar fields. The gauge calculation is related to gravity through the double copy. We remove unwanted scalar forces which can occur in the double copy by introducing a massless scalar in the gauge theory, which is treated as a ghost in the link to gravity. We hope these methods are a step towards a direct application of the double copy at higher orders in classical perturbation theory, with the potential to greatly streamline gravity calculations for phenomenological applications.
1987-09-01
response. An estimate of the buffeting response for the two cases is presented in Figure 4, using the theory of Irwin (Reference 7). Data acquisition was... values were obtained using the log decrement method by exciting the bridge in one mode and observing the decay of the response. Classical theory would... added mass or structural damping level. The addition of inertia to the deck would tend to lower the response according to classical vibration theory.
An entropy method for induced drag minimization
NASA Technical Reports Server (NTRS)
Greene, George C.
1989-01-01
A fundamentally new approach to the aircraft minimum induced drag problem is presented. The method, a 'viscous lifting line', is based on the minimum entropy production principle and does not require the planar wake assumption. An approximate, closed-form solution is obtained for several wing configurations, including a comparison of wing extension, winglets, and in-plane wing sweep, with and without a constraint on wing-root bending moment. Like the classical lifting-line theory, this theory predicts that induced drag is proportional to the square of the lift coefficient and inversely proportional to the wing aspect ratio. Unlike the classical theory, it predicts that induced drag is Reynolds number dependent and that the optimum spanwise circulation distribution is non-elliptic.
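The classical proportionality this record refers to is the lifting-line result C_Di = C_L² / (π e AR); a one-line sketch (setting the span-efficiency factor e to 1, the elliptic-loading case, as an illustrative assumption):

```python
import math

def induced_drag_coeff(CL, AR, e=1.0):
    """Classical lifting-line induced-drag coefficient:
    C_Di = CL^2 / (pi * e * AR); e = 1 for elliptic span loading."""
    return CL**2 / (math.pi * e * AR)

print(round(induced_drag_coeff(CL=0.5, AR=8.0), 5))  # 0.00995
```

The entropy method above reproduces this CL² and 1/AR scaling but, unlike the classical formula, introduces a Reynolds-number dependence.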
A novel approach to the theory of homogeneous and heterogeneous nucleation.
Ruckenstein, Eli; Berim, Gersh O; Narsimhan, Ganesan
2015-01-01
A new approach to the theory of nucleation, formulated relatively recently by Ruckenstein, Narsimhan, and Nowakowski (see Refs. [7-16]) and developed further by Ruckenstein and other colleagues, is presented. In contrast to the classical nucleation theory, which is based on calculating the free energy of formation of a cluster of the new phase as a function of its size on the basis of macroscopic thermodynamics, the proposed theory uses the kinetic theory of fluids to calculate the condensation (W(+)) and dissociation (W(-)) rates on and from the surface of the cluster, respectively. The dissociation rate of a monomer from a cluster is evaluated from the average time spent by a surface monomer in the potential well, as obtained from the solution of the Fokker-Planck equation in the phase space of position and momentum for the liquid-to-solid transition and in the phase space of energy for the vapor-to-liquid transition. The condensation rates are calculated using traditional expressions. Knowledge of those two rates allows one to calculate the size of the critical cluster from the equality W(+)=W(-) as well as the rate of nucleation. The developed microscopic approach avoids the controversial application of classical thermodynamics to the description of nuclei which contain only a few molecules. The new theory was applied to a number of cases, such as the liquid-to-solid and vapor-to-liquid phase transitions, binary nucleation, heterogeneous nucleation, nucleation on soluble particles and protein folding. The theory predicts higher nucleation rates at high saturation ratios (small critical clusters) than the classical nucleation theory for both liquid-to-solid and vapor-to-liquid transitions. As expected, at low saturation ratios, for which the size of the critical cluster is large, the results of the new theory are consistent with those of the classical one.
The present approach was combined with the density functional theory to account for the density profile in the cluster. This approach was also applied to protein folding, viewed as the evolution of a cluster of native residues of spherical shape within a protein molecule, which could explain protein folding/unfolding and their dependence on temperature. Copyright © 2014 Elsevier B.V. All rights reserved.
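For orientation, the classical benchmark against which such kinetic theories are compared can be sketched numerically: in classical nucleation theory the critical radius follows from the Kelvin relation r* = 2σv_m/(k_B T ln S), and the critical cluster shrinks as the saturation ratio S grows. The sketch below uses illustrative figures for water vapour at 293 K; the σ and v_m values are assumptions for the example, not numbers from the paper.

```python
import math

def critical_radius(sigma, v_m, T, S, kB=1.380649e-23):
    """Classical (Kelvin) critical radius r* = 2*sigma*v_m / (kB*T*ln S).

    sigma: surface tension [N/m], v_m: molecular volume [m^3],
    T: temperature [K], S: saturation ratio (> 1)."""
    return 2.0 * sigma * v_m / (kB * T * math.log(S))

def critical_size(sigma, v_m, T, S):
    """Number of molecules in the critical cluster of radius r*."""
    r = critical_radius(sigma, v_m, T, S)
    return (4.0 / 3.0) * math.pi * r**3 / v_m

# Illustrative (assumed) values for water vapour at 293 K:
sigma = 0.0728   # N/m
v_m = 2.99e-29   # m^3 per molecule
for S in (2.0, 4.0, 8.0):
    print(S, critical_size(sigma, v_m, T=293.0, S=S))
```

The monotonic shrinkage of the critical cluster with S is the regime in which the abstract reports the kinetic theory departing from the classical prediction.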
Branes and the Kraft-Procesi transition: classical case
NASA Astrophysics Data System (ADS)
Cabrera, Santiago; Hanany, Amihay
2018-04-01
Moduli spaces of a large set of 3d N=4 effective gauge theories are known to be closures of nilpotent orbits. This set of theories has recently acquired a special status, due to Namikawa's theorem. As a consequence of this theorem, closures of nilpotent orbits are the simplest non-trivial moduli spaces that can be found in three-dimensional theories with eight supercharges. In the early 80's, mathematicians Hanspeter Kraft and Claudio Procesi characterized an inclusion relation between nilpotent orbit closures of the same classical Lie algebra. We recently [1] showed a physical realization of their work in terms of the motion of D3-branes on the Type IIB superstring embedding of the effective gauge theories. This analysis is restricted to A-type Lie algebras. The present note expands our previous discussion to the remaining classical cases: orthogonal and symplectic algebras. In order to do so we introduce O3-planes in the superstring description. We also find a brane realization for the mathematical map between two partitions of the same integer number known as collapse. Another result is that basic Kraft-Procesi transitions turn out to be described by the moduli space of orthosymplectic quivers with varying boundary conditions.
Elder, D
1984-06-07
The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatorial to coding identity of units, the non-combinatorial to coding production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.
Second-Order Asymptotics for the Classical Capacity of Image-Additive Quantum Channels
NASA Astrophysics Data System (ADS)
Tomamichel, Marco; Tan, Vincent Y. F.
2015-08-01
We study non-asymptotic fundamental limits for transmitting classical information over memoryless quantum channels, i.e. we investigate the amount of classical information that can be transmitted when a quantum channel is used a finite number of times and a fixed, non-vanishing average error is permissible. In this work we consider the classical capacity of quantum channels that are image-additive, including all classical to quantum channels, as well as the product state capacity of arbitrary quantum channels. In both cases we show that the non-asymptotic fundamental limit admits a second-order approximation that illustrates the speed at which the rate of optimal codes converges to the Holevo capacity as the blocklength tends to infinity. The behavior is governed by a new channel parameter, called channel dispersion, for which we provide a geometrical interpretation.
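The second-order behaviour described above has the generic normal-approximation form log M*(n, ε) ≈ nC + sqrt(nV) Φ⁻¹(ε), where C is the capacity, V the channel dispersion, and Φ⁻¹ the inverse standard normal CDF. A minimal sketch with purely hypothetical values of C and V (the abstract reports none), showing the achievable rate approaching capacity from below as the blocklength grows:

```python
import math

def phi_inv(p):
    """Inverse standard normal CDF via dependency-free bisection on erf."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def second_order_rate(n, C, V, eps):
    """Normal approximation: log M*(n, eps) ≈ n*C + sqrt(n*V) * Phi^{-1}(eps)."""
    return n * C + math.sqrt(n * V) * phi_inv(eps)

# Hypothetical channel parameters (bits per channel use):
C, V, eps = 0.5, 0.25, 0.001
for n in (100, 1000, 10000):
    print(n, second_order_rate(n, C, V, eps) / n)  # rate per channel use
```

Since Φ⁻¹(ε) is negative for small ε, the per-use rate sits below C by a gap of order 1/sqrt(n), which is exactly the "speed of convergence" the channel dispersion quantifies.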
Nesterenko, Pavel N; Rybalko, Marina A; Paull, Brett
2005-06-01
Significant deviations from classical van Deemter behaviour, indicative of turbulent flow liquid chromatography, have been recorded for mobile phases of varying viscosity on porous silica monolithic columns at elevated mobile phase flow rates.
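As background, the classical van Deemter equation relates plate height H to linear velocity u by H = A + B/u + Cu, with a minimum at u = sqrt(B/C); the deviations reported here occur at velocities well beyond that optimum. A minimal sketch with purely hypothetical coefficients, not values from the study:

```python
def plate_height(u, A, B, C):
    """Classical van Deemter equation: H = A + B/u + C*u.

    u: mobile-phase linear velocity; A: eddy diffusion; B: longitudinal
    diffusion; C: mass-transfer resistance (all in consistent units)."""
    return A + B / u + C * u

# Hypothetical coefficients, for illustration only:
A, B, C = 2.0, 10.0, 0.05
u_opt = (B / C) ** 0.5              # velocity minimising H (dH/du = 0)
print(u_opt, plate_height(u_opt, A, B, C))
```

In the classical picture H climbs linearly (slope C) past u_opt; the turbulent-flow behaviour the abstract reports shows up as a flattening of that high-velocity branch.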
On the emergence of classical gravity
NASA Astrophysics Data System (ADS)
Larjo, Klaus
In this thesis I will discuss how certain black holes arise as an effective, thermodynamical description from non-singular microstates in string theory. This provides a possible solution to the information paradox, and strengthens the case for treating black holes as thermodynamical objects. I will characterize the data defining a microstate of a black hole in several settings, and demonstrate that most of the data is unmeasurable for a classical observer. I will further show that the data that is measurable is universal for nearly all microstates, making it impossible for a classical observer to distinguish between microstates, thus giving rise to an effective statistical description for the black hole. In the first half of the thesis I will work with two specific systems: the half-BPS sector of N = 4 super Yang-Mills and the conformal field theory corresponding to the D1/D5 system; in both cases the high degree of symmetry present provides great control over potentially intractable computations. For these systems, I will further specify the conditions a quantum mechanical microstate must satisfy in order to have a classical description in terms of a unique metric, and define a 'metric operator' whose eigenstates correspond to classical geometries. In the second half of the thesis I will consider a much broader setting, general N = 1 superconformal quiver gauge theories and their dual gravity theories, and demonstrate that a similar effective description arises also in this setting.
From Foucault to Freire Through Facebook: Toward an Integrated Theory of mHealth.
Bull, Sheana; Ezeanochie, Nnamdi
2016-08-01
To document the integration of social science theory in literature on mHealth (mobile health) and consider opportunities for integration of classic theory, health communication theory, and social networking to generate a relevant theory for mHealth program design. A secondary review of research syntheses and meta-analyses published between 2005 and 2014 related to mHealth, using the AMSTAR (A Measurement Tool to Assess Systematic Reviews) methodology for assessment of the quality of each review. High-quality articles from those reviews using a randomized controlled design and integrating social science theory in program design, implementation, or evaluation were reviewed. Results: There were 1,749 articles among the 170 reviews with a high AMSTAR score (≥30). Only 13 were published from 2005 to 2014, used a randomized controlled design, and made explicit mention of theory in any aspect of their mHealth program. All 13 included theoretical perspectives focused on psychological and/or psychosocial theories and constructs. Conclusions: There is a very limited use of social science theory in mHealth despite demonstrated benefits in doing so. We propose an integrated theory of mHealth that incorporates classic theory, health communication theory, and social networking to guide development and evaluation of mHealth programs. © 2015 Society for Public Health Education.
Iron Catalyst Chemistry in High Pressure Carbon Monoxide Nanotube Reactor
NASA Technical Reports Server (NTRS)
Scott, Carl D.; Povitsky, Alexander; Dateo, Christopher; Gokcen, Tahir; Smalley, Richard E.
2001-01-01
The high-pressure carbon monoxide (HiPco) technique for producing single wall carbon nanotubes (SWNT) is analyzed using a chemical reaction model coupled with properties calculated along streamlines. Streamline properties for mixing jets are calculated by the FLUENT code using the k-ε turbulence model for pure carbon monoxide. The HiPco process introduces cold iron pentacarbonyl diluted in CO, or alternatively nitrogen, at high pressure, ca. 30 atmospheres, into a conical mixing zone. Hot CO is also introduced via three jets at angles with respect to the axis of the reactor. Hot CO decomposes the Fe(CO)5 to release atomic Fe. Cluster reaction rates are from Krestinin et al., based on shock tube measurements. Another model is from classical cluster theory given by Girshick's team. The calculations are performed on streamlines that assume that a cold mixture of Fe(CO)5 in CO is introduced along the reactor axis. Then iron forms clusters that catalyze the formation of SWNTs from the Boudouard reaction on Fe-containing clusters by reaction with CO. To simulate the chemical process along streamlines that were calculated by the fluid dynamics code FLUENT, a time history of temperature and dilution is determined along streamlines. Alternative catalyst injection schemes are also evaluated.
Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClean, Jarrod R.; Kimchi-Schwartz, Mollie E.; Carter, Jonathan
Using quantum devices supported by classical computational resources is a promising approach to quantum-enabled computation. One powerful example of such a hybrid quantum-classical approach optimized for classically intractable eigenvalue problems is the variational quantum eigensolver, built to utilize quantum resources for the solution of eigenvalue problems and optimizations with minimal coherence time requirements by leveraging classical computational resources. These algorithms have been placed as leaders among the candidates for the first to achieve supremacy over classical computation. Here, we provide evidence for the conjecture that variational approaches can automatically suppress even nonsystematic decoherence errors by introducing an exactly solvable channel model of variational state preparation. Moreover, we develop a more general hierarchy of measurement and classical computation that allows one to obtain increasingly accurate solutions by leveraging additional measurements and classical resources. In conclusion, we demonstrate numerically on a sample electronic system that this method both allows for the accurate determination of excited electronic states as well as reduces the impact of decoherence, without using any additional quantum coherence time or formal error-correction codes.
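The hybrid loop described here alternates quantum state preparation and measurement with classical parameter optimization. A deliberately classical toy sketch: a 2x2 matrix and a one-parameter trial vector stand in for the Hamiltonian and the quantum ansatz (both are assumptions for illustration, not the paper's model), and a crude grid search stands in for the classical optimizer.

```python
import math

# Toy 2x2 "Hamiltonian" standing in for the quantum device (illustrative).
H = [[1.0, 0.5], [0.5, -1.0]]

def energy(theta):
    """<psi(theta)|H|psi(theta)> for the one-parameter ansatz (cos t, sin t)."""
    c, s = math.cos(theta), math.sin(theta)
    Hpsi = [H[0][0] * c + H[0][1] * s, H[1][0] * c + H[1][1] * s]
    return c * Hpsi[0] + s * Hpsi[1]

# Classical outer loop: grid search over theta in [0, 2*pi).
best_t, best_e = 0.0, energy(0.0)
for k in range(1, 6284):
    t = k * 0.001
    e = energy(t)
    if e < best_e:
        best_t, best_e = t, e
print(best_t, best_e)  # best_e approximates the smallest eigenvalue of H
```

The exact ground energy of this H is -sqrt(1.25); the variational minimum matches it because the two-component ansatz spans the whole real state space, which is the toy analogue of an expressive circuit ansatz.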
SYMTRAN - A Time-dependent Symmetric Tandem Mirror Transport Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hua, D; Fowler, T
2004-06-15
A time-dependent version of the steady-state radial transport model in symmetric tandem mirrors in Ref. [1] has been coded up and first tests performed. Our code, named SYMTRAN, is an adaptation of the earlier SPHERE code for spheromaks, now modified for tandem mirror physics. Motivated by Post's new concept of kinetic stabilization of symmetric mirrors, it is an extension of the earlier TAMRAC rate-equation code omitting radial transport [2], which successfully accounted for experimental results in TMX. The SYMTRAN code differs from the earlier tandem mirror radial transport code TMT in that our code is focused on axisymmetric tandem mirrors and classical diffusion, whereas TMT emphasized non-ambipolar transport in TMX and MFTF-B due to yin-yang plugs and non-symmetric transitions between the plugs and axisymmetric center cell. Both codes exhibit interesting but different non-linear behavior.
Social Comparison: The End of a Theory and the Emergence of a Field
ERIC Educational Resources Information Center
Buunk, Abraham P.; Gibbons, Frederick X.
2007-01-01
The past and current states of research on social comparison are reviewed with regard to a series of major theoretical developments that have occurred in the past 5 decades. These are, in chronological order: (1) classic social comparison theory, (2) fear-affiliation theory, (3) downward comparison theory, (4) social comparison as social…
ERIC Educational Resources Information Center
Gaziano, Cecilie
This paper seeks to integrate some ideas from family systems theory and attachment theory within a theory of public opinion and social movement. Citing the classic "The Authoritarian Personality," the paper states that the first authorities children know, their parents or other caregivers, shape children's attitudes toward all…
Portelli, Geoffrey; Barrett, John M; Hilgen, Gerrit; Masquelier, Timothée; Maccione, Alessandro; Di Marco, Stefano; Berdondini, Luca; Kornprobst, Pierre; Sernagor, Evelyne
2016-01-01
How a population of retinal ganglion cells (RGCs) encodes the visual scene remains an open question. Going beyond individual RGC coding strategies, results in salamander suggest that the relative latencies of a RGC pair encode spatial information. Thus, a population code based on this concerted spiking could be a powerful mechanism to transmit visual information rapidly and efficiently. Here, we tested this hypothesis in mouse by recording simultaneous light-evoked responses from hundreds of RGCs, at pan-retinal level, using a new generation of large-scale, high-density multielectrode array consisting of 4096 electrodes. Interestingly, we did not find any RGCs exhibiting a clear latency tuning to the stimuli, suggesting that in mouse, individual RGC pairs may not provide sufficient information. We show that a significant amount of information is encoded synergistically in the concerted spiking of large RGC populations. Thus, the RGC population response described with relative activities, or ranks, provides more relevant information than classical independent spike count- or latency- based codes. In particular, we report for the first time that when considering the relative activities across the whole population, the wave of first stimulus-evoked spikes is an accurate indicator of stimulus content. We show that this coding strategy coexists with classical neural codes, and that it is more efficient and faster. Overall, these novel observations suggest that already at the level of the retina, concerted spiking provides a reliable and fast strategy to rapidly transmit new visual scenes.
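The contrast between a rank-based population code and absolute-latency codes can be sketched with a toy decoder: neurons are ordered by first-spike latency, and the resulting rank vector is compared across trials. All latencies below are hypothetical; the point is only that ranks survive jitter and shifts that change absolute latencies.

```python
def rank_code(latencies):
    """Rank each neuron by its first-spike latency (0 = earliest)."""
    order = sorted(range(len(latencies)), key=lambda i: latencies[i])
    ranks = [0] * len(latencies)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

def match(ranks_a, ranks_b):
    """Crude similarity: fraction of neurons holding the same rank."""
    return sum(a == b for a, b in zip(ranks_a, ranks_b)) / len(ranks_a)

# Hypothetical first-spike latencies (ms) for a stored template and two trials:
template = rank_code([12.0, 35.0, 20.0, 41.0])
trial_same = rank_code([14.0, 37.0, 19.0, 45.0])   # same stimulus, jittered
trial_diff = rank_code([40.0, 13.0, 33.0, 18.0])   # different stimulus
print(match(template, trial_same), match(template, trial_diff))
```

A spike-count decoder would see identical counts (one spike per neuron) in all three trials; the rank vector still separates the stimuli, which is the intuition behind decoding from the wave of first stimulus-evoked spikes.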
Sheaff, R; Lloyd-Kendall, A
2000-07-01
To investigate how far English National Health Service (NHS) Personal Medical Services (PMS) contracts embody a principal-agent relationship between health authorities (HAs) and primary health care providers, especially, but not exclusively, general practices involved in the first wave (1998) of PMS pilot projects; and to consider the implications for relational and classical theories of contract. Content analysis of 71 first-wave PMS contracts. Most PMS contracts reflect current English NHS policy priorities, but few institute mechanisms to ensure that providers realise these objectives. Although PMS contracts have some classical characteristics, relational characteristics are more evident. Some characteristics match neither the classical nor the relational model. First-wave PMS contracts do not appear to embody a strong principal-agent relationship between HAs and primary health care providers. This finding offers little support for the relevance of classical theories of contract, but also implies that relational theories of contract need to be revised for quasi-market settings. Future PMS contracts will need to focus more on evidence-based processes of primary care, health outputs and patient satisfaction and less upon service inputs. PMS contracts will also need to be longer-term contracts in order to promote the 'institutional embedding' of independent general practice in the wider management systems of the NHS.
Theory and praxis of map analysis in CHEF part 1: Linear normal form
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michelotti, Leo; /Fermilab
2008-10-01
This memo begins a series which, put together, could comprise the 'CHEF Documentation Project' if there were such a thing. The first--and perhaps only--three will telegraphically describe theory, algorithms, implementation and usage of the normal form map analysis procedures encoded in CHEF's collection of libraries. [1] This one will begin the sequence by explaining the linear manipulations that connect the Jacobian matrix of a symplectic mapping to its normal form. It is a 'Reader's Digest' version of material I wrote in Intermediate Classical Dynamics (ICD) [2] and randomly scattered across technical memos, seminar viewgraphs, and lecture notes for the past quarter century. Much of its content is old, well known, and in some places borders on the trivial. Nevertheless, completeness requires their inclusion. The primary objective is the 'fundamental theorem' on normalization written on page 8. I plan to describe the nonlinear procedures in a subsequent memo and devote a third to laying out algorithms and lines of code, connecting them with equations written in the first two. Originally this was to be done in one short paper, but I jettisoned that approach after its first section exceeded a dozen pages. The organization of this document is as follows. A brief description of notation is followed by a section containing a general treatment of the linear problem. After the 'fundamental theorem' is proved, two further subsections discuss the generation of equilibrium distributions and the issue of 'phase'. The final major section reviews parameterizations--that is, lattice functions--in two and four dimensions with a passing glance at the six-dimensional version. Appearances to the contrary, for the most part I have tried to restrict consideration to matters needed to understand the code in CHEF's libraries.
Linear Quantum Systems: Non-Classical States and Robust Stability
2016-06-29
quantum linear systems subject to non-classical quantum fields. The major outcomes of this project are (i) derivation of quantum filtering equations for systems with non-classical input states, including single photon states, (ii) determination of how linear... history going back some 50 years, to the birth of modern control theory with Kalman's foundational work on filtering and LQG optimal control
An Arbitrary First Order Theory Can Be Represented by a Program: A Theorem
NASA Technical Reports Server (NTRS)
Hosheleva, Olga
1997-01-01
How can we represent knowledge inside a computer? For formalized knowledge, classical logic seems to be the most adequate tool. Classical logic is behind all formalisms of classical mathematics, and behind many formalisms used in Artificial Intelligence. There is only one serious problem with classical logic: due to the famous Godel's theorem, classical logic is algorithmically undecidable; as a result, when the knowledge is represented in the form of logical statements, it is very difficult to check whether, based on this statement, a given query is true or not. To make knowledge representations more algorithmic, a special field of logic programming was invented. An important portion of logic programming is algorithmically decidable. To cover knowledge that cannot be represented in this portion, several extensions of the decidable fragments have been proposed. In the spirit of logic programming, these extensions are usually introduced in such a way that even if a general algorithm is not available, good heuristic methods exist. It is important to check whether the already proposed extensions are sufficient, or whether further extensions are necessary. In the present paper, we show that one particular extension, namely, logic programming with classical negation, introduced by M. Gelfond and V. Lifschitz, can represent (in some reasonable sense) an arbitrary first order logical theory.
Derivation of Einstein-Cartan theory from general relativity
NASA Astrophysics Data System (ADS)
Petti, Richard
2015-04-01
General relativity cannot describe exchange of classical intrinsic angular momentum and orbital angular momentum. Einstein-Cartan theory fixes this problem in the least invasive way. In the late 20th century, the consensus view was that Einstein-Cartan theory requires inclusion of torsion without adequate justification, it has no empirical support (though it doesn't conflict with any known evidence), it solves no important problem, and it complicates gravitational theory with no compensating benefit. In 1986 the author published a derivation of Einstein-Cartan theory from general relativity, with no additional assumptions or parameters. Starting without torsion, Poincaré symmetry, classical or quantum spin, or spinors, it derives torsion and its relation to spin from a continuum limit of general relativistic solutions. The present work makes the case that this computation, combined with supporting arguments, constitutes a derivation of Einstein-Cartan theory from general relativity, not just a plausibility argument. This paper adds more and simpler explanations, more computational details, correction of a factor of 2, discussion of limitations of the derivation, and discussion of some areas of gravitational research where Einstein-Cartan theory is relevant.
Quantum channels and memory effects
NASA Astrophysics Data System (ADS)
Caruso, Filippo; Giovannetti, Vittorio; Lupo, Cosmo; Mancini, Stefano
2014-10-01
Any physical process can be represented as a quantum channel mapping an initial state to a final state. Hence it can be characterized from the point of view of communication theory, i.e., in terms of its ability to transfer information. Quantum information provides a theoretical framework and the proper mathematical tools to accomplish this. In this context the notion of codes and communication capacities have been introduced by generalizing them from the classical Shannon theory of information transmission and error correction. The underlying assumption of this approach is to consider the channel not as acting on a single system, but on sequences of systems, which, when properly initialized allow one to overcome the noisy effects induced by the physical process under consideration. While most of the work produced so far has been focused on the case in which a given channel transformation acts identically and independently on the various elements of the sequence (memoryless configuration in jargon), correlated error models appear to be a more realistic way to approach the problem. A slightly different, yet conceptually related, notion of correlated errors applies to a single quantum system which evolves continuously in time under the influence of an external disturbance which acts on it in a non-Markovian fashion. This leads to the study of memory effects in quantum channels: a fertile ground where interesting novel phenomena emerge at the intersection of quantum information theory and other branches of physics. A survey is taken of the field of quantum channels theory while also embracing these specific and complex settings.
Anticipatory vigilance: A grounded theory study of minimising risk within the perioperative setting.
O'Brien, Brid; Andrews, Tom; Savage, Eileen
2018-01-01
To explore and explain how nurses minimise risk in the perioperative setting. Perioperative nurses care for patients who are having surgery or other invasive explorative procedures. Perioperative care is increasingly focused on how to improve patient safety. Safety and risk management is a global priority for health services in reducing risk. Many studies have explored safety within the healthcare settings. However, little is known about how nurses minimise risk in the perioperative setting. Classic grounded theory. Ethical approval was granted for all aspects of the study. Thirty-seven nurses working in 11 different perioperative settings in Ireland were interviewed and 33 hr of nonparticipant observation was undertaken. Concurrent data collection and analysis was undertaken using theoretical sampling. The constant comparative method, coding and memoing were used to analyse the data. Participants' main concern was how to minimise risk. Participants resolved this through engaging in anticipatory vigilance (core category). This strategy consisted of orchestrating, routinising and momentary adapting. Understanding the strategies of anticipatory vigilance extends and provides an in-depth explanation of how nurses' behaviour ensures that risk is minimised in a complex high-risk perioperative setting. This is the first theory situated in the perioperative area for nurses. This theory provides a guide and understanding for nurses working in the perioperative setting on how to minimise risk. It makes perioperative nursing visible, enabling positive patient outcomes. This research suggests the need for training and education in maintaining safety and minimising risk in the perioperative setting. © 2017 John Wiley & Sons Ltd.
Di Giulio, Massimo
2016-06-21
I analyze the mechanism on which the majority of theories that place the physico-chemical properties of amino acids at the center of the origin of the genetic code are based. As this mechanism relies on excessive mutational steps, I conclude that it could not have been operative, or that if operative it would not have allowed a full realization of the predictions of these theories, because this mechanism evidently contained a high indeterminacy. I do so by disputing the four-column theory of the origin of the genetic code (Higgs, 2009) and by replying to the criticism that was directed towards the coevolution theory of the origin of the genetic code. In this context, I suggest a new hypothesis that clarifies the mechanism by which the codon domains of the precursor amino acids would have evolved, as predicted by the coevolution theory. This mechanism would have used particular elongation factors that would have constrained the evolution of all amino acids belonging to a given biosynthetic family to the progenitor pre-tRNA that first recognized the first codons that evolved in a certain codon domain of a determined precursor amino acid. This happened because the elongation factors recognized two characteristics of the progenitor pre-tRNAs of precursor amino acids, which prevented the elongation factors from recognizing the pre-tRNAs belonging to biosynthetic families of different precursor amino acids. Finally, I analyze, by means of Fisher's exact test, the distribution within the genetic code of the biosynthetic classes of amino acids and that of the polarity values of amino acids. This analysis would seem to support the biosynthetic classes of amino acids over the polarity values as the main factor that led to the structuring of the genetic code, with the physico-chemical properties of amino acids playing only a subsidiary role in this evolution.
As a whole, the full analysis leads to the conclusion that the coevolution theory of the origin of the genetic code is a highly corroborated theory. Copyright © 2016 Elsevier Ltd. All rights reserved.
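The final analytical step mentioned, Fisher's exact test on a contingency of amino-acid classes within the code, can be sketched with a dependency-free two-sided implementation based on the hypergeometric distribution. The 2x2 counts below are hypothetical stand-ins, not the paper's data.

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum of all hypergeometric outcomes no more probable than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def prob(x):
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = prob(a)
    lo = max(0, row1 + col1 - n)
    hi = min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Hypothetical association between codon-domain membership and
# biosynthetic class (illustrative counts only):
print(fisher_exact_p(8, 2, 1, 5))
```

A small p-value here would, as in the paper's argument, indicate that the tested partition of amino acids is distributed across the code non-randomly.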
Towards a General Model of Temporal Discounting
ERIC Educational Resources Information Center
van den Bos, Wouter; McClure, Samuel M.
2013-01-01
Psychological models of temporal discounting have now successfully displaced classical economic theory due to the simple fact that many common behavior patterns, such as impulsivity, were unexplainable with classic models. However, the now dominant hyperbolic model of discounting is itself becoming increasingly strained. Numerous factors have…
The evolution of consciousness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stapp, H.P.
1996-08-16
It is argued that the principles of classical physics are inimical to the development of an adequate science of consciousness. The problem is that insofar as the classical principles are valid consciousness can have no effect on the behavior, and hence on the survival prospects, of the organisms in which it inheres. Thus within the classical framework it is not possible to explain in natural terms the development of consciousness to the high-level form found in human beings. In quantum theory, on the other hand, consciousness can be dynamically efficacious: quantum theory does allow consciousness to influence behavior, and thence to evolve in accordance with the principles of natural selection. However, this evolutionary requirement places important constraints upon the details of the formulation of the quantum dynamical principles.
Classical and quantum production of cornucopions at energies below 10^18 GeV
NASA Astrophysics Data System (ADS)
Banks, T.; O'loughlin, M.
1993-01-01
We argue that the paradoxes associated with infinitely degenerate states, which plague relic particle scenarios for the end point of black hole evaporation, may be absent when the relics are horned particles. Most of our arguments are based on simple observations about the classical geometry of extremal dilaton black holes, but at a crucial point we are forced to speculate about classical solutions to string theory in which the infinite coupling singularity of the extremal dilaton solution is shielded by a condensate of massless modes propagating in its infinite horn. We use the nonsingular c=1 solution of (1+1)-dimensional string theory as a crude model for the properties of the condensate. We also present a brief discussion of more general relic scenarios based on large relics of low mass.
Classically and quantum stable emergent universe from conservation laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campo, Sergio del; Herrera, Ramón; Guendelman, Eduardo I.
It has been recently pointed out by Mithani-Vilenkin [1-4] that certain emergent universe scenarios which are classically stable are nevertheless unstable semiclassically to collapse. Here, we show that there is a class of emergent universes derived from scale invariant two measures theories with spontaneous symmetry breaking (s.s.b) of the scale invariance, which can have both classical stability and do not suffer the instability pointed out by Mithani-Vilenkin towards collapse. We find that this stability is due to the presence of a symmetry in the 'emergent phase', which, together with the non-linearities of the theory, does not allow the FLRW scale factor to become smaller than a certain minimum value a_0 in a certain protected region.
A proposal for self-correcting stabilizer quantum memories in 3 dimensions (or slightly less)
NASA Astrophysics Data System (ADS)
Brell, Courtney G.
2016-01-01
We propose a family of local CSS stabilizer codes as possible candidates for self-correcting quantum memories in 3D. The construction is inspired by the classical Ising model on a Sierpinski carpet fractal, which acts as a classical self-correcting memory. Our models are naturally defined on fractal subsets of a 4D hypercubic lattice with Hausdorff dimension less than 3. Though this does not imply that these models can be realized with local interactions in R^3, we also discuss this possibility. The X and Z sectors of the code are dual to one another, and we show that there exists a finite temperature phase transition associated with each of these sectors, providing evidence that the system may robustly store quantum information at finite temperature.
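The classical inspiration is easy to visualize: the level-L Sierpinski carpet keeps a cell of the 3^L x 3^L grid iff no pair of base-3 digits of its coordinates is (1, 1), giving Hausdorff dimension log 8 / log 3 ≈ 1.89. A short sketch of the 2D carpet (the paper's codes live on higher-dimensional fractal subsets, which this does not reproduce):

```python
def carpet(level):
    """Boolean mask of the level-`level` Sierpinski carpet on a 3^level grid."""
    n = 3 ** level
    def filled(x, y):
        while x or y:
            if x % 3 == 1 and y % 3 == 1:   # centre cell removed at each scale
                return False
            x, y = x // 3, y // 3
        return True
    return [[filled(x, y) for x in range(n)] for y in range(n)]

m = carpet(2)
print(sum(row.count(True) for row in m))  # 8**2 = 64 filled cells at level 2
```

Each refinement multiplies the number of filled cells by 8 while the grid area grows by 9, which is where the sub-2 fractal dimension, and by analogy the sub-3 dimension of the quantum construction, comes from.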
Semi-classical Reissner-Nordstrom model for the structure of charged leptons
NASA Technical Reports Server (NTRS)
Rosen, G.
1980-01-01
The lepton self-mass problem is examined within the framework of the quantum theory of electromagnetism and gravity. Consideration is given to the Reissner-Nordstrom solution to the Einstein-Maxwell classical field equations for an electrically charged mass point, and the WKB theory for a semiclassical system with total energy zero is used to obtain an expression for the Einstein-Maxwell action factor. The condition obtained is found to account for the observed mass values of the three charged leptons, and to be in agreement with the correspondence principle.
Redundancy of constraints in the classical and quantum theories of gravitation.
NASA Technical Reports Server (NTRS)
Moncrief, V.
1972-01-01
It is shown that in Dirac's version of the quantum theory of gravitation, the Hamiltonian constraints are greatly redundant. If the Hamiltonian constraint condition is satisfied at one point on the underlying, closed three-dimensional manifold, then it is automatically satisfied at every point, provided only that the momentum constraints are everywhere satisfied. This permits one to replace the usual infinity of Hamiltonian constraints by a single condition which may be taken in the form of an integral over the manifold. Analogous theorems are given for the classical Einstein Hamilton-Jacobi equations.
Quantum gambling based on Nash-equilibrium
NASA Astrophysics Data System (ADS)
Zhang, Pei; Zhou, Xiao-Qi; Wang, Yun-Long; Liu, Bi-Heng; Shadbolt, Pete; Zhang, Yong-Sheng; Gao, Hong; Li, Fu-Li; O'Brien, Jeremy L.
2017-06-01
The problem of establishing a fair bet between a spatially separated gambler and casino can only be solved in the classical regime by relying on a trusted third party. By combining Nash-equilibrium theory with quantum game theory, we show that a secure, remote, two-party game can be played using a quantum gambling machine which has no classical counterpart. Specifically, by modifying the Nash-equilibrium point we can construct games with an arbitrary amount of bias, including a game that is demonstrably fair to both parties. We also report a proof-of-principle experimental demonstration using linear optics.
The energy-momentum tensor(s) in classical gauge theories
Blaschke, Daniel N.; Gieres, François; Reboud, Méril; ...
2016-07-12
We give an introduction to, and review of, the energy-momentum tensors in classical gauge field theories in Minkowski space, and to some extent also in curved space-time. For the canonical energy-momentum tensor of non-Abelian gauge fields and of matter fields coupled to such fields, we present a new and simple improvement procedure based on gauge invariance for constructing a gauge invariant, symmetric energy-momentum tensor. In conclusion, the relationship with the Einstein-Hilbert tensor following from the coupling to a gravitational field is also discussed.
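As a point of reference for the improvement procedure discussed, the canonical and improved (gauge-invariant, symmetric) tensors of free Abelian gauge theory take the familiar textbook forms below; this is a standard special case, not the paper's non-Abelian construction:

```latex
T^{\mu\nu}_{\mathrm{can}} = -F^{\mu\lambda}\partial^{\nu}A_{\lambda}
  + \tfrac{1}{4}\,\eta^{\mu\nu}F_{\rho\sigma}F^{\rho\sigma},
\qquad
T^{\mu\nu}_{\mathrm{imp}} = -F^{\mu\lambda}F^{\nu}{}_{\lambda}
  + \tfrac{1}{4}\,\eta^{\mu\nu}F_{\rho\sigma}F^{\rho\sigma}.
```

On-shell the two differ by the identically conserved total-derivative term $\partial_{\lambda}(F^{\mu\lambda}A^{\nu})$, which leaves the conserved charges unchanged while rendering the tensor symmetric and gauge invariant.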
Applications of potential theory computations to transonic aeroelasticity
NASA Technical Reports Server (NTRS)
Edwards, J. W.
1986-01-01
Unsteady aerodynamic and aeroelastic stability calculations based upon transonic small disturbance (TSD) potential theory are presented. Results from the two-dimensional XTRAN2L code and the three-dimensional XTRAN3S code are compared with experiment to demonstrate the ability of TSD codes to treat transonic effects. The necessity of nonisentropic corrections to transonic potential theory is demonstrated. Dynamic computational effects resulting from the choice of grid and boundary conditions are illustrated. Unsteady airloads for a number of parameter variations including airfoil shape and thickness, Mach number, frequency, and amplitude are given. Finally, samples of transonic aeroelastic calculations are given. A key observation is the extent to which unsteady transonic airloads calculated by inviscid potential theory may be treated in a locally linear manner.
Revealing the Physics of Galactic Winds Through Massively-Parallel Hydrodynamics Simulations
NASA Astrophysics Data System (ADS)
Schneider, Evan Elizabeth
This thesis documents the hydrodynamics code Cholla and a numerical study of multiphase galactic winds. Cholla is a massively-parallel, GPU-based code designed for astrophysical simulations that is freely available to the astrophysics community. A static-mesh Eulerian code, Cholla is ideally suited to carrying out massive simulations (> 2048³ cells) that require very high resolution. The code incorporates state-of-the-art hydrodynamics algorithms including third-order spatial reconstruction, exact and linearized Riemann solvers, and unsplit integration algorithms that account for transverse fluxes on multidimensional grids. Operator-split radiative cooling and a dual-energy formalism for high Mach number flows are also included. An extensive test suite demonstrates Cholla's superior ability to model shocks and discontinuities, while the GPU-native design makes the code extremely computationally efficient: speeds of 5-10 million cell updates per GPU-second are typical on current hardware for 3D simulations with all of the aforementioned physics. The latter half of this work comprises a comprehensive study of the mixing between a hot, supernova-driven wind and cooler clouds representative of those observed in multiphase galactic winds. Both adiabatic and radiatively-cooling clouds are investigated. The analytic theory of cloud-crushing is applied to the problem, and adiabatic turbulent clouds are found to be mixed with the hot wind on similar timescales as the classic spherical case (4-5 t_cc) with an appropriate rescaling of the cloud-crushing time. Radiatively cooling clouds survive considerably longer, and the differences in evolution between turbulent and spherical clouds cannot be reconciled with a simple rescaling. The rapid incorporation of low-density material into the hot wind implies efficient mass-loading of hot phases of galactic winds. At the same time, the extreme compression of high-density cloud material leads to long-lived but slow-moving clumps that are unlikely to escape the galaxy.
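The cloud-crushing timescale invoked above is conventionally t_cc = √χ · r_cloud / v_wind, with χ the cloud-to-wind density ratio. A hedged sketch with illustrative numbers (not the thesis's actual simulation parameters):

```python
import math

def cloud_crushing_time(chi, r_cloud_pc, v_wind_kms):
    """t_cc = sqrt(chi) * r_cloud / v_wind, returned in Myr.

    chi        : cloud-to-wind density ratio (dimensionless)
    r_cloud_pc : cloud radius in parsecs
    v_wind_kms : wind speed in km/s
    """
    pc_in_km = 3.0857e13   # kilometres per parsec
    myr_in_s = 3.156e13    # seconds per megayear
    t_cc_s = math.sqrt(chi) * r_cloud_pc * pc_in_km / v_wind_kms
    return t_cc_s / myr_in_s

# Illustrative values: chi = 1000, a 5 pc cloud, a 1000 km/s hot wind.
t_cc = cloud_crushing_time(1000.0, 5.0, 1000.0)
# Adiabatic clouds in the study mix on roughly 4-5 such timescales.
```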
Thalamic neuron models encode stimulus information by burst-size modulation
Elijah, Daniel H.; Samengo, Inés; Montemurro, Marcelo A.
2015-01-01
Thalamic neurons have been long assumed to fire in tonic mode during perceptive states, and in burst mode during sleep and unconsciousness. However, recent evidence suggests that bursts may also be relevant in the encoding of sensory information. Here, we explore the neural code of such thalamic bursts. In order to assess whether the burst code is generic or whether it depends on the detailed properties of each bursting neuron, we analyzed two neuron models incorporating different levels of biological detail. One of the models contained no information of the biophysical processes entailed in spike generation, and described neuron activity at a phenomenological level. The second model represented the evolution of the individual ionic conductances involved in spiking and bursting, and required a large number of parameters. We analyzed the models' input selectivity using reverse correlation methods and information theory. We found that n-spike bursts from both models transmit information by modulating their spike count in response to changes to instantaneous input features, such as slope, phase, amplitude, etc. The stimulus feature that is most efficiently encoded by bursts, however, need not coincide with one of such classical features. We therefore searched for the optimal feature among all those that could be expressed as a linear transformation of the time-dependent input current. We found that bursting neurons transmitted 6 times more information about such more general features. The relevant events in the stimulus were located in a time window spanning ~100 ms before and ~20 ms after burst onset. Most importantly, the neural code employed by the simple and the biologically realistic models was largely the same, implying that the simple thalamic neuron model contains the essential ingredients that account for the computational properties of the thalamic burst code. Thus, our results suggest the n-spike burst code is a general property of thalamic neurons. 
PMID:26441623
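The reverse-correlation analysis described above amounts to averaging the stimulus in a window preceding each burst. A minimal burst-triggered-average sketch on synthetic data (illustrative only, not the paper's neuron models):

```python
import random

def burst_triggered_average(stimulus, burst_times, window):
    """Average the stimulus over a window of samples preceding each burst onset.

    stimulus    : list of input-current samples
    burst_times : indices of burst onsets
    window      : number of samples before onset to average over
    """
    valid = [t for t in burst_times if t >= window]
    avg = [0.0] * window
    for t in valid:
        for k in range(window):
            avg[k] += stimulus[t - window + k] / len(valid)
    return avg

# Synthetic demo: "bursts" fire whenever the previous noise sample crosses
# a threshold, so the average stimulus just before onset is elevated.
random.seed(0)
stim = [random.gauss(0.0, 1.0) for _ in range(10000)]
bursts = [t for t in range(50, 10000) if stim[t - 1] > 1.5]
bta = burst_triggered_average(stim, bursts, 20)
```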
Coupled discrete element and finite volume solution of two classical soil mechanics problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Feng; Drumm, Eric; Guiochon, Georges A
One-dimensional solutions for the classic critical upward seepage gradient/quick condition and the time rate of consolidation problems are obtained using coupled routines for the finite volume method (FVM) and discrete element method (DEM), and the results are compared with the analytical solutions. The two-phase flow in a system composed of fluid and solid is simulated with the fluid phase modeled by solving the averaged Navier-Stokes equation using the FVM and the solid phase modeled using the DEM. A framework is described for the coupling of two open source computer codes: YADE-OpenDEM for the discrete element method and OpenFOAM for the computational fluid dynamics. The particle-fluid interaction is quantified using a semi-empirical relationship proposed by Ergun [12]. The two classical verification problems are used to explore issues encountered when using coupled flow DEM codes, namely, the appropriate time step size for both the fluid and mechanical solution processes, the choice of the viscous damping coefficient, and the number of solid particles per finite fluid volume.
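The Ergun relation used to couple the phases estimates the pressure gradient through a granular bed from a viscous and an inertial term. A hedged sketch with illustrative parameter values (not those used in the paper):

```python
def ergun_pressure_gradient(eps, d_p, mu, rho_f, u):
    """Ergun (1952) pressure gradient [Pa/m] across a packed granular bed.

    eps   : porosity (fluid volume fraction)
    d_p   : particle diameter [m]
    mu    : fluid dynamic viscosity [Pa s]
    rho_f : fluid density [kg/m^3]
    u     : superficial fluid velocity [m/s]
    """
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * (1.0 - eps) * rho_f * u * abs(u) / (eps ** 3 * d_p)
    return viscous + inertial

# Water seeping slowly upward through fine sand: the quick condition is
# reached when this gradient balances the submerged unit weight of the soil.
grad = ergun_pressure_gradient(eps=0.4, d_p=1e-3, mu=1e-3, rho_f=1000.0, u=1e-3)
```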
Stochastic game theory: for playing games, not just for doing theory.
Goeree, J K; Holt, C A
1999-09-14
Recent theoretical advances have dramatically increased the relevance of game theory for predicting human behavior in interactive situations. By relaxing the classical assumptions of perfect rationality and perfect foresight, we obtain much improved explanations of initial decisions, dynamic patterns of learning and adjustment, and equilibrium steady-state distributions.
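The relaxation of perfect rationality the authors pursue is commonly formalized as a logit quantal response equilibrium, where choice probabilities follow a softmax of expected payoffs. A minimal fixed-point sketch for a 2x2 game (illustrative payoffs, not taken from the paper):

```python
import math

def logit_response(payoffs, lam):
    """Softmax choice probabilities over a list of expected payoffs."""
    m = max(payoffs)
    w = [math.exp(lam * (p - m)) for p in payoffs]
    s = sum(w)
    return [x / s for x in w]

def logit_qre_2x2(A, B, lam, iters=2000):
    """Damped fixed-point iteration for a logit QRE of a 2x2 game.

    A[i][j], B[i][j] : row/column player payoffs
    lam              : rationality parameter (lam=0 gives uniform play)
    Returns (p, q)   : probability each player assigns to action 0.
    """
    p, q = 0.5, 0.5
    for _ in range(iters):
        u_row = [A[i][0] * q + A[i][1] * (1 - q) for i in range(2)]
        u_col = [B[0][j] * p + B[1][j] * (1 - p) for j in range(2)]
        p = 0.5 * p + 0.5 * logit_response(u_row, lam)[0]
        q = 0.5 * q + 0.5 * logit_response(u_col, lam)[0]
    return p, q

# Matching pennies: lam -> 0 yields uniform play; larger lam sharpens responses.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
p, q = logit_qre_2x2(A, B, lam=0.0)
```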
On coupling NEC-violating matter to gravity
Chatterjee, Saugata; Parikh, Maulik; van der Schaar, Jan Pieter
2015-03-16
We show that effective theories of matter that classically violate the null energy condition cannot be minimally coupled to Einstein gravity without being inconsistent with both string theory and black hole thermodynamics. We argue however that they could still be either non-minimally coupled or coupled to higher-curvature theories of gravity.
Examining U.S. Irregular Warfare Doctrine
2008-06-01
The classic warfare theories (i.e. Sun Tzu, Clausewitz, and so forth) directly apply to this research since they form the … their theories on the conduct of warfare, and its close tie to politics, form the basis for examining the content of the manuals. The Marine …
Modern Psychometric Methodology: Applications of Item Response Theory
ERIC Educational Resources Information Center
Reid, Christine A.; Kolakowsky-Hayner, Stephanie A.; Lewis, Allen N.; Armstrong, Amy J.
2007-01-01
Item response theory (IRT) methodology is introduced as a tool for improving assessment instruments used with people who have disabilities. Need for this approach in rehabilitation is emphasized; differences between IRT and classical test theory are clarified. Concepts essential to understanding IRT are defined, necessary data assumptions are…
Testing the Moral Algebra of Two Kohlbergian Informers
ERIC Educational Resources Information Center
Hommers, Wilfried; Lewand, Martin; Ehrmann, Dominic
2012-01-01
This paper seeks to unify two major theories of moral judgment: Kohlberg's stage theory and Anderson's moral information integration theory. Subjects were told about thoughts of actors in Kohlberg's classic altruistic Heinz dilemma and in a new egoistical dilemma. These actors's thoughts represented Kohlberg's stages I (Personal Risk) and IV…
NASA Astrophysics Data System (ADS)
Kovchegov, Yuri V.; Wu, Bin
2018-03-01
To understand the dynamics of thermalization in heavy ion collisions in the perturbative framework it is essential to first find corrections to the free-streaming classical gluon fields of the McLerran-Venugopalan model. The corrections that lead to deviations from free streaming (and that dominate at late proper time) would provide evidence for the onset of isotropization (and, possibly, thermalization) of the produced medium. To find such corrections we calculate the late-time two-point Green function and the energy-momentum tensor due to a single 2 → 2 scattering process involving two classical fields. To make the calculation tractable we employ the scalar φ⁴ theory instead of QCD. We compare our exact diagrammatic results for these quantities to those in kinetic theory and find disagreement between the two. The disagreement is in the dependence on the proper time τ and, for the case of the two-point function, is also in the dependence on the space-time rapidity η: the exact diagrammatic calculation is, in fact, consistent with the free-streaming scenario. Kinetic theory predicts a build-up of longitudinal pressure, which, however, is not observed in the exact calculation. We conclude that we find no evidence for the beginning of the transition from the free-streaming classical fields to the kinetic theory description of the produced matter after a single 2 → 2 rescattering.
From Theory to Practice: Measuring end-of-life communication quality using multiple goals theory.
Van Scoy, L J; Scott, A M; Reading, J M; Chuang, C H; Chinchilli, V M; Levi, B H; Green, M J
2017-05-01
To describe how multiple goals theory can be used as a reliable and valid measure (i.e., coding scheme) of the quality of conversations about end-of-life issues. We analyzed 17 conversations in which 68 participants (mean age = 51 years) played a game that prompted discussion in response to open-ended questions about end-of-life issues. Conversations (mean duration = 91 min) were audio-recorded and transcribed. Communication quality was assessed by three coders who assigned numeric scores rating how well individuals accomplished task, relational, and identity goals in the conversation. The coding measure, which results in a quantifiable outcome, yielded strong reliability (intra-class correlation range = 0.73-0.89 and Cronbach's alpha range = 0.69-0.89 for each of the coded domains) and validity (using multilevel nonlinear modeling, we detected significant variability in scores between games for each of the coded domains, all p-values <0.02). Our coding scheme provides a theory-based measure of end-of-life conversation quality that is superior to other methods of measuring communication quality. Our description of the coding method enables researchers to adapt and apply this measure to communication interventions in other clinical contexts. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
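The internal-consistency figures quoted can be reproduced with the standard Cronbach's alpha formula, α = k/(k-1) · (1 − Σ item variances / variance of total score). A sketch on made-up coder ratings (not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k items/coders scoring the same n units.

    items : list of k lists, each holding the n scores one coder assigned.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))

# Three hypothetical coders rating task-goal quality for five conversations.
coder_scores = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [4, 4, 3, 5, 3],
]
alpha = cronbach_alpha(coder_scores)
```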
ERIC Educational Resources Information Center
Metzker, Manja; Dreisbach, Gesine
2011-01-01
Recently, it was proposed that the Simon effect would result not only from two interfering processes, as classical dual-route models assume, but from three processes. It was argued that priming from the spatial code to the nonspatial code might facilitate the identification of the nonspatial stimulus feature in congruent Simon trials. In the…
A STRUCTURAL THEORY FOR THE PERCEPTION OF MORSE CODE SIGNALS AND RELATED RHYTHMIC PATTERNS.
ERIC Educational Resources Information Center
WISH, MYRON
The primary purpose of this dissertation is to develop a structural theory, along facet-theoretic lines, for the perception of Morse code signals and related rhythmic patterns. As steps in the development of this theory, models for two sets of signals are proposed and tested. The first model is for a set comprised of all signals of the…
Hydrodynamic Simulations of the Consequences of Accretion onto ONe White Dwarfs
NASA Astrophysics Data System (ADS)
Starrfield, Sumner; Bose, Maitrayee; Iliadis, Christian; Hix, William Raphael; Woodward, Charles E.; Wagner, Robert M.; José, Jordi; Hernanz, Margarita; Feng, Wanda
2018-06-01
Mass and luminosity variations of the white dwarf, combined with changes in the mass accretion rate and composition of the accreted material, affect the evolution of the thermonuclear runaway (TNR) in classical and recurrent novae. Here we highlight continued investigations of these effects on accreting Oxygen-Neon (ONe) white dwarfs. We now use the results of the multi-dimensional studies of TNRs in white dwarfs, accreting only solar matter, which show that sufficient core material is dredged-up during the TNR to agree with the measurements of ejecta abundances in classical nova explosions. Therefore, we first accrete solar material and follow the evolution until a TNR is ongoing. We then switch the composition to a mixture with either 25% core material or 50% core material (plus accreted material) and follow the resulting evolution of the TNR through peak nuclear burning and decline. We use our 1D, Lagrangian, hydrodynamic code: NOVA. We will report on the results of these new simulations and compare the ejecta abundances to those measured in pre-solar grains that are thought to arise from classical nova explosions. We will also compare these results to our companion studies, done in a similar fashion, where we have followed the consequences of accretion onto Carbon-Oxygen white dwarfs. This work was supported in part by NASA under the Astrophysics Theory Program grant 14-ATP14-0007 and the U.S. DOE under Contract No. DE-FG02-97ER41041. SS acknowledges partial support from NASA, NSF, and HST grants to ASU and WRH is supported by the U.S. Department of Energy, Office of Nuclear Physics.
NASA Technical Reports Server (NTRS)
Silva, Walter A.
1993-01-01
The presentation begins with a brief description of the motivation and approach that has been taken for this research. This will be followed by a description of the Volterra Theory of Nonlinear Systems and the CAP-TSD code which is an aeroelastic, transonic CFD (Computational Fluid Dynamics) code. The application of the Volterra theory to a CFD model and, more specifically, to a CAP-TSD model of a rectangular wing with a NACA 0012 airfoil section will be presented.
With or without you: predictive coding and Bayesian inference in the brain
Aitchison, Laurence; Lengyel, Máté
2018-01-01
Two theoretical ideas have emerged recently with the ambition to provide a unifying functional explanation of neural population coding and dynamics: predictive coding and Bayesian inference. Here, we describe the two theories and their combination into a single framework: Bayesian predictive coding. We clarify how the two theories can be distinguished, despite sharing core computational concepts and addressing an overlapping set of empirical phenomena. We argue that predictive coding is an algorithmic/representational motif that can serve several different computational goals of which Bayesian inference is but one. Conversely, while Bayesian inference can utilize predictive coding, it can also be realized by a variety of other representations. We critically evaluate the experimental evidence supporting Bayesian predictive coding and discuss how to test it more directly. PMID:28942084
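The correspondence between predictive coding and Bayesian inference can be made concrete in a one-dimensional linear-Gaussian model: gradient descent on the prediction errors converges to the Bayesian posterior mean. A minimal illustrative sketch (not taken from the review):

```python
def predictive_coding_estimate(x, w, mu, var_x, var_v, lr=0.05, steps=2000):
    """Infer the latent v for an observation x = w*v + noise by descending
    the summed squared prediction errors (predictive coding). For this
    linear-Gaussian model the fixed point is the Bayesian posterior mean."""
    v = mu  # start inference at the prior mean
    for _ in range(steps):
        e_x = x - w * v          # sensory prediction error
        e_v = v - mu             # prior prediction error
        v += lr * (w * e_x / var_x - e_v / var_v)
    return v

def posterior_mean(x, w, mu, var_x, var_v):
    """Exact conjugate-Gaussian posterior mean, for comparison."""
    precision = w * w / var_x + 1.0 / var_v
    return (w * x / var_x + mu / var_v) / precision

v_pc = predictive_coding_estimate(x=2.0, w=1.5, mu=0.0, var_x=1.0, var_v=1.0)
v_bayes = posterior_mean(x=2.0, w=1.5, mu=0.0, var_x=1.0, var_v=1.0)
```

Here predictive coding is the algorithm and Bayesian inference the computational goal it serves, mirroring the distinction the authors draw.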